Cross-Scripting Errors Cause Most Web App Vulnerabilities
Despite being easy to spot and fix, XSS bugs now account for more than half of all Web application vulnerabilities, reports Veracode.
According to static application security testing vendor Veracode, cross-site scripting (XSS) errors are now responsible for more than half of all Web application vulnerabilities. Where present, such errors can be exploited by attackers to bypass many security controls and execute malicious scripts via a user's browser.
Many XSS errors could have been prevented, however, by ensuring that developers practice secure coding techniques. "We strongly believe that many XSS errors are straightforward and easy to fix, and that much can be done to greatly reduce their occurrence," said Matt Moynahan, CEO of Veracode, in a statement. "Developer and product security teams must accept greater accountability for writing better code."
Not introducing XSS errors in the first place is a relatively straightforward process, though it requires more upfront work by developers. According to the Open Web Application Security Project (OWASP), "XSS flaws occur whenever an application takes untrusted data and sends it to a Web browser without proper validation and escaping," with escaping referring to encoding or neutralizing characters that might otherwise be used to launch an attack.
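As a minimal illustration of the kind of escaping OWASP describes, the Python sketch below encodes untrusted input with the standard library's html module before it is written into a page. The function name and page fragment are hypothetical stand-ins for illustration only, not code from Veracode or any other vendor mentioned here.

import html

def render_comment(untrusted_text: str) -> str:
    """Return an HTML fragment in which the user-supplied text is escaped."""
    # html.escape() converts <, >, &, and quote characters into HTML
    # entities, so markup in the input is displayed as text rather
    # than interpreted by the browser.
    safe_text = html.escape(untrusted_text, quote=True)
    return "<p class='comment'>{}</p>".format(safe_text)

# The classic XSS probe is rendered harmless:
print(render_comment("<script>alert('xss')</script>"))
# <p class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>

Escaping on output is only one layer of defense; validating input against an expected format remains worthwhile as well.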
Numerous software vendors now sell security testing tools to help ensure that developers properly validate and escape their code, among other essentials. According to Gartner, such vendors include HP, IBM, Veracode, Armorize Technologies, Checkmarx, Coverity, GrammaTech, Klocwork, and Parasoft.
Many software-developing organizations, however, seemingly prioritize time-to-market over secure coding. As a result, even easy-to-prevent errors, such as injection and XSS flaws, have become endemic. That's in spite of numerous studies that have found that remediating software bugs costs far less in the early stages of the software development lifecycle, and especially before code goes into production.
When done early enough in the software development lifecycle, many code fixes are also relatively easy. According to Chris Eng, senior director of security research at Veracode, "We see thousands -- sometimes tens of thousands -- of XSS vulnerabilities a week. Many are those we describe as 'trivial' and can be fixed with a single line of code."
Veracode said that the average time required to remediate an XSS bug, based on companies that used its service to scan code before and after remediating it, was 16 days.
But if required, fixes can be made in almost no time at all, as has happened in the wake of XSS exploits against Web sites such as Facebook and Twitter. "Sometimes those companies push XSS fixes to production in a matter of hours. Are their developers really that much better? Of course not. The difference is how seriously the business takes it. When they believe it's important, you can bet it gets fixed," said Eng.
Infographic: Reinventing Networks Through NFV
It’s difficult to get an industry with a long depreciation cycle for capital equipment to support any sort of revolution, but networking is facing one at this moment: NFV (Network Functions Virtualisation).
What is Network Function Virtualisation – NFV?
Network functions virtualisation is a network architecture concept that leverages the technologies of IT virtualisation to virtualise entire classes of network node functions into building blocks that may connect or chain together, to create communication services. In other words, NFV is an initiative to virtualise the network services that are now being carried out by proprietary dedicated hardware.
Network functions virtualisation relies on, but differs from, traditional server-virtualisation techniques, such as those used in enterprise IT. A virtualised network function, or VNF, may consist of one or more virtual machines running different software and processes, on top of standard high-volume servers, switches and storage, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function.
Example: A virtual session border controller could be deployed to protect a network without the typical cost and complexity of obtaining and installing physical units.
What Is NFV Orchestration?
Broadly speaking, NFV orchestration refers to the automated deployment, interconnection, and lifecycle management of virtualised network functions, together with the compute, storage, and network resources they run on.
What are the NFV Drivers and Enablers?
A few NFV drivers and enablers are listed below.
Drivers:
1. Time to Market
2. Targeted Service Introduction
3. Energy Savings
4. R&D Agility
Enablers:
1. Virtualization Mechanisms
2. Industry Standard Servers
3. Open APIs for Management
How big is the NFV market?
NFV market size is increasing at a rapid pace. In 2015, the Asia/Pacific SDN/NFV market had a value of approximately 600 million euros. The market is expected to grow roughly 15-fold by 2023.
Where is NFV Today?
Network functions virtualisation has made significant progress in 2015, moving from lab tests to pilots and early deployments. The technology is proven, deployable, scalable and reliable for a wide variety of applications, including Evolved Packet Core (EPC), Customer Premises Equipment (CPE), analytics, Deep Packet Inspection (DPI), security, IMS (IP Multimedia Subsystem), policy, video servers and edge routing.
What are the network functions being virtualised?
NFV decouples the network functions like:
1. Network Address Translation (NAT)
3. Intrusion Detection
4. Domain Name Service (DNS)
from proprietary hardware appliances so they can run in software.
How is NFV being deployed?
Network Functions Virtualization deployment typically follows five steps, outlined below.
Step 1: Physical Network Function
Step 2: Bare Metal Solution on COTS Server
Step 3: Virtual Network Function on COTS Server with Hypervisor
Step 4: Elastic/Auto-scale Management with Orchestration
Step 5: SDN Integration
What are the applications of NFV?
Examples of NFV applications include the following:
- Virtualized Load Balancers
- Intrusion Detection Devices
- WAN Accelerators
In the near future, industries will look to Network Functions Virtualization (NFV) to enable new business services for the Internet of Things (IoT).
Authors: Rob Shimonski and Sean-Philip Oriyano
Whether it’s security vulnerabilities in software used by millions of home users and employees, or the natural human tendency to trust what comes at us, even the most complex and far-reaching attacks today start with the compromise of a single endpoint.
Unfortunately, this trend will continue until we either all learn to avoid these threats, or software and hardware developers churn out completely secure solutions – which means never. But, let’s do what we can, shall we? Educating ourselves shouldn’t be a chore, but a welcome option.
About the authors
Sean-Philip Oriyano has spent his time in the field working with nearly all aspects of IT and management with special emphasis on Information Security concepts, techniques, and practices.
Rob Shimonski is a best-selling author and editor with over 15 years' experience developing, producing and distributing print media in the form of books, magazines and periodicals.
Inside the book
It is natural for attackers to choose to strike where defenses are poorest. Servers and networks have become well-defended, so attackers are going for the users and their computers and devices. Client-side attacks are many and varied, and this book addresses them all.
Using Cross-Site Scripting (XSS) as an introductory example, the authors thoroughly dissect the attack and walk readers through it step by step. Without getting into too many details at first, they explain simply the environment in which it is deployed, how it's planned, and the main types of vulnerabilities this and other client-side attacks depend on for success.
Client-side attacks can be aimed at popular computer software such as browsers and mail clients, web applications, active content technologies, and mobile devices. Each of these attack types gets a chapter, but browser attacks encompass four. That is understandable, as browsers are the users' main door to the Internet.
After a brief explanation of the common functions and features of modern browsers, the authors address those of Internet Explorer, Firefox, Chrome, Safari and Opera, along with their known flaws and security issues, and then follow up with advanced web browser defenses.
Peppered with tips, warnings and screenshots, this last chapter is a great source of information on how to “lock down” each of the browsers and their various active content elements such as Java, Flash, ActiveX, and others introduced and explained beforehand.
Email client attacks – spam, malware, malicious code, DoS, hoaxes and phishing – are detailed and accompanied with concrete and theoretical examples. The chapters dedicated to web application and mobile attacks are thorough, and the latter should be compulsory reading for everyone owning a “smart” mobile device – whether it is one of Apple’s iDevices, those running on Google’s Android OS, or RIM’s Blackberry.
Finally, the authors address the necessity of security planning (security policies), and of considering security needs from the very start. The pros for securing apps and infrastructure with things like digital signatures, certificates and PKI are explained, as well as these solutions’ limitations, and the book finishes with methods for securing clients (AV, patching, etc.)
I really enjoyed how the authors eased gently into the subject, each new chapter offering enough new information to make it interesting, but not too much to prevent readers from feeling overwhelmed. They explained things in a way that should be understandable to anyone using the software and apps daily and looking for ways to make their computer use safer.
I would recommend this book to inquisitive home users, but have to say that security professionals – apart from those only beginning their work in the field – will not find much to hold their interest.
Multipath TCP is an extension of TCP that will soon be standardized by the IETF. It is a successful attempt to resolve major TCP shortcomings that have emerged from the change in the way we use our devices to communicate. In particular, devices such as iPhones and laptops, like the networks they connect to, are becoming multipath: network redundancy and devices' multiple 3G and wireless connections have made that possible.
Almost all of today's web applications use TCP to communicate. This is due to TCP's virtue of reliable packet delivery and its ability to adapt to variable network throughput conditions. Multipath TCP is designed to be backwards compatible with standard TCP, so today's applications can use Multipath TCP without any changes. They think that they are using normal TCP.
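As a rough sketch of that backwards compatibility, consider the following Python client. It assumes a recent Linux kernel built with MPTCP support, where Multipath TCP is requested through the IPPROTO_MPTCP protocol constant (protocol number 262 on Linux); the host name is just a placeholder. The code is shaped exactly like an ordinary TCP client, which is the point.

import socket

# Use the raw Linux protocol number if this Python build does not
# expose an IPPROTO_MPTCP constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

# The only difference from a plain TCP client is the third argument.
# If the peer does not speak MPTCP, the connection falls back to
# regular TCP; if the local kernel lacks MPTCP support, socket()
# raises an error instead.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("example.org", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(s.recv(1024).decode(errors="replace"))
s.close()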
TCP is single path: there can be only one path between two devices that have a TCP session open, and that path is sealed as a communication session defined by the source and destination IP addresses of the communicating end devices. If a device wants to switch the communication from 3G to wireless, as happens on smartphones when they come into range of a known WiFi network, the TCP session is disconnected and a new one is created over WiFi. By using multiple paths (subsessions, or subflows) inside one connection, MPTCP instead lets a new WiFi path join the established MPTCP connection as an additional subsession without breaking the flow already in place across 3G. Each available path is represented by a subsession inside the one MPTCP connection. A device connected to 3G can expand the connection to WiFi and then use an algorithm to decide whether to use 3G and WiFi at the same time or to stop using 3G and put all the traffic onto the cheaper and faster WiFi.
TCP's single-path property is its fundamental problem
In a datacenter environment there is a tricky situation: two servers talk to each other using TCP, and that session is created across a random path between the servers and switches in the datacenter (assuming more than one path exists, and it usually does). If another two servers are talking at the same time, the second TCP session may be established across partially the same path as the first. In that situation there is a collision that reduces throughput for both sessions, and there is no real way to control this phenomenon in the TCP world. What holds for our datacenter example holds for every multipath environment, including the Internet.
The answer is MPTCP!
Multipath TCP (MPTCP) improves on TCP by enabling the use of multiple paths inside a single transport connection, and it meets the goal of working well anywhere "normal" TCP would work.
Enterprise resource planning (ERP) systems are generic and comprehensive business software systems based on a distributed computing platform including one or more database management systems. They combine a global enterprise information system covering large parts of the information needs of an enterprise with a large number of application programs implementing all kinds of business processes that are vital for the operation of an enterprise. These systems help organizations to deal with basic business functions such as purchase/sales/inventory ('distribution') management, financial accounting and controlling, and human resources management, as well as with advanced business functions such as project management, production planning, supply chain management, and sales force automation.
First generation ERP systems now run the complete back office functions of the world's largest corporations. The ERP market rose at 50% per year to $8.6 billion in 1998 with 22,000 installations of the market leader, SAP R/3. The benefits of a properly implemented ERP system can be significant.
Typically, ERP systems run in a three-tier client/server architecture. They provide multi-instance database management as well as configuration and version (or 'customization') management for the underlying database schema, the user interface, and the numerous application programs associated with them. Since ERP systems are designed for multinational companies, they have to support multiple languages and currencies as well as country-specific business practices. The sheer size and the tremendous complexity of these systems make them difficult to deploy and maintain. Despite the worldwide success of systems like SAP R/3 and BaanERP, the underlying architectures, data models, transaction mechanisms and programming techniques are to a large degree unknown to computer scientists.
The goal of this tutorial is to present the information technology of BaanERP, as a representative of the ERP system paradigm, from a computer science (rather than from a business management) perspective, relating it to established database and distributed systems concepts and techniques. A critical assessment of BaanERP will point out some of its merits and weaknesses. The tutorial will help attendees to understand the potential of ERP system technology in general, and of Baan ERP system technology in particular, and how it relates to their own research and development work.
Manmohan Singh - Biography & Biodata of Dr. Manmohan Singh
Dr. Manmohan Singh was the 14th Prime Minister of the Republic of India. He was born on Monday, 26 September 1932. He was the first Sikh to hold the post of Prime Minister in India and he was the first Indian Prime Minister since Jawahar Lal Nehru to return to power after completing a full five-year term. Manmohan Singh was also the 12th Prime Minister under an Indian National Congress Government.
Manmohan Singh Biography
Manmohan Singh was born on Monday, September 26, 1932 at Gah in the Punjab before the partition of the subcontinent. He was born to Gurmukh Singh and Amrit Kaur on 26 September 1932, in Gah, Punjab (now in Chakwal District, Pakistan), British India, into a Sikh family. He lost his mother when he was very young, and he was raised by his paternal grandmother, to whom he was very close. He was a hard working student who studied by candlelight, as his village did not have electricity. After the Partition of India, he migrated to Amritsar, India, where he studied at Hindu College. He attended Punjab University, Chandigarh studying Economics and attaining his bachelor's and master's degrees in 1952 and 1954 respectively, standing first throughout his academic career. He went on to read for the Economics Tripos at Cambridge as a member of St John's College.
He was the 14th Prime Minister of the Republic of India. He is a famous head of state from India of Sikh religion. Dr. Mahmohan Singh graduated from Punjab University in 1948 and attended Cambridge University in Britain, earning a First Class Honours degree in economics in 1957.
He continued with his graduate studies at Oxford University and achieved a doctorate in economics in 1962. He returned to India, lecturing at Punjab University and at the Delhi School of Economics. In 1971 he joined the Indian civil service as an economic adviser in the commerce ministry. His talents were quickly rewarded, and he was appointed chief economic adviser in the ministry of finance in 1972.
Singh made the transition from bureaucrat to politician in 1991 when he was appointed a member of India's upper house of parliament (the Rajya Sabha). While a member of the upper house between 1991 and 1996, he also became the finance minister in Prime Minister P. V. Narasimha Rao's government. With Rao's support, he initiated successful economic reforms aimed at slashing India's infamous red tape, enhancing productivity, and liberalizing the economy. His goals were to end protectionism and open the Indian economy to foreign investment so that India would evolve to a mixed economy saving it from the verge of bankruptcy. As a result the economy became reinvigorated, inflation was controlled, and Indian industry began to show signs of strength.
An economist by profession, Singh was the Governor of the Reserve Bank of India from 1982 to 1985, the Deputy Chairman of the Planning Commission of India from 1985 to 1987 and the Finance Minister of India from 1991 to 1996. He is also a Rajya Sabha member from Assam, currently serving his fourth term. Dr. Manmohan Singh previously carried out economic reforms in India during his tenure as the Finance Minister from 1991 to 1996. These reforms resulted in the end of the Licence Raj system, helping to open the Indian economy to greater international trade and investment.
Manmohan Singh Biodata
|Father's Name||Mr Gurmukh Singh|
|Mother's Name||Mrs Amrit Kaur|
|Date of Birth||September 26, 1932|
|Place of Birth||Village Gah (West Punjab), Now in Pakistan|
|Marital Status||Married Since 1958|
|Spouse Name||Gursharan Kaur|
|Children||3 Daughters (Upinder, Daman & Amrit)|
|Profession||Economist, Civil Servant, Social Worker & Professor|
|Contact Address||7, Race Course Road, New Delhi - 110011|
Manmohan Singh is a graduate of Punjab University, Chandigarh, the University of Cambridge, and the University of Oxford. After serving as the Governor of the Reserve Bank of India and the Deputy Chairman of the Planning Commission of India, Singh was appointed as the Union Minister of Finance in 1991 by the then Prime Minister Narasimha Rao, who chose a professional economist breaking the tradition of political appointments to Finance Ministry. Narasimha Rao took up the task of political management largely insulating Dr. Manmohan Singh from political pressure and interference. During his tenure as the Finance Minister, Singh was widely credited for carrying out liberalising reforms in India in 1991 which resulted in the weakening of Licence Raj system.
Manmohan Singh married Gursharan Kaur in 1958. However, the family has largely stayed out of the limelight. Their three daughters - Upinder, Daman and Amrit, have successful, non-political, careers. Upinder Singh is a professor of history at Delhi University. She has written six books, including Ancient Delhi (1999) and A History of Ancient and Early Medieval India (2008). Daman Singh is a graduate of St. Stephen's College, Delhi and Institute of Rural Management, Anand, Gujarat, and author of The Last Frontier: People and Forests in Mizoram and a novel Nine by Nine. Amrit Singh is a staff attorney at the ACLU.
In 1997, the University of Alberta presented him with an Honorary Doctor of Laws. The University of Oxford awarded him an honorary Doctor of Civil Law degree in June 2006, and in October 2006, the University of Cambridge followed with the same honour. St. John's College further honoured him by naming a PhD Scholarship after him, the Dr Manmohan Singh Scholarship.
Following the 2004 general elections, Singh was unexpectedly declared as the Prime Ministerial candidate of the Indian National Congress-led United Progressive Alliance. He was sworn in as the prime minister on 22 May 2004, along with the First Manmohan Singh Cabinet. After the Indian National Congress won the 2009 general elections, On 22 May 2009, Manmohan Singh was sworn in for his second tenure as the Prime Minister at the Asoka Hall of Rashtrapati Bhavan.
Eminent writer Khushwant Singh lauded Mr. Singh as the best Prime Minister India has had, even rating him higher than Jawahar Lal Nehru, the first Prime Minister of India. He mentioned an incident in his book Absolute Khushwant: The Low-Down on Life, Death and Most things In-between where, after losing the 1999 Lok Sabha elections, Mr. Singh immediately returned Rs 2 lakh he had borrowed from the writer for hiring taxis. Terming him the best example of integrity, Mr. Khushwant Singh stated, "When people talk of integrity, I say the best example is the man who occupies the country's highest office."
Manmohan Singh has undergone multiple cardiac bypass surgeries, most recently in January 2009.
Political Career of Dr. Manmohan Singh
Singh's political career was turbulent because he was neither charismatic nor a traditional politician. He lost the only time he contested a parliamentary election for the lower house (Lok Sabha). From 1998 to 2004 he was leader of the opposition but became prime minister in May 2004 when the Congress Party won a coalition majority in the national election. This is because Sonia Gandhi turned down the prime minister-ship. Singh became India's first Sikh prime minister. This is impressive due to the troubled relationship between India's Sikhs and the Hindu majority during the 1980s. (In 1984 government forces stormed the sacred Sikh Golden Temple in Amritsar to root out Sikh militants. Prime Minister Indira Gandhi's Sikh bodyguards avenged this act by assassinating her months later.)
Although governing such a diverse nation as India with a coalition is difficult, during his first two years in office Singh achieved a measure of success. The Indian economy continued to grow at an impressive rate. The fractured relationship with Pakistan showed signs of slowly improving, although the deeper issue of who controls Kashmir remained unresolved. Equally as important, political and trade relations with the United States improved considerably. The government also spearheaded a massive project aimed at eradicating rural poverty. In large part due to Manmohan Singh's reforms and pragmatic managerial style, India's economy continued to expand and under his government, showed signs of emerging as a global economic power.
Singh was always an unlikely politician, who was routed in a parliamentary election in 1999. In fact, he has never won an election and sits in the upper house. Politically, Manmohan Singh is the classic example of the stateless politician.
After the Indian National Congress won the 2009 general elections, Singh was reappointed as the Prime Minister of India on May 22, 2009, making him the first Indian Prime Minister since Jawahar Lal Nehru to return to power after completing a full five-year term before this over 40 years ago.
In 2010, TIME magazine listed him among the 100 most influential people in the world. Newsweek magazine also lists him as one of 10 world leaders who have won respect and was described as "the leader other leaders love".
Dr. Manmohan Singh stayed with the Congress Party despite continuous marginalization and defeats in the elections of 1996, 1998 and 1999. He did not join the rebels in a major split which occurred in 1999, when many major Congress leaders objected to Sonia Gandhi's rise as Congress President and leader of the opposition. Being touted as the Congress choice for the PM's job, she became a target for nationalists who objected to her Italian birth. It seemed that a party which turned to old links to the Nehru-Gandhi dynasty and a foreigner for political leadership had no future or potential to look forward to. But Singh continued as a prominent leader, rising in confidence and helping to revamp the party's platform and organization.
The Congress alliance won a surprisingly high number of seats in the Parliamentary elections of 2004, owing to a nationwide disenchantment of millions of poorer citizens with the BJP's focus on the surging middle-class, and also its dismal record in handling religious tensions. The Left Front decided to support a Congress alliance government from outside in order to keep the "communal forces" out of power. Sonia Gandhi was elected leader of the Congress Parliamentary Party and was expected to become the Prime Minister but in a surprise move, declined to accept the post and instead nominated Dr. Manmohan Singh as Prime Minister. There were protests within the Congress about her refusal but eventually people accepted her decision and the allies too accepted her choice. Singh secured the nomination for prime minister on May 19, 2004 when President of India Dr. A.P.J. Abdul Kalam officially asked him to form a government. Although most expected him to head the Finance Ministry himself, he did not do so. His political mentor Sonia Gandhi retains absolute control over the MPs and organization of the Congress Party. His appointment is notable as it comes 20 years after India witnessed significant tensions between the Indian central government and the Punjabi Sikh community.
Official State Visit at the White House
Prime Minister Dr. Manmohan Singh had the first official state visit to the White House during the administration of U.S. President Barack Obama. The visit took place in November 2009, and several discussions took place, including on trade and nuclear power. It was set during a wider visit to the United States by Dr. Manmohan Singh.
Manmohan Singh with American President Barack Obama at the White House.
Important Facts about Manmohan Singh
- Being born on September 26th, Manmohan Singh is a Libra.
- He joined the prime minister's office on 22 May 2004.
- He is the 14th prime minister of India.
- He was the Deputy Chairperson of the Planning Commission from 15 January 1985 to 31 August 1987.
- He was the Governor of the Reserve Bank of India from 15 September 1982 to 15 January 1985.
- His wife's name is Gursharan Kaur.
- He belongs to Sikh Religion.
- He has worked with 2 Presidents; Dr. APJ Abdul Kalam & Pratibha Patil.
- He preceded Atal Bihari Vajpayee (BJP) as the Prime Minister of India.
- His ethnicity is Asian/Indian.
- He attended the BA Economics, Punjab University (in 1952).
Positions Held by Dr. Manmohan Singh
- Chief, Financing for Trade Section, UNCTAD, United Nations Secretariat, Manhattan, New York
- Economic Advisor, Ministry of Foreign Trade, India (1971-1972)
- Chief Economic Advisor, Ministry of Finance, India, (1972-1976)
- Honorary Professor, Jawaharlal Nehru University, New Delhi (1976)
- Director, Reserve Bank of India (1976-1980)
- Director, Industrial Development Bank of India (1976-1980)
- Secretary, Ministry of Finance (Department of Economic Affairs), Government of India, (1977-1980)
- Governor, Reserve Bank of India (1982-1985)
- Deputy Chairman, Planning Commission of India, (1985-1987)
- Secretary General, South Commission, Geneva (1987-1990)
- Advisor to Prime Minister of India on Economic Affairs (1990-1991)
- Finance Minister of India, (21 June 1991 - 15 May 1996)
- Leader of the Opposition in the Rajya Sabha (1998-2004)
- Prime Minister of India (22 May 2004 - Present)
This course introduces application developers to IBM WebSphere Operational Decision Management V8 modules for developing event-based solutions.
Through presentations, hands-on lab exercises, and a case study, students will learn about the WebSphere Operational Decision Management V8 concepts, processes, and procedures that are needed to develop and implement an integrated business event solution. The course begins by introducing students to an architectural and technical overview of WebSphere Operational Decision Management V8. Subsequent units and exercises cover how to develop business event solutions by using Event Designer to define event rules, action, and business objects. Students will learn the characteristics of business rules and event rules, and build an event project with information technology assets. The course also covers technology connectors, deployment, and integrating rules with event projects. Students will learn how to use the event test widget in Business Space to test application logic, and build their own dashboard for monitoring business events. Finally, students will learn the fundamental concepts of governance so that they can support lifecycle control.
In the hands-on lab exercises provided throughout the course, students will create business event projects, and deploy and test a solution that uses the core features of IBM WebSphere Operational Decision Management V8.
With the recent launch of the iPhone there’s been renewed interest in so-called “push” email technology. “Push” means that when an email message arrives in your mailbox on the server, it is by one method or another replicated to your client, e.g. your desktop email application or mobile device.
Let’s start by addressing our definition of “push”. I’m not sure there is a *true* definition of this, as it’s more of a concept than a hard technical standard. As such, I’ll offer my own thoughts on the matter: for email to be truly “pushed,” the server should be able to initiate delivery of a new message to the client the moment it arrives, without the client having to poll for it or keep a connection open for that purpose.
What IDLE Isn’t
IDLE is not — according to my definition above — a true push technology. IDLE requires an active IMAP connection in order to work, and that connection cannot be initiated by the server.
- Client connects to IMAP server and verifies that the server supports the IDLE command.
- If the client also supports it, the client sends the IDLE command to the server.
- Having received the IDLE command from the client, the server is now able to “push” messages down the existing connection to the client, as they come in.
In other words, IDLE’s push functionality only works when an existing, healthy connection already exists between the client and server. Anything that disrupts that connection kills the ability for the server to send messages down to the client. And only the client can re-initiate the connection once it has broken.
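To make those steps concrete, here is a rough sketch using Python's standard imaplib module. imaplib has no built-in IDLE helper, so the command is written to the connection by hand; the host, the credentials and the "A001" tag are placeholders, and the capability check from step 1 is omitted for brevity.

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")     # placeholder host
conn.login("user@example.com", "app-password")   # placeholder credentials
conn.select("INBOX")

# Step 2: issue IDLE on the already-established connection.
conn.send(b"A001 IDLE\r\n")
print(conn.readline())   # expect a "+ idling" continuation from the server

# Step 3: while this connection stays healthy, the server can push
# untagged responses such as "* 42 EXISTS" as new mail arrives.
print(conn.readline())   # blocks until the server sends something

# End the IDLE (and re-issue it periodically; see the RFC excerpt
# below about the server's inactivity timeout), then log out.
conn.send(b"DONE\r\n")
print(conn.readline())   # expect "A001 OK" completing the command
conn.logout()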
From the IDLE RFC:
The server MAY consider a client inactive if it has an IDLE command running, and if such a server has an inactivity timeout it MAY log the client off implicitly at the end of its timeout period. Because of that, clients using IDLE are advised to terminate the IDLE and re-issue it at least every 29 minutes to avoid being logged off.
This isn’t to say that IDLE isn’t useful; it clearly is — especially for receiving emails more immediately on stable network connections. But in a mobile device environment it’s often difficult to maintain the requisite and relatively delicate IMAP connections over long periods of time. This will continue to be a challenge for mobile email providers not using more robust server-to-client delivery methods like those available from RIM and Microsoft.
The IDLE RFC
[ July 2007 ]
Basic Linux commands you need to know for CompTIA’s new A+
As is the norm every few years, the CompTIA A+ exams (currently numbers 220-801 and 220-802) are being updated to reflect the current technology environment. The new exams (to be named 220-901 and 220-902) are expected by the end of the year and — for the most part — they expand what is being currently tested: adding more content than subtracting.
One of the topics being added is that of Basic Linux commands. The most recent iterations of the exams focused on Microsoft Windows, but with administrators regularly encountering Linux-based servers it makes sense that the coverage is being enlarged to require them to know a few command-line tools that work there. The commands they have homed in on, in domain order, are:
● pwd vs passwd
Because of space limitations, we will look at approximately the first half of the commands this month and the remainder next month. Before getting to the commands, though, it is important to understand some basics. With Linux, there is a shell that serves as an interpreter between the user and OS: this is often bash, but can be one of several others as well. Because a shell interprets what you type, knowing how the shell processes the text you enter is important. All shell commands have the following general format (assuming the command takes options — some commands have no options).
command [option1] [option2] . . . [optionN]
On a command line, you enter a command, followed by zero or more options (or arguments). The shell uses a blank space or a tab to distinguish between the command and options. This means you must use a space or a tab to separate the command from the options and the options from one another. If an option contains spaces, you put that option inside quotation marks. For example, to search for a name in the password file, enter the following grep command (grep is used for searching for text in files):
grep “Emmett Dulaney” /etc/passwd
When grep prints the line with the name, it looks something like this (the exact numbers will differ on your system):
edulaney:x:1000:100:Emmett Dulaney:/home/edulaney:/bin/bash
If you create a user account with your username, type the grep command with your username as an argument to look for that username in the /etc/passwd file. In the output from the grep command, you can see the name of the shell (/bin/bash) following the last colon (:). Because the bash shell is an executable file, it resides in the /bin directory; you must provide the full path to it.
The number of command-line options and their format depend on the actual command. Typically, these options look like -X, where X is a single character. For example, you can use the -l option with the ls command. The command lists the contents of a directory, and the option provides additional details. Here is a result of typing ls -l in a user’s home directory:
drwxr-xr-x 2 edulaney users 48 2015-09-08 21:11 bin
drwx—— 2 edulaney users 320 2015-09-08 21:16 Desktop
drwx—— 2 edulaney users 80 2015-09-08 21:11 Documents
drwxr-xr-x 2 edulaney users 80 2015-09-08 21:11 public_html
drwxr-xr-x 2 edulaney users 464 2015-09-17 18:21 sdump
If a command is too long to fit on a single line, you can press the backslash key (\) followed by Enter. Then, continue typing the command on the next line. For example, type the following command. (Press Enter after each line.)
cat \
/etc/passwd
The cat command then displays the contents of the /etc/passwd file.
You can concatenate (that is, string together) several shorter commands on a single line by separating the commands by semicolons (;). For example, the following command …
cd; ls -l; pwd
… changes the current directory to your home directory, lists the contents of that directory, and then shows the name of that directory.
You can combine simple shell commands to create a more sophisticated command. For example, suppose that you want to find out whether a device file named sbpcd resides in your system’s /dev directory because some documentation says you need that device file for your CD-ROM drive. You can use the ls /dev command to get a directory listing of the /dev directory and then browse through it to see whether that listing contains sbpcd.
Unfortunately, the /dev directory has a great many entries, so you may find it hard to find any item that has sbpcd in its name. You can, however, combine the ls command with grep and come up with a command line that does exactly what you want. Here’s that command line:
ls /dev | grep sbpcd
The shell sends the output of the ls command (the directory listing) to the grep command, which searches for the string sbpcd. That vertical bar (|) is known as a pipe because it acts as a conduit (think of a water pipe) between the two programs — the output of the first command is fed into the input of the second one.
There are literally hundreds, if not thousands, of Linux commands that exist within the shell and the system directories. Fortunately, CompTIA asks that you know a much smaller number than that. The following table lists the Linux commands by category.
Linux Commands by Category
|Managing Files and Directories|
|cd||Change the current directory|
|chmod||Change file permissions|
|chown||Change the file owner and group|
|ls||Display the contents of a directory|
|mv||Rename a file and move the file from one directory to another|
|pwd||Display the current directory|
|dd||Copy blocks of data from one file to another (used to copy data from devices)|
|grep||Search for regular expressions in a text file|
|apt-get||Download files from a repository site|
|ps||Display a list of currently running processes|
|shutdown||Shut down Linux|
|vi||Start the visual file editor|
|passwd||Change the password|
|su||Start a new shell as another user (the other user is assumed to be root when the command is invoked without any argument)|
|sudo||Allows you to run a command as another user (usually the root user)|
|ifconfig||View and change information related to networking configuration|
|iwconfig||Similar to ifconfig, but used for wireless configuration|
Becoming the Root/Superuser
When you want to do anything that requires a high privilege level (for example, administering your system), you have to become root. Normally, you log in as a regular user with your everyday username. When you need the privileges of the superuser, though, use the following command to become root:
su -
That’s su followed by a space and the minus sign (or hyphen). The shell then prompts you for the root password. Type the password and press Enter. After you’ve finished with whatever you want to do as root (and you have the privilege to do anything as root), type exit to return to your normal username.
Instead of becoming root by using the su – command, you can also type sudo followed by the command that you want to run as root. In some distributions, such as Ubuntu, you must use the sudo command because you don’t get to set up a root user when you install the operating system. If you’re listed as an authorized user in the /etc/sudoers file, sudo executes the command as if you were logged in as root. Type man sudoers to read more about the /etc/sudoers file.
Every time the shell executes a command that you type, it starts a process. The shell itself is a process as are any scripts or programs that the shell runs. Use the ps ax command to see a list of processes. When you type ps ax, bash shows you the current set of processes. Here are a few lines of output from the command ps ax --cols 132. (The --cols 132 option is used to ensure seeing each command in its entirety.)
PID TTY STAT TIME COMMAND
1 ? S 0:01 init
2 ? SN 0:00 [ksoftirqd/0]
3 ? S< 0:00 [events/0]
4 ? S< 0:00 [khelper]
9 ? S< 0:00 [kthread]
19 ? S< 0:00 [kacpid]
75 ? S< 0:00 [kblockd/0]
115 ? S 0:00 [pdflush]
116 ? S 0:01 [pdflush]
118 ? S< 0:00 [aio/0]
117 ? S 0:00 [kswapd0]
711 ? S 0:00 [kseriod]
1075 ? S< 0:00 [reiserfs/0]
2086 ? S 0:00 [kjournald]
2239 ? S < s 0:00 /sbin/udevd -d
. . . lines deleted . . .
6374 ? S 1:51 /usr/X11R6/bin/X :0 -audit 0 -auth /var/lib/gdm/:0.Xauth -nolisten tcp vt7
6460 ? Ss 0:02 /opt/gnome/bin/gdmgreeter
6671 ? Ss 0:00 sshd: edulaney [priv]
6675 ? S 0:00 sshd: edulaney@pts/0
6676 pts/0 Ss 0:00 -bash
6712 pts/0 S 0:00 vsftpd
14702 ? S 0:00 pickup -l -t fifo -u
14752 pts/0 R+ 0:00 ps ax --cols 132
In this listing, the first column has the heading PID and shows a number for each process. PID stands for process ID (identification), which is a sequential number assigned by the Linux kernel. If you look through the output of the ps ax command, you see that the init command is the first process and has a PID of 1. That’s why init is referred to as the mother of all processes.
The COMMAND column shows the command that created each process, and the TIME column shows the cumulative CPU time used by the process. The STAT column shows the state of a process: S means the process is sleeping, and R means it’s running. The symbols following the status letter have further meanings; for example < indicates a high-priority process, and + means that the process is running in the foreground. The TTY column shows the terminal, if any, associated with the process.
The process ID, or process number, is useful when you have to forcibly stop an errant process. Look at the output of the ps ax command and note the PID of the offending process. Then, use the kill command with that process number to stop the process. For example, to stop process number 8550, start by typing the following command:
kill 8550
In Linux, when you log in as root, your home directory is /root. For other users, the home directory is usually in the /home directory, for example the home directory for a user logging in as edulaney is /home/edulaney. This information is stored in the /etc/passwd file. By default, only you have permission to save files in your home directory, and only you can create subdirectories in your home directory to further organize your files.
Linux supports the concept of a current directory, which is the directory on which all file and directory commands operate. After you log in, for example, your current directory is the home directory. To see the current directory, type the pwd command.
To change the current directory, use the cd command. To change the current directory to /usr/lib, type the following:
cd /usr/lib
Then, to change the directory to the cups subdirectory in /usr/lib, type this command:
cd cups
Now, if you use the pwd command, that command shows /usr/lib/cups as the current directory.
These two examples show that you can refer to a directory’s name in two ways: Absolute or Relative. An example of an absolute pathname is /usr/lib, which is an exact directory in the directory tree (think of the absolute pathname as the complete mailing address for a package that the postal service will deliver to your next-door neighbor). An example of a relative pathname is cups, which represents the cups subdirectory of the current directory, whatever that may be (think of the relative directory name as giving the postal carrier directions from your house to the one next door so the carrier can deliver the package).
If you type cd cups in /usr/lib, the current directory changes to /usr/lib/cups. However, if you type the same command in /home/edulaney, the shell tries to change the current directory to /home/edulaney/cups.
Use the cd command without any arguments to change the current directory back to your home directory. No matter where you are, typing cd at the shell prompt brings you back home. The tilde character (~) is an alias that refers to your home directory. Thus, you can change the current directory to your home directory also by using the command cd ~. You can refer to another user’s home directory by appending that user’s name to the tilde. Thus, cd ~superman changes the current directory to the home directory of superman.
A single dot (.) and two dots (. .), often referred to as dot-dot, also have special meanings. A single dot (.) indicates the current directory, whereas two dots (. .) indicate the parent directory. For example, if the current directory is /usr/share, you go one level up to /usr by typing the following:
cd . .
You can get a directory listing by using the ls command. By default, the ls command, without any options, displays the contents of the current directory in a compact, multicolumn format. To tell the directories and files apart, use the -F option (ls -F). The output will show the directory names with a slash (/) appended to them. Plain filenames appear as is. The at sign (@) appended to a filename indicates that this file is a link to another file. (In other words, this filename simply refers to another file; it's a shortcut.) An asterisk (*) is appended to executable files (the shell can run any executable file).
You can see even more detailed information about the files and directories with the -l option. The rightmost column shows the name of the directory entry. The date and time before the name show when the last modifications to that file were made. To the left of the date and time is the size of the file in bytes. The file’s group and owner appear to the left of the column that shows the file size. The next number to the left indicates the number of links to the file. (A link is like a shortcut in Windows.) Finally, the leftmost column shows the file’s permission settings, which determine who can read, write, or execute the file.
This column shows a sequence of nine characters, which appear as rwxrwxrwx when each letter is present. Each letter indicates a specific permission. A hyphen (-) in place of a letter indicates no permission for a specific operation on the file. Think of these nine letters as three groups of three letters (rwx), interpreted as follows:
Leftmost group: Controls the read, write, and execute permission of the file’s owner. In other words, if you see rwx in this position, the file’s owner can read (r), write (w), and execute (x) the file. A hyphen in the place of a letter indicates no permission. Thus, the string rw- means the owner has read and write permission but not execute permission. Although executable programs (including shell programs) typically have execute permission, directories treat execute permission as equivalent to use permission: A user must have execute permission on a directory before he or she can open and read the contents of the directory.
Middle group: Controls the read, write, and execute permission of any user belonging to that file’s group.
Rightmost group: Controls the read, write, and execute permission of all other users (collectively thought of as the world).
Thus, a file with the permission setting rwx------ is accessible only to the file's owner, whereas the permission setting rwxr--r-- makes the file readable by the world.
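The chmod command listed in the table earlier is what changes these settings. For example, to produce the rwxr--r-- pattern just described (full access for the owner, read-only access for the group and for everyone else), you can give chmod the permissions in octal form; the filename here is just a placeholder:

chmod 744 myscript.sh

The 7 grants read, write, and execute to the owner, and each 4 grants read-only access to the group and to the world, respectively.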
Most Linux commands take single-character options, each with a hyphen as a prefix. When you want to use several options, type a hyphen and concatenate (string together) the option letters, one after another. Thus, ls -al is equivalent to ls -a -l as well as ls -l -a.
Next month, we will look at the rest of the basic Linux commands CompTIA wants you to be familiar with for the upcoming A+ certification exams.
Once each year we take time out to celebrate Earth, home to humankind, the animal and plant kingdom, and whatever aliens walk in our midst.
The rest of the year, of course, we try our best to make our beloved Blue Planet uninhabitable. We're doing a pretty good job, but there's always a chance that Earth will be destroyed by outside forces before we get around to finishing our work.
The video below explores 10 different ways that Earth could meet a sudden and violent demise. It's both fascinating and frightening.
Also, it did not escape our attention that some of the scientists interviewed seemed inappropriately excited about their pet doomsday theories. Scientists, is being right that important to you? You're talking about all of us dying here! Can't any of you act a bit somber? It's bad optics.
We're hard-pressed to pick a favorite way for Earth to be destroyed, if there can be such a thing. But perhaps the most ironic would be the scenario in which the planet we're so desperate to travel to and colonize -- Mars -- falls out of its orbit and collides with Earth. Talk about a cosmic joke.
This story, "Enjoy Earth Day while we still have Earth" was originally published by Fritterati.
Globalisation has increased the rewards for manufacturers who innovate and get to market first. As a result, more manufacturers are turning to the computer to design, test and refine products and processes "in silicon" before they commit big money to the project.
A case in point is Bombardier, the Belfast-based aerospace manufacturer of Learjets and other commercial aircraft. The company has recently completed a pilot computer simulation project, the CRJ1000 flights project, to model the design and manufacturing of components for Bombardier aircraft.
The key, says Bombardier, was to combine computer aided design, engineering and manufacturing tools in a way that let the firm's designers, engineers and shop-floor staff work in parallel to create and build production lines to make the items.
The result has been a quicker start-up, fewer test runs to reach acceptable quality standards, and less money tied up for less time, said Brian Welch, Bombardier's manufacturing engineering manager, and one of the four-man team responsible for the project. "We now have a proven tool that we will use on all future projects," Welch told Computer Weekly.
Bombardier is part of a British government project to design and build next generation wings made from carbon fibre-reinforced plastics known as composites.
The £103m, 17-member project announced in May is run by Airbus UK, which has been making composite wings for the Airbus A400M military transport at its 8,000 square metre Broughton factory since 2006.
Instead of cutting and bending aluminium sheets, wing components are baked whole in curing ovens to exacting tolerances.
Airbus will design, test and build the wings in the computer before it does it for real. It will build on its experience gained from automating virtually its entire wing-building line.
Will Searle, research leader for virtual manufacturing at Airbus UK, says, "We have proved that we can go from designing the wing to packaging it [for assembly elsewhere], digitally and physically."
Everything to do with the line is modelled in three dimensions from the start. "The aim is to knock costs out and make conditions on the assembly line more comfortable for the workers," Searle says.
One challenge is that building wings requires a lot of manual labour that is hard to digitise, and is therefore hard to model accurately, says Searle. "If a rivet is slightly too large for the hole, the fitter will get a hammer and make it fit. But you cannot do that with a composite wing because you might compromise its integrity."
By tying the manufacturing system to its enterprise management system Airbus is able to identify and control costs better, and to see the impact of a design or engineering change on costs. This provides a better platform for decision making as managers can test more options on computer before committing themselves.
"Rate of production is crucial for us," Searle says. "We are aiming at a system where we can go from making 10 to 40 wings a day over a weekend, without compromising quality or scrap. The capital expenditure involved is so great that we have to be right first time. That is why we are digitising to optimise."
The government wants the UK to lead the world in composite wing technology, and to become the world's mass manufacturer of aircraft wings. Companies like Airbus and Bombardier believe that digital simulation will put them at the forefront of aircraft manufacturing.
Bombardier's computer aided design
Bombardier is using Dassault Systemes' Catia v5 computer aided design and Optegra plant design management software to design aircraft components.
The Delmia system combines 3D design and product structure data with that of other components to create an electronic bill of materials that together make up the product. This allows production engineers to model the assembly processes and to suggest changes to the product's design that improve its manufacturability.
The system links to the company's enterprise resourcemanagement system for costing and other data. This allows engineers to monitor the effects of changes on cost and profitability, as well as produce assembly documentation for shop-floor workers.
Airbus uses Dassault Systemes' Catia computer aided design software as well as its Delmia product lifecycle management software. It chose Delmia to help manage final assembly of the A380, and is using both product sets in the £103m Next Generation Composite Wing project it manages for the British government.
Each A380 requires an assembly space 80m by 80m and at least 25m high. The Delmia tools were used to simulate and visualise critical manufacturing processes to unify and integrate components from four different European sites for assembly in Hamburg.
What do you need to simulate a factory?
A study at the University of Sutherland's Institute for Automotive and Advanced Manufacturing Practice found that businesses need to model a long list of processes to create an accurate manufacturing simulation.
• Product design and testing
• Engineering analysis
• Process planning
• Cost estimation
• Factory layout
• Factory simulation
• Engineering and manufacturing data and process management
• Supply chain collaboration
Bombardier and Airbus spoke to Computer Weekly at Dassault Systemes' Delmia customer conference in Stuttgart in October.
How technology can reinvigorate the education system
The path to a global No. 1 ranking in education requires technology
President Barack Obama set a goal of making the United States first in the world in post-secondary academic degrees by 2020, and Jim Shelton says technology is what will get us there.
- By Alice Lipowicz
- Oct 07, 2010
Named assistant deputy secretary for technology at the Education Department’s Office of Innovation and Improvement in April 2009, Shelton is in charge of grant-making and educational technology strategy at the department. He also coordinates Education's technology efforts with other federal and state agencies.
Previously, he was education program director at the Bill and Melinda Gates Foundation, where he spurred investments in next-generation models of learning. He has a bachelor's degree in computer science from Morehouse College and a master's degree in education and master's degree in business administration from Stanford University. He began his career in computer system development and became a senior consultant at McKinsey and Co. He also co-founded an educational company, worked on education reform issues for New York City and launched a private nonprofit venture capital fund for education.
At the recent Gov 2.0 Summit in Washington, Shelton talked about how customized software for instruction is being used to try to reduce the teacher/student ratio to as close to 1-to-1 as possible. With that ratio down to 15-to-1 now — from 27-to-1 in 1970 — he said the United States is unlikely to see more improvements in the classroom without technology.
He also noted that collecting and analyzing massive amounts of data is taking the guesswork out of understanding how students learn and what teaching methods work best. Using adaptive algorithms, we have the ability to personalize education. And the availability of low-cost devices, broadband access and near-universal connectivity are further driving improvements in education.
Shelton recently spoke with reporter Alice Lipowicz about technology, education and the challenges inherent in his position. The interview has been edited for style, clarity and length.
FCW: What’s the role of the Office of Innovation and Improvement?
Shelton: We do a lot of work with demonstration grants for teacher preparedness, charter schools and the investment innovation fund. We want to stimulate the identification of solutions, drive best practices, and support the ecosystem of research and development.
We define innovation as a solution that is significantly better than the status quo. Technology is going to be a driver for educational innovations as we move forward.
FCW: Building on what you said at the Gov 2.0 Summit, what more can you tell us about how technology is being used to improve education?
Shelton: Technology is being used to help students read and learn better, to connect teachers to resources, and as a platform for research.
No. 1, there is an opportunity to use technology as a support for student assessments. We are using that technology to make informed decisions. Kids can take tests or use learning software online that determines where they are in learning. The districts can buy that software.
Second, we want to use technology to make it easier for teachers to connect to peers and experts. You see how easy it is for students to connect. It should be easy for teachers to go online to meet their needs.
Right now, there is no good technology to help teachers and students personalize instruction. Some of the platforms will be free; others will be provided by states and school districts.
The Gov 2.0 Summit was a great opportunity to hear about the interesting work going on at government agencies and see the available solutions and tools. Some of the Web 2.0 developers have not thought about applying their tools to education yet.
FCW: What can you tell us about the department’s National Education Technology Plan?
Shelton: It’s a federal strategy not only for the department but for the country. It is a blueprint. As for implementing it, part of the responsibility is under my office. We coordinate the IT aspects.
A lot of our work is about coordination and understanding the vision. We work with the White House Office of Science and Technology Policy, Federal Communications Commission, and also with the National Science Foundation and the Defense Department. Technology is deeply embedded in what we do.
We work with the FCC on the E-Rate program, [which funds school and library broadband access through the Universal Service fee charged to companies that provide telecommunications services]. We work on education technology related to the communities, on [science, technology, engineering and math] education programs and professional development.
The question is: How quickly will states and districts move to IT solutions for their problems? What are the top-priority solutions?
Some of this is already happening. California has a network of teachers sharing information. Through technology, students will continue at home and organizations can keep track of performance.
A number of communities have embraced the one-to-one computing idea — one laptop per student — including communities in Maine, Vermont and Virginia. Some folks are pushing the envelope with work on devices and phones. Houghton Mifflin is trying out a large pilot project for teaching with iPads.
FCW: What are the greatest challenges of your position?
Shelton: The hardest part is getting people to take a risk from the current way to the future. They can see the benefits, but they get nervous about making a change. I wind up talking to a lot of folks, both on the demand side with states and school districts and with vendors on the supply side.
We are in an environment where we have to do more and be more efficient. Money is tight and will get tighter.
Technology has turned out to be a way to help with the problem. We can do it in education.
The risks are that if you try something very difficult, it might not work. People are becoming risk-averse.… There needs to be a form of accountability so that people are not penalized for taking risks.
Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week. | <urn:uuid:df2bfa66-02ea-4bb7-b25c-73c40c3a0727> | CC-MAIN-2017-04 | https://fcw.com/articles/2010/10/11/feat-jim-shelton-education-qanda.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958444 | 1,262 | 2.53125 | 3 |
Definition: Find the longest substring common to two or more strings.
See also longest common subsequence, shortest common superstring.
Note: The longest common substring is contiguous, while the longest common subsequence need not be.
Dan Hirschberg's pseudocode as an example of dynamic programming
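A minimal dynamic-programming sketch in Python (an illustration only, not the pseudocode linked above; the function and variable names are my own):

def longest_common_substring(a, b):
    # dp[i][j] = length of the longest common suffix of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best_len, best_end = 0, 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return a[best_end - best_len:best_end]

print(longest_common_substring("longest", "strongest"))  # "ongest" -- contiguous, unlike a subsequence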
Entry modified 2 September 2014.
Cite this as:
Paul E. Black, "longest common substring", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/longestCommonSubstring.html | <urn:uuid:a81a7735-2e97-47e1-9d01-714ec3a85330> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/longestCommonSubstring.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00219-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.837007 | 182 | 3.421875 | 3 |
Listed below are the differences between static and dynamic calls.
Check out the answers.
1)Identified by Call literal.
Ex: CALL 'PGM1'.
2)Default Compiler option is NODYNAM and so all the literal calls are considered as static calls.
3)If the subprogram undergoes change, sub program and main program need to be recompiled.
4)Sub modules are link edited with main module.
5)Size of load module will be large
6)Sub-program will not be in its initial state the next time it is called unless you explicitly use INITIAL or you do a CANCEL after each call.
1)Identified by Call variable and the variable should be populated at run time.
01 WS-PGM PIC X(08).
MOVE 'PGM1' TO WS-PGM
CALL WS-PGM
2)If you want to convert the literal calls into dynamic calls, the program should be compiled with the DYNAM option.
3)By default, call variables and any un-resolved calls are considered as dynamic.
4)If the subprogram undergoes change, recompilation of subprogram is enough.
5)Sub modules are picked up during run time from the load library.
6)Size of load module will be less.
7)Slower compared to a static call.
8)Program will be in its initial state every time it is called.
Can any one tell me what is the difference between Static and Dynamic call with example? Please elaborate in simple terms.
In a static call, the calling program and the called program are physically linked into one load module, so you cannot delete one on its own; you have to submit the program for compilation again.
In a dynamic call, the calling program and the called program are not physically linked together. There are two separate load modules, so one can be deleted without a problem, and you can compile just the one program, link-edit it and submit it.
Performance bottlenecks can lead an otherwise functional computer or server to slow down to a crawl. The term “bottleneck” refers to both an overloaded network and the state of a computing device in which one component is unable to keep pace with the rest of the system, thus slowing overall performance. Addressing bottleneck issues usually results in returning the system to operable performance levels; however, fixing bottleneck issues requires first identifying the underperforming component.
These five bottleneck causes are among the most common:
According to Microsoft, “processor bottlenecks occur when the processor is so busy that it cannot respond to requests for time.” Simply put, the CPU is overloaded and unable to perform tasks in a timely manner.
CPU bottleneck shows up in two forms: a processor running at over 80 percent capacity for an extended period of time, and an overly long processor queue. CPU utilization bottlenecks often stem from insufficient system memory and continual interruption from I/O devices. Resolving these issues involves increasing CPU power, adding more RAM, and improving software coding efficiency.
A memory bottleneck implies that the system does not have sufficient or fast enough RAM. This situation cuts the speed at which the RAM can serve information to the CPU, which slows overall operations. In cases where the system doesn’t have enough memory, the computer will start offloading storage to a significantly slower HDD or SSD to keep things running. Alternatively, if the RAM cannot serve data to the CPU fast enough, the device will experience both slowdown and low CPU usage rates.
Resolving the issue typically involves installing higher capacity and/or faster RAM. In cases where the existing RAM is too slow, it needs to be replaced, whereas capacity bottlenecks can be dealt with simply by adding more memory. In other cases, the problem may stem from a programming error called a “memory leak,” which means a program is not releasing memory for system use again when done using it. Resolving this issue requires a program fix.
Network bottlenecks occur when the communication between two devices lacks the necessary bandwidth or processing power to complete a task quickly. According to Microsoft, network bottlenecks occur when there’s an overloaded server, an overburdened network communication device, and when the network itself loses integrity. Resolving network utilization issues typically involves upgrading or adding servers, as well as upgrading network hardware like routers, hubs, and access points.
Sometimes bottleneck-related performance dips originate from the software itself. In some cases, programs can be built to handle only a finite number of tasks at once so the program won’t utilize any additional CPU or RAM assets even when available. Additionally, a program may not be written to work with multiple CPU streams, thus only utilizing a single core on a multicore processor. These issues are resolved through rewriting and patching software.
The slowest component inside a computer or server is typically the long-term storage, which includes HDDs and SSDs, and is often an unavoidable bottleneck. Even the fastest long-term storage solutions have physical speed limits, making this bottleneck cause one of the more difficult ones to troubleshoot. In many cases, disk usage speed can improve by reducing fragmentation issues and increasing data caching rates in RAM. On a physical level, address insufficient bandwidth by switching to faster storage devices and expanding RAID configurations.
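A quick way to see which of these resources is under pressure is to sample them programmatically. The sketch below uses Python with the third-party psutil library (the tool choice is an assumption — the article does not prescribe one):

import psutil  # third-party: pip install psutil

cpu = psutil.cpu_percent(interval=1)      # % CPU averaged over a 1-second window
mem = psutil.virtual_memory()             # RAM usage
swap = psutil.swap_memory()               # heavy swap use hints at a memory bottleneck
disk = psutil.disk_io_counters()          # cumulative disk reads/writes
net = psutil.net_io_counters()            # cumulative network traffic

print(f"CPU:  {cpu:.0f}% (sustained >80% suggests a CPU bottleneck)")
print(f"RAM:  {mem.percent:.0f}% used, swap {swap.percent:.0f}% used")
print(f"Disk: {disk.read_bytes >> 20} MiB read, {disk.write_bytes >> 20} MiB written")
print(f"Net:  {net.bytes_recv >> 20} MiB received, {net.bytes_sent >> 20} MiB sent")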
The experts at Apica offer load testing and monitoring tools for your business’s online platforms that excel at identifying bottleneck problems that hinder performance. If you’re looking to get the most out of your platforms, contact the experts at Apica today.
The rise of social networks has led to an increase in unstructured data available for analysis, with a large proportion of this data being in text format such as tweets, blog posts, and Facebook posts. This data has a wide range of applications, for example it is often used in marketing to understand people's opinions on a new product or campaign, or to learn more about the target market for a particular brand.
When dealing with large volumes of unstructured text data, it can be difficult to extract useful information efficiently and effectively. There is almost always too much data to read through manually, so a method is needed that will extract the relevant information from the data and summarise it in a useful way.
Topic modelling is one method of doing this: a technique that can automatically identify topics (groups of commonly co-occurring words) within a set of documents (e.g. tweets, blog posts, emails).
An effective topic model should output a number of very distinct groups of related words, which are easily identifiable as belonging to the same subject. For example, if the topic model was trained on thousands of tweets related to diet, one group of words might include "gluten", "glutenfree", "coeliac", "intolerance", which would correspond to a "gluten free diet" topic. Another group of words might be "vegan", "dairyfree", "meatfree", which would represent a "vegan diet" topic.
Latent Dirichlet Allocation (LDA) is one of the most popular approaches for topic modelling, and is what will be discussed here.
The first step is to collect and prepare the documents to be analysed. The text within the documents should be cleaned so that the words that define each topic make sense, and would be relevant only to that topic. Usernames, URLs, symbols and common words (e.g. and, or, I, a, etc.) should all be removed before running the model.
These cleaned documents are then passed to the topic model. The model iterates through all of the words in each document and identifies words that occur together frequently. Every document is iterated over until the model becomes internally consistent (i.e. it does not change how words are allocated to topics during subsequent iterations).
The model outputs lists of frequently co-occurring words in the documents, along with the probability of each word belonging to that list. Each of these lists represents a topic. These topics can be visualised in a way that shows their relative sizes and how distinct they are from one another. This can be helpful in determining the overlap between topics, which may indicate if any of them should actually be merged into a single topic, and which topics are the most common within the documents. However, most of the interpretation of these lists of words into meaningful topics is a manual process and can be difficult if the words in the list are too common, or do not seem to be strongly related to one another.
In addition to summarising groups of documents, topic models can be useful for finding similarity between documents, or finding the relevancy of a document to a particular subject.
Topic modelling can be very powerful, but there are some potential issues with this technique. Firstly, it is computationally expensive, and if there are a very large number of documents it can take a long time to run and it might not even be possible to run them on a common laptop (e.g. 1 million documents, with 1000 topics and 500 iterations can take around 40 hours). These computational limitations can be overcome by using parallelisation techniques (e.g. running the model using multiple processers at once).
In addition, the model requires the number of topics to be specified before it can be run. This can be difficult to do, especially in cases where the documents are unseen and the content of the documents is unknown. A technique called the Hierarchical Dirichlet Process (HDP) has been developed which will select the most appropriate number of topics for a given set of documents, and this method can be used if the desired number of topics is not known in advance.
We will now look at an example with real data, using an LDA model to find what the general topics are within this food conversation on Twitter.
Over 20,000 tweets were collected from people in the UK talking about food and eating. After following the methodology described above using Python 3.5 and the Gensim LDA model, 10 topics were found in the data.
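The core of that workflow looks roughly like the sketch below (a simplified view of the Gensim API; the corpus, cleaning rules and parameters shown are placeholders rather than the exact ones used for this study):

from gensim import corpora, models

# cleaned_tweets: a list of token lists, already stripped of usernames,
# URLs, symbols and stop words as described above.
cleaned_tweets = [
    ["gluten", "glutenfree", "coeliac"],
    ["vegan", "dairyfree", "meatfree"],
    # ... thousands more
]

dictionary = corpora.Dictionary(cleaned_tweets)
corpus = [dictionary.doc2bow(doc) for doc in cleaned_tweets]

# Train LDA with a fixed number of topics (10 topics were used in this study).
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=10, passes=10)

for topic_id, words in lda.print_topics(num_topics=10, num_words=5):
    print(topic_id, words)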
These topics were visualised using a package called LDAvis, which shows the intertopic distance (i.e. how similar or distinct the topics are), and the relative sizes of the topics. The figure below shows the output of the topic model using this visualisation, and it shows the words most strongly associated with each topic. The occurrence of specific words within each topic can also be visualised by selecting a word instead of a topic.
Words related to eating (e.g. eat, eating, eats, ate) were removed from the documents before analysis, as this was one of the words searched for when collecting the data. This means that at least one of these words would likely appear in every topic making interpretation of the topics more difficult.
Topic 1 is the largest topic in the data, and is comprised of people tweeting about what they had to eat the previous day, or what they will eat today. The closest topic to this is topic 6, which is people talking about what they will eat today. Instagram is one of the most relevant words for topic 6, suggesting that this topic could also be interpreted as people sharing photos of their food.
Topic 2 is "eating too much", and is close to topics 3 and 7 which are "hunger" and "need to stop eating" respectively.
Topics 4 ("what people feel like eating") and 5 ("haven't eaten for a while") are very close together in the intertopic distance map, which suggests overlap between people skipping meals and people craving different foods. Pizza, for example, is one of the words with the most overlap between these topics.
Topic 9 is "weight loss/health", and breakfast is the most relevant meal within this topic. There is also a topic around eating with family or friends (topic 10), and again, breakfast is the most relevant meal within this topic. This can be a useful insight, for example companies that produce breakfast foods could use this to drive social engagement online by sharing recipes around healthy, family-friendly breakfasts.
The final topic, number 8, is YouTube videos of food. Animals also feature heavily in this topic, suggesting that a large number of videos about food that are shared on Twitter are about dogs or cats eating.
This example shows how topic modelling can be valuable in helping to understand the themes in the data, how people talk about a particular subject and how the different topics within the documents are related to each other. | <urn:uuid:6c09e66a-bafe-4698-bddc-e3fefb11d8b5> | CC-MAIN-2017-04 | https://www.capgemini.com/blog/insights-data-blog/2016/10/topic-modelling-deriving-insight-from-large-volumes-of-unstructure-4 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00155-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961557 | 1,423 | 3.03125 | 3 |
Computing and simulation methods have had a transformative effect on science and research. From computational biology to the recently coined petascale humanities, the computer sciences are pervasive throughout the academic and research landscape. And as the computer resources become both more powerful and easier to use, this transformative potential is even greater. The biomedical research sector is perhaps particularly illustrative of this on-going paradigm shift.
At a recent conference, prominent experts in biomedicine from government, academia and industry gathered to discuss HPC’s role in transforming bioscience. The event, titled “Current Challenges in Computing 2013: Biomedical Research,” or CCubed, was held earlier this month in Napa thanks to support from Lawrence Livermore National Laboratory and funding from IBM.
The attendees identified two key points:
+ The latest generation of high performance computers has the potential to transform the biomedical field in ways unthinkable just a few years ago.
+ There are further opportunities for accelerating the development of biomedical tools using petaflop class supercomputers.
“Computing is at a tipping point where it can play a much larger role in biomedical research,” said Fred Streitz, director of LLNL’s Institute for Scientific Computing Research and the High Performance Computing Innovation Center. “This is because of the level of computing power we’re now reaching and the fact that the biomedical community is becoming aware of HPC potential.”
Streitz added that “the promise of computing in the biology space was overhyped 10 years ago.” According to him the claims were on-target, but a decade too early. The best systems of the time were outmatched in terms of scale and sophistication. In the last ten years, however, both biology and computing have improved sufficiently to tackle these complex modeling challenges.
The current generation of leadership-class machines has crossed over into new territory by being able to model entire systems and not just parts of a system. For example, scientists can now model organs, like the human heart, beating in almost real-time. “The field is starting to realize they can use computing as a tool in a way they were not able to five years ago,” observed Streitz.
The inflection point does not only apply to computing; the event also emphasized the co-occurring need for data-based insight. The big data aspect of bioscience is the purview of bioinformatics. The two sides – HPC and big data – will inform and shape each other.
As Anna Barker, director, Transformative Healthcare Knowledge Networks, said: “Events like CCubed help advance biomedical research by bringing together eclectic thinkers who offer fresh perspective on how we can begin to manage and analyze this data, and how we can turn it into real knowledge.” | <urn:uuid:3110ccc9-14ff-4043-b0ae-c79f66c77fab> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/30/biomedicine_soars_on_hpc_s_wings/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949302 | 579 | 2.625 | 3 |
If you’re in a technical computer field you should know your binary and hex. Many don’t and they’re secretly ashamed. Others mostly know (or at least they used to) and they have to re-remember the rules whenever they need to use it or explain it to others.
Let’s fix this right now.
The following system will apply to binary, octal, decimal, or hex (as well as any other base). Whenever you need to translate something to decimal (the most common task), simply perform the following steps and you’ll peg it every time.
Starting on the right of the number you’re converting, get the decimal value of that character/number and multiply it by the base (2,8,16) to the power of the exponent of the place you’re on (starting with 0 on the right).
So if your hex number is 3AF, the 'F' is 15 times 16^0, the 'A' is 10 times 16^1, and the '3' is 3 times 16^2.
Or, in English:
Value times base to the exponent (VBE).
Just remember “VBE” and you’re set.
An even faster way is to compute the base to the power of the exponent first, so if you're in the third position from the right, and you're doing hex, then you're at 16^2, which is 256, and then multiply that by your value.
So if your value is A, that’s a 10, so it’s 10 X 256. Super simple.
Just remember VBE, but do the base-to-the-exponent piece first.
If you want/need to visualize this better, try writing it down.
- On the first line write down the base of the number system you’re converting, e.g. 2 for binary, 16 for hex, etc. We’ll call this the base
- On the line below, write out the total range of values available, e.g. 01 for binary, 0123456789ABCDEF for hex, etc. We'll call this the symbol.
- On the next line, under each symbol, put the decimal equivalent of that symbol, e.g. A=10, F=15, etc. We’ll call that the decimal.
- Below that, write out the number you’re trying to convert, e.g. 3CAF (hex). We’ll call that the value.
- Under each value, starting from the rightmost place, label each place starting with zero (0). We’ll call that the exponent.
- Start from the right and look up the value on your symbols line. Find the associated decimal for it. Multiply the decimal number with the base to the power of the exponent.
- Do this for each character you’re converting as you move to the left and add up the results.
That means we start from the right and take the F value. Look that up on the symbol chart to find the decimal, which is 15. Multiply that by the base (16) to the power of the exponent (0) to get: 15 x 16^0. Anything to the power of zero is 1, so it's 15×1, or 15.
For the second value from the right (A), we do the same. Look it up to get the decimal value (10). Multiply that by the base (16) to the power of the exponent (1). So, 16 to the power of 1 is 16, meaning we're doing 10 times 16, or 160.
For the third value from the right (C), look it up to get the decimal value (12). Multiply that by the base (16) to the power of the exponent (2). So, 16 to the power of 2 is 256, and that’s multiplied by 12 for 3072.
For the final value from the right (3), its decimal value is still 3. Just multiply that by the base (16) to the power of the exponent (3). So, 16 to the power of 3 is 4096, and that times three is 12,288.
Now add those all up and you get 15+160+3072+12288, which equals 15,535.
So the formula for each character/number in the value is:
decimal value of the symbol × base^exponent
…and then you just add those all up and that's your result.
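The whole VBE procedure fits in a few lines of Python (a sketch; the built-in int(value, base) is included only to cross-check the result):

def to_decimal(value, base):
    symbols = "0123456789ABCDEF"               # a symbol's decimal value is its index
    total = 0
    for exponent, char in enumerate(reversed(value.upper())):
        decimal = symbols.index(char)          # V: the symbol's decimal value
        total += decimal * base ** exponent    # VBE: value times base to the exponent
    return total

print(to_decimal("3CAF", 16))   # 15535
print(int("3CAF", 16))          # 15535 -- built-in cross-check
print(to_decimal("101", 2))     # 5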
The way this all works is that we're combining larger and larger ranges into the character sets themselves (0-1 for binary, 0-F for hex, etc.) which allows for a higher value to start with (a max of 1 per place with binary vs. a max of 15 with hex). Then you multiply that by your base to the exponent of what place you're on (rightmost, second from the right, etc.).
As you add more and more places (like with a 6-digit hex number) the combination of the range (0-15) being multiplied by the base^exponent (16^6) produces really high numbers. The advantage is that when you use a number system with a higher base (radix) you can store much higher values in far fewer characters.
As an example, let’s say we only have two places to store a number. If we use binary we can only store a maximum value of 3. If we use decimal we can only store a maximum value of 99. But if we use hex we can store a maximum value of 255. Take that to four places and now the values are 15, 9,999, and 65,535 respectively.
1 Here is a really full-bodied explanation for those who need more depth. | <urn:uuid:7fcf56aa-e86b-42c6-bacb-59b2676bb70d> | CC-MAIN-2017-04 | https://danielmiessler.com/study/positional_number_systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00301-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.86642 | 1,224 | 3.34375 | 3 |
Gallium nitride (GaN) is a new type of semiconductor material commonly used for making white light-emitting diodes (LEDs). This material is much more efficient than conventional tungsten filament light bulbs as well as compact fluorescent lamps. The LEDs that are manufactured using GaN are based on thin layers of material grown on other materials such as silicon or sapphire. Electric current is passed into the active region of the LED which emits the light. The GaN LEDs are highly efficient devices which produce an attractive color of light. This light can create a pleasant, healthy atmosphere at home or workplace or any other place where it is installed.
GaN devices are smaller, lighter, and energy efficient but are tough. They also have low sensitivity to ionizing radiation and better stability in some radiation environments. The demand and popularity of GaN LEDs are expected to increase more and more application areas to adopt these devices. The electric and hybrid vehicles will see a rising demand for GaN LEDs as power consumption is critical in these vehicles.
The report on GaN LEDs classifies the market on the basis of technology into market by native substrate, foreign substrate, and GaN LED growth methods. The market by native substrate is further classified into GaN-on-GaN substrate, foreign substrate is subdivided into GaN-on-sapphire substrate, GaN-on-patterned sapphire, GaN-on-silicon substrate, and GaN-on-silicon carbide-on-silicon substrate; and the GaN LED growth methods is further divided into metal-organic vapor phase epitaxy (MOVPE), molecular beam epitaxy (MBE), and hybrid vapor phase epitaxy (HVPE).
The segmentation of the market is based on applications such as residential, industrial, commercial and automobile sector; military, aerospace and defense, and medical sector. The residential market is sub-segmented into indoor and outdoor lightings. The industrial, commercial and automobile sector is sub-segmented into lightings in hospitals, hotels, offices, smartphones, tablets, and laptops, and luxury cars. The military, aerospace and defense area is further segmented into electronic warfare communication, space crafts and satellites, and airplanes and helicopters. The medical sector has sub-segments like implantable medical devices, and biomedical electronics. Segmentation of the market on the basis of geography covers the regions of North America, South America, Europe, Asia-Pacific, and Rest of the World.
Along with the market data, you can also customize MMM assessments that meet your company’s specific needs. Customize to get comprehensive industry standard and deep-dive analysis of the following parameters:
Raw Material/Component Analysis
- In-depth trend analysis of raw materials in competitive scenario
- Raw material/Component matrix which gives a detailed comparison of Raw material/Component portfolio of each company mapped at country level
- Comprehensive coverage of regulations followed in North America (U.S., Canada, and Mexico)
- Fast turn-around analysis of manufacturing firms with response to market events and trends
- Various firms opinion about different components and standards from different companies
- Qualitative inputs on macro-economic indicators, mergers and acquisitions
- Tracking the values of raw materials/components shipped annually in each country
- Pricing data for 2 inch, 4 inch, and 6 inch GaN wafers
1.1 Objective of the study
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
1.4.1 Assumptions (Market Size, Forecast, etc)
2 Executive Summary
3 Market Overview
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss Routed Protocols vs Routing Protocols.
Routed Protocols vs. Routing Protocols
You must know the difference between a “routed” protocol and a “routing protocol”.
A routed protocol is a protocol by which data can be routed. Examples of a routed protocol are IP, IPX, and AppleTalk. Required in such a protocol is an addressing scheme. Based on the addressing scheme, you will be able to identify the network to which a host belongs, in addition to identifying that host on that network. All hosts on an internetwork (routers, servers, and workstations) can utilize the services of a routed protocol.
A routing protocol, on the other hand, is only used between routers. Its purpose is to help routers build and maintain routing tables. The only two routed protocols you should worry about are IP and IPX (although Cisco has dropped IPX from the latest CCNA exam, it is helpful to understand the concepts behind it).
As mentioned above, IP, IPX and AppleTalk are three common routed protocols. The new version of the exam focuses on IP. So what do you need to know about IP other than that it is how all your node-to-node communication will occur?
Make sure you know how to subnet! If you cannot subnet (or are weak in this area), you stand a fairly good chance of failing this exam. Understanding how to subnet will not guarantee that you will pass this exam, but not understanding subnetting will guarantee that you fail.
If you have an IP address and its subnet mask, could you tell the subnet ID of that host, the last useable host on that subnet, the subnet broadcast address, in addition to the number of possible subnets and hosts per subnet? If you feel that you are not strong in subnetting, then you will need to brush up on these concepts. Remember you have roughly a minute per question on the exam. If it takes you more than a minute to figure the above items out, you will not finish the test in the allotted time.
In addition, you will need to know how to recognize a subnet mask in its dotted decimal form (e.g., 255.255.255.240) and by using a bit count (e.g., /28). You should also know which bits must be off and on in the first octet for the various classes of IP addresses (e.g., Class B would have “10” in the first two bits).
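When you are practising (not in the exam room, obviously), Python's built-in ipaddress module is a handy way to check your manual answers — a quick sketch with a made-up address:

import ipaddress

iface = ipaddress.ip_interface("172.16.45.14/28")   # /28 = 255.255.255.240
net = iface.network
hosts = list(net.hosts())

print("Subnet ID:        ", net.network_address)    # 172.16.45.0
print("Subnet mask:      ", net.netmask)            # 255.255.255.240
print("First usable host:", hosts[0])               # 172.16.45.1
print("Last usable host: ", hosts[-1])              # 172.16.45.14
print("Broadcast address:", net.broadcast_address)  # 172.16.45.15
print("Usable hosts:     ", net.num_addresses - 2)  # 14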
The CCNA objectives only require that you know how to configure RIP and IGRP. However, you do need to know about the three classes of routing protocols (distance vector, link state, and hybrid), and which protocol belongs to which class. OSPF is the only link state protocol with which you need to concern yourself, and EIGRP is the only hybrid protocol. Everything else belongs to the distance vector category. Know which protocol has a lower administrative distance (RIP is 120 vs. IGRP's 100), and that static routes normally have a lower administrative distance than both (if you use the defaults, a static route is 1 and a directly connected route is 0).
When configuring RIP or IGRP, make sure that you also know how to turn on the attached networks so that they will start sending and receiving routing updates (network xxx.xxx.0.0). Also remember that IGRP requires the addition of an autonomous system number (AS xx).
Be familiar with the metrics RIP and IGRP use in determining the best path through which to route. RIP for IP only uses hops and IGRP uses Bandwidth, Delay, Reliability, Load, and MTU. But, by default, IGRP only uses Bandwidth and Delay.
Remember that “show ip route” displays the contents of your routing table.
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on real world experience is the best way to cement the CCNA concepts in your head to help you pass your CCNA exam! | <urn:uuid:d534c020-c318-4d1e-8745-0e4d1bf74dae> | CC-MAIN-2017-04 | https://www.certificationkits.com/cisco-routing-protocols/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94548 | 906 | 3.703125 | 4 |
Researchers from security firm Seculert have unearthed a curious piece of backdoor-opening malware.
Once the malware gets installed on a computer and run, it first contacts the C&C server via the HTTP protocol. After that first time, the C&C server instructs it to start communicating with the same IP address and port, but to use a custom-made protocol and to start every communication with (literally) "some magic code".
The malware is instructed to create a backdoor account, giving the attackers permanent access to the machine. Still, they currently don’t seem to misuse it.
“As the malware is capable of setting up a backdoor, stealing information, and injecting HTML into the browser, we believe that the current phase of the attack is to monitor the activities of their targeted entities,” Seculert’s Aviv Raff pointed out, adding that the fact that the malware is capable of downloading and executing additional malicious files might indicate that this is just the first phase of a much broader attack.
It’s also interesting to note that the malware sample they first detected has been on the infected computer for nearly a year, and that most (78 percent) of the several thousands of different entities that they discovered having been targeted since are overwhelmingly located in the UK. | <urn:uuid:58fc427f-e56b-4bc9-84f6-edea135cc33a> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/04/18/backdoor-trojan-uses-magic-code-to-contact-cc-server/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951863 | 265 | 2.625 | 3 |
Securing the Internet with IPsec (Internet Security Architecture)
By Pete Loshin
The IP Security Architecture
The IP Security Architecture, or IPsec, offers an interoperable and open standard for building security into any Internet application. By adding security at the network layer (the IP layer, or layer 3 in the OSI reference model), IPsec enables security for individual applications as well as for virtual private networks (VPNs) capable of securely carrying enterprise data across the open Internet.
IPsec and its related protocols are already being widely implemented in virtual private network products. Despite its growing importance to existing deployed systems, not too many people truly grok IPsec, probably because it is complicated (a solid couple of dozen RFCs describe IPsec and its related protocols--please refer to the list of related RFCs at the end of the article).
Saying that IPsec specifies protocols for encrypting and authenticating data sent within IP packets is an oversimplification, and even obscures IPsec's full potential.
IPsec offers the following security services:
- access control
- connectionless integrity
- data origin authentication
- rejection of replayed packets (a form of partial sequence integrity)
- confidentiality (encryption)
- limited traffic flow confidentiality
Altogether, IPsec provides for the integration of algorithms, protocols, and security infrastructures into an overarching security architecture.
The stated goal of the IP Security Architecture is "to provide various security services for traffic at the IP layer, in both the IPv4 and IPv6 environments." [RFC2401]. This means security services that are: interoperable, high-quality, and cryptographically-based.
The IP security architecture allows systems to choose the required security protocols, identify the cryptographic algorithms to use with those protocols, and exchange any keys or other material or information necessary to provide security services.
How IPsec Works
IPsec uses the Authentication Header (AH) and the Encapsulating Security Payload (ESP) to apply security to IP packets. These protocols define IP header options (for IPv4) or header extensions (for IPv6). Both AH and ESP headers include a Security Parameter Index (SPI). The SPI, along with the security protocol in use (AH or ESP) and the destination IP address, combine to form the Security Association (SA).
The sending host knows what kind of security to apply to the packet by looking in a Security Policy Database (SPD). The sending host determines what policy is appropriate for the packet, depending on various selectors (for example, destination IP address or transport layer ports), by looking in the SPD. The SPD indicates what the policy is for a particular packet: either the packet requires IPsec processing of some sort, in which case it is passed to the IPsec module for processing; or it does not, in which case it is simply passed along for normal IP processing. Outbound packets must be checked against the SPD to see what kind (if any) of IPsec processing to apply. Inbound packets are checked against the SPD to see what kind of IPsec service should be present in those packets.
A second database, called the Security Association Database (SAD), includes all security parameters associated with all active SAs. When an IPsec host wants to send a packet, it checks the appropriate selectors to determine the Security Policy Database security policy for that referenced destination/port/application. If the SPD references a particular Security Association, the host can look up the SA in the Security Association Database to identify appropriate security parameters for that packet.
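Conceptually, the two databases behave like the toy sketch below (Python, with invented field values; a real implementation lives in the kernel and is considerably more involved):

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    spi: int          # Security Parameter Index
    protocol: str     # "AH" or "ESP"
    dest: str         # destination IP address
    transforms: str   # negotiated algorithms for this SA

# SPD: maps traffic selectors to a policy -- bypass, discard, or apply IPsec via an SA.
spd = {("10.0.0.5", 443): ("apply", (0x1001, "ESP", "10.0.0.5"))}

# SAD: every active SA, keyed by (SPI, protocol, destination address).
sad = {(0x1001, "ESP", "10.0.0.5"):
       SecurityAssociation(0x1001, "ESP", "10.0.0.5", "3DES-CBC + HMAC-SHA-1-96")}

def outbound_policy(dest_ip, dest_port):
    action, sa_key = spd.get((dest_ip, dest_port), ("bypass", None))
    return sad[sa_key] if action == "apply" else None   # None: normal IP processing

print(outbound_policy("10.0.0.5", 443))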
Key management is another important aspect of IPsec. Two important key management specifications associated with IPsec are: the Internet Security Association and Key Management Protocol (ISAKMP) and the Internet Key Exchange (IKE).
ISAKMP, a generalized protocol for establishing Security Associations and cryptographic keys within an Internet environment, defines the procedures and packet formats needed to establish, negotiate, modify, and delete Security Associations. It also defines payloads for exchanging key generation and authentication data. These formats provide a consistent framework for transferring this data, independent of how the key is generated or what types of encryption or authentication algorithms are being used.
ISAKMP was designed to provide a framework that can be used by any security protocols that use Security Associations, not just IPsec. To be useful for a particular security protocol, a Domain of Interpretation or DOI must be defined. The DOI groups related protocols for the purpose of negotiating Security Associations--security protocols that share a DOI choose protocol and cryptographic transforms from a common namespace. They also share key exchange protocol identifiers, as well as a common interpretation of payload data content.
While ISAKMP and the IPsec DOI provide a framework for authentication and key exchange, ISAKMP does not actually define how those functions are executed. The Internet Key Exchange (or IKE) protocol, working within the framework defined by ISAKMP, does define a mechanism for hosts to perform these exchanges.
By defining a separate protocol for the generalized formats required to do key and Security Association exchanges, ISAKMP can be used as a base to build specific key exchange protocols. The foundation protocol can be used for any security protocol, and does not have to be replaced if an existing key exchange protocol is replaced for some reason, such as if a security flaw was found in the protocol.
IPsec, IPv4, and IPv6
IPsec provides security services for either IPv4 or IPv6, but the way it provides those services is slightly different in each. IPv4 uses header options: every IP packet contains 20 bytes-worth of required fields, and any packet that has any "special" requirements can use up to 40 bytes for those options. This tends to complicate packet processing, since routers must check the length of each packet it receives for forwarding--even though many of those header options are related to end-to-end functions such as security, with which routers are not otherwise concerned.
IPv6 simplifies header processing: every IPv6 packet header is the same length, 40 bytes, but any options can be accommodated in extension headers that follow the IPv6 header. IPsec services are provided through these extensions.
The ordering of IPsec headers, whether within IPv4 or IPv6, has significance. For example, it makes sense to encrypt a payload with the ESP header, and then use the Authentication Header to provide data integrity on the encrypted payload. In this case, the AH header appears first, followed by the ESP header and encrypted payload. Reversing the order, by doing data integrity first and then encrypting the whole lot, means that the recipient can be sure of who originated the data, but not necessarily certain of who did the encryption.
Part 3: IPsec Protocols and Operations
IPsec Protocols and Operations
One of the fundamental constructs of IPsec is the Security Association, or SA. According to RFC 2401, a "Security Association is a simplex 'connection' that affords security services to the traffic carried by it." SAs provide security services by using either AH or ESP, but not both (if a traffic stream uses both AH and ESP, it has two--or more--SAs). For typical IP traffic, there will be two SAs: one in each direction that traffic flows (one each for source and destination host).
An SA is identified by three things:
- the Security Parameter Index (SPI)
- the destination IP address
- the security protocol in use (AH or ESP)
IPsec defines two modes for exchanging secured data, tunnel mode and transport mode. IPsec transport mode protects upper-layer protocols, and is used between end-nodes. This approach allows end-to-end security, because the host originating the packet is also securing it and the destination host is able to verify the security, either by decrypting the packet or certifying the authentication.
Tunnel mode IPsec protects the entire contents of the tunneled packets. The tunneled packets are accepted by a system acting as a security gateway, encapsulated inside a set of IPsec/IP headers, and forwarded to the other end of the tunnel, where the original packets are extracted (after being certified or decrypted) and then passed along to their ultimate destination.
The packets are only secured as long as they are "inside" the tunnel, although the originating and destination hosts could be sending secured packets themselves, so that the tunnel systems are encapsulating packets that have already been secured.
Transport mode is good for any two individual hosts that want to communicate securely; tunnel mode is the foundation of the Virtual Private Network or VPN. Tunnel mode is also required any time a security gateway (a device offering IPsec services to other systems) is involved at either end of an IPsec transmission. Two security gateways must always communicate by tunneling IP packets inside IPsec packets; the same goes for an individual host communicating with a security gateway. This occurs any time a mobile laptop user logs into a corporate VPN from the road, for example.
The Authentication Header (AH) protocol offers connectionless integrity and data origin authentication for IP datagrams, and can optionally provide protection against replays.
The Encapsulating Security Payload (ESP) protocol provides a mix of security services:
- confidentiality (encryption)
- limited traffic flow confidentiality
- connectionless integrity
- data origin authentication
- an anti-replay service
ESP and AH authentication services are slightly different: ESP authentication services are ordinarily provided only on the packet payload, while AH authenticates almost the entire packet including headers.
AH is specified in RFC 2402, and the header is shown in the figure below (taken from RFC 2402).
The Sequence Number field is mandatory for all AH and ESP headers, and is used to provide anti-replay services. Every time a new packet is sent, the Sequence Number is increased by one (the first packet sent with a given SA will have a Sequence Number of 1). When the receiving host elects to use the anti-replay service for a particular SA, the host checks the Sequence Number: if it receives a packet with a Sequence Number value that it has already received, that packet is discarded.
The authentication data field contains whatever data is required by the authentication mechanisms specified for that particular SA to authenticate the packet. This value is called an Integrity Check Value (ICV); it may contain a keyed Message Authentication Code (MAC) based on a symmetric encryption algorithm (such as CAST or Triple-DES) or a one-way hash function such as MD5 or SHA-1.
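Python's standard library is enough to illustrate how such a keyed MAC is produced. The sketch below follows the HMAC-SHA-1-96 style of ICV (RFC 2404), where the full HMAC is truncated to its first 96 bits; the key and data shown are placeholders:

import hashlib
import hmac

key = b"key-material-negotiated-via-IKE"          # placeholder shared secret
authenticated_data = b"...packet fields covered by the ICV..."

full_mac = hmac.new(key, authenticated_data, hashlib.sha1).digest()  # 160 bits
icv = full_mac[:12]                                                  # truncated to 96 bits

print(icv.hex())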
ESP is specified in RFC 2406, and while similar to AH in many ways it provides a wider selection of security services and can be a bit more complex.
The ESP header format is shown in the figure below (taken from RFC 2406).
The most obvious difference between ESP and AH is that the ESP header's Next Header field appears at the end of the security payload. Of course, since the header may be encapsulating an encrypted payload, you don't need to know what header to expect next until after you've decrypted the payload. Thus, the ESP Next Header field is placed after, rather than before, the payload. ESP's authentication service covers only the payload itself, not the IP headers of its own packet as with the Authentication Header. And the confidentiality service covers only the payload itself; obviously, you can't encrypt the IP headers of the packet intended to deliver the payload and still expect any intermediate routers to be able to process the packet. Of course, if you're using tunneling, you can encrypt everything, but only everything in the tunneled packet itself.
Part 4: Cryptographic Algorithms and Deploying IPsec
Although there is no IPsec without encryption and authentication algorithms, which algorithms you use do not matter all that much--as long as the ones you use are secure. The fact is, IPsec was designed to allow entities to negotiate the appropriate security mechanisms from whatever algorithms each supports, using ISAKMP-based key and SA management protocols.
There is currently some controversy over which algorithms should be used in IPsec, and which should be considered basic parts of any IPsec implementation. The Data Encryption Standard, or DES, has recently proven to be vulnerable to relatively inexpensive brute-force attacks; there is a significant movement to have it deprecated for use in IPsec. At the same time, the US National Institute of Standards and Technology (NIST) is in the process of selecting DES's successor algorithm, the Advanced Encryption Standard or AES.
Implementing and Deploying IPsec
The IPsec specification (found in RFC 2401) states there are several ways to implement IPsec in a host or in conjunction with a router or firewall:
- integration of IPsec into the native IP implementation
- "bump-in-the-stack" (BITS) implementations, inserted between the native IP stack and the network drivers
- "bump-in-the-wire" (BITW) implementations, using an outboard cryptographic processor
Most organizations are likely to buy rather than build their IPsec implementation. VPN vendors usually claim to support IPsec, though some are more interoperable than others. Resources for checking interoperability include:
IPsec continues to evolve as research reveals new tools for security and new threats to security. To stay on top of the latest IETF standards developments, check:
There is no longer any question about whether or not the Internet will be important to your business; it already is. IPsec provides a framework within which you can use the Internet as your own, secure, virtual private network.
IPsec and related RFCs
- RFC 1320 The MD4 Message-Digest Algorithm
- RFC 1321 The MD5 Message-Digest Algorithm
- RFC 1828 IP Authentication using Keyed MD5
- RFC 1829 The ESP DES-CBC Transform
- RFC 2040 The RC5, RC5-CBC, RC5-CBC-Pad, and RC5-CTS Algorithms
- RFC 2085 HMAC-MD5 IP Authentication with Replay Prevention
- RFC 2104 HMAC: Keyed-Hashing for Message Authentication
- RFC 2144 The CAST-128 Encryption Algorithm
- RFC 2202 Test Cases for HMAC-MD5 and HMAC-SHA-1
- RFC 2268 A Description of the RC2(r) Encryption Algorithm
- RFC 2401 Security Architecture for the Internet Protocol
- RFC 2402 IP Authentication Header
- RFC 2403 The Use of HMAC-MD5-96 within ESP and AH
- RFC 2404 The Use of HMAC-SHA-1-96 within ESP and AH
- RFC 2405 The ESP DES-CBC Cipher Algorithm With Explicit IV
- RFC 2406 IP Encapsulating Security Payload (ESP)
- RFC 2407 The Internet IP Security Domain of Interpretation for ISAKMP
- RFC 2408 Internet Security Association and Key Management Protocol (ISAKMP)
- RFC 2409 The Internet Key Exchange (IKE)
- RFC 2410 The NULL Encryption Algorithm and Its Use With IPsec
- RFC 2411 IP Security Document Roadmap
- RFC 2412 The OAKLEY Key Determination Protocol
- RFC 2451 The ESP CBC-Mode Cipher Algorithms
- RFC 2631 Diffie-Hellman Key Agreement Method
Pete Loshin has written a dozen books on networking and the Internet, and is editor of the soon-to-be released "Big Book of IPsec RFCs: Internet Security Architecture" (Morgan Kaufmann 1999). Other books include "TCP/IP Clearly Explained" 3rd edition (Morgan Kaufmann 1999) and "Extranet Design and Implementation" (SYBEX 1998). You can reach him at email@example.com or http://www.loshin.com. | <urn:uuid:30e77fb4-532c-4e2c-8a55-6de7b2dd41bf> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/615901/Securing-the-Internet-with-IPsec-Internet-Security-Architecture.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00449-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907384 | 3,139 | 2.765625 | 3 |
Subnetting is definitely one of the things you need to know inside and out to pass your CCENT 100-101 or CCNA 200-120 exam. You will see pretty straight forward subnetting questions and you will also see scenario based questions that you will need to employ your subnetting skills to determine where the problem resides.
A classic example of such a subnetting question on the CCENT or CCNA exam is one where hosts sit on different subnets and the question states that Host A and Host B cannot communicate. Since the topology shown uses variable length subnet masks, you will need to identify the different subnet ranges, and you will find that one of the hosts is configured with a gateway address that is not within its subnet range even though they are physically connected.
So that is just one of the many examples of why you need to know subnetting inside and out. Additionally, the better you know subnetting, the faster you can get through the questions as you do not want to be struggling trying to figure out subnet ranges as the exam only gives you a limited amount of time and most students only have a few minutes left at the end of their exam.
A quick tip: as the proctor shows you to your testing seat, they will usually hand you two laminated dry-erase sheets and a dry-erase marker to take notes and do your subnetting in place of scrap paper. The proctor will then start the exam session for you, and you have about 15 minutes to answer various survey questions that do not impact your exam. During that time, write down your subnetting charts on the laminated sheets. This way you can quickly refer back to them during the test. It might only save a few minutes, but every minute counts on this exam!
So below we have another classic subnetting question you may see on the exam. Take a look at the network topology below. One of our CCNA certified network administrators has added a new subnet with 17 hosts to the network. Which subnet address/mask should this network use to provide enough usable addresses while wasting the fewest addresses?
Variable Length Subnet Mask (VLSM) Subnetting
Answer A would provide 254 hosts per subnet, so that is too many. The number of hosts needed is 17; therefore the subnet mask should be /27 (255.255.255.224), which provides 32 addresses and 30 usable hosts. In this scenario there are only two answers that could fit, which are B and C.
B is wrong because the network 192.168.0.64/27 is already in use on another subnet attached to the first router. Therefore it cannot be used, and accordingly the correct answer is C.
Answer D is incorrect because /29 (255.255.255.248) provides only 6 hosts per subnet and does not meet the requirement of at least 17 hosts per subnet. Answer E is incorrect because /26 (255.255.255.192) provides 62 hosts per subnet, which, like answer A, is wasteful because it provides far too many.
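If you want to double-check this kind of host math on your own machine, Python's built-in ipaddress module makes it quick; the short sketch below uses a placeholder network (not the exam topology) purely to show how many usable hosts each mask length provides.

```python
# Quick sanity check of usable hosts per prefix length (placeholder network, not the exam topology)
import ipaddress

for prefix in (24, 26, 27, 29):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix} ({net.netmask}): {net.num_addresses - 2} usable hosts")
# Prints 254, 62, 30 and 6 usable hosts, so /27 is the smallest mask that still fits 17 hosts.
```

Obviously you will not have Python in the exam room, which is exactly why memorizing the chart on your laminated sheet matters.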
What is really cool is that when you have your own CCNA lab, you can actually configure the routers to match the topology and cycle through the different options to see what really works. This way it is not simply theory; it becomes real to you when you see these concepts in action!
Sponsored The Internet of Things (IoT) is already changing industries, and nowhere more so than in manufacturing, an industry that has arguably been leveraging connected technologies for the past decade. In this report, Dell looks at how IoT could spell the end of manufacturing as we know it.
The manufacturing industry is facing some huge challenges, from increased competition and ever-falling market prices to stringent regulation and keeping equipment properly serviced. Then there’s also the issue of the current skills shortage in the sector, and how they keep pace with growing consumer demands.
Of course, these challenges are slightly different for each company, and particularly so between process and discrete manufacturers, but ultimately all markets are increasingly being driven by price and availability. The companies that stay relevant stay in business; those that don't are at risk of simply falling away.
Such market pressures have naturally pushed industry observers into looking at new ways of working, and especially what up-and-coming technologies can do to make manufacturers smarter, faster and more efficient. Such technologies, today, include the Internet of Things (IoT), robotics, augmented reality, machine learning and 3D printing.
And yet, despite this demand for new technology, it could be said that the manufacturing industry is no stranger to being ahead of the curve, and this is especially true when looking at the Internet of Things (IoT).
For example, while the idea of smart, digitalized factories is now seen by many as one of the benefits of IoT in manufacturing, it could be argued that this is old news to tech-savvy manufacturers.
Germany’s DFKI told Internet of Business recently how it has been prototyping such designs since 2005 while Smart Factories are already in operation at the premises of Cisco, GE and many others.
Click here to read the report…The End of the (Manufacturing) World As We Know It
Industrie 4.0 and IoT
Germany is arguably the pioneer of much of this change through its Industrie 4.0 initiative, with the Industrial Internet Consortium gaining similar traction in the United States.
Industrie 4.0 is expected to boost productivity across all German manufacturing sectors by €90 billion to €150 billion. Productivity improvements on conversion costs will range from 15 to 25 percent. When the materials costs are factored in, productivity gains of 5 to 8 percent are expected to be achieved.
IoT is heavily ingrained in the initiative and it is easy to see why it is much talked about, because it does offer manufacturers numerous opportunities.
Factories and plants that are connected to the Internet can become more efficient, productive and smarter by gaining more insight on everything from the product to the entire production process. Business decisions can be made in real-time, based on accurate data, and product downtime can be avoided through predictive maintenance.
Some firms, including Konecranes, have even looked to use IoT to reinvent themselves by offering intelligent (premium) products that can be better maintained and thus have a longer life.
IoT in manufacturing is essentially about a future where plant operations are more automated and efficient; processes govern themselves, ‘smart’ products help in their own maintenance, and where individual parts can be automatically replenished without the need for human intervention. And the more automated the plant is, the more efficient and profitable it becomes.
Click here to download Dell’s report, ‘The End of the (Manufacturing) World As We Know It‘. The report details everything from changing business models and streamlined R&D processes to the collection of big data via predictive analytics and establishing an ROI from the Internet of Things. | <urn:uuid:87e17039-ee7b-4fea-a902-fdcd308ae927> | CC-MAIN-2017-04 | https://internetofbusiness.com/report-iot-manufacturers-operate/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00074-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950223 | 753 | 2.515625 | 3 |
Open source technology is simply an evolved class of licensing under which wider, more permissive rights are given to users. Crucially, access to source code is given, enabling user support and development of the code. While the open source philosophy originated in California, with frankly a long-haired approach, the model is now effectively mainstream and competes with conventional closed source licensing models (the 2008 Sun/MySQL and Symbian deals demonstrate the popularity of open source).
Buying and selling open source businesses
Successful technology merger and acquisitions (M&A) is typified by a meeting of minds around value and risk. Some of the unique open source risks include:
- Control over intellectual property rights (IPR) - if the target's products contain third-party open source technology, it is virtually certain that there will be gaps in IPR assurance - open source licenses typically disclaim any IPR non-infringement warranties or indemnities.
- Licence non-compliance or lack of process - as a buyer, it is safer to have a working assumption that the target is unlikely to have a strong licence compliance process and, therefore, breach of licence terms or IPR infringement is more likely to be a material risk for a heavy user of open source.
- Copyleft: the notorious risk - open source is licensed under a range of publicly available licence types which are classed as open source licences because they share a range of characteristics. However, within this class, licences range from simple or benign (BSD) to viral (GPL v2/3). The GPL licence tends to be the most popular form but contains tough obligations. If the user distributes product that contains or is derived from GPL v2 code, that distribution must be made on the terms of the General Public License, Version 2 (GPLv2), which include making the source code available and passing the same rights on to recipients royalty-free. So if you buy a business and want to combine the target's code base with your own, and the target code is GPLv2, this could force the buyer to license its own code on the same open source basis - this is not just legal theory, it happens.
Dealing with open source M&A risk
The conventional due diligence and warranty approach still works but also think about:
- does the target have an open source policy - is it followed?
- can it define the scope of its usage?
- what open source is present, can it be listed?
- has the target had any correspondence with the open source or free software "community" (who actively police open source licences)?
If there is a viral licence which could trigger a copyleft issue then it is vital this is analysed from a legal and technical perspective to see if the buyer's plans for that product are consistent with the open source licence obligation.
Code scanning - the new due diligence
Technical organisations such as Black Duck are now emerging to provide source code scanning services to identify open source and the associated licence terms. Once identified, a risk assessment can be carried out prior to the transaction closing. Code scanners provide an effective way of understanding the nature of core software assets in a target's business and this process sits well alongside traditional IP due diligence.
Making sense of open source
Open source is not inherently risky - it should be treated like any other diligence issue. Provided buyer and seller understand the issues pre-transaction and reflect this in the transaction terms, there is no reason why open source should negatively impact a transaction. However, as in-depth knowledge of open source seems patchy at present, there remains the possibility of problems for the ignorant buyer. | <urn:uuid:987355b2-7c45-4f79-b731-666f5643f421> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240086453/Open-source-in-technology-merger-and-acquisitions | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00560-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931721 | 756 | 2.515625 | 3 |
Temperature monitoring for reduced data center power consumption
Tuesday, Nov 19th 2013
Data center facilities require a significant amount of electricity to power the plethora of technologies and systems utilized for housing data. However, in today's drive toward green IT and reduced power consumption, owners and operators of these facilities seek ways to decrease the amount of energy consumed within their technologically advanced infrastructures.
Energy consumption by the numbers
According to the Federal Energy Management Program, a data center consumes about 100 times more energy than a typical office building. While many may attribute this usage to the increased amount of information technology equipment, energy used to power this hardware can amount to less than 15 percent of total electricity utilization.
Additionally, the Environmental Protection Agency found that data centers comprised 1.5 percent of the total amount of electricity consumed by the United States in 2007. That number was projected to reach 3 percent by 2011. This will create the need for 10 additional power plants in the nation, stated infoTECH contributor Ashok Bindra. Overall, about 10 percent of global power resources are utilized by organizations within the IT sector, stated the Register.
Although many data center operators have worked to improve the energy consumption levels of their facilities, Maxim Integrated stated that most of these efforts have been directed toward IT components and equipment. One often-overlooked energy consumer is the server room cooling system.
Temperature monitoring for reduced energy consumption
Although data center IT components must be kept cool for a variety of reasons, temperature monitoring of these systems can suggest ways to improve a data center's electricity usage. In order to keep servers and other hardware items functioning properly, experts recommend that data center temperatures be kept between 65 and 80 degrees Fahrenheit. The exact range varies, but many agree that keeping the server room between 68 and 75 degrees can prevent systems from overheating.
While it is important to keep servers in the optimum temperature range, a cooling system can be one of the highest energy-consuming systems in a data center. For this reason, temperature monitoring technology can allow data center operators to run at the higher end of the range while still ensuring optimal functionality. A high-temperature alarm can alert key personnel if the interior temperature climbs too high. When systems are maintained at the higher temperature levels in the recommended range, machines are kept in good working condition and less power is consumed by the cooling system, so finding that sweet spot and staying there is imperative.
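As a simple illustration of the alerting idea, here is a minimal sketch assuming a read_temperature() function supplied by your sensor hardware and a notify() hook for your alerting system; the 78-degree threshold is only an example sitting near the top of the recommended range.

```python
# Hypothetical high-temperature alert check; read_temperature() and notify() stand in
# for whatever the sensor hardware and alerting system actually provide.
ALERT_THRESHOLD_F = 78.0  # example threshold near the top of the recommended range

def check_server_room(read_temperature, notify):
    temp_f = read_temperature()
    if temp_f > ALERT_THRESHOLD_F:
        notify(f"Server room at {temp_f:.1f} F, above the {ALERT_THRESHOLD_F} F alert threshold")
    return temp_f
```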
Bindra also stated that sensors can be placed in other parts of the data center facility besides just in the server room. The temperature can be maintained at a higher level in places where IT system cooling is not a factor, further decreasing energy consumption. For example, the lobby and office space of the data center does not need to be the same temperature as the server room. Offering this level of control over the temperature of the facility can improve visibility over power usage as well as opportunities for reducing usage. | <urn:uuid:d2a7e19d-a045-4c64-8b6a-363dd4b15403> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/temperature-monitoring-for-reduced-data-center-power-consumption-541998 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936262 | 594 | 3 | 3 |
A Variable Frequency Drive (VFD) is a type of motor controller that drives an electric motor by varying the frequency and voltage supplied to it. VFDs are also known as variable speed drives, adjustable speed drives, adjustable frequency drives, AC drives, microdrives, and inverters. With rising energy demand and the increasing price of electricity, VFDs are now increasingly used for different applications across various industries. Other factors contributing to this growth include government focus on efficient energy savings and environmental regulations. VFDs are of three different types: mechanical drives, electric drives and fluid drives.
This technology is hardly new, but its uses have increased manifold. Demand is expected to grow in developing countries, as they have huge requirements for infrastructure and energy. Vendors have revenue-generating opportunities in these markets and can expand their existing operations.
Some of the vendors mentioned in the report are Eaton, ABB, GE, Crompton Greaves, Siemens, Mitsubishi and Hitachi.
Reasons to buy this report
1) The report gives complete market insights: the driving forces of the market, the challenges the market faces, and the different VFD technologies and their applications
2) A complete market breakdown by geography gives a detailed picture of the market in each region
3) The report also gives information on major vendors of VFD products: their existing market share, the strategies they adopt, their major products, financials, recent developments and company profiles
Who should be interested in this report?
1) Vendors who manufacture these products, as they can get an overview of what competitors are doing and which markets they can expand into
2) Investors who are willing to invest in this market
3) Consultants who want ready-made analysis to guide their clients
4) Anyone who wants to know about this industry | <urn:uuid:9fd8235f-063a-42d2-befa-b0240fa7fffa> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/variable-frequency-drive-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95772 | 398 | 2.890625 | 3 |
Every year since 2008, Google has assembled a Google Flu Trends study to estimate the severity and the number of cases of influenza that are spreading in many parts of the world.
Since 2008, Google has been looking annually at influenza infection statistics around the world and creating models to estimate how serious flu outbreaks are in a wide swath of nations, including the United States.
But this year, Google is updating its flu data analysis methods for U.S. cases after last year's 2012 to 2013 Google Flu Trends model overestimated the severity of flu cases that actually occurred in this country during that period, according to an Oct. 29 post by Christian Stefansen, a Google software engineer, on The Official Google.org Blog. The models look at the number of Web searches that are conducted by people seeking information about the flu, which Google says are good indicators of flu levels.
"When people get sick, they turn to the Web for information," wrote Stefansen. That connection is what inspired Google, through its Google.org philanthropic arm, to begin compiling its Google Flu Trends
(GFT) studies each year, using real-time, aggregated Google search data, in regions around the world, he wrote.
The overestimate in the Google Flu Trends data for January 2013 was discovered after the estimates were compared to the numbers of actual health care visits for influenza-like illnesses (ILI) reported by the Centers for Disease Control (CDC), which were much lower, wrote Stefansen. The actual CDC numbers for the week of Jan. 13, 2013, showed an estimated flu incidence of 4.52 percent, compared with a Google Flu Trends estimated flu incidence of 10.56 percent, according to a related report by Google.org.
After studying the discrepancies, Google.org experts theorized that the reason for the different CDC and GFT estimates was that "heightened media coverage on the severity of the flu season resulted in an extended period in which users were searching for terms we've identified as correlated with flu levels," wrote Stefansen. "In early 2013, we saw more flu-related searches in the U.S. than ever before."
To correct those inaccurate, higher estimates, the GFT team is improving the model by using peak estimates from the 2012 to 2013 season, which "provided a close approximation of flu activity for recent seasons," wrote Stefansen. "We will be applying this update to the U.S. flu level estimates for the 2013-2014 flu season, starting from August 1st. A casual observer will see that the new model forecasts a lower flu level than last year's model did at a similar time in the season. We believe the new model more closely approximates CDC data."
The Google Flu Trends reports "can help estimate the start, peak, and duration of each flu season—all important information for public health agencies," wrote Stefansen. "This is an iterative process. We will keep exploring how we can build resilience to accommodate the effect of news media. In the meantime, stay healthy!"
The data used in the GFT reports is gathered using IP address information from Google server logs to make a best guess about where queries originated, according to Google. The flu search estimates include search data collected from more than 25 countries, including Australia, Belgium, Canada, Germany, Japan, New Zealand, Poland, Spain, Sweden and Switzerland, in addition to the United States.
In the Northern Hemisphere, the flu season typically spans from November to March, according to Google. In the Southern Hemisphere, the flu season typically spans from May to September. In tropical countries, a strong seasonal pattern may not exist.
In September, Google announced it is launching a health company, called Calico, to fight human aging and disease. Calico will work to find ways to improve the health and extend the lives of human beings, according to Google. Many of the details behind the new operation, however, have not yet been announced, including just what that goal means and how Google will take on its mission in these areas.
Calico wasn't the first health care-related initiative undertaken by Google. Back in 2008, Google launched its Google Health initiative, which aimed to help patients access their personal health records no matter where they were, from any computing device, through a secure portal hosted by Google and its partners, according to earlier eWEEK reports. Google Health eventually shut down in January 2013.
Intelligence augmentation used to be something people accomplished with old-fashioned schooling, that is, until Ray (The Singularity is Near) Kurzweil came along. If you buy into his vision of the future, human intelligence will soon be vastly accelerated with technology plug-ins in which people become computer-corporeal hybrids. All the scientists need to do is model the human brain, scale it up with computer technology, and then figure out how to interface it to our own gray matter.
The Singularity Summit in San Francisco this week covered this very topic (among others), with most of the press coverage focusing on our chances of simulating the workings of the brain inside a computer.
The hardware itself should be relatively straightforward. Priya Ganapati's article in Gizmodo reports that Kurzweil believes we will need a computer with 3.2 petabytes of memory and at least 36.8 petaflops of performance to simulate the human brain. A supercomputer of that size is probably just three to four years away.
The tough part is the software, which will have to encapsulate how the mind processes information. “The objective is not necessarily to build a grand simulation – the real objective is to understand the principle of operation of the brain,” said Kurzweil. He goes on to say that only a million lines of code or so will be required to implement this.
Here’s how that maths works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil… About half of that is the brain, which comes down to 25 million bytes, or a million lines of code. | <urn:uuid:212b0aae-9bdb-4ba2-91ec-52d8d7121a54> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/08/17/modeling_the_brain_within_reach_say_scientists/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00394-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92722 | 399 | 3.21875 | 3 |
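The arithmetic behind that quote can be traced roughly as follows; note that the bytes-per-line figure at the end is our own illustrative assumption, since Kurzweil's quote supplies only the endpoints.

```python
# Rough trace of the quoted estimate (the 25-bytes-per-line figure is an assumption for illustration)
base_pairs = 3_000_000_000
bits = base_pairs * 2                  # two bits per base pair, about 6 billion bits
raw_bytes = bits // 8                  # roughly 750 million bytes (the quote rounds this to about 800 million)
compressed_bytes = 50_000_000          # Kurzweil's estimate after removing redundancy
brain_bytes = compressed_bytes // 2    # about 25 million bytes attributed to the brain
lines_of_code = brain_bytes // 25      # on the order of a million lines
print(raw_bytes, brain_bytes, lines_of_code)
```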
The drilling industry is evolving with the advent of new technologies and practices. The need for new drilling technologies and practices results in redefined expectations for drill bits. A drill bit cuts into the rock while an oil or gas well is being drilled. The drill bit is a rotating apparatus located at the tip of the drillstring, below the drill collar and the drill pipe. It usually consists of two or three cones made of the hardest of materials (usually steel, tungsten carbide, and/or synthetic or natural diamonds) and sharp teeth that cut into the rock and sediment below.
A boring apparatus is generally equipped with drill bits, which directly or indirectly help in crushing or cutting rock. The bit sits at the bottom of the drillstring and must be changed when it becomes too dull. Most bits work by scraping or crushing the rock, or both, generally as part of a rotational movement. Other bits, known as hammer bits, pound the rock vertically in much the same way an air hammer works on a construction site.
Important key components of drill bits are as follows:
- The cutting or boring element used in drilling oil and gas wells.
- Most bits used in rotary drilling are roller-cone bits.
- The bit consists of the cutting elements and the circulating element.
Drilling engineers choose drill bits according to the type of formations encountered, whether or not directional drilling is required, the temperatures expected, and whether well logging is being done. There are different types of drill bits. Steel Tooth Rotary Bits are the most common type, while Insert Bits are steel tooth bits with tungsten carbide inserts. Polycrystalline Diamond Compact Bits use synthetic diamonds attached to the carbide inserts. Diamond Bits, which are 40 to 50 times stronger than steel bits, have industrial diamonds implanted in them to drill extremely hard surfaces. Additionally, hybrids of these types of drill bits exist to tackle specific drilling challenges.
Scope of the report: This study provides the current global market size of drill bits and forecasts till 2019. The report includes detailed qualitative and quantitative analysis as well as a comprehensive review of major market drivers, restraints, opportunities, winning imperatives, and key issues of the global drill bits market. The market is segmented on the basis of geography, which includes regions like Asia-Pacific, Europe, Africa, Middle East and Americas.
On the basis of application:
- Core Bits
- Mill Bits
- Fishtail Bits
On the basis of material type:
- Tungsten Carbide
- Synthetic or Natural diamonds
On the basis of product type:
- Steel Tooth Rotary Bits
- Insert Bits
- Diamond Bits
- Hybrid Bits
On the basis of geography:
- Middle East
1.1 KEY TAKEAWAYS
1.2 REPORT DESCRIPTION
1.3 MARKETS COVERED
1.4 RESEARCH METHODOLOGY
2 EXECUTIVE SUMMARY
3 MARKET OVERVIEW
3.1 MARKET DYNAMICS
4 DRILL BITS MARKET, BY APPLICATIONS
5 DRILL BITS MARKET, BY MATERIAL TYPE
6 DRILL BITS MARKET, BY GEOGRAPHY
6.2 MARKET BY GEOGRAPHY
6.2.3 NORTH AMERICA
6.2.4 SOUTH AMERICA
7 DRILL BITS MARKET: COMPETITIVE LANDSCAPE
7.1 MARKET SHARE: BY COMPANIES
8 DRILL BITS MARKET: DEVELOPMENTS
8.1 MARKET DEVELOPMENT: BY COMPANIES
9 DRILL BITS MARKET: BY COMPANIES
9.1 BAKER HUGHES
9.2 DRILL MASTER INC
9.3 ELEMENT SIX
9.4 ESCO CORPORATION
9.6 NATIONAL OILWELL VARCO
9.8 TORQUATO DRILLING ACCESSORIES
9.9 KING DREAM PUBLIC LIMITED
9.10 VAREL INTERNATIONAL ENERGY SERVICES LIMITED
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement | <urn:uuid:f75efe8b-2282-481e-ad4f-01996d85361c> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market-report/drill-bits-reports-5289389398.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.839268 | 911 | 3.171875 | 3 |
No one ever suspected, when relatively new, largely untested voting machines were mandated as the primary way to vote in many states and districts, that anything untoward could ever happen to the data. No one predicted the machines would break down, lose votes, lose some voter credentials, survive voting day intact but corrupt the data afterward, or have their data monkeyed with by partisans interested in skewing elections their own way using voting machines that often produced no paper or other record the voter and local election officials could check for fraud. No one person predicted any of those things; every single person with any concern about the electoral process and/or knowledge of how vulnerable those darn smart computers can be to simple manipulation predicted them all. Every single observer was able to point out some major flaw in the plan except the people committed to foisting the machines on the rest of us without any system of backups or fact-checking. Just to demonstrate that they were all right, a security testing lab in Illinois showed how easily the Diebold Accuvote (?!) can be tampered with, according to a story today in Salon quoting the testers at the Vulnerability Assessment Team at Argonne National Laboratory in Illinois.
We already knew the machines could be hacked remotely using viruses that would give hackers easy and automated control over the results. Princeton University ran penetration tests on the machines in 2006, two years after the machines were brought into widespread use as an alternative to the "hanging chad" controversies in the 2000 presidential election. As with most viruses, this one can jump from one machine to another, infecting a whole bank of voting machines, according to the Princeton study. There's even a video demonstration. The new method is a man-in-the-middle attack that waits until a voter has confirmed that the machine is displaying all his or her votes correctly, then blanks the screen for a second while the votes are supposedly being written into storage. Actually, a homemade bug intercepts the signal and replaces it with one programmed into it or sent via remote control from a short distance away. The bug uses a microprocessor that cost $1.29, an $8 circuit board and an (optional) $15 remote control. "When the voter hits the 'vote now' button to register his votes, we can blank the screen and then go back and vote differently and the voter will be unaware this has happened," Argonne researcher Roger Johnson said on the video as he and fellow researcher Jon Warner demonstrate several ways to take over and replace, or simply alter, results using a bug they refer to only as "alien electronics," to keep the secret technique from everyone but other hackers. Who knew someone could get access to a sophisticated computer system and cause it to do things its owners and administrators preferred it wouldn't? The administration of George W. Bush clearly didn't do enough to secure the systems, and the Obama administration hasn't taken up the slack. It looks like it will be up to President Sabu and Vice President Topiary to fix the machines so no one else can tamper with them after they fix the election, too.
Poor website performance has a direct impact on revenue. Even just a few seconds of delay on an ecommerce website can lead to shopping cart abandonment and a decline in the conversion rate. DNS (Domain Name System) allows users to find and connect to websites, and can be a hidden source of latency.
What is DNS and how does it work?
DNS maintains a catalog of domain names, such as “internap.com”, and resolves them into IP addresses. Anyone that has a presence online uses DNS – it’s required in order to access websites.
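To get a feel for how much time name resolution adds from the client's side, you can simply time a lookup. The sketch below is only a rough illustration (local DNS caching will skew repeat runs), and the hostname is just the example domain mentioned above.

```python
# Rough client-side timing of a DNS lookup; caching resolvers will make repeat runs look faster.
import socket
import time

hostname = "internap.com"
start = time.perf_counter()
info = socket.getaddrinfo(hostname, 443)     # triggers the DNS lookup
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Resolved {hostname} to {info[0][4][0]} in {elapsed_ms:.1f} ms")
```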
Sample use case: Ecommerce
Let’s look at a growing ecommerce site that recently expanded into a new market in Europe. Following the expansion, the company noticed that users were experiencing unacceptable levels of latency while connecting to their site. During the past month, users had to wait up to 10 minutes before they were able to reach the site.
The company had been handling its DNS needs through its ISP up to this point. Possible causes of such high latency include:
- DNS name servers may not be in close geographic proximity to a large percentage of users, and routing table errors could be misdirecting requests to name servers that are physically far away from the user.
- Network congestion may contribute to slow resolution of DNS queries, resulting in high wait times to connect to the site.
- Poor performance can also be caused by hardware failure at one of the name server nodes, and without an active DNS failover in place, this can keep some users from accessing the site.
To prevent these issues from affecting your business, we recommend a Managed DNS Service to support the performance needs of today’s websites.
In our presentation, DNS: An Overlooked Source of Latency, you will learn:
- Factors that affect webpage load times
- Important DNS features and functions
- Different types of DNS solutions available
View the presentation here. | <urn:uuid:a8635259-85c0-43a2-8314-119e0dba6d18> | CC-MAIN-2017-04 | http://www.internap.com/2015/03/27/dns-overlooked-source-latency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00294-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954615 | 396 | 2.609375 | 3 |
Temperature monitoring helps researchers track ocean conditions
Friday, Mar 8th 2013
In research settings involving animals or other forms of life, temperature monitoring is paramount to make sure that external environmental conditions are just right for them to thrive.
This is true in a global context as well, which is why rising ocean temperatures worry so many scientists. In order to better track global water life conditions, researchers are increasingly turning to state-of-the-art water sensors and temperature monitoring equipment to get a better sense of how the oceans are changing and what effects that could have on the planet's ocean-based life forms.
In particular, scientists in the Northeastern United States are leading this charge, as the effects of rising ocean temperatures have been especially acute in this region. According to the National Oceanic and Atmospheric Administration's Northeast Fisheries Science Center, temperatures from the Gulf of Maine to Cape Hatteras in North Carolina reached their highest recorded levels during the first half of last year.
From January to June, the average sea temperature along the Eastern Seaboard was more than 51 degrees Fahrenheit, the NEFSC reported. In comparison, average temperatures during the previous 30 years hovered around or below 48 degrees F.
"A pronounced warming event occurred on the Northeast Shelf this spring, and this will have a profound impact throughout the ecosystem," Kevin Friedland, a scientist in the NEFSC's Ecosystem Assessment Program, said last September. "Changes in ocean temperatures and the timing of the spring plankton bloom could affect the biological clocks of many marine species, which spawn at specific times of the year based on environmental cues like water temperature."
How accurate is this data?
Although an onshore temperature sensor can provide scientists with some useful information, the data needed by the NEFSC and other researchers to make accurate regional calculations involves the use of remote temperature monitoring equipment.
In particular, the Penobscot Bay Pilot reported that scientists studying conditions in the Gulf of Maine use a buoy-based system to collect data. For added convenience, the researchers could leverage an alert-based system in which data is automatically sent from each temperature sensor to a computer or mobile device. By consistently collecting data over years and decades, scientists can create an accurate assessment of regional water temperature trends and what they mean for life in the area.
What rising sea temperatures mean for wildlife and humans
Although the shift noted by NEFSC and other researchers working in the area may not seem like much, rising temperatures are already having a noticeable effect on fish populations in the region and on fishermen. For example, Friedland said that Atlantic cod, which used to be abundant throughout the coastal waters along the Northeastern U.S. coast, is now found almost exclusively in the colder waters in the Gulf of Maine and off the coast of New Brunswick and Nova Scotia.
These trends have also affected shipping conditions, energy usage and recreational habits in this part of the world as well. For example, a nuclear power facility in Connecticut was forced to shut down operations last summer because ocean water used to cool the reactor was too warm, the Penobscot Bay Pilot reported. | <urn:uuid:aabfe948-6517-4ecb-8b36-84acbca047e4> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/temperature-monitoring-helps-researchers-track-ocean-conditions-401617 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93939 | 636 | 3.3125 | 3 |
The detection ability of a device should be the primary focus of any IPS testing. Because evasion techniques are evolving every day, it is imperative that IPS devices have the ability to detect well-known exploits and new variants using advanced obfuscation techniques. Detection testing is usually performed using penetration testing tools or public exploits.
A key factor in successful testing is the level of knowledge of the person conducting the test. For example, tools like Core Impact and Metasploit contain modules that scan for the vulnerable service before sending an attack; if the target system is not vulnerable, an attack may not be sent. To further complicate the issue, certain IPS devices may fire exploit-specific signatures on this probe, regardless of the fact that nothing malicious has occurred. If this scenario is not recognized, the results of the test may be invalid. These testing processes apply as well to Layers 3, 4, and 7; a high level of knowledge and experience is crucial. Test results will be invalid unless confirmation exists that the exploit functions normally with the evasion techniques in place.
The goal of detection testing is to identify a device’s baseline detection and resistance to evasion. Proper testing consists of creating several test cases for the vulnerability being examined and vulnerable systems for test use. Traffic should be captured at each step to facilitate future testing and verification.
The first step is to choose a non-evasive, well-known version of the attack. Ensure that the attack functions correctly and exploits the vulnerable system without the IPS device monitoring the traffic. This step establishes the baseline attack (the simplest version of the attack). The second step involves changing the shell code used in the attack. This step helps confirm that the IPS is providing protection for the vulnerability as opposed to a specific variant of the attack. Once again, capture and test the attack against a vulnerable system without the IPS inline.
The third step is to add Layer 7 (Layers 3 and 4 if possible) obfuscation to the attack. Adding layers of obfuscation is the most important part of detection testing; the more complex the techniques, the better the test. You may want to create several obfuscation cases if the protocol allows several layers of obfuscation. The final step in testing is to combine all of the evasion and obfuscation methods into the attack.
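One convenient way to keep these cases organized is to describe the matrix in data before running anything; the sketch below is purely illustrative bookkeeping (the labels are descriptive and contain no exploit logic), showing the four steps applied to a single vulnerability.

```python
# Illustrative test matrix for one vulnerability, mirroring the four steps described above.
test_cases = [
    {"name": "baseline",            "shellcode": "default", "obfuscation": []},
    {"name": "alternate shellcode", "shellcode": "variant", "obfuscation": []},
    {"name": "layer-7 obfuscation", "shellcode": "default", "obfuscation": ["gzip", "chunked encoding"]},
    {"name": "combined evasions",   "shellcode": "variant", "obfuscation": ["gzip", "chunked encoding", "ip fragmentation"]},
]
for case in test_cases:
    print(case["name"], case["obfuscation"])
```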
These four steps allow insight into the type of coverage a device provides for a specific vulnerability, as well as its ability to inspect obfuscated traffic. By testing several attacks using the same protocol, the shortfalls of the IPS device's ability to inspect the protocol will be illuminated. For example, if a vendor only detects the "out of the box" and nonfragmented variants of a Microsoft Remote Procedure Call (MSRPC) vulnerability, then that particular IPS may not have an engine capable of properly parsing fragmented (Layer 7) MSRPC traffic.
Figure 1 illustrates the Metasploit Project's ie_xp_wmf exploit without obfuscation.
Figure 1. Metasploit ie_xp_wmf Exploit—No Obfuscation
Figure 2 illustrates the same exploit using the gzip, chunked encoding, and header injection obfuscation techniques.
Figure 2. Metasploit ie_xp_wmf Exploit—Multiple Obfuscation Techniques
The MSRPC protocol is also easily obfuscated. It is possible to obfuscate the bind attempt by sending several bind context ids in the initial bind request; however, an IPS device must monitor the traffic to see which binds were successful.
Figure 3 illustrates an example of bind obfuscation using multiple UUIDs.
Figure 3. Bind Obfuscation Using Multiple UUIDs
It is also possible to connect to a non-vulnerable interface, have the bind accepted, and issue an alter context command to the server, switching the attack to the vulnerable interface. All of this can also be fragmented at the application level, making detection extremely difficult. In some cases it is also possible to run MSRPC on top of SMB (all of the MSRPC evasions are still valid), allowing fragmentation at the SMB level and hiding the MSRPC headers.
Figure 4 illustrates an example of MSRPC fragmented by means of SMB.
Figure 4. MSRPC Fragmented on SMB Level
All of these examples can also be further obfuscated using standard IP evasion and obfuscation techniques. If a device can handle all of these techniques, it is properly inspecting the entire protocol, providing much more protection than a device that cannot inspect the protocol.
Many of the currently available IPS testing tools use replay traffic. The primary testing issue when using replay tools is the quality of the captured traffic. Sample issues, such as incomplete streams, packets with bad checksums, and out of order packets, can all create problems during testing. These issues can easily be fixed with open source tools similar to the netdude tool.
The most common problem occurs when capture does not include a sample of a real exploit. Capture samples are often created by vulnerability scanners that scan the services and look at banners or version strings. Normally these tools do not send any malicious traffic and should not be used when testing an IPS. Additionally, you must also ensure that the pcap file contains a copy of the entire attack. Depending on how a device engine is designed, some signatures may fire on partial attacks that do not properly represent the attack in a real world environment in that they would not exploit a vulnerable system.
Issues can occur when replaying samples at speeds other than the speed at which they were captured. If you replay traffic at a different speed than the captured speed, some detection and protection algorithms may not work correctly. Threshold-based or behavioral algorithms need to see the traffic at the same speed it was originally sent. In addition, TCP uses timestamps for Protection Against Wrapped Sequence Numbers (PAWS), and accelerating the traffic may cause time dilation to become a problem. Depending on the tool, the inability to react as a real network stack to changes in the network may result in unreliable testing.
If a replay tool can retransmit missing packets, it may retransmit the original packet, unlike a real network host stack. Retransmission can increase congestion on the network and cause a positive feedback loop. The pass or fail criteria of the tool must also be considered. Tomahawk, for example, replays packets and expects to receive the packets unmodified and in the same order as sent. If an IPS device does active TCP/IP normalization, it may reorder IP fragments and TCP segments to protect against evasions. Depending on the tool, this may be incorrectly reported as a failure or a block.
The most reliable form of testing for coverage is live testing. A baseline test case with a vulnerable machine should be set up and attacked with a working exploit. Verify that the exploit has the desired effect (a shell opened back to the attacker, or a crash of the service, for example), then reset the vulnerable machine to the pre-exploited state. Next, place the IPS under test between the attacker and the vulnerable host and attempt the exploit again. This process should be repeated for each of the detection testing steps.
VMware (or another virtual system that supports snapshots) is an excellent tool for hosting the vulnerable machines. If the exploit's desired action occurs and the traffic is passed through a properly configured IPS device, then the attack was not detected. The problem with this approach is that it can become very time consuming when evaluating more than one IPS device.
A fast and reliable approach to IPS testing is a hybrid of these two methods. By creating a pcap file for each attack without the IPS device monitoring the traffic, we create perfect samples for replay testing. Care must be taken to ensure you capture the entire attack and that the attack was successful. It is recommended to use a closed network so that the traffic sample is as clean as possible. Next, replay the traffic samples against the IPS device and note any that were not detected or blocked. You should see an inverse correlation between detection and obfuscation. All of the attacks that were not detected need to be manually verified using live testing. Manual verification eliminates any potential problem introduced by using a traffic sample and replay tool and provides most of replay testing's time-saving benefits.
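Dedicated tools such as tcpreplay are normally used for the replay step, but as a rough illustration of the idea, the sketch below uses Scapy to resend a previously captured attack toward the device under test. The interface name and capture file name are assumptions, and the capture should be the verified, closed-network baseline described above.

```python
# Illustrative replay of a verified attack capture toward the IPS under test (assumes Scapy is installed).
from scapy.all import rdpcap, sendp

packets = rdpcap("attack_baseline.pcap")   # full, verified attack captured on a closed network
sendp(packets, iface="eth1", inter=0.01)   # resend toward the IPS at a modest rate on the test interface
```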
Craig Williams (firstname.lastname@example.org)
Security Research and Operations
This document is part of Cisco Security Research & Operations.
This document is provided on an "as is" basis and does not imply any kind of guarantee or warranty, including the warranties of merchantability or fitness for a particular use. Your use of the information on the document or materials linked from the document is at your own risk. Cisco reserves the right to change or update this document at any time. | <urn:uuid:69aa1288-8931-45ea-bb17-497d9e8f9b0f> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/security-center/ips-testing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913513 | 1,795 | 2.6875 | 3 |
In order for a computer to connect to a network, a component known as a network interface card is required to provide the physical connection to the local network media. Computers normally have some form of expansion card containing the network interface controller, plugged into one of a number of different types of computer bus. Most desktop computers to date have had a network card that fits into a PCI slot. PCI stands for Peripheral Component Interconnect and was originally developed by Intel, but it is now a standard used by almost all manufacturers that produce peripheral devices for connectivity with a computer.
As Ethernet has been the dominant Data Link Layer standard for Local Area Networks for some time, Ethernet network interface cards have been produced in their millions to allow computers to communicate, through hubs, optical switches and routers, with other computers. These network cards cost as little as a few pounds or dollars and allow computers to communicate across an internetwork and be identified by means of a unique physical address known as a MAC address.
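As a small aside, on a Linux machine you can list the installed adapters and the MAC address assigned to each directly from the /sys filesystem; the sketch below assumes a Linux host and is only illustrative.

```python
# List network interfaces and their MAC addresses on a Linux system (the /sys path is Linux-specific).
import os

for iface in sorted(os.listdir("/sys/class/net")):
    with open(f"/sys/class/net/{iface}/address") as f:
        print(iface, f.read().strip())
```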
Most people refer to such a network device as a network interface card, Ethernet Card or Fiber Optic Network Adapter. Copper Ethernet cards have an RJ45 female connector, into which a male connector on an Ethernet Patch Cable is connected; fiber optic network adapters use optical connectors instead.
Any network adapter will require drivers, which are software used by the computer to communicate with, and control the actions of, the adapter. Most modern computers will automatically detect a new device and apply the correct drivers, or prompt for the location of the drivers, which may be on a disk supplied with the card in the case of a PCI network card.
Although a lot of modern computers actually have the NIC built into the motherboard, there is still a need for network cards that plug into a computer as there are so many computers still functioning on networks that do not have motherboards with that capability.
At present, there are many manufacturers and it is difficult to recommend just one. The easiest way to purchase a cheap network card or fiber optic network adapter is certainly over the Internet from a specialist online retailer.
In the popular Douglas Adams BBC radio series “The Hitchhiker’s Guide to the Galaxy,” hyper-intelligent mice paid a lot of money to create Earth, a giant computer simulation disguised as a planet. Their goal over a ten million year program was to determine the Question to Life, the Universe, and Everything. (The answer, of course, was 42.)
Although Adams meant it as fiction, the universe might not.
Everything can be modeled. This attitude is partially what drives advancements in high performance computing. Physicists are currently working on modeling the strong nuclear forces holding together the quarks and gluons that constitute neutrons and protons, also known as quantum chromodynamics.
These exact models are tiny in scale, measuring in the femtometer range (femto = 10^-15). The hope is to expand that precision out to micrometers, allowing for the precise modeling of living cells. All of this will presumably be made possible by the relentless advance of Moore's Law.
So what if our universe is simply a gigantic, several-billion year old computer model?
The notion has existed as somewhat of a philosophical curiosity since the advent of computers. The idea of expanding the current model of a few femtometers to the wide ranges of the universe shouldn’t sound so ridiculous, especially for an engineer that would be older than the universe itself.
According to a team from the University of Bonn in Germany, headed by Silas Beane, cosmic ray detection may enable us to determine if we do, in fact, exist inside of a computer model. They presented their findings in a paper published earlier this month.
The primary assumption relied on by this hypothesis is that a model must utilize a three-dimensional grid, or lattice, from which point the model could be partitioned and run in parallel. This grid places certain limits on the model in that nothing can be smaller than the lattice. In this case, if the universe were the thing to be modeled, that would indicate a limit on the energy of particles.
Studying the Cosmic Microwave Background led to the discovery of such a limit, called the Greisen-Zatsepin-Kuzmin limit.
“The most striking feature of the scenario,” the paper says, “in which the lattice provides the cut off to the cosmic ray spectrum, is that the angular distribution of the highest energy components would exhibit cubic symmetry in the rest frame of the lattice.”
In essence, the GZK limit in concert with existence inside a computer model would lead to a phenomenon in physics where cosmic rays prefer a certain orientation, or angular distribution, in order to attain “symmetry in the rest frame of the lattice.”
That statement offers something testable: the angular distribution of cosmic rays. If cosmic rays exhibit some preference in orientation, that orientation could imply a modeling axis, the existence of which would be a step in determining our digitalization.
According to the paper, such a confirmation would simply be the first in a long checklist. “Of course, improvement in this context masks much of our ability to probe the possibility that our universe is a simulation.”
However, according to the researchers, the universe is finite (it may expand faster than the speed of light but it is still finite), which for the them means that the model’s volume is finite and the spaces between potential model grid lines are non-zero.
Whatever the answer to this model is, it is likely to be more complex than 42. | <urn:uuid:c68a325a-ec00-47c0-9a97-e5a7e402add4> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/10/10/could_the_universe_reveal_itself_as_a_computer_simulation_/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942469 | 741 | 3.75 | 4 |
In a lot of areas today, you can sit in the park with your laptop or tablet and connect to the Internet. The reason that you can do this is because several carriers have set up an array of wireless routers known as hotspots. Your wireless device connects to the Internet over a Wi-Fi connection. As you move from one end of the park to the other, your device senses the next closest wireless router and establishes a new connection in a matter of milliseconds. As you move from one area to another, the Wi-Fi enabled device keeps a constant connection to the Internet. This is Wi-Fi roaming. While this is very convenient, it also has the potential of allowing unwanted access to your wireless device.
The Wireless Broadband Alliance (WBA) was established in 2003. It was founded as a global forum for the wireless broadband ecosystem. According to the Institute of Electrical and Electronics Engineers (IEEE) 802.16-2004 standard, broadband means “having instantaneous bandwidths greater than 1 MHz and supporting data rates greater than about 1.5 Mbit/s.”
Wireless broadband is technology that provides high speed wireless Internet access or computer networking over a large area. The Wireless Broadband Alliance is a group that is driving the next generation Wi-Fi experience. Their goal is to enable seamless, secure Wi-Fi roaming and data offload for operators.
In Singapore, the WBA announced on December 18, 2012 an initiative that will streamline the way WBA members work together on a common set of technical and commercial frameworks for Wi-Fi roaming. Some of the key global players that have confirmed their participation include AT&T, Boingo Wireless, China Mobile, NTT DoCoMo and several others.
The initiative is called the Interoperability Compliance Program (ICP). The ICP will make it easier for operators globally to work together on a common set of technical and commercial frameworks for Wi-Fi roaming.
Public Wi-Fi is becoming essential for mobile connectivity. Unfortunately, there are no standards and consistency in the way that wireless devices connect and roam on to Wi-Fi networks. The ICP will help operators overcome these challenges on a global level. The operators will work together to align guidelines on security, data offload, device authentication, network implementation, network selection, charging models and billing mechanisms.
Promoting and advocating a common set of requirements and procedures for Wi-Fi roaming will make it easier for operators to enter into roaming agreements. The operators will have a much better understanding of how to integrate their networks to support roaming.
The CEO of the WBA, Shrikant Shenwai, is quoted as saying, "With public Wi-Fi emerging as a key component of operators' offerings, it has never been more essential for the WBA to encourage interoperability and collaboration with the Wi-Fi community. Our new ICP provides a framework for operators to assess their own network capabilities and make it easier to create bilateral Wi-Fi roaming agreements."
The need for this can be seen by the increase in Wi-Fi usage this past year. China Mobile saw a 102.5 percent increase in Wi-Fi traffic in 2012. Japan’s NTT DoCoMo plans to grow its hotspots by as much as 105 times before the end of the year. There is a steady growth in Wi-Fi enabled devices and that by default creates a steady growth in the need for Wi-Fi roaming.
Edited by Brooke Neuman | <urn:uuid:54b56fbf-13e8-48d2-8e7c-8a78d504ed25> | CC-MAIN-2017-04 | http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2012/12/18/320136-wi-fi-roaming-guidelines.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942407 | 709 | 2.578125 | 3 |
Big data: it’s getting even bigger. And with it comes the need for more big data storage.
In 2012, EMC estimated the size of the digital universe (all the data created and used) that year to be 2,387 exabytes (EB), and it predicted that by 2020, that number would increase to 40,000 EB—about 5,200 gigabytes per person.
That data explosion is apparent in everyday things, like smartphones, and in less obvious but still critical processes, such as the conversion of medical records from paper to digital, the vast transmission and data processing demands generated by manufacturing, and the increasing sharing of sophisticated simulation and 3D models. Clearly, the demand for data storage capacity must increase to keep up with the data explosion. Therefore, promoting and developing storage center efficiency through improving performance and reducing power consumption and costs is critical.
Data centers consume vast amounts of energy in the course of handling all of those transactions—most of it to cool the facility. Virtualization and intelligent software to manage the servers will help, but the reality is that the heat load is still present, and the tools for increasing utilization only create headroom for processing even more data. Making data centers more energy efficient will go a long way to meeting the ever-growing demand for increased capacity. Ensuring that the cooling systems are reliable and easily monitored, even remotely, will further improve efficiency.
This is easier said than done, of course. Often, conflict exists between IT, facilities and financial/business decision-makers—simply because of the inherent conflicts in their job-related objectives as well as divergent opinions about the data center decision process.
Obviously, risk aversion is a big factor in operating a data center. Even though the server manufacturer might warrant its equipment at server inlet temperatures exceeding 100°F, it would be difficult to convince a data center operator to raise cold-aisle temperatures even as high as 80°F.
Conversely, financial managers will be anxious to reduce operating expenses—something that raising temperatures will do—but these managers must also factor in service-level agreements with clients. Even if the financial managers can be convinced that the risk is unchanged, the wealth of conflicting information on the subject will make convincing clients a real challenge. And of course, manufacturers of data center cooling equipment will naturally present a case that their solution is the best and will often use proprietary research to justify their position.
The best data center solutions, then, are found when facilities, IT and financial managers work together. Clearly, finding an objective and validated solution that puts everyone at ease is a necessary step in achieving the goal of improved efficiency.
Steps Toward a Solution
Fortunately, the U.S. National Science Foundation (NSF) has created a program designed to identify, validate and advance the state of the art in energy-efficient data center design. This program, called the NSF-I/UCRC (National Science Foundation-Industry/University Cooperative Research Centers) on Energy Smart Electronic Systems, combines the research capabilities of four universities (Binghamton University, University of Texas at Arlington, Villanova University and Georgia Tech) with the industry experience of 24 companies operating in the IT and data center environment.
Unlike programs funded by a single manufacturer or company, the NSF program offers the potential for an unbiased source of information. Research is currently underway in the areas of cooling-system control, board- and chip-level cooling, particulate contamination effects, waste-heat recovery, outside-air cooling, evaporative cooling and filtration of outside air.
Innovations in Data Center Cooling Systems
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has proposed that data centers operate at elevated server-inlet temperatures, with a goal of encouraging the use of outside air, or evaporative cooling, as the most efficient means of air-based cooling.
But traditional evaporative cooling methods can present challenges. This method, because it avoids the use of compressors or chillers, consumes 70% less energy than traditional air conditioning—a big reduction in operational expense. But through this process, outside air is passed over a wetted pad to transfer heat, resulting in a much higher relative humidity that has earned this method the nickname “swamp cooling.” That term alone will drive end users away from this very effective cooling method.
Furthermore, some users are uneasy about the potential for particulate contamination. ASHRAE recommends a level of air filtration that is easily achieved in most HVAC equipment (a “MERV 8” level), but many users remain hesitant to trust this recommendation.
To address these concerns, some manufacturers have developed indirect methods of cooling air using evaporative cooling that will reduce the temperature without adding moisture. The indirect method is slightly less efficient than the direct method but still consumes a fraction of the energy that a typical compressor-bearing cooling system might consume. Data from a typical HVAC-equipment manufacturer’s catalog indicates that an indirect evaporative cooling system such as Aztec will use about a third of the energy compared with a similar-size air-cooled rooftop unit or chiller system. Going a step further to employ outside air for cooling can reduce the energy use to less than a quarter of that required by conventional systems. Progressive companies that have already deployed these technologies can justifiably claim PUEs of under 1.1 on a regular basis.
Because evaporative cooling and fresh-air cooling can be so much more efficient, they are the primary types of cooling under research and validation testing as important elements of the current NSF-I/UCRC program. Mestex, a division of Mestek, has provided its specialized Aztec evaporative cooling system to the NSF for research managed by the University of Texas at Arlington Engineering Department. The Aztec system has been installed on a small data pod in Dallas, Texas. Evaporative cooling technologies are generally considered impractical for use in a hot and humid climate like Dallas, so this research can establish whether it is a viable solution in areas that had previously not considered it.
Real-Time Monitoring and Remote Access
Although the NSF research began in earnest only in January of this year, it has already seen interesting results. The Aztec unit and 120 servers, donated by Yahoo, in the four cabinets of this research pod (donated by Verizon) are being monitored in real time by the on-board digital control system. This data will soon be augmented with the addition of a DCIM software package from CommScope’s iTracs division. In the meantime, the current monitoring and control software is displaying real-time PUE figures that have ranged from 1.03 to 1.37—a sharp contrast to the average performance measures of roughly 2.0 for most data centers in the US.
One of the keys to achieving these results while still maintaining cold-aisle temperatures within ASHRAE class-A1 conditions has been the control strategies. Since the Aztec digital control system allows web-based access to over 60 data points (out of the almost 300 points being monitored), researchers in remote locations can view and even trend those points and look for patterns that suggest changes to improve performance further.
A real-life example of the value and importance of remote monitoring is apparent in a biopharmaceutical company that develops and manufactures life-saving drugs and that has just received a patent for a breakthrough drug. Similarly, food service and distribution centers must monitor temperatures in warehouses and storage facilities to comply with FDA regulations, particularly with the new requirements of the Food Safety Modernization Act.
The bottom line is in many industries that require temperature sensitive warehousing—food service, pharmaceuticals and data centers alike—remote monitoring and controls are a critical component of strong, safe supply chains.
Open Access Improves Research
Even the design of the user interface will ultimately become a benefit from this research, as information is only truly useful when it is easily interpreted and becomes actionable. To further that additional benefit from the NSF research, Mestex created the “Open Access Project.” This project allows any data center owner, operator or client to log onto webctrl.aztec-server-cooling.com and watch how an evaporative cooling unit performs on the research pod in Dallas. Because this is a research site is intended to benefit all data center designers, and because commercial and private data centers seldom allow access to details of their operations, visitors to the site can see for themselves how evaporative and outside-air cooling can perform under the variety of weather conditions in Dallas, Texas, over the next year.
The Optimal Outcome
It’s clear that the work of the NSF, in collaboration with industry leaders, demonstrates that the best solutions for energy-efficient data centers will include the following elements:
- Energy efficiency (which may include evaporative cooling)
- Scalability, or “plug-and-play” options, that allow the data center to grow as the industry grows, without the need for retrofitting or other additional expenses
- Vendor-neutral controls to link disparate vendor equipment and/or building-automation systems
- Real-time monitoring and remote access
- A clear interface that provides context for the user
In our increasingly data-driven world, collaborative development and information-sharing between key industry players and researchers, as seen in the NSF program, will drive development of a new level of “best practices” for data center design. The demand for data is here to stay; these efforts enable a possible solution to the ever-increasing demand for cooling energy to be documented and shared with all designers and users for the benefit of the industry—and, really, every person whose life is touched by the digital world.
Leading article image courtesy of cbowns
About the Author
Michael Kaler is president of Mestex. Mestex, a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, including Aztec evaporative cooling systems, which are especially suited for data center use, as well as Applied Air, Alton, Koldwave, Temprite and LJ Wing HVAC systems. The company is a pioneer in evaporative cooling and has led industry innovation in evaporative cooling technology for more than 40 years. | <urn:uuid:8538db7d-dc29-4489-a9de-d5a858283f46> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/open-access-practices-energyefficient-data-centers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932339 | 2,170 | 2.796875 | 3 |
Cloud Computing: 15 Ways Python Is a Powerful Force on the Web
The Python programming language has gained popularity as one of the components of the LAMP (Linux, Apache, MySQL and Python/Perl/PHP) stack. Python has seen a resurgence in programmer interest, and dynamic languages such as Ruby and Python have emerged as alternatives to languages like Java and C#. And the popularity of software such as the Google App Engine, the Django Web framework and the Zope application server has made the language more attractive to developers. This slide show looks at some of the things that make Python a standard for Web application development. | <urn:uuid:62046442-b821-40b3-8197-8ed7aa91ec92> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Cloud-Computing/15-Ways-Python-Is-a-Powerful-Force-on-the-Web-275427 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932649 | 126 | 2.765625 | 3 |
A jet airliner generates 20 terabytes of diagnostic data per hour of flight. The average oil platform has 40,000 sensors, generating data 24/7. In accordance with European Union guidelines, 80 percent of all households in Germany (32 million) will need to be equipped with smart meters by 2020.
Machine-to-machine (M2M) sensors, monitors and meters like these will fuel the Internet of Things. M2M is now generating enormous volumes of data and is testing the capabilities of traditional database technologies. In many industries, the data load predictions of just 12 to 24 months ago have long been surpassed. This is creating tremendous strain on infrastructures that did not contemplate the dramatic increase in the amount of data coming in, the way the data would need to be queried, or the changing ways business users would want to analyze data.
To extract rich, real-time insight from the vast amounts of machine-generated data, companies will have to build a technology foundation with speed and scale because raw data, whatever the source, is only useful after it has been transformed into knowledge through analysis. For example, a mobile carrier may want to automate location-based smartphone offers based on incoming GPS data, or a utility may need smart meter feeds that show spikes in energy usage to trigger demand response pricing. If it takes too long to process and analyze this kind of data, or if applications are confined to predefined queries and canned reports, the resulting intelligence will fail to be useful, resulting in potential revenue loss.
Investigative analytics tools enable interactive, ad-hoc querying on complex big data sets to identify patterns and insights and can perform analysis at massive scale with precision even as machine-generated data grows beyond the petabyte scale. With investigative analytics, companies can take action in response to events in real-time and identify patterns to either capitalize on or prevent an event in the future. This is especially important because most failures result from a confluence of multiple factors, not just a single red flag.
However, in order to run investigative analytics effectively, the underlying infrastructure must be up to the task. We are already seeing traditional, hardware-based infrastructures run out of storage and processing headroom. Adding more data centers, servers and disk storage subsystems is expensive. Column-based technologies are generally associated with data warehousing and provide excellent query performance over large volumes of data. Columnar stores are not designed to be transactional, but they provide much better performance for analytic applications than row-based databases designed to support transactional systems.
Hadoop has captured people’s imaginations as a cost-effective and highly scalable way to store and manage big data. Data typically stored with Hadoop is complex, from multiple data sources, and includes structured and unstructured data. However, companies are realizing that they may not be harnessing the full value of their data with Hadoop due to a lack of high-performance ad-hoc query capabilities.
To fully address the influx of M2M data generated by the increasingly connected Internet of Things landscape, companies can deploy a range of technologies to leverage distributed processing frameworks like Hadoop and NoSQL and improve performance of their analytics, including enterprise data warehouses, analytic databases, data visualization, and business intelligence tools. These can be deployed in any combination of on-premise software, appliance, or in the cloud. The reality is that there is no single silver bullet to address the entire analytics infrastructure stack. Your business requirements will determine where each of these elements plays its role. The key is to think about how business requirements are changing. Move the conversation from questions like, “How did my network perform?” to time-critical, high-value-add questions such as, “How can I improve my network’s performance?”
To find the right analytics database technology to capture, connect, and drive meaning from data, companies should consider the following requirements:
Real-time analysis. Businesses can’t afford for data to get stale. Data solutions need to load quickly and easily, and must dynamically query, analyze, and communicate M2M information in real-time, without huge investments in IT administration, support, and tuning.
Flexible querying and ad-hoc reporting. When intelligence needs to change quickly, analytic tools can’t be constrained by data schemas that limit the number and type of queries that can be performed. This type of deeper analysis also cannot be constrained by tinkering or time-consuming manual configuration (such as indexing and managing data partitions) to create and change analytic queries.
Efficient compression. Efficient data compression is key to enabling M2M data management within a network node, smart device, or massive data center cluster. Better compression allows for less storage capacity overall, as well as tighter data sampling and longer historical data sets, increasing the accuracy of query results.
Ease of use and cost. Data analysis must be affordable, easy-to-use, and simple to implement in order to justify the investment. This demands low-touch solutions that are optimized to deliver fast analysis of large volumes of data, with minimal hardware, administrative effort, and customization needed to set up or change query and reporting parameters.
Companies that continue with the status quo will find themselves spending increasingly more money on servers, storage, and DBAs, an approach that is difficult to sustain and is at risk of serious degradation in performance. By maximizing insight into the data, companies can make better decisions at the speed of business, thereby reducing costs, identifying new revenue streams, and gaining a competitive edge.
Don DeLoach is CEO and president of Infobright. Don has more than 25 years of software industry experience, with demonstrated success building software companies with extensive sales, marketing, and international experience. Don joined Infobright after serving as CEO of Aleri, the complex event processing company, which was acquired by Sybase in February 2010. Prior to Aleri, Don served as President and CEO of YOUcentric, a CRM software company, where he led the growth of the company’s revenue from $2.8M to $25M in three years, before being acquired by JD Edwards. Don also spent five years in senior sales management culminating in the role of Vice President of North American Geographic Sales, Telesales, Channels, and Field Marketing. He has also served as a Director at Broadbeam Corporation and Apropos Inc. | <urn:uuid:ffc8bc75-efb2-4dc4-be3e-4a5cbdf889bf> | CC-MAIN-2017-04 | http://data-informed.com/modernizing-m2m-analytics-strategies-internet-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00211-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937343 | 1,325 | 2.609375 | 3 |
Cloud computing has become completely ubiquitous, spawning hundreds of new web based services, platforms for building applications, and new types of businesses and companies. However, the freedom, fluidity and dynamic platform that cloud computing provides also makes it particularly vulnerable to cyber attacks. And because the cloud is a shared infrastructure, the consequences of such attacks can be extremely serious.
With funding from DARPA, researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to develop a new system that would help the cloud identify and recover from an attack almost instantaneously.
Typically, cyber attacks force the shutdown of the entire infiltrated system, regardless of whether the attack is on a personal computer, a business website or an entire network. While the shutdown prevents the virus from spreading, it effectively disables the underlying infrastructure until cleanup is complete.
Professor Martin Rinard, a principal investigator at CSAIL and leader of the Cloud Intrusion Detection and Repair project, and his team of researchers aim to develop a smart, self-healing cloud computing infrastructure that would be able to identify the nature of an attack and then, essentially, fix itself.
The scope of their work is based on examining the normal operations of the cloud to create guidelines for how it should look and function, then drawing upon this model so that the cloud can identify when an attack is underway and return to normal as quickly as possible.
“Much like the human body has a monitoring system that can detect when everything is running normally, our hypothesis is that a successful attack appears as an anomaly in the normal operating activity of the system,” said Rinard. “By observing the execution of a “normal’ cloud system we’re going to the heart of what we want to preserve about the system, which should hopefully keep the cloud safe from attack.”
Rinard believes that a major problem with today’s cloud computing infrastructures is the lack of a thorough understanding of how they operate. His research aims to identify systemic effects of different behavior on cloud computing systems for clues about how to prevent future attacks.
“Our goal is to observe and understand the normal operation of the cloud, then when something out of the ordinary happens, take actions that steer the cloud back into its normal operating mode,” said Rinard. “Our expectation is that if we can do this, the cloud will survive the attack and keep operating without a problem.” | <urn:uuid:86dc377e-54ee-4e99-9839-48f00afd9a7d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/02/28/researchers-work-on-self-healing-cloud-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00211-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950038 | 501 | 3.421875 | 3 |
As transistors approach the limits of miniaturization, the rapid pace of progress in the microprocessor industry is destined to start declining unless researchers are successful in innovating alternative designs. The best and brightest minds of our time are hard at work developing the next generation of microprocessors so essential to supercomputers, handheld devices and worldwide communication systems. One of the researchers dedicated to extending these Moore’s law returns is Emre Salman, an assistant professor of computer and electrical engineering at Stony Brook University.
With funding from the National Science Foundation (NSF), Salman, who also directs Stony Brook’s Nanoscale Circuits and Systems (NanoCAS) Laboratory, is refining an approach called heterogeneous three-dimensional (3-D) integration. This emerging technology in which multiple wafers are stacked vertically has the potential to consume less power and provide higher performance than current two-dimensional chips.
“Today’s typical electronic system on a circuit board consists of multiple chips connected with wires that are at the millimeter and centimeter scale,” he explains. “These bulky connections not only slow down the circuit, but also consume power and reduce the reliability of the system.”
In 3-D technology, discrete chips, called tiers, are stacked on top of each other prior to being packaged. “Vertical connections that achieve communication among the tiers are now in the micrometer scale, and getting even shorter with advances in 3-D manufacturing technology, thereby consuming less power and providing more performance,” Salman explains. “Essentially, 3-D technology enables higher and heterogeneous integration at a smaller form factor.”
The approach is not without its challenges, like getting the multiple planes to work in harmony as a single unit. Researchers have spent more than a decade working to develop 3-D chip technology, but Salman says that most projects have focused on high performance and fairly homogeneous chips, such as microprocessors.
The 2011 edition of the International Technology Roadmap for Semiconductors (ITRS), which provides guidance for the field, says that, “the third phase and long term application of 3-D technology includes highly heterogeneous integration, where sensing and communication planes are stacked with conventional data processing and memory planes.”
This longer-term approach is what Salman and his team are focused on. It would extend the three-dimensional domain from high performance computing to relatively low power systems-on-chip (SoCs), which have capabilities beyond the scope of traditional general purpose processors. The result is the integration of multiple functions, including sensing, processing, storage and communication into a single 3-D chip. In other words, a single 3-D chip would have the ability to sense, process and store data using advanced algorithms, and then wirelessly transmit the data to another location.
“Numerous applications exist in health care, energy efficient mobile computing, and environmental control, since a smaller form factor can be achieved at lower power while offering significant computing resources,” he says. “Our fundamental objective is to develop a reliable 3-D analysis and design platform for these applications which will host future electronics systems that are increasingly more portable, can interact with the environment, consume low power, yet still offer significant computing capability.”
Salman is the recipient of an NSF Faculty Early Career Development (CAREER) award, which provides $453,809 in funding for the project over five years. The CAREER program recognizes junior faculty with promising careers who promote the integration of education and research. | <urn:uuid:59df860e-a443-4589-b8ef-30bea7a6d23e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/02/06/researcher-advances-heterogenous-3d-chip-design/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935677 | 736 | 3.0625 | 3 |
The Basic Curve The basic probability curve looks like an anthill. Here, the X axis represents potential outcomes from worst to best, going left to right. The Y axis represents the probability of those outcomes, from lowest to highest, going bottom to top. The highest point on the curve indicates the most likely outcome of the risk. The best case falls at the far right and worst case far left, both with the lowest probabilities of occurrence.
A steeper, narrower curve (the red line) represents more certainty about the outcome, since more potential outcomes fall in a smaller range. A low, broad curve (the blue line) represents less certainty about a risk’s potential impact on a project. With this understanding, you can determine the likelihood of potential risk outcomes with a quick look at a distribution chart.
The Optimistic Curve While steepness of the curve indicates certainty, its tilt describes relative outlook. A risk distribution that tilts to the right represents a more optimistic outlook, since the higher probability results are closer to the best possible outcome.
The Pessimistic Curve On the other hand, a curve that leans to the left shows a more pessimistic view of the risk, since there’s more probability that the outcome will fall on the worst-case side of the spectrum. | <urn:uuid:625825f4-8fe1-4ca7-bedf-c5131d6c14bb> | CC-MAIN-2017-04 | http://www.cio.com/article/2441973/risk-management/understanding-probability-curves.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00331-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9203 | 261 | 3.765625 | 4 |
If you’ve ever been tasked with any responsibility involving fonts, you might be all too aware of the pains fonts can bring. Unless the person before you documented what font they used and what size, you can spend a lot of time trying to match fonts or settling for “close enough.”
If you’re updating a Flash menu or just trying to add another line to a flier, or even if you’ve seen a poster somewhere and wondered what font they used WhatTheFont could be very useful to you. It can tell you the possible names of fonts that are in use for an image.
Whether you have to take a screenshot, scan it, or use your camera, any image in GIF, JPEG, TIFF, or BMP format will work.
All you have to do is upload an image or paste in the URL to the image and one step later, you’ll get the site’s best guess at the font name. I took a screenshot from a website (first I zoomed in to increase the size of the text) and then created a gif file to upload to their server. Your image should be focused on the words and have less than 100 characters.
The site will parse the image you uploaded and try to figure out the font on a letter by letter basis. In order to do this, it needs you to tell it which character is which. The means of doing this is really simple. It will show you a segment of the picture with a particular letter in black and the rest all faded out. In the text box next to each image of a letter, you type what letter it is. You’ll need to do this for each character it parses.
If a character has a separate spot like the dot of an ‘i’ or a colon like in my example, it will parse each separate instance as an individual character. To correct this, just drag one of the images onto the other, the site will then merge the two. After that, just type in the character in the text box. After all characters are filled in just hit the ‘Search’ button at the bottom.
The results. That’s all there is to it. The site will give you it’s best guess as to the font name.
(The first guess, Trebuchet MS, was correct.)
If you disagree with the results and know it is a different font than the one they guessed, you can go through the process again. In order to get better results, try to use a better, clearer image of the font.
Check out WhatTheFont for all your font identifying needs.
Bonus, they have a new look for their website coming out. The interface is quite improved and really helps explain the service better. Check the new look out.
Double bonus, if your interest is piqued in fonts check out the documentary on the titular Helvetica. | <urn:uuid:1c07ac7a-b15f-4450-8576-5d42770458be> | CC-MAIN-2017-04 | https://www.404techsupport.com/2009/01/whatthefont-find-out-the-name-of-fonts/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00451-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944103 | 610 | 2.84375 | 3 |
It’s hard to keep up with the 2Gs and the 4Gs and the XLTEs of the world. This primer will review the basics of mobile phone network technologies.
Cellular technology is what mobile phone networks are based on, and it’s the technology that gave mobile phones the name “cell phones”. Cellular technology basically refers to having many small interconnected transmitters as opposed to one big one.
The other main concept of cellular technology was that they were “multiple access”, meaning that they placed multiple voice or data connections into a single radio channel.
- GSM is the original 2G standard launched in 1991. It was the first major deployment to use encryption, and now enjoys over 90% marketshare and penetration into around 220 countries
- The GSM protocol was initially based on time division, meaning calls take turns using the radio signal
- The 3G rollout of GSM actually used the competing technology (CDMA, see below)
- GSM is the worldwide option, with CDMA being mostly a United States thing
- It’s easier to travel with GSM phone
- GSM phones allow you to simply switch SIM cards and use another GSM device, whereas CDMA phones require you to register your device itself with the network
- More of the world is covered in GSM, but it’s bad in rural locations compared to CDMA (this is why Verizon had better coverage in the U.S. for so long)
- CDMA is mostly a U.S. thing, as most of the world is GSM
- CDMA is based on encoding multiple connections with different keys and then decoding them on the receiving end
- Two main carriers are CDMA-based: Verizon, and Sprint
- You’ve traditionally been unable to talk and use data on CDMA networks at the same time, but this is increasingly becoming less true now (in 2014)
- Better rural coverage, which is good for for large places like the United States, but less overall coverage worldwide
This was the first generation of GSM, and it was an analog technology.
2G stands for “second generation”, and the nG designation continues that onto 4G that exists today.
- 2G was digital (rather than analog)
- 2G introduced encryption
- There were GSM and CDMA versions of 2G
- Introduced higher transfer rates, up to 200 kbit/sec, and later versions could achieve multiple megabits per second.
- The major advance of 4G is mobile broadband internet services provided to external systems, such as laptops, wireless modems, etc.
HSPA (High Speed Packet Access)
HSPA is a merge of two technologies:
- High Speed Downlink Packet Access (HSDPA)
- High Speed Uplink Packet Access (HSUPA)
- Stands for Long Term Evolution
- Often called 4G LTE
- Increases bandwidth available for voice and data communications by using a different radio interface combined with a number of network improvements
- It’s the upgrade path for both GSM and CDMA based networks
- Advanced Wireless Services
- Also referred to as UMTS band IV
- Uses microwave frequencies in two segments: from 1710 to 1755 MHz for uplink, and from 2110 to 2155 MHz for downlink
- Provides a minimum of double the bandwidth of LTE
- XLTE ready devices automatically access both the 700 Mhz and the AWS spectrum in XLTE cities
- Lead by Verizon in 2014
- VoLTE is a voice technology that works over the LTE data connection rather than 3G voice bands
- It has extremely high voice quality (like they’re right next to you)
- VoLTE will require that both participants are using VoLTE and are in VoLTE enabled areas
- Also includes the ability to make video calls
- Wi-Fi calling lets you call to a phone number over the internet
- It’s different from VoLTE because with VoLTE the calls are going over the phone company’s network
- Also promises the ability to swap seamlessly between Wi-Fi and wireless phone networks
- GSM came before CDMA
- GSM is the global technology
- CDMA is better for large, rural areas, and was adopted by Verizon and Sprint in the United States
- CDMA has traditionally not been able to do voice and data at the same time
- GSM has been able to do simultaneous voice and data since 3G
- GSM authenticates SIM cards
- CDMA authenticates the device itself
- GSM is better for world travelers
- LTE brought very high bandwidth to mobile devices, hotspots, and peripherals
- XLTE is faster LTE
- VoLTE sends voice data over the data portion of the phone connection, and is like voice quality
- Wi-Fi calling lets you route regular phone calls over a wi-fi connection, and will let you swap back and forth between internet and your phone network
I hope this has been helpful. | <urn:uuid:a6ef797a-385f-4c0e-8252-e30e413b67ba> | CC-MAIN-2017-04 | https://danielmiessler.com/study/cellular/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94881 | 1,063 | 3.390625 | 3 |
Creating test material for computer forensic teaching or tool testing purposes has been a known problem. I encountered the issue in my studies of Computer Forensics at the University of Westminster. We were assigned a task to compare computer forensic tools and report results. Having already analysed test images by Brian Carrier (http://dftt.sourceforge.net) over and over again, I found myself creating images manually, which appears to be the best and only way of doing this. One of my lecturers, Sean Tohill, confirmed this is indeed the case and a test image generator is long overdue.
The need for such a tool is twofold. In educational setting, the problem of plagiarism can be mitigated by giving each student an individual image to analyse. In application quality testing, one of the tests should be to feed several similar but not identical images to the forensic tool, and compare results, which should be identical.
Designing and writing such a tool became my MSc dissertation project, which I have now completed, and Mr Tohill became the project supervisor. One of the outcomes was an application, which creates images based on a scenario defined by the user. Each image representing the scenario is slightly different but they should all be equal in complexity, allowing their use in education and software testing.
This article describes the project and introduces the resulting application, which I have released under GPL for anyone to use or modify. The tool is available on Github (https://github.com/hannuvisti/forge.git). It is guaranteed to work on Ubuntu 12.04 but other linuxes are probably ok as well as long as they have /dev/loopX devices for loopback mounts.
Design principles and choice of tools
The original design had two objectives: to create forensic test images based on scenarios with a random element added, and to design the tool in such a way that it can be easily extended without modifying existing code. This led to the following set of requirements:
- Create NTFS file systems – FAT12/16/32 would have been another option but NTFS was chosen due to personal interest and detailed timestamp handling.
- Provide web browser based user interface to create scenarios, initiate image creation and display image contents.
- Modular design – file system code and actual data hiding methods should be outside the main processor loop. Their interface must be documented to allow addition of new file systems and data hiding methods.
- Implement several different data hiding methods to provide a proof of concept.
- Allow timeline management in scenarios – design the application to protect against timeline contamination due to file system operations by the application.
- Provide means of automatically creating variance between images
NTFS suits well to object-oriented programming due to its design. Due to time constraints, programming languages and tools were not thoroughly evaluated. After a short period of tests and prototyping, I chose Python 2.7.x as the programming language over Java, mainly because of Java lacking unsigned variables.
User interface and database connectivity was built with Django. Django is a complex framework but its built-in “admin” interface gave the database administration part used in scenario design without programming effort.
Building a scenario
The only requisites and preparatory actions are database initialisation – in practice inserting file system and data hiding method information to the database for modularity purposes – and uploading of raw material files. The application provides means to complete these tasks. Database initialisation should be done once after initial installation. Raw files can be uploaded any time and shared between scenarios. Raw files are categorised to “trivial files” and “secret files”. Trivial files are used as bulk to populate the file system with irrelevant information. These files are categorised automatically by their kind; picture, audio, executable, document etc. Secret files are the ones used in data hiding methods. The user must assign a numerical “group” to these files, for reasons that will become apparent later.
The core of a scenario in ForGe is called a case. Case defines file system level parameters. Currently only NTFS is supported, FAT is already in the pipeline. Each case can create several images that all fulfil the overall scenario but are not identical.
Trivial strategies instruct the creator how to build the bulk or uninteresting part of the image. There can be as many trivial strategies in a case as the scenario requires but at least one trivial strategy must be present. Secret strategies generally require “raw material” on the image and this raw material is provided by trivial strategies. Individual files are chosen randomly from the trivial file repository according to “kind” parameter.
Secret strategies implement data hiding methods to images. Currently implemented data hiding methods are:
- Alternate data streams
- File extension change
- Concatenation of files
- Deletion of a file
- File slack space
- “Not hidden” – just place the file to the image.
While a trivial strategy places several files to the image, a secret strategy always operates on exactly one file, which is chosen from the secret repository according to a “group” parameter. If a file is unique in its group, the file is always placed on the image. This allows scenarios where all students must locate certain files, but also scenarios, where the file is chosen randomly from a pool.
Hiding methods have additional “action” and “action time” parameters. If these are present, MACE timestamps are modified to correspond to the chosen file action, for example read, copy or rename.
ForGe manages timelines by modifying raw file system data on an unmounted image. This avoids contamination of timestamps, where a disk operation to modify files or timestamps change one or more timestamp parameters. On NTFS, both $STANDARD_INFORMATION and $FILE_NAME attribute are modified to correspond to file time or action time. The current version does not modify timestamps in directory indices but I will add this to a near-future version.
A case can contain “time variance”. If this is set to 0, every image gets an identical timeline. Upon a non-zero time variance parameter, a number is chosen randomly between 0 and time variance parameter to each individual image. This represents number of weeks added to each time attribute on the disk. The benefit to have time variance in weeks is in preservation of time of day and day of week. If an educational scenario were based on something happening on a night between Saturday and Sunday, this would be the case on every image, just different weeks.
ForGe reports either success or failure for each created image. Failures can occur on some or all images if for example the file system runs out of space. ForGe can also be used to print a “cheat sheet” to display the contents of an image.
The cheat sheet displays the results of trivial strategies (/pic and /docs directories). Hidden items explain, which files have been hidden and where. For example, scotland.png can be found in an alternate data stream of file /pic/IMG_8568.jpg. England.png is hidden in file slack and could be extracted with command
dd if=hidingmethodtest-1 bs=1 skip=12720128 count=1353 of=england.png
Targets and locations for hidden items are chosen randomly, making each image representing the scenario similar but not identical. The images should be equal in complexity as well, as the same data hiding methods are used throughout the scenario, only locations, timestamps and possibly source files vary.
To create a new data hiding method, a new Python class must be created. The class interface is very simple and included in documentation. Basically, the class must implement a method hide_file that takes the file and parameter array as parameters, and returns a set of instructions or raises an exception in case of failure. This new class must be declared in database but existing code needs not be modified.
The image illustrates this. The required database elements are path to the file to be included and name of the data hiding class. Priority must be set as well and equal priorities are allowed. This is to ensure the image creation does not contaminate itself. For example, if a file were hidden into file slack and the file deleted, and then another file would be written on the image, it is possible the file would be overwritten. Thus, priority one methods are those that modify a mounted file system directly. Priority 2 handles deletions and priority 3 unmounted file system raw modifications. More priorities can be set if needed, this is the current setup.
File systems can be added in a similar way but the interface is more complex. Documentation to do so is included in ForGe documentation.
ForGe is a tool to create relatively simple test images rapidly. Creating ten images takes less than a minute. Its limitation currently is its focus on single files. If more complex structures, for example web browsing history, need to be included, ForGe is not able to do that with reasonable amount of work. Even in those cases, ForGe would speed up creation of the base images.
- Creates NTFS images. Most test images available seem to be FAT
- Graphical user interface
- Pays attention to order of actions when building images, to avoid contamination in scenario or timeline
- Easy to install and configure
- Works on single files only – cannot be used to create email archives, web browser histories etc.
- Database management is not perfect – if for example the user wants to delete files in hidden files repository, they must delete both the database entries with the user interface and the physical files with rm.
- NTFS system file attribute times mostly correspond to image creation. Root directory time is set but $Bitmap etc. indicate the time of last action in image creation. Deciding what would be the correct MACE timestamps for each system file according to the scenario with actions is not a trivial task and currently not implemented.
This was an interesting project to do and I am currently working on FAT16/32 extension. I will also add modification of directory index timestamps soon. NTFS is a versatile file system that allows complex timestamp manipulation; ForGe tries to leave timeline as uncontaminated as possible and is able to use some of the more complex NTFS timestamp oddities.
I would be delighted to hear comments and improvement requests. | <urn:uuid:360b8eec-d81c-44e7-aacd-305a09bfead4> | CC-MAIN-2017-04 | https://articles.forensicfocus.com/2013/10/18/forge-computer-forensic-test-image-generator/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00194-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915231 | 2,166 | 2.6875 | 3 |
In this course you will learn about IPv6 from the perspective of your existing knowledge of IPv4 Technologies. It will cover the new addressing schemes including Link Local addressing, Stateless Auto Configuration, Stateless DHCP, and DHCPv6. It will also cover the major changes to ICMP and the underlying technologies to support the lack of broadcasting in IPv6. We will be covering the changes to the routing protocols RIP, EIGRP, OSPF, and BGP in relation to IPv6 and the basics of how to implement these on Cisco IOS. IPv6 Transition mechanisms including GRE, 6to4, ISATAP, DMVPN, and MPLS 6PE/6VPE. | <urn:uuid:e7452e61-9529-429f-a7be-8d9b73282baf> | CC-MAIN-2017-04 | https://streaming.ine.com/c/ine-ipv6-generic-course | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00010-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.891645 | 140 | 2.71875 | 3 |
As in many other industries, avionics manufacturers saw software as a way to add value to their products and started to adapt quickly to the fast changing technology of real-time embedded software. The use of embedded software in avionics is continuously increasing but there is a great difference between avionic software and conventional embedded software – failure is unacceptable, the development process is required by law and must be highly optimized for safety. Embedded software in the avionics sector must provide comfort and agility without compromising safety.
Most nations regulate avionics, or at least adopt standards in use by other countries. Here are some of the regulatory authorities that assure safety and reliability and affects the embedded software development:
- The Federal Aviation Administration (FAA) is the national aviation authority of the United States that mandates have safety and reliability standards
- The European Aviation Safety Agency (EASA) is the European Union agency with regulatory and executive tasks in the field of civilian aviation safety.
- The European Cooperation for Space Standardization (ECSS) is an organization which works to improve standardization within the European space sector.
- The European Organisation for Civil Aviation Equipment (EUROCAE) is a non-profit organisation whose membership exclusively comprises aviation stakeholders – manufacturers, services providers, national and international aviation authorities. It develops performance specifications and other documents exclusively dedicated to the aviation community
These regulatory authorities and many more require software development standards. Some representative standards include MIL-STD-2167 for military systems, or RTCA DO-178C for civil aircraft. RTCA DO-178C Software Considerations in Airborne Systems and Equipment Certification helps regulate the development and certification of software and the delivery of multiple supporting documents and records used on aircraft or engines.
Developers of avionics software must demonstrate compliance with guidelines such as DO-178C to assure the certifiability of their software. Certification means that the software aspects of a system must be assured to be safe. All software aspects must be developed as defined by the software certification guidelines to the level of rigor and discipline required by their criticality level, as determined by a functional hazard assessment. DO-178C provides clear guidance for some of the technologies that are being used in safety-critical systems and allows credit for modern technologies such as formal methods, object-oriented programming (OOP) languages, and model-based development.
Embedded software providers can play a crucial role in helping avionics manufacturers meeting their compliance goals, providing them:
- The insurance that all stages of the software development process are adequately documented and compliant with regulations, guidance documents and standards.
- The assurance that the development process that has been planned is progressing according to the plan.
It’s time for avionics manufacturers to work smarter and reach software quality goals without losing their market shares. Enea’s certified developers are proficient in a large number of programming languages and have an in-depth understanding of specific standards and regulations in this sector.
To read more on how the right partner for your next embedded software development project can help you meet your business objectives, visit Enea website. | <urn:uuid:46ccb104-bf66-4093-a08c-14b665a0e171> | CC-MAIN-2017-04 | http://services.enea.com/embedded-software-development-standards-in-the-avionics-industry.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00552-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927853 | 630 | 2.640625 | 3 |
Data Storage: Maximizing Data Center Power Efficiency: 10 Ways to Do It
Use Smart Grid for Data Collection Across the Environment
Sensors collect power and cooling data from systems and facilities and establish consistent energy benchmarks. This allows IT managers to aggregate information to provide meaningful data on power and cooling efficiency in real time, which can help dynamically manage data center energy usage.
Quantifying the power efficiencies of a data center may appear to be something pretty esoteric, but rest assured, it is all very scientific. There are two metrics, instituted by the Green Grid industry group, which are now beginning the lengthy process of becoming international industry standards: a) Power Usage Effectiveness (PUE): This is a ratio of total facility power divided by IT equipment power. Ideally it should be less than 2-to-1; the closer to 1-to-1, the better; and b) Data center infrastructure efficiency (DCiE):??í DCiE is a percentage: IT equipment power x 100, divided by total facility power. The bigger the number the percent, the better. A data center's DCiE should never be more than 1. To get these numbers in line, there are a number of things that data center managers can do over time. Here is a list of best practices to consider. | <urn:uuid:cda8c748-e635-42c2-8f8a-9ff199d1993e> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/Maximizing-Data-Center-Power-Efficiency-10-Ways-to-Do-It-139559 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912038 | 266 | 3.09375 | 3 |
During the third quarter of 2010, Kaspersky Lab’s products blocked over 600 million attempts to infect users’ computers with malicious and unwanted programs. This is a 10% increase on the second quarter’s figure. Of the total number of blocked objects, over 534 million were malicious programs.
The Stuxnet epidemic received the most attention during the third quarter and confirms the theory that malware is rapidly becoming more sophisticated. An analysis of the worm has shown that it was designed to change the logic within programmable logic controllers (PLCs) embedded into inverters which are used to control the rotation speed of electric motors. These PLCs operate with very high speed motors that have limited applications, such as those in centrifuges. Stuxnet is the most complex piece of malware in the cybercriminals’ arsenal to have appeared. The epidemic also marked the beginning of the era of attacks on industrial targets. Stuxnet is also unique in that it uses as many as four zero-day Windows vulnerabilities at the same time in order to infiltrate victim computers, and has a rootkit component signed with certificates stolen from integrated circuit manufacturers, Realtek Semiconductors and JMicron.
Digital certificates and signatures are one of the pillars upon which cybersecurity rests. A digital signature has an important role in certifying the trustworthiness of the file it is incorporated into. However, several cases were recorded in 2010 in which cybercriminals received digital certificates quite legally, just like any other software developer. In one instance, a group of cybercriminals received a certificate for ’Software with which to remotely operate a computer without a GUI’, which is, in essence, a backdoor. The creators of adware, riskware and Rogue AVs frequently use stolen certificates to prevent their malware from being detected. Apart from in the Stuxnet case, stealing certificates is one of the prime functions that Zbot (aka Zeus), a very widespread Trojan, performs. “Judging by what we are seeing today, the problem of stolen certificates may become even more significant in 2011,” according to Yury Namestnikov, author of the report ‘IT Threat Evolution for Q3-2010’.
Exploiting vulnerabilities, as before, has remained highly popular with the cybercriminal fraternity. Four new vulnerabilities emerged in the quarterly ranking of most commonly exploited vulnerabilities: two in Adobe Flash Player products, one in Adobe Reader and one in Microsoft Office. Additionally, the Top-10 included three vulnerabilities discovered in 2009 and one discovered in 2008. This statistic shows that some users have not bothered to update their software for years. All of the vulnerabilities listed in the Top-10 allow cybercriminals to take full control of the target system.
According to Kaspersky Lab’s experts, the number of virus incidents relating to malicious files bearing certificates will increase dramatically in the near future. More worryingly still, sophisticated malware capable of running on 64-bit platforms will also increase. It is a sure fact that the cybercriminals will take advantage of newly discovered vulnerabilities ever more quickly too.
“The third quarter’s events demonstrate that we are currently on the threshold of a new era in the evolution of cybercrime,” said Yury Namestnikov. “The concept of mass infection, as seen with the Klez, Medoom, Sasser and Kido worms is going to give way to precision strikes.”
The full version of the report is available at: www.securelist.com/en. | <urn:uuid:009ce26e-bd92-45a7-a15c-6fe3623a3c57> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2010/Malware_in_Q3_2010_600_Million_Attempted_Infections_Stuxnet_Stolen_Certificates_and_Exploits | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954008 | 725 | 2.6875 | 3 |
SAS vs. SATA
ZFS verifies checksums on reads. When it writes data to the storage, it computes a checksum for each block and writes the checksums along with the data to the storage devices. Each block's checksum is stored in the pointer to that block. A checksum of the block pointer itself is also computed and stored in its own pointer, and this continues all the way up the tree to the root node, which also has a checksum.
When the data block is read, the checksum is computed and compared to the checksum stored in the block pointer. If the checksums match, the data is passed from the file system to the calling function. If the checksums do not match, then the data is corrected using either mirroring or RAID (depends upon how ZFS is configured).
Remember that the checksums are made on the blocks and not on the entire file, allowing the bad block(s) to be reconstructed if the checksums don't match and if the information is available for reconstructing the block(s). If the blocks are mirrored, then the mirror of the block is used and checked for integrity. If the blocks are stored using RAID then the data is reconstructed just as you would reconstruct any RAID data - from the remaining blocks and the parity blocks. However, a key point to remember is that in the case of multiple checksum failures the file is considered corrupt and it must be restored from a backup.
ZFS can help data integrity in some regards. ZFS computes the checksum information in memory prior to the data being passed to the drives. It is very unlikely that the checksum information will be corrupt in memory. After computing the checksums, ZFS writes the data to the drives via the channel as well as writes the checksums into the block pointers.
Since the data has come through the channel, then it is possible that the data can become corrupted by a SDC. In that case ZFS will write corrupted data (either the data or checksum possibly both). When the data is read, ZFS is capable of recovering the correct data because it will either detect a corrupted checksum for the data (stored in the block pointer) or it will detect corrupted data. In either case, it will restore the data from a mirror or RAID.
The key point is that the only way to discover if the data is bad is to read it again. ZFS has a feature called "scrubbing" that walks the data tree and checks both the checksums in the block pointers as well as the data itself. If it detects problems then the data is corrected. But scrubbing will consume CPU and memory resources while storage performance will be reduced to some degree (scrubbing is done in the background).
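On a typical ZFS system a scrub is started and monitored with the zpool utility (the pool name "tank" is just an example):

zpool scrub tank
zpool status -v tank

The status output shows scrub progress along with per-device read, write and checksum error counters, and zpool scrub -s tank stops a running scrub early if it is hurting production I/O too badly.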
If you get a hard error on the drive (see first section) before ZFS scrubs the data that affects corrupted data (due to SDC in the SATA channel) then it's very possible that you can't recover the data. The data was corrupted but the checksums could have been used to correct it but now a drive with the block and block pointer is dead making life very difficult.
Given the drive error rate of Consumer SATA drives in the first section and the size of the RAID groups, plus the SATA Channel SDC, this combination of events can be a distinct possibility (unless you start scrubbing data at a very high rate so that newly landed data is scrubbed immediately, which limits the performance of the file system).
Therefore ZFS can "help" the SATA channel in terms of reducing the effective SDC because it can recover data corrupted by the SATA channel, but to do this, all of the data that is written must be read as well (to correct the data). This means to write a chunk of data you have to compute the checksum in memory, write it with the data to the storage system, re-read the data and checksum, compare the stored checksum to the computed checksum, and possibly recover the corrupted data and compute a new checksum and write it to disk. This is a great deal of work just to write a chunk of data.
Another consideration for SAS vs. SATA is performance. Right now SATA has a 6 Gbps interface. Instead of doubling the interface to go to 12 Gbps, the decision was made to switch to something called SATA Express. This is a new interface that supports either SATA or PCI Express storage devices. SATA Express should start to appear in consumer systems in 2014, but the peak performance can vary widely, from as low as 6 Gbps for legacy SATA devices to 8-16 Gbps for PCI Express devices (e.g., PCIe SSDs).
However, there are companies currently selling SAS drives with a 12 Gbps interface. Moreover, in a few years, there will be 24 Gbps SAS drives.
SATA vs. SAS: Summary and Observations
Let's recap. To begin with, SATA drives have a much higher hard error rate than SAS drives. Consumer SATA drives are 100 times more likely to encounter a hard error than Enterprise SAS drives. The SATA/SAS Nearline Enterprise drives have a hard error rate that is only 10 times worse than Enterprise SAS drives. Because of this, RAID group sizes are limited when Consumer SATA drives are used or you run the risk of a multi-disk failure that even something like RAID-6 cannot help. There are plenty of stories of people who have used Consumer SATA drives in larger RAID groups where the array is constantly in the middle of a rebuild. Performance suffers accordingly.
The SATA channel has a much higher incidence rate of silent data corruption (SDC) than the SAS channel. In fact, the SATA channel is four orders of magnitude worse than the SAS channel for SDC rates. For the data rates of today's larger systems, you are likely to encounter a few silent data corruptions per year even running at 0.5 GiB/s with a SATA channel (about 1.4 per year). On the other hand, the SAS Channel allows you to use a much higher data rate without encountering an SDC. You need to run the SAS Channel at about 1 TiB/s for a year before you might encounter an SDC (theoretically 0.3 per year).
Using T10-DIF, the SDC rate for the SAS channel can be reduced to the point where we are unlikely to encounter an SDC in a year until we start pushing above the 100 TiB/s data rate range. Adding in T10-DIX is even better because we start to address the data integrity issues from the application to the HBA (T10-DIF fixes the data integrity from the HBA to the drive). But changes in POSIX are required to allow T10-DIX to happen.
But T10-DIF and T10-DIX cannot be used with the SATA channel so we are stuck with a fairly high rate of SDC by using the SATA Channel. This is fine for home systems that have a couple of SATA drives or so, but for the enterprise world or for systems that have a reasonable amount of capacity, SATA drives and the SATA channel are a bad combination (lots of drive rebuilds and lots of silent data corruption).
File systems that do proper checksums, such as ZFS, can help with data integrity issues because they write the checksum along with the data blocks, but they are not perfect. In the case of ZFS, to check for data corruption you have to read the data again. This really cuts into performance and increases CPU usage (remember that ZFS uses software RAID). We don't know the ultimate impact on the SDC rate, but it can help. Unfortunately, I don't have any estimates of the reduction in the SDC rate when ZFS is used.
Increasingly, there are storage solutions that use a smaller caching tier in front of a larger capacity but slower tier. The classic example is using SSD's in front of spinning disks. The goal of this configuration is to effectively utilize much faster but typically costlier SSD's in front of slower but much larger capacity spinning drives. Conceptually, writes are first done to the SSD's and then migrated to the slower disks per some policy. Data that is to be read is also pulled into the SSD's as needed so that read speed is much faster than if it was read from the disks. But in this configuration the overall data integrity of the solution is limited by the weakest link as previously discussed.
If you are wondering about using PCI Express SSD's instead of SATA SSD's drives you can do that but unfortunately, I don't know the SDC rate for PCIe drives and I can't find anything that has been published. Moreover, I don't believe there is a way to dual-port these drives so that you can use them between two servers for data resiliency (in many cases if the cache goes down, the entire storage solution goes down).
If you have made it to the end of the article, congratulations, it is a little longer than I hoped but I wanted to present some technical facts rather than hand waving and arguing. It's pretty obvious that for reasonably large storage solutions where data integrity is important, SATA is not the way to go. But that doesn't mean SATA is pointless. I use them in my home desktop very successfully, but I don't have a great deal of data and I don't push that much data through the SATA channel. Take the time to understand your data integrity needs and what kind of solution you need.
5 games in play at agencies
- By John Breeden II
- Apr 09, 2013
The public sector is finding gamification to be an effective way to educate the public, train employees and encourage innovation. Here are five examples.
Money Smart Game Board
The FDIC and Dynamics Research Corporation partnered to create the Money Smart Computer Game, which teaches low- to moderate-income people how to manage their finances. It awards points and certificates to winners. It drew more than 40,000 players in the first six months, and could be expanded into other languages to reach people in targeted communities. Originally a Web-only game, it’s now available on CD-ROM for people without Internet access.
This trailer is designed to lure people into NOAA’s ambitious ReGenesis game, in which players are sent back in time (to the future) in order to prevent an environmental disaster. The game is set in the year 2100, and players must time travel back to the year 2017 in order to prevent or eliminate the environmental damage caused by “Hurricane Rita.” Players, acting on behalf of NOAA (and operating anonymously, since no one can know of the interference from the future) arrive shortly after the hurricane, get to try different strategies, and then go forward in time to see how it all plays out. It's designed to teach about climate, satellite control and environmental damage mitigation strategies.
This is a virtual recreation of NOAA's research vessel, the Okeanos Explorer, being shared using a Holowall, an interactive display that lets users interact without any special devices. The screen is touch-sensitive, allowing either user — locally or at the remote location — to interact with the boat, thus creating a unique collaboration and gaming environment. The Okeanos recently returned from mapping the western North Atlantic Ocean as part of its annual shakedown at the beginning of its field season.
An example of a government game in the educational field, this NOAA title is still in development. In it, aliens seek refuge on Earth in exchange for sharing their advanced technology after expending their own planet's resources. This technology allows the player to travel into possible future scenarios to see the impact of climate change and their decisions based on the latest real data from the Intergovernmental Panel on Climate Change.
NASA Moonbase Alpha
A 3D game with both single- and multiplayer options, Moonbase Alpha simulates lunar exploration and is especially designed for young people interested in the STEM (Science, Technology, Engineering and Mathematics) disciplines. In the game, set in the year 2032, a meteor strike damages an outpost near the moon's South Pole, and the player directs the research team to repair the outpost and save 12 years of research. The multiplayer version lets up to six players work together. You have a variety of tools at hand, including robotic units, hand tools and the lunar rover.
Smart city deployments are well under-way across the globe, but in India some problems around cost and governmental control are starting to emerge.
The aspirations of India’s Prime Minister Narendra Modi to push forward a smart cities programme suffered a serious setback when the country’s Congress said in December that the smart cities concept did not fit in with the country’s constitution.
The mismatch of Modi’s plans and the constitution comes because of amendments made by Rajiv Gandhi’s government in the 1980s stipulating that only municipal corporations and local bodies can make decisions on development works. Senior party leader Narayan Rane asserted that Modi’s plans involve private bodies bypassing an elected body to take decisions on redevelopment.
The decision may embarrass UK Prime Minister David Cameron whose November trade deal with India included national and state partnership between the UK’s Department of International Development and the Indian Ministry of Urban Development for national and state-led support for the development of smart and sustainable cities. Three cities were named: Indore, Pune and Amaravati.
This isn’t the only difficulty smart city development is likely to face in the coming years – with cost in particular an issue.
Also this week, the Nagpur Municipal Corporation says it will recoup more than two-thirds of its smart city development costs from citizens. In the UK, where local government finances are becoming ever tighter and local revenue raising opportunities restricted, such a move would likely be unpalatable even if it was possible. An obvious alternative – raising money from private sources, has its own difficulties in terms of, for example, ownership, revenue sharing, and priority setting.
The UK’s Centre for Cities identified key issues affecting the development of smart cities in 2014, and a year later said that while progress had been made, the fundamental barriers remained unchanged. These include confusion about what ‘smart’ really means – it isn’t always about big, shiny new projects, but often about doing what we already do smarter. Smarter working includes things like making better use of existing data, integrating ‘smart’ into core strategy, and more central government devolution of both decision making powers and financial control.
The good news is that real world smart city projects keep coming, and administrations do see the benefit of a more localised and integrated approach.
In the US, President Obama’s Smart Cities initiative will invest US$160 million (£108.4 million) in smart cities. One of the focus points is working with individuals, entrepreneurs, and non-profits interested in harnessing IT to tackle local problems and work directly with city governments. Meanwhile, closer to home, Manchester’s new CityVerve project, recently awarded £10 million from the UK Government’s Internet of Things competition, will use IoT to improve services for citizens in healthcare, energy and environment, culture and community across the city.
Ransomware is a problem that doesn’t seem to be going away any time soon – and it looks as though cyber attackers’ newest platform for digital crime, “ransomware-as-a-service”, might make the problem even worse.
In 2016 alone, tens of thousands of people were hit with some form of ransomware or malware attack – from large tech companies to regional hospitals to individuals. Ransomware is a form of digital extortion that encrypts its victim’s files and offers up the decryption key in exchange for payment. The following image shows a sample ransom note that a user infected with the ransomware virus Cryptolocker will receive.
Ransomware-as-a-service: The latest platform for digital crime
Cybercriminals are finding new ways to profit from their data-encrypting malware by renting it out as a service to anyone who is willing to pay to use it. This not only makes it more profitable for the developers and creators of the ransomware (they take a large percentage of each ransom paid by victims of the attacks), but also makes the malware more readily available to cybercriminals on the “dark web”.
With ransomware on the rise through these new ransomware-as-a-service platforms, businesses – small and large alike – need to make sure they take as many precautions as possible to prevent and be able to recover from an attack as effectively as possible. We’ve compiled a list of best practices to prevent malware attacks as well as ways to architect data protection and disaster recovery solutions to minimize the impact of an attack.
The best defense against ransomware is prevention
The best way to prevent a ransomware attack is to not fall victim to an attack in the first place. With most ransomware attacks being conducted through email, businesses should ensure that their employees are trained to exercise extreme caution before opening email attachments and clicking links contained within messages. Antivirus scanners can do a great job filtering known malware viruses from inboxes, but for those messages that do slip through the security gates, users should:
- Verify that the sender is legitimate. “Spoofing” is a way for cybercriminals to forge a sender’s name so it appears to be from someone the recipient knows.
- Check the attachment type before opening it. Executable files which end in “.exe” or “.dmg” will automatically run a program once opened.
In addition to providing employees with email security best practices, another way to help prevent ransomware attacks is by implementing software restriction policies (SRPs) on Windows computers. SRPs can be implemented to block executable files from running in the areas where Cryptolocker launches itself on a user’s computer. Here’s a sample of SRPs that can be implemented to prevent “.exe” files from launching in the user space:
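As an illustration only (exact rules should be tailored to and tested in your own environment), administrators commonly create "Disallowed" path rules for the user-profile locations that Cryptolocker-style malware launches from, along these lines:

%AppData%\*.exe
%AppData%\*\*.exe
%LocalAppData%\Temp\Rar*\*.exe
%LocalAppData%\Temp\7z*\*.exe

The first two rules stop executables dropped directly into the roaming profile from running, and the last two stop executables that launch out of the temporary folders created when archive attachments are opened.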
Invest in robust disaster recovery and data protection
Last month, we put together a blog post to share how having resilient IT can really ease the pain following a ransomware attack. Should the first line of defense – using antivirus software and exercising IT security best practices – not be enough to prevent an attack, having a modern, robust disaster recovery and data protection solution can make all the difference in the world.
One of the biggest reasons that businesses give in to cybercriminals and pay their ransom demands is that the operational costs associated with downtime and loss of productivity outweigh the ransom itself.
Try to ask yourself this question: how much would it cost for your entire business to lose operations and access to data for 8 hours, a day, or even a week? What about if you were never able to recover that data at all? Is your data that dispensable? For some businesses, these questions have become a very scary reality after being hit with a ransomware attack like Cryptolocker.
Investing in a decent disaster recovery and data protection solution doesn't need to be a time-consuming or complicated process, though. Just as ransomware is now being offered “as a service”, disaster recovery as a service solutions can provide on-demand failover and recovery capabilities for an entire business in just a few minutes. And the cost to implement it is just a fraction of what it would cost to lose a few hours of business.
Take steps now to avoid pain later
Ransomware has been a major topic in the headlines – and it looks like cybercriminals plan on keeping it there. As ransomware becomes more and more available, it’s really not a matter of “if”, but “when”, when it comes to a business being hit. It’s time to start taking steps now to avoid pain later.
Do you have any good suggestions in ways you’ve prevented or combatted a ransomware attack? Share them with us below. | <urn:uuid:e1ac0df2-4542-44e7-a43d-559796f42d47> | CC-MAIN-2017-04 | https://axcient.com/blog/ransomware-as-a-service-businesses-risk-2017/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952647 | 1,001 | 2.703125 | 3 |
In April, an FBI poll of some 500 government agencies and corporations revealed that more than 90 percent detected some form of security breach within the last year. The study suggested that computer attacks are common, are often not reported when detected, and that such breaches pose a significant threat to the enterprise.
The IPC cyber security panel of 17 participants from state, local and federal government agencies and private corporations discussed the issue and, for the most part, agreed with the FBI study findings. They also discussed strategies that need to be implemented to guard against future attacks, some of the barriers to those strategies, and how those barriers could be overcome.
Lack of Awareness
The panel concluded that as cyber attacks become more sophisticated and the risks continue to rise, state and local government agencies continue to fall behind in their efforts to combat computer crime. The reasons for this vary but include a lack of awareness and understanding of sophisticated threats and a lack of governance over security issues, including standards, policies and more.
Not surprisingly, the panel thought the core of the problem is a lack of funding, which stems from a lack of understanding. One panel member said that because of the Y2K threat that never materialized, cyber security is being written off as "alarmist" and funding is difficult to obtain.
Others agreed, saying that it is sometimes difficult to convince officials of the need for increased security because of an inability to demonstrate a return on investment. "They say 'you were wrong about Y2K, why should we spend money on this?' And so we don't have a plan," one panel member said.
A key to solving this problem is educating those who call the shots. "Educate the politicians, they make the decisions."
Education, though, should not be limited to just policy makers. "The security staff is oftentimes not as smart as the people they're trying to stop," one panel member said. One common problem that results from this lack of education is that "everyone thinks he's an expert," and that causes raging debates, according to another source. "We don't have enough skilled security practitioners."
What's needed to help solve these and related issues, said panelists, is some governance or authority over security issues, policies and procedures. A common refrain regarding this issue is the need for centralization - the need for a central point or body in the organization to determine best practices or "targets" for the enterprise. "Somebody has to stand up and say 'this is what you need to do.'"
The barriers to effective cyber-security countermeasures, as detailed by the panel, include a lack of awareness and understanding of sophisticated threats, inadequate funding and the difficulty of demonstrating a return on investment, a shortage of skilled security practitioners, and the absence of centralized governance over security standards, policies and procedures.
If you’re into computer security at all you may have heard of terms like “Deep Web” and “Dark Web”. The terms can be confusing so here are the basics:
- The Internet: This is the easy one. It’s the common Internet everyone uses to read news, visit Facebook, and shop. Just consider this the “regular” Internet.
- The Deep Web: The deep web is a subset of the Internet that is not indexed by the major search engines. This means that you have to visit those places directly instead of being able to search for them. So there aren’t directions to get there, but they’re waiting if you have an address. The Deep Web is largely there simply because the Internet is too large for search engines to cover completely. So the Deep Web is the long tail of what’s left out.
- The Dark Web: The Dark Web (also called Darknet) is a subset of the Deep Web that is not only not indexed, but that also requires something special to be able to access it, e.g., specific proxying software or authentication to gain access. The Dark Web often sits on top of additional sub-networks, such as Tor, I2P, and Freenet, and is often associated with criminal activity of various degrees, including buying and selling drugs, pornography, gambling, etc.
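As a small illustration of what "something special" means in practice, ordinary tools only participate in an overlay network like Tor when they are explicitly routed through a locally running Tor client's SOCKS proxy (port 9050 by default), for example:

curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/

A .onion address, by contrast, will not resolve at all without that proxy step, which is exactly the kind of access control that separates Dark Web resources from the rest of the Deep Web.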
While the Dark Web is definitely used for nefarious purposes more than the standard Internet or the Deep Web, there are many legitimate uses for the Dark Web as well. Legitimate uses include things like using Tor to anonymize reports of domestic abuse, government oppression, and other crimes that have serious consequences for those calling out the issues.
Common Dark Web resource types are media distribution, with emphasis on specialized and particular interests, and exchanges where you can purchase illegal goods or services. These types of sites frequently require that one contribute before using, which both keeps the resource alive with new content and also helps assure (for illegal content sites) that everyone there shares a bond of mutual guilt that helps reduce the chances that anyone will report the site to the authorities.
- The Internet is where it’s easy to find things online because what you’re searching for is all in search engines.
- The Deep Web is the part of the Internet that isn’t necessarily malicious, but is simply too large and/or obscure to be indexed due to the limitations of crawling and indexing software (like Google/Bing/Baidu).
- The Dark Web is the part of the non-indexed part of the Internet (the Deep Web) that is used by those who are purposely trying to control access because they have a strong desire for privacy, or because what they’re doing is illegal.
- The Wikipedia article on the Deep Web.
- The Wikipedia article on the Dark Web.
- Both the Deep and Dark web ride on top of Internet infrastructure, so it’s important to understand the difference between the Internet that’s searchable as an experience vs. the Internet as the set of connections and protocols that enable connectivity.
- The Dark Web is likely to come under increased scrutiny by authorities because of its potential use by terror organizations to coordinate attacks. This could include communication forums that require special access methods, require the use of encryption, and various types of strong authentication.
- The use of “The Internet” above is somewhat confusing, as the Internet generally refers to the infrastructure that connects things. The usage here pertains to the user perspective, where they’re using “The Internet” (through a search engine) to find a recipe, to order a book online, etc.
- Controlling access in the context of the Dark Web is not simply a matter of requiring a login to a web page. Access in this sense means you needing to do something special just to be able to interact with the service in question, such as using a VPN, or a proxy, or an anonymized network. Additional authentication is usually required once you arrive to the resource as well.
- Not all Deep Web (or even Dark Web) resources are illicit, immoral, or illegal. There are some communities that are simply anti-establishment or pro-privacy to a degree that they believe they should be able to function without oversight or judgement by anyone.
- Tor is an example of a project that can be, and is, used for both good and bad. It’s used to anonymize whistleblowers, but also to conceal criminal activity. Like encryption and even weapons, powerful tools often have dual purposes in this way, and are not intrinsically good or bad themselves.
[ NOTE: For more primers like this, check out my tutorial series. ] | <urn:uuid:6caffd81-c85b-4c52-ba9f-41012ece067d> | CC-MAIN-2017-04 | https://danielmiessler.com/study/internet-deep-dark-web/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00442-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935348 | 995 | 3.125 | 3 |
Risk management is a practice that deals with processes, methods, and tools for managing risks in a project/venture. It is the identification, assessment, and prioritization of risks followed by coordinated and cost-effective application of resources to lessen, supervise, and control the probability and/or impact of things going out of control.
Risk may be expected to come from uncertainty in financial markets, project failures, legal liabilities, credit risk, accidents, natural causes/disasters and deliberate attacks from a competitor. Managing risks provides a disciplined environment for proactive decision-making to continuously assess what may go awry within an organization and with its products/brand. Effective risk management can pinpoint which risks are important to settle at once and which ones can be dealt with at a later time. The implementation of efficient strategies can also mitigate risks.
For the most part, these strategies consist of two broad elements, risk assessment followed by risk control, performed, more or less, in that order.
The strategies to manage risk include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.
The principles of risk management are a set of practices utilized by business to manage its exposure to risk, reach its objectives and goals, and to guide its conduct to meet expectations and concerns of the public interest, labor relations, human safety, the environment, and the laws governing business practices.
The Principles of Risk Management are:
Risk assessment – identifying, quantifying and prioritizing exposure to risk. When exposure to risk has been identified, quantified and prioritized, treatments for the organization’s exposure to risk can be devised.
Risk control – manages exposure to risk on a continuous basis. Part of risk control is an ongoing evaluation of risk exposure that assures the business that its plans are correct for the most current risk climate. It also involves risk mitigation, contingency planning and careful managerial supervision of the combined risk management efforts. This way, adjustments can be made to continually improve the efficiency of the business over time and guard against untreated exposure.
In this section we will discuss: | <urn:uuid:0b8b2421-fb35-445c-8639-793bfdeb319e> | CC-MAIN-2017-04 | http://www.best-practice.com/risk-management-best-practices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948635 | 433 | 3.484375 | 3 |
A drum brake is a braking system using a set of brake shoes, which are pushed against the outer cover that is in the shape of a drum, in order to stop the vehicle. Therefore, it is more commonly known as a drum brake. The major difference between a drum brake and a disc brake is that in a disc brake, a set of disc pads are used which are pressed on the disc in order to stop the vehicle, and in a drum brake, a set of brake shoes are used to push the drum on which the wheel is mounted in order to stop a vehicle in motion.
The different components used in a drum brake are a backing plate, which provides a base for all the other components; a brake drum, which rotates along with the wheel and axle and whose inner surface the shoes press against when the brake is applied, creating the friction that stops the vehicle; a wheel cylinder, which pushes the shoes onto the drum with its two pistons, one at either end, when pressure is applied from the master cylinder; and the brake shoe, which is made of sheet metal and is the component that creates friction with the drum surface in order to stop the vehicle.
Asia-Pacific is one of the major markets for drum brakes, as it leads in vehicle production globally, and also because the automobile market in this region is dominated by small and cheaper cars. North America is the second largest market for drum brakes, followed by Europe. The North American market for drum brakes is growing at a faster pace than the European market, as vehicle production has increased following the region’s recovery from the financial crisis of 2008-2009.
The key players in the drum brake market are Aisin Seiki, TRW, Akebono, Nissin Kogyo, and Brembo S.P.A, with market shares of 25%, 14%, 13%, 11%, and 9%, respectively.
1.1 Analyst Insights
1.2 Market Definitions
1.3 Market Segmentation & Aspects Covered
1.4 Research Methodology
2 Executive Summary
3 Market Overview
4 Drum Brake by Applications
4.1 Passenger Cars
5 Drum Brake by Geographies
5.3 North America
5.4 Rest of World
6 Drum Brake by Companies
6.1 Aisin Seiki Co Ltd
6.2 Kiriu Corporation
6.3 Nissin Kogyo Co. Ltd
6.4 Sundaram Brake Linings Limited
6.5 TMD Friction Group S.A.
6.6 Zhejiang Asia-Pacific Mechanical & Electronic Co. Ltd
6.7 Mando Corp.
6.8 Accuride Gunite
6.9 Haldex Foundation Brakes
6.10 Hyundai Mobis Module & Parts Mfg
6.11 Knorr-Bremse Commercial vehicle systems
6.12 Meritor Commercial Truck
6.13 TRW Chassis Systems
6.14 Automotive Components Europe S.A. (ACE)
6.15 Brembo S.P.A.
6.16 Continental Automotive Group
6.17 Robert Bosch Gmbh Automotive Technology
6.18 Federal-Mogul Vehicle Components Solutions
6.19 Akebono Brake Industry Co. Ltd
6.20 Nisshinbo Brake Inc.
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
Asia-Pacific Drum Brake
The Asia-Pacific drum brake market was valued at $5.42 billion in 2013, and is projected to grow at a CAGR of 6.0%, to reach $7.25 billion by 2018. This report on the drum brake market in Asia-Pacific is segmented on the basis of applications and geography.
North America Drum Brake
The key players in the North American drum brake market are Hyundai Mobis, Akebono, TRW Inc., Federal Mogul, and Aisin Seiki, with market shares of 14.5%, 12.0%, 10.5%, 5.2%, and 5.0% respectively. The North American drum brake market was valued at $2.5 billion in 2013, and is expected to grow at a CAGR of 8.0%, to reach $3.7 billion by 2018. | <urn:uuid:ac554f89-cd24-4d32-aa3c-a2dd736b190c> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market-report/drum-brake-reports-6134495825.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00077-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920014 | 928 | 3.03125 | 3 |
Data Model Example
The following model of a database was constructed for a hypothetical video store and appears in the following figure:
The data model of the video store, along with definitions of the objects presented on it, makes the following assertions:
- A MOVIE is in stock as one or more MOVIE-COPYs. Information recorded about a MOVIE includes its name, a rating, and a rental rate. The general condition of each MOVIE-COPY is recorded.
- The store's CUSTOMERs rent the MOVIE-COPYs. A MOVIE-RENTAL-RECORD records the particulars of the rental of a MOVIE-COPY by a CUSTOMER. The same MOVIE-COPY may, over time, be rented to many CUSTOMERs.
- Each MOVIE-RENTAL-RECORD also records a due date for the movie and a status indicating whether or not it is overdue. Depending on a CUSTOMER's previous relationship with the store, a CUSTOMER is assigned a credit status code which indicates whether the store should accept checks or credit cards for payment, or accept only cash.
- The store's EMPLOYEEs are involved with many MOVIE-RENTAL-RECORDs, as specified by an involvement type. There must be at least one EMPLOYEE involved with each record. Since the same EMPLOYEE might be involved with the same rental record several times on the same day, involvements are further distinguished by a time stamp.
- An overdue charge is sometimes collected on a rental of a MOVIE-COPY. OVERDUE-NOTICEs are sometimes needed to remind a CUSTOMER that a movie needs to be returned. An EMPLOYEE is sometimes listed on an OVERDUE-NOTICE.
- The store keeps salary and address information about each EMPLOYEE. It sometimes needs to look up CUSTOMERs, EMPLOYEEs, and MOVIEs by name, rather than by number.
This is a relatively small model, but it says a lot about the video rental store. From it, you get an idea of what a database for the business should look like, and you get a good picture of the business. There are several different types of graphical objects in this diagram. The entities, attributes, and relationships, along with the other symbols, describe our business rules. In the following chapters, you will learn more about what the different graphical objects mean and how to use CA ERwin DM to create your own logical and physical data models. | <urn:uuid:6a8006de-a2cd-4f28-8d00-cc5f276041ff> | CC-MAIN-2017-04 | https://support.ca.com/cadocs/0/CA%20ERwin%20Data%20Modeler%20r8-ENU/Bookshelf_Files/HTML/Methods/254557.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00103-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93449 | 548 | 2.8125 | 3 |
In this article, learn about these concepts:
- Install Samba packages.
- Install Samba binaries you've compiled yourself.
- Upgrade an existing Samba installation.
This article helps you prepare for objective 311.2 in the Linux® Professional Institute (LPI) Certification level 3 (LPIC-3) LPI-302 exam. This objective has a weight of 1.
This article assumes that you have a working knowledge of Linux command-line functions and at least a general understanding of software structure (source code versus binary code) and your distribution's package management tools. To perform the actions described in this article, you must have a working Internet connection or Linux installation disc with the Samba package.
Choosing an installation method
The method you use to install Samba depends on your Linux distribution, the tools available to you, and your needs with respect to specific Samba versions and features. You'll find that some installation methods are impossible on some Linux systems. Although the RPM Package Manager (RPM) and Debian package methods are usually the best and easiest, only installing from source code is possible on all Linux systems—and that method can require installing additional software.
Most Linux distributions are built using either the RPM or the Debian package management system. Red Hat, Fedora, OpenSUSE, Mandriva, PCLinuxOS, and several other flavors use RPMs; and Debian, Ubuntu, and several more use Debian packages. When using one of these distributions, the easiest way to install Samba is invariably to install a Samba binary package provided by the distribution maintainer. You can install such a package using one simple command (or possibly a handful of commands), and the installation process usually finishes in a few seconds. Some distributions, such as Slackware, offer easy installation from other package types, but the details differ from the RPM and Debian package instructions provided here.
Installation from source code enables you to customize the Samba options and optimize the compilation for your particular computer and network needs. You can also install a Samba version that might not yet be available for your distribution using source code. Source installation requires extra steps, though, and can take much longer than a binary installation. The Gentoo distribution installs most software from source code but using a streamlined procedure that's more like using an RPM or Debian package; consult Gentoo's documentation for details.
In most cases, you should install Samba from an RPM, Debian package, or other distribution-specific binary package. Source code installation makes sense mainly if this isn't possible or if you have exotic needs that require unusual customization during the build process.
Installing from source code
The previous article in this series described compiling Samba source code. If you want to install Samba from source, you should begin with that process. This article assumes that you have already compiled your source code and need only install it.
Making an initial installation
If you have compiled Samba source code, you can install it by typing the following command in the source code's build directory (typically source3 within the Samba source directory tree):
# make install
You must type this command as the root user.
Typically, this command installs Samba to the /usr/local directory tree, which is the usual location for locally compiled binaries.
Note that installing Samba from source code does not install a System V (SysV) or Upstart startup script, so Samba will not start automatically when you reboot the computer. The upcoming section, Launching Samba, briefly describes this topic.
Upgrading to a new version using source code
If you've previously installed Samba from source code, following the procedure
just described renames the old program files with the
.old extension. Typing
make revert reverts to the old versions, should
you decide the new version isn't working properly.
If you want to completely remove an old version of Samba that was installed
from source code, you should change into that version's source
code directory and type
make uninstall. This
command removes the installed software. You can then install a new version
(either from source code or from a binary package) without fear of conflict
between the two versions.
If you've previously installed Samba from a binary package, that version and
your locally compiled version can theoretically coexist on your computer;
however, keeping both installed can lead to confusion, because chances
are only one will run. Thus, it's best to remove the old binary packages
before installing the new software. Typing
rpm -e samba
uninstalls an RPM package, and
dpkg -r samba
uninstalls a Debian package. (You may need to change the package name
or uninstall multiple packages, depending on how your distribution created
its Samba packages.) Before you uninstall a binary package, you may want
to back up its SysV or Upstart startup script; you can probably modify this
script to start your locally built version of Samba.
Installing an RPM
RPM is a popular and powerful package management system. You can install software
by downloading RPM files and using the rpm utility to install them, or you can use a meta-packaging system, such as the Yellowdog
Updater, Modified (YUM) to handle some of the tedious details, including installing
or upgrading dependencies.
Installing packages using YUM
YUM is a standard part of Red Hat, Fedora, and some other RPM-based distributions. Some RPM-based distributions provide different tools with similar functionality.
To install a package using YUM, you use the yum command as root, passing it the install subcommand and the name of the package to be installed:
# yum install samba
Note: Samba package names vary from one distribution to
another. It's possible you'll need to install the package using a name other than samba, such as samba-server. The name samba works with Fedora Linux.
After you type this command, YUM checks its repositories, downloads the latest
package or packages, and installs them. In some cases, this command installs
more than one Samba package or installs non-Samba dependencies. On a
Fedora system, for instance, installing the samba package pulls in samba-client as well as other supporting packages.
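You can confirm exactly which Samba-related packages ended up on the system with a quick query; the pattern is quoted so the shell does not expand it:

rpm -qa "samba*"

(yum list installed "samba*" produces a similar list.)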
Graphical YUM utilities, such as Yumex (aka Yum Extender; command name
yumex), are also available. You can use such a
tool to search for and install Samba or related packages, as shown in
Figure 1. Yumex and other graphical user interface (GUI)
tools can be particularly useful for finding packages related to Samba, such
as the Samba Web Administration Tool (SWAT;
samba-swat) package visible in Figure 1.
Figure 1. Yumex provides a GUI front end to package management on some RPM-based systems
Installing packages using rpm
Sometimes, you can't use YUM, because your distribution doesn't support it.
You might also want to install an RPM package you've obtained from a site
that YUM doesn't support; for instance, you might have found a more recent
package than the version provided by your distribution maintainer. In such
cases, you may need to use the
rpm utility to
install your software.
If possible, you should use gpg to check your software package's authenticity before installing it, as described in an earlier article in this series. When that's done—or if you can't or choose not to perform this test—you can use the --install (-i) option to do the job. You may want to add the --verbose (-v) and --hash (-h) options to provide a display as the package is installed. The final command looks like this:
# rpm -ivh samba-3.5.6-69.fc13.x86_64.rpm
You must, of course, change the Samba package file name to match the file
you've downloaded. If your attempt to install the software results in an error
message, you will have to resolve the problem manually. Most commonly, you
must install prerequisite software. You can do so using YUM, or you can
manually locate and download the necessary software and install it before you
install the Samba package or even at the same time by including multiple file
name references on one
rpm command line.
Upgrading to a new version using RPM
Upgrading software using RPM is a snap. If you use YUM, the process is just like
installing the software; however, you can optionally use the
update subcommand rather than the
install subcommand. If you use
rpm directly, you should use the --upgrade (-U) option rather than --install (-i). In fact, you can use --upgrade to install new software, too, so some administrators use this option rather than --install for new installations.
When you use RPM to upgrade software, the RPM utilities uninstall the old software and install the new version, ensuring that any outdated files are automatically removed. You may want to check your configuration files, such as /etc/samba/smb.conf. Typically, your existing configuration files will be left unchanged, and an updated sample configuration file will appear with a similar file name, such as /etc/samba/smb.rpmnew, so that you can refer to it should there be any configuration file changes that require adjustments to your configuration. As a safety precaution, you might want to back up your original configuration files before upgrading.
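Whichever method you use, a quick sanity check after an upgrade is to let Samba parse the resulting configuration itself; assuming the stock configuration location, the following prints the effective settings and warns about unknown or obsolete parameters:

testparm -s /etc/samba/smb.conf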
Installing a Debian package
Debian packages are conceptually similar to RPM packages, but the details of the utilities involved to manipulate the packages differ. Debian and Ubuntu are the major distributions that use Debian packages, although several others also use this package type.
Installing using APT
The Advanced Package Tools (APT) suite provides network-enabled package management, including dependency resolution, similar to the YUM suite used by many RPM-based distributions. (APT is also available for many RPM-based distributions, and at least one—PCLinuxOS—uses APT by default.)
Before installing Samba, it's best to force APT to obtain the latest package lists.
You can do this using apt-get and its update subcommand:
# apt-get update
Typing this command causes APT to check with its configured repositories to
obtain the latest list of available packages, so that you will install the
latest version of Samba available for your system. To install a package using
the command-line APT tools, you can use the apt-get utility and its install subcommand:
# apt-get install samba
The result will be a summary of the packages that will be installed, removed,
and upgraded as well as suggestions of optional packages that you might
want to install. If you approve of the changes, you can type
Y at the prompt. The utility then downloads the
necessary packages and installs them using lower-level Debian package tools.
If you prefer using a GUI tool, the Synaptic utility (command name
synaptic), shown in Figure 2,
will do the job. As with Yumex, Synaptic is particularly helpful if you're not
sure of the exact name of the package you want to install or if you need to
locate ancillary packages.
Figure 2. Synaptic provides a GUI front end to package management on most Debian-based and some RPM-based systems
Installing using dpkg
If you can't or don't want to use APT to install Samba, you can do so with the
dpkg utility, which operates on
Debian package files (with
.deb file name
extensions) you can download from the Internet or transfer from one computer
to another in some other way. If possible, it's best to verify your package's authenticity using gpg, as described in an earlier article in this series. You can install a new package using the -i (--install) option:
# dpkg -i samba_2:3.5.4~dfsg-1ubuntu8.1_i386.deb
Assuming that all depended-upon packages are already installed, this command
installs the relevant Samba package. If dependencies are not satisfied,
dpkg will complain. You must then install the
relevant packages, either using APT or manually via dpkg. (You can install multiple packages using one dpkg command, if you like.)
Upgrading to a new version using Debian packages
You can upgrade Samba using APT or
dpkg in exactly the same way you would install
Samba initially using these tools. Unlike the RPM tools, there's no separate
option for upgrading software. As when using RPM, you should check your
configuration files to be sure they haven't been changed and to look for new
sample files in case the new version includes new options you might want to use.
If you're using APT, be sure to upgrade your database of available software by typing
apt-get update before you use the
install subcommand. You can also upgrade
all the software on your computer by typing
apt-get upgrade or
apt-get dist-upgrade. (The latter command
performs more sophisticated dependency resolution checks, which can
result in some outdated packages being removed.)
If you install Samba using a binary package designed for your distribution, it will
include a SysV or Upstart startup script to launch Samba when you restart the
computer. This script might or might not be active when you first install the
package, though. You should use your local startup management tools, such as
chkconfig (common on Fedora and related distributions),
update-rc.d (common on Debian-based systems), or
manual inspection of SysV startup links or Upstart configuration files, to determine
in which Runlevels Samba will start.
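For example, the following commands mark the servers to start in the default runlevels; the service names vary by distribution (typically smb and nmb on Fedora and related systems, samba on many Debian-based systems), so use whichever form matches your tools:

chkconfig smb on
chkconfig nmb on
update-rc.d samba defaults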
Note: Although it's possible to run Samba via a super server such as inetd or xinetd, such configurations are rare and create performance problems.
If you've installed Samba from source code, you will have to create your own
SysV or Upstart startup script or launch the server via an entry in a local
startup script, such as
/etc/init.d/rc.local. Typically, you'll want to
launch both the smbd and
nmbd servers and pass them both the
-D option, which causes the servers to run as
daemons. A minimal configuration looks like this:
/usr/local/sbin/nmbd -D
/usr/local/sbin/smbd -D
Of course, you must adjust the path to the binaries to suit your configuration. You may also want to launch associated servers, such as SWAT, in a similar manner.
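If you want something a little more manageable than bare commands in a local startup file, a minimal SysV-style wrapper along the following lines (assuming the default /usr/local locations of a source install) is usually enough; distribution-provided scripts are considerably more elaborate:

#!/bin/sh
# Minimal start/stop wrapper for a source-built Samba installation
case "$1" in
  start)
    /usr/local/sbin/nmbd -D
    /usr/local/sbin/smbd -D
    ;;
  stop)
    killall smbd nmbd
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac

Save it as, for example, /etc/init.d/samba.local, make it executable, and either link it into the appropriate runlevel directories or call it from your local startup script.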
The LPIC-3 312.1 objective—and the next article in this series—describe the basics of Samba configuration, including the structure of the Samba configuration file, setting basic Samba options, and debugging common problems.
- "Learn Linux, 302 (Mixed environments): Configure and build Samba from source" (developerWorks, April 2011) describes how to compile Samba source code. This is a necessary prerequisite to installing the program from source code but not for installing Samba from a binary package.
- In the developerWorks Linux zone, find hundreds of how-to articles and tutorials, as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators.
- Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
- Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools, as well as IT industry trends.
- Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers.
- Follow developerWorks on Twitter, or subscribe to a feed of Linux tweets on developerWorks.
Get products and technologies
- Download Samba and find additional information at the Samba website.
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently.
- Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis. | <urn:uuid:0b9bf840-4894-4382-bc1d-f687f894f48b> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/linux/library/l-lpic3-311-2/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00499-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.873695 | 3,408 | 2.75 | 3 |
Google Street View Documenting Japan's Nuke Evacuation Area
In a related move, Google has announced that it now offers Google Public Alerts in Japan to help with emergency preparedness due to the ongoing risks of earthquakes and tsunamis in the region. "With nearly 5,000 earthquakes a year, it's important for people in Japan to have crisis preparedness and response information available at their fingertips," Yu Chen, partner technology manager at Google Maps, wrote in a March 6 post on the Google Maps Blog. "And from our own research, we know that when a disaster strikes, people turn to the Internet for more information about what is happening."

The new Google public alerts service in Japan is the first such offering outside the United States, where Google has been offering alerts since January 2012. The alerts aim to provide accurate and relevant emergency notifications when and where people are searching for information online. "Relevant earthquake and tsunami warnings for Japan will now appear on Google Search, Google Maps and Google Now when you search online during a time of crisis," the post explained. "If a major earthquake alert is issued in Kanagawa Prefecture, for example, the alert information will appear on your desktop and mobile screens when you search for relevant information on Google Search and Google Maps."

The Japan alerts are being created in conjunction with the Japan Meteorological Agency, which provides critical real-time data to alert the public, the post said. "We hope our technology, including Public Alerts, will help people better prepare for future crises and create more far-reaching support for crisis recovery," wrote Yu Chen. "This is why in Japan, Google has newly partnered with 14 Japanese prefectures and cities, including seven from the Tōhoku region, to make their government data available online and more easily accessible to users, both during a time of crisis and after." Google is planning to expand its Google Public Alerts to additional countries around the world in the future.
Users in Japan will also be able to access the alerts on their mobile devices when they use Google Now on their Android devices. | <urn:uuid:6e4104ec-a1fb-4bd6-8de5-410f7519d71f> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/google-street-view-documenting-japans-nuke-evacuation-area-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950626 | 422 | 2.703125 | 3 |
Runtime Application Self-Protection
Runtime Application Self-Protection (RASP) is a technology that embeds security protections (detection, alerting, and mitigation) directly into an application and runs as the application is executed. RASP runs on the server and is touted for its ability to detect and prevent real-time application attacks from within the application itself.
RASP is an automated, self-monitoring technology: it intercepts requests to the app, then analyzes their behavior and the context of that behavior. If a request is deemed valid, RASP allows the application to execute as usual.
In addition to detection and analysis, RASP can be configured to mitigate threats to the app automatically. The technology can operate in diagnostic mode, response/self-protection mode (terminating the app itself or sessions on the app), or may trigger an alert to administrators.
With RASP, each application is individually protected (unlike a firewall which protects the perimeter around the app) and has insight to application logic, configuration, and data and event flows. This means that RASP has a high level of accuracy in detecting attacks, reducing the number of false positives that are often a problem with technologies like firewalls or IDS/IPS.
Despite all of its benefits, RASP cannot protect against vulnerabilities built into an application during the development phase. Because a high percentage of apps are built with flaws, applications protected with RASP could still be vulnerable to attack. In addition, RASP doesn’t fix the problem of secure development, which many in the security industry believe is one of the most important ways to improve data security.
That said, the security of RASP scales with each application, and protection travels with the data when RASP is running in self-protect mode, which makes it a valuable option for organizations concerned about the security of their apps.
For decades, optics have been inspected and cleaned to ensure the proper passage of light. While inspecting and cleaning fiber connectors is not new, it is growing in importance as links with ever-higher data rates drive ever-smaller loss budgets. With less tolerance for overall light loss, the attenuation through adapters must get lower and lower, which is achieved by properly inspecting connectors and cleaning them when necessary. As network speeds and bandwidth demands increase, distance and loss limits have tightened, making fiber optic testing more important than ever.
Commonly used fiber optic test equipment includes the visual fault locator, fiber optic power meter, fiber optic light source, fiber multimeter, optical time domain reflectometer (OTDR) and fiber fault locator. A visual fault locator can locate faults within an OTDR's dead zone and identify a fiber from one end to the other. A power meter is used for absolute optical power measurement as well as loss-related measurements. A light source is used together with a power meter to test system loss. A multimeter integrates a power meter and a light source in one unit. The OTDR is the classic piece of fiber optic test equipment: it is easy to use but also one of the most expensive, and it can give you an overview of the whole system under test, including fiber length, splice and joint locations, and loss. A fault locator's name already expresses its function; it can be regarded as a subset of the OTDR's capability, and it is cheap.
Power meters play an active role in most test solutions. An optical power meter is a device for measuring the power or energy of a light signal or laser beam. It is commonly used to measure absolute light power in dBm, and it is also used with an optical light source to measure loss, or relative power level, in dB. Alternatively, some users may prefer an integrated two-way loss test set (LTS), or a simple LTS.
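The arithmetic behind those dBm and dB readings is simple; the short sketch below (with made-up power levels, purely for illustration) converts milliwatts to dBm and computes the loss of a link from the powers measured at the source and at the far end.

```python
import math

def mw_to_dbm(power_mw):
    """Absolute power: dBm is decibels referenced to 1 mW."""
    return 10 * math.log10(power_mw / 1.0)

def link_loss_db(p_in_dbm, p_out_dbm):
    """Relative measurement: loss in dB is simply the difference of the two readings."""
    return p_in_dbm - p_out_dbm

# Example with invented values: the source launches 1 mW, the far end sees 0.25 mW.
p_in = mw_to_dbm(1.0)      #  0.0 dBm
p_out = mw_to_dbm(0.25)    # about -6.0 dBm
print(f"launch {p_in:.1f} dBm, receive {p_out:.1f} dBm, loss {link_loss_db(p_in, p_out):.1f} dB")
```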
When choosing fiber test equipment, besides the function and quality of the equipment, you have to consider the specifications of the fiber system you are going to test: for example, the working wavelength (typically 850 nm, 1310 nm or 1550 nm), the light source type, the fiber type (single-mode or multimode), the connector interface (such as FC or SC), and the system's capacity and expected loss range. The working environment is another factor: whether the equipment will be used indoors or outdoors, its operating temperature, its power supply, and its battery life. Portable test equipment should be able to run on batteries, and it is best to choose equipment that accepts a common battery type.
To protect valuable fiber optic network investments, businesses, contractors and fiber technicians turn to Fiberstore for the simplest and most convenient testing equipment. Fiberstore offers a full range of optical power meters to support FTTx deployments, fiber network testing, certification reporting capabilities and basic power measurements.
Multiplexing is the process of transmitting several different signals or information streams over a single carrier. All of these signals or streams are transmitted simultaneously by combining them into one composite signal that moves efficiently through the carrier bandwidth. Once the composite signal reaches its destination, it is demultiplexed, separating each transmission back into its original form so it can be received.
The exact configuration of the process depends a great deal on the mode or type of transmission. When dealing with an analog transmission, the signals are multiplexed using a process that is known as frequency-based multiplexing. This form, usually referred to as FDM, uses a process of dividing the bandwidth into a series of subchannels that will accommodate the transmissions and more or less allow them to flow forward in a parallel fashion.
A second common type is time-division multiplexing, or TDM. With TDM, the various signals or transmissions are carried over a common channel in much the same way as with FDM. The main difference is that the time-division approach allows for the signals to be transmitted in a series of alternating time slots. These alternating slots are still carried within a common channel, and still fit neatly into the available bandwidth.
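A toy illustration of the time-division idea: the sketch below interleaves several independent character streams into fixed, round-robin time slots on one shared "channel" and then separates them again at the far end. The frame layout, padding character, and stream names are invented for the example; real TDM systems work on bits or bytes with proper framing and synchronization.

```python
from itertools import zip_longest

def tdm_mux(streams, pad="-"):
    """Interleave one character from each stream per time slot (round-robin)."""
    slots = zip_longest(*streams, fillvalue=pad)
    return "".join("".join(frame) for frame in slots)

def tdm_demux(signal, n_streams, pad="-"):
    """Reassemble the component streams from their fixed slot positions."""
    streams = ["" for _ in range(n_streams)]
    for i, ch in enumerate(signal):
        streams[i % n_streams] += ch
    return [s.rstrip(pad) for s in streams]   # toy padding removal

voice, video, data = "HELLO", "MOVIE!", "42"
line = tdm_mux([voice, video, data])        # one shared channel
print(line)                                  # 'HM4EO2LV-LI-OE--!-' (slots interleaved)
print(tdm_demux(line, 3))                    # ['HELLO', 'MOVIE!', '42']
```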
Multiplexing is one of the common tools used today in just about every form of communications. A wide range of telephony services, including online applications, are able to function with such a high degree of efficiency because of the current technological advances this process has made possible. Optical networks also rely heavily on multiplexing to carry voice and video transmissions along concurrent but separate wavelengths from a point of origin to various points of determination. With an increasing range of communication functions taking place across the Internet, it has become an effective tool that aids in everything from videoconferences and web conferences to large data transmissions to even making a simple point-to-point telephone call.
There are many kinds of multiplexing, such as video multiplexing, data multiplexing, video/data multiplexing, frequency-division multiplexing, voice/data multiplexing, and optical multiplexing, so everyone can choose the one that suits their application. There are also many other useful fiber products, such as fiber cleavers (the CT-30A fiber cleaver is recommended), fiber optic transceivers, fiber media converters, fiber patch cables, fiber optic testers (OTDR, optical power meter, PON power meter), fiber cabling, and CWDM and DWDM equipment. If you want more information, please contact Fiberstore.
It is estimated that over 50 % of the city's population is already un-served by the water company, which is struggling to reduce its estimated 60 % water losses through leaks and illegal connections. Shortfalls in piped supply are being met by thousands of largely unregulated private or community boreholes into the city's over-abstracted groundwater reserves. Many have been affected by salt water intrusion and have had to be abandoned. Use of the city's shallow groundwater is also problematic because of contamination by pollution from industry, waste and sewage disposal.
Tanzanias power industry is under tremendous pressure to transform, driven by concerns surrounding growing consumer demand, blackouts, climate change, changing fuel preferences and aging assets. In addition, the discovery of important offshore natural gas reserves presents a transformational opportunity for Tanzania. To what extent will these gas resources contribute to international and regional energy supplies? Are legislative arrangements in place for Tanzania to benefit from its natural resources and encourage the development of its energy market? | <urn:uuid:8dcfcabc-4abd-49a4-802d-4d3692eec3db> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/utilities-sector-of-tanzania-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00343-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927608 | 211 | 2.90625 | 3 |
MPI…the acronym stands for Message Passing Interface, yet in some places is nearly synonymous with HPC. While this was true in years past, is it still the case? A recent blog by PhD candidate Andreas Schäfer (specialty: HPC, supercomputing, and discrete optimization) tackles this subject, raising a number of excellent questions in the process.
“MPI is in the peculiar situation of being one of the most widely used technologies in HPC and supercomputing, despite being declared dead since decades,” he writes. “Lately however, my nose is picking up some smells which are troubling me. And others.”
Schäfer says he's not jumping on the "MPI is dead" bandwagon. If anything, this technology has shown amazing tenacity, continuing through multiple HPC advances. InfiniBand, multicore, and accelerators have all challenged MPI to one degree or another, and MPI doesn't die; it adapts, all the way to the next big frontier: evolving MPI for exascale computing.
Here are some of the main points that Schäfer touches on:
- Why so many MPI implementations are struggling to support MPI_THREAD_MULTIPLE well (see the sketch after this list).
- Issues with sending/receiving more than 2 GB en block.
- C++ Bindings Removed from MPI-3, although Boost.MPI brings back many of the “missing” features.
- How to achieve asynchronous progress and the problems with each approach, and a short discussion on community expectations. Schäfer wonders about the possibility of expressing parallelism “in a generic, user-friendly way.”
- Issues with broken, unmaintained code and missing features.
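To make the MPI_THREAD_MULTIPLE point concrete, here is a minimal sketch, written with the mpi4py bindings rather than Schäfer's own C++ code, that requests full thread support at initialization and checks what the implementation actually granted. Many MPI libraries quietly grant a lower level, which is exactly the struggle referenced in the first bullet.

```python
# Illustrative only: request MPI_THREAD_MULTIPLE and verify what was granted.
import mpi4py
mpi4py.rc.threads = True
mpi4py.rc.thread_level = "multiple"   # ask for MPI_THREAD_MULTIPLE at init time

from mpi4py import MPI                # MPI_Init_thread runs on import

granted = MPI.Query_thread()
rank = MPI.COMM_WORLD.Get_rank()
if granted < MPI.THREAD_MULTIPLE:
    # Downgraded (e.g. to SERIALIZED or FUNNELED): concurrent MPI calls from
    # multiple threads would not be safe, so the caller must funnel or lock.
    print("rank %d: only thread level %d granted" % (rank, granted))
else:
    print("rank %d: MPI_THREAD_MULTIPLE available" % rank)
```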
In summary of his points, Schäfer writes:
1. MPI is not becoming easier to use, but harder. The voodoo dance an ordinary user has to complete to max out e.g. perfectly ordinary two-socket, 16-core nodes is inane:
polling MPI for asynchronous progress,
using a custom locking regime to funnel MPI calls into one thread,
pack data into arbitrary chunks to skirt the limitations of 32-bit ints.
2. Previously usable and useful features are being removed, sometimes confusing, sometimes even alienating users.
3. Trivial changes (e.g. the use of size_t) seem next to impossible to implement.
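The last item under point 1, packing data into chunks to skirt the 32-bit int limit on message counts, looks roughly like the sketch below. It uses the mpi4py bindings and an invented chunk size purely for illustration (Schäfer's own code is C++); the point is that each individual Send/Recv stays under the signed 32-bit element-count ceiling even when the whole buffer does not.

```python
# Illustrative workaround for the 32-bit 'count' limit on a single MPI message:
# split one large buffer into chunks and send them one at a time.
# Run with, e.g.:  mpiexec -n 2 python chunked_send.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

CHUNK = 1 << 20                                  # demo size; real code might use ~2**28
data = np.zeros(3 * CHUNK + 5, dtype=np.uint8)   # stand-in for a very large payload

if rank == 0:
    for i, start in enumerate(range(0, data.size, CHUNK)):
        comm.Send([data[start:start + CHUNK], MPI.BYTE], dest=1, tag=i)
elif rank == 1:
    for i, start in enumerate(range(0, data.size, CHUNK)):
        comm.Recv([data[start:start + CHUNK], MPI.BYTE], source=0, tag=i)
```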
Schäfer says that while he doesn't see MPI going away anytime soon, there are trends toward generic computational libraries and domain-specific problem-solving environments. He also points to HPX (High Performance ParalleX) as a possible path forward. While not as mature, it uses C++ instead of C, which, according to Schäfer, has "some elegant ways of managing complexity."
By Patrick J. Jones, Ed. S., Director of Technology, Valley Park School District
Valley Park School District has implemented unified communications in order to improve school safety
In the wake of recent, national school safety and weather events, school districts across the nation are conducting needs assessments and re-evaluating their policies, procedures, and resources related to security or natural disaster situations. These needs assessments are bringing together teams of stakeholders focused on the same goals: How to prevent, prepare for, respond to, and recover from a campus safety and security event. While many of the answers to these questions might require additional funding for procurement of equipment, there are many elements that can be accommodated through exploitation of existing resources.
One of the greatest needs discovered in these assessments is communication and coordination during the response to event phase of a situation. Statistics have shown that, in an active shooter scenario, an average shooting event lasts 13 minutes, and the average first responder arrives on scene in 10 minutes. Any opportunities to increase communications that alert of the issue quickly, allow first response to the appropriate location on the scene, reduces the shooter’s event, and mitigate issues is essential to the safety of students and staff.
Cryptography recently joined forces with neuroscience to propose a groundbreaking innovation in authentication. Hristo Bojinov of Stanford University, along with Daniel Sanchez and Paul Reber of Northwestern, Dan Boneh of Stanford, and Patrick Lincoln of SRI, published a paper on "Designing Crypto Primitives Secure Against Rubber Hose Attacks."
Passwords, encryption keys, and other methods of verification are critical to keeping secrets in the digital era but have a number of major weaknesses. Simple passwords are vulnerable to guessing and brute force attacks but long, complicated passwords are hard to remember, leading to password reuse or writing passwords down which can then be found by attackers. Tokens can be stolen and research has shown that even expensive biometric systems can often be fooled. Coercion or “rubber hose” attacks are often the easiest way to defeat cryptography, as even the most secure password can be extracted through torture or other means of coercion. This was one of the ultimate problems of cryptography until a recent method was discovered to use implicit learning to teach users passwords they can’t actually recall but can reliably replicate.
Implicit learning is knowledge you can replicate but cannot describe, like how to ride a bike. Bojinov et al. applied this neuroscience concept to cryptography by creating a system that teaches users a 30-character password, much stronger and more secure than most regular passwords, which they do not consciously know. They do this through a computer game similar to "Guitar Hero" in which players are prompted to press the S, D, F, J, K, and L keys in time in a prompted order. The game speeds up or slows down based on the player's ability, and 80% of the characters presented come from the randomly generated 30-character code while 20% are random. After about 30 to 45 minutes of playing, the code is embedded in users' minds, but they cannot recall or share it even in part.
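A rough sketch of how such a training stream could be generated is shown below. The 80/20 split and the six-key layout come from the description above, but the sequence length handling, item counts, and function names are illustrative assumptions, not the authors' code.

```python
import random

KEYS = "sdfjkl"                       # the six keys used in the game

def make_secret(length=30):
    """Randomly generate the 30-character secret the player will implicitly learn."""
    return "".join(random.choice(KEYS) for _ in range(length))

def training_stream(secret, n_items=3800, noise=0.20):
    """Yield keypress prompts: roughly 80% walk through the secret, 20% are random filler."""
    pos = 0
    for _ in range(n_items):
        if random.random() < noise:
            yield random.choice(KEYS)             # filler item
        else:
            yield secret[pos % len(secret)]       # next character of the secret
            pos += 1

secret = make_secret()
print(secret)                                     # never shown to the player as text
print("".join(training_stream(secret, n_items=60)))
```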
To authenticate users, the game makes them play a shortened, 5 to 6 minute round where they will have to enter their code interspersed with two other, random 30 character codes. The performance on all three codes are then analyzed, and users will score reliably better on their code than the two others. The difference is slight but extremely unlikely to occur by chance. This difference persists two weeks after the initial training and, as users have been shown to be unable to recall their passwords, they cannot give it away, even under coercion.
This approach to authentication is still early in the research phase, though you can try the training and authentications games out online, and is not yet perfect. The most obvious problem it that, even if training sessions only 30 minutes long and tests only 5 minutes, the whole process takes much longer than regular passwords. If the threat of coercion is high enough to warrant this method, however, it’s safe to assume that the password or key you are trying to protect is important enough that time is hardly an issue.
There is also a number of different attacks that remain possible against the technique. The most serious is for an expert player to intentionally perform poorly on two of the three test sequences presented, giving him or her a 1/3 chance of guessing the right sequence to do well on. This is, however, very difficult to do. The research assumes that the authentication game would be played in person and not remotely, hence it cannot be done by a machine. The pace of the test would be based on the time the player used in training, which would be the fastest time the player could handle, too fast for methodical planning. Human players would have trouble counting out 30 character sequences and keeping different sequences straight then adjusting their performance accordingly, and any slowdown to do so would be recognized by the game as an attack.
Still, with trained and gifted players the risk may remain too high, so the researchers suggested making it harder to guess the right sequence by using 4 correct and 12 incorrect sequences in the test. Experimentally, separate implicitly learned codes did not interfere with one another, so users could plausibly be trained on 4 codes instead of one. The key drawback here is that it would greatly lengthen the training and testing processes, and hence would only make sense for the most sensitive secrets.
Another attack that this method cannot consistently counter is an eavesdropping attack, literally or metaphorically standing over the shoulder of a user as they perform the authentication and watching the process. While it would be difficult to learn the sequence that way, this method like most others is not designed to counter such attacks, for which the paper recommends further research.
Lastly, the paper does not address the possibility of threatening the user so that he takes the authentication test himself. Compromising the user, however, is equally a problem for all forms of authentication. If you have a man on the inside, passwords, dual-factor authentication, or biometrics won’t help. Still, it’s possible that if a user was under enough stress and threat, he may not be able to perform well and fast enough to pass anyhow, given the test’s sensitivity.
But while not yet perfect, the method of using implicit learning for authentication solves classic cryptography problems in an innovative way. By opening up a new field, this method will inspire many more improvements and further innovations. If we hope to solve the daunting problems of cybersecurity, we need more novel approaches like this to advance our thinking. | <urn:uuid:8dc732e0-a147-4c0e-9dd0-4c4a35634b8d> | CC-MAIN-2017-04 | http://www.fedcyber.com/2012/08/08/implicit-learning-passwords-are-like-riding-a-bike/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965219 | 1,107 | 3.265625 | 3 |
Not only is the volume of spam and malware increasing, but the attacks also are ever more targeted and sophisticated, according to e-mail security provider IronPort’s 2008 Internet Security Report. Highlights from the study include:
* Spam volume increased 100 percent, to more than 120 billion spam messages per day in 2007. Enterprise users get between 100 and 1,000 spam messages per day.
* Spam attacks have moved beyond their attempts to lure victims with pharmaceuticals and low interest mortgages. Today’s spam increasingly contains links that point to websites that distribute malware (which extends the size of the botnet that originated the spam). 2007 saw a 253 percent increase in such spam, known as “dirty spam.”
* Viruses are less visible but increasing in number. In 2007, they were more polymorphic and more likely to be associated with sophisticated botnets like "Storm."
* The duration of specific attack techniques has decreased. In 2006, image spam was the primary new technique. But 2007 saw more than 20 different attachment types, such as MP3 and PDF spam, that were used in a variety of short-lived attack techniques.
IronPort predicts that 2008 will be the "year of social malware." Today’s malware, like the "Storm" Trojan, is collaborative, peer-to-peer and adaptive, borrowing characteristics from social networking sites associated with Web 2.0. New variants of Trojans and malware will be increasingly targeted and harder to detect.
Associate Staff Writer Katherine Walsh can be reached at email@example.com. | <urn:uuid:3ea1708b-97e2-405c-ab84-311407251942> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2122222/malware-cybercrime/numbers--spam-increases-100-percent--gets--dirty--in-2007.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953138 | 326 | 2.5625 | 3 |
With support from the National Science Foundation and the University of Tennessee, Knoxville, the National Institute for Computational Science (NICS) is expanding access to Beacon, its newest HPC cluster, providing researchers with a powerful research tool. Efforts are underway to optimize a number of science and engineering applications for this system utilizing both Intel Xeon processors and Intel Xeon Phi coprocessors.
By working with researchers to optimizing scientific codes to run on the advanced Intel architecture, the NICS team has determined that Beacon can register impressive performance gains for its users. In particular, scientists and engineers investigating such complex fields as nano-electronics, astrophysics, chemistry, biochemistry, subatomic physics and applied mathematics will be able to tackle larger, more complicated problems while controlling costs.
NICS’s drive to create the high performance computer cluster was the result of a number of factors. Optimizing application performance was at the top of the list – researchers needed to be able to modernize their code, taking full advantage of any inherent parallelism in order to manage increasingly demanding big science applications.
Another challenge facing the NICS team was the need to accommodate a wide variety of users with an equally diverse set of requirements. For example, a researcher running a complex simulation might need extensive raw computer power, while a bioinformatics scientist might require large amounts of memory.
For NICS, the primary driver was to learn how to build more efficient clusters to support researchers investigating increasingly complex, computer problems without significantly increasing hardware costs, power and cooling requirements, or software development costs.
The solution, the Beacon system, is a Cray CS300-AC cluster supercomputer equipped with Intel Xeon processors and Intel Xeon Phi coprocessors. The system includes 48 compute nodes and six I/O nodes, with a total of 768 conventional cores and 11,520 accelerator cores. Compute nodes include Intel Xeon E5-2670 processors and Intel Xeon Phi 5110P coprocessors. Intel Solid-State Drives are integrated into the storage environment. To optimize code, software developers use the Intel Cluster Studio XE suite.
Building a hybrid system consisting of Intel processors and coprocessors opened up new possibilities for software development and infrastructure testing by the NICS team. According to Glenn Brook, CTO at the Joint Institute for Computational Sciences at the University of Tennessee, “…the (hybrid) environment allows us to explore a variety of programming and processing scenarios. At the same time, the environment is designed to help us examine energy efficiency, data movement, and other variables. We hope to find new ways to maximize performance, minimize energy consumption, and reduce costs.”
Intel provided Intel Cluster Studio XE software development tools to help researchers optimize codes for the new architecture. The fact that team members were already familiar with the tools streamlined the optimization work. Noted Brook, “With the Intel Software Development Tools, optimizing for the Intel Xeon Phi coprocessor is not substantially different than optimizing for the Intel Xeon processor E5 family.”
Speeding Up Performance, Reducing Costs
Working with the optimized code, the NICS researchers are realizing a number of benefits. For example, Brook reports that an optimized computational fluid dynamics (CFD) code achieves about 2.25 times the performance on an Intel Xeon Phi coprocessor as compared to running the identical code on two Intel Xeon E5-2670 processors. “Those results indicate that researchers can build clusters that use Intel Xeon Phi coprocessors to boost performance while reducing costs,” he says.
And, Brook adds, ultimately, Beacon’s enhanced price/performance allows researchers to solve larger, more complex problems while controlling costs. By using Intel Xeon Phi coprocessors, organizations can build smaller clusters with fewer nodes and achieve the same performance as much larger clusters – a savings in hardware acquisition, energy costs, and floor space.
The Beacon project also demonstrates the feasibility of performing big science on sustainable systems. The cluster’s processing power earned it a spot on the November 2012 and June 2013 Top500 list, while its reduced energy consumption allowed Beacon to take top ranking on the November 2012 Green500 list. The cluster was rated at nearly 2.5 billion floating-point operations per second (gigaFLOPS) per watt.
“We hope to expand the Beacon project, creating more of a production environment that is available for science and engineering research,” Brook says. “At the same time, we will continue to evaluate the ways the Intel MIC Architecture can help reduce energy consumption and control the demand for human resources in software development.” | <urn:uuid:99102f4f-f654-4828-b95b-ad16542c7329> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/06/16/nics-tackles-big-science-beacon/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904964 | 950 | 2.5625 | 3 |
Khan Academy videos are usually heavy on math, formulas and calculations, so it was interesting to see this video come out featuring NBA star LeBron James and asking what the probability is of James making 10 free throws in a row. If you want to find out, watch the video, or show it to students when discussing why math is useful.
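For readers who want the gist without watching: if each free throw is treated as an independent attempt with a fixed success rate, the probability of 10 straight makes is just that rate raised to the tenth power. The snippet below assumes a 75% free-throw percentage purely for illustration; it is not a figure taken from the video.

```python
p = 0.75                      # assumed per-shot free-throw percentage (illustrative)
streak = p ** 10              # independent shots: multiply the probabilities
print(f"P(10 in a row) = {streak:.3f}  (about {streak:.1%})")   # ~0.056, roughly 5.6%
```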
The easier joke/question would be, of course, "What is the probability that LeBron James will win an NBA championship?", but that probably wouldn't have flown in terms of the math. The more interesting question might have been, "What is the probability that more than 1,000 people want LeBron James to win an NBA championship?". OK, but that's probably just bitterness on my part. Also, LeBron - you're a gajillionnaire a million times over, and you use a crappy webcam or record this on a cell phone? Come on, man, upgrade to a decent camcorder!
Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
Science.gov 4.0 delves deep into the Web
- By Trudy Walsh
- Feb 16, 2007
The latest version of Science.gov, the search portal that trawls the Web for scientific information in 30 federal scientific databases and more than 1,800 Web sites, features a relevancy ranking architecture that can retrieve the full text of documents.
Launched today, Version 4.0 uses DeepRank, a relevancy ranking algorithm that returns more targeted results than previous versions.
DeepRank uses information gathered from the full text of a document to perform relevancy ranking. Earlier versions of Science.gov relied on MetaRank, which ranked results based on metadata (bibliographic information such as title, author, date or abstract), and QuickRank, which relied on the document's title and short snippets of information.
DeepRank actually downloads and indexes documents, said Walter Warnick, director of the Energy Department's Office of Scientific and Technical Information. Commercial search engines such as Google crawl the Web by attempting 'to visit each Web page they can find and make an index of that page. Science.gov does federated searching,' searching pre-identified databases. 'When the hits come back, they have to be sorted,' Warnick said. 'Otherwise patrons will be overwhelmed with hundreds of thousands of hits.'
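To make the distinction concrete, the sketch below shows, in deliberately simplified form and with invented result records, the difference between ranking federated hits on metadata fields alone and ranking them on downloaded full text. The scoring function and data are assumptions for illustration only; the real DeepRank algorithm is proprietary and far more sophisticated.

```python
def score(text, query_terms):
    """Naive relevance: count query-term occurrences, normalized by document length."""
    words = text.lower().split()
    return sum(words.count(t) for t in query_terms) / max(len(words), 1)

# Invented records, as if returned by two federated sources.
results = [
    {"title": "Solar and wind power overview",
     "abstract": "brief policy note",
     "full_text": "a short note on renewable policy options " * 3},
    {"title": "Heliospheric plasma study",
     "abstract": "satellite observations",
     "full_text": "solar wind plasma speed and solar wind density measurements " * 20},
]

query = ["solar", "wind"]
by_metadata = sorted(results, key=lambda r: score(r["title"] + " " + r["abstract"], query), reverse=True)
by_fulltext = sorted(results, key=lambda r: score(r["full_text"], query), reverse=True)

print("metadata ranking: ", [r["title"] for r in by_metadata])   # policy note first
print("full-text ranking:", [r["title"] for r in by_fulltext])   # plasma study first
```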
All three relevancy ranking algorithms'DeepRank, MetaRank and QuickRank'were developed by Deep Web Technologies of Santa Fe, N.M.
Science.gov is free and requires no registration. The portal is hosted by the Energy Department's Office of Scientific and Technical Information. Members of the Science.gov Alliance include the Agriculture, Commerce, Defense, Education, Energy, Health and Human Services and Interior departments, the Environmental Protection Agency, the Government Printing Office, NASA, and the National Science Foundation. Some support is also provided by the National Archives and Records Administration.
Trudy Walsh is a senior writer for GCN. | <urn:uuid:acad5c7d-508f-4df0-9a89-9996b9b4de47> | CC-MAIN-2017-04 | https://gcn.com/articles/2007/02/16/sciencegov-40-delves-deep-into-the-web.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00049-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.871661 | 403 | 2.5625 | 3 |
“Critical infrastructure consists of physical and information technology assets, such as the electricity distribution networks, telecommunications networks, banking systems, manufacturing and transportation systems, as well as government information systems and services that support the continued and effective functioning of government. Elements of critical infrastructure can be stand-alone or interconnected and interdependent within and across provinces, territories, and international borders. Most of Canada’s critical infrastructure is owned by the private sector or by municipal, provincial, or territorial governments, and much of it is connected to other systems.
Cyber threats to Canada’s critical infrastructure refer to the risk of an electronic attack through the Internet. Such attacks can result in the unauthorized use, interruption, or destruction of electronic information or of the electronic and physical infrastructure used to process, communicate, or store that information.
Our audit examined whether selected federal departments and agencies are working with the provinces and territories and the private sector to protect Canada’s critical infrastructure against cyber threats. This included examining leadership roles and responsibilities for securing key government information systems.” | <urn:uuid:aa3402e9-8ad4-41b9-9dec-b43ba5e60d37> | CC-MAIN-2017-04 | http://www.fedcyber.com/2012/10/23/protecting-canadian-critical-infrastructure-against-cyber-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00078-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935078 | 213 | 2.859375 | 3 |
Last week, the Internet was buzzing with news that research had been published based on data recovered from a hard disc that survived the loss of the space shuttle Columbia, which broke apart on re-entry in 2003, killing its seven crew members.
Data recovery specialist Kroll Ontrack recovered data from the mission stored on a 400 Mbyte hard disc that fell to earth. The data on the disc was the result of 370 hours of experiments that cost the US government millions of dollars.
In this podcast Cliff Saran interviews Jeff Pederson, manager of data recovery operations at Kroll Ontrack about the task of rescuing the data from the experiments.
See ComputerWeekly's series of videos on extreme computer data recovery: hard drive data recovery from bathed, burnt and broken desktops.
The first ever ransomware virus was created in 1989 by Harvard-trained evolutionary biologist Joseph L. Popp. It was called the AIDS Trojan, also known as the PC Cyborg. Popp sent 20,000 infected diskettes labeled “AIDS Information – Introductory Diskettes” to attendees of the World Health Organization’s international AIDS conference. The AIDS Trojan was “generation one” ransomware malware and relatively easy to overcome. The Trojan used simple symmetric cryptography and tools were soon available to decrypt the file names. But the AIDS Trojan set the scene for what was to come.
17 years after the first ransomware malware was distributed, another strain was released but this time it was much more invasive and difficult to remove than its predecessor. In 2006, the Archiveus Trojan was released, the first ever ransomware virus to use RSA encryption. The Archiveus Trojan encrypted everything in the MyDocuments directory and required victims to purchase items from an online pharmacy to receive the 30-digit password.
June 2006 - the GPcode, an encryption Trojan which spread via an email attachment purporting to be a job application, used a 660-bit RSA public key.
At the same time GPcode and its many variants were infecting victims, other types of ransomware circulated that did not involve encryption but simply locked users out. WinLock displayed pornographic images until the users sent a $10 premium-rate SMS to receive the unlocking code.
Two years after the initial GP Code virus was created, another variant of the same virus called GPcode.AK was unleashed on the public using a 1024-bit RSA key.
Mid 2011 - The first large-scale ransomware outbreak occurs, and ransomware moves into the big time thanks to anonymous payment services, which make it much easier for ransomware authors to collect money from their victims. There were about 30,000 new ransomware samples detected in each of the first two quarters of 2011.
July 2011 - During the third quarter of 2011, new ransomware detections doubled to 60,000.
January 2012 - The cybercrime ecosystem comes of age with Citadel, a toolkit for distributing malware and managing botnets that first surfaced in January 2012. Citadel makes it simple to produce ransomware and infect systems wholesale with pay-per-install programs allowing cybercriminals to pay a minimal fee to install their ransomware viruses on computers that are already infected by other malware. Due to the introduction of Citadel, ransomware infections surpassed 100,000 in the first quarter of 2012.
Cyber criminals begin buying crime kits like Lyposit—malware that pretends to come from a local law enforcement agency based on the computer’s regional settings, and instructs victims to use payment services in a specific country—for just a share of the profit instead of for a fixed amount.
March 2012 - Citadel and Lyposit lead to the Reveton worm, an attempt to extort money in the form of a fraudulent criminal fine. Reveton first showed up in European countries in early 2012. The exact “crime” and “law enforcement agency” are tailored to the user’s location. The threats are "pirated software" or "child pornography". The user would be locked out of the infected computer and the screen be taken over by a notice informing the user of their "crime" and instructing them that to unlock their computer they must pay the appropriate fine using a service such as Ukash, Paysafe or MoneyPak.
April 2012 - Urausy Police Ransomware Trojans are some of the most recent entries in these attacks and are responsible for Police Ransomware scams that have spread throughout North and South America since April of 2012.
July 2012 - Ransomware detections increase to more than 200,000 samples, or more than 2,000 per day.
November 2012 - Another version of Reveton was released in the wild pretending to be from the FBI’s Internet Crime Complaint Center (IC3). Like most malware, Reveton continues to evolve.
July 2013 - A version of ransomware is released targeting OSX users that runs in Safari and demands a $300 fine. This strain does not lock the computer or encrypt the files, but just opens a large number of iframes (browser windows) that the user would have to close. A version purporting to be from the Department of Homeland Security locked computers and demanded a $300 fine.
July 2013 - Svpeng: This mobile Trojan targets Android devices. It was discovered by Kaspersky in July 2013 and originally designed to steal payment card information from Russian bank customers. In early 2014, it had evolved into ransomware, locking the phones displaying a message accusing the user of accessing child pornography. By the summer of 2014, a new version was out targeting U.S. users and using a fake FBI message and requiring a $200 payment with variants being used in the UK, Switzerland, India and Russia. According to Jeremy Linden, a senior security product manager for Lookout, a San Francisco-based mobile security firm, 900,000 phones were infected in the first 30 days.
August 2013 - A version masquerading as fake security software known as Live Security Professional begins infecting systems.
September 2013 - CryptoLocker is released. CryptoLocker is the first cryptographic malware to spread both by drive-by downloads from compromised websites and by email attachments sent to business professionals that were made to look like customer complaints. It was controlled through the Gameover ZeuS botnet, which had been capturing online banking information since 2011.
CryptoLocker uses a 2048-bit RSA key pair, uploaded to a command-and-control server, to encrypt files with certain file extensions and delete the originals. It then threatens to delete the private key if payment is not received within three days. Payments could initially be made in Bitcoins or with pre-paid cash vouchers.
With some versions of CryptoLocker, if the payment wasn’t received within three days, the user was given a second opportunity to pay a much higher ransom to get their files back. Ransom prices varied over time and with the particular version being used. The earliest CryptoLocker Payments could be made by CashU, Ukash, Paysafecard, MoneyPak or Bitcoin. Prices were initially set at $100, €100, £100, two Bitcoins or other figures for various currencies.
November 2013 - The ransom changes. The going ransom was 2 Bitcoins, or about $460; if victims missed the original ransom deadline, they could pay 10 Bitcoins ($2,300) to use a service that connected to the command-and-control servers. After paying for that service, the first 1024 bytes of an encrypted file would be uploaded to the server, and the server would then search for the associated private key.
Early December 2013 - 250,000 machines infected. An analysis of four Bitcoin accounts associated with CryptoLocker found that 41,928 Bitcoins had been moved through those accounts between October 15 and December 18. Given the then-current price of $661, that would represent more than $27 million in payments received, not counting all the other payment methods.
Mid December 2013 - The first CryptoLocker copycat software emerges, Locker, charging users $150 to get the key, with money being sent to a Perfect Money or QIWI Visa Virtual Card number.
Late December 2013 - CryptoLocker 2.0 – Despite the similar name, CryptoLocker 2.0 was written using C# while the original was in C++ so it was likely done by a different programming team. Among other differences, 2.0 would only accept Bitcoins, and it would encrypt image, music and video files which the original skipped. And, while it claimed to use RSA-4096, it actually used RSA-1024. However, the infection methods were the same and the screen image very close to the original.
Also during this timeframe, CryptorBit surfaced. Unlike CryptoLocker and CryptoDefense which only targets specific file extensions, CryptorBit corrupts the first 212 or 1024 bytes of any data file it finds. It also seems to be able to bypass Group Policy settings put in place to defend against this type of ransomware infection. The cyber gang uses social engineering to get the end-user to install the ransomware using such devices as a rogue antivirus product. Then, once the files are encrypted, the user is asked to install the Tor Browser, enter their address and follow the instructions to make the ransom payment – up to $500 in Bitcoin. The software also installs cryptocoin mining software that uses the victim’s computer to mine digital coins such as Bitcoin and deposit them in the malware developer’s digital wallet.
February 2014 - CryptoDefense is released. It used Tor and Bitcoin for anonymity and 2048-bit encryption. However, because it used Windows’ built-in encryption APIs, the private key was stored in plain text on the infected computer. Despite this flaw, the hackers still managed to earn at least $34,000 in the first month, according to Symantec.
April 2014 - The cyber criminals behind CryptoDefense release an improved version called CryptoWall. While largely similar to the earlier edition, CryptoWall doesn’t store the encryption key where the user can get to it. In addition, while CryptoDefense required the user to open an infected attachment, CryptoWall uses a Java vulnerability. Malicious advertisements on domains belonging to Disney, Facebook, The Guardian newspaper and many others led people to sites that were CryptoWall infected and encrypted their drives. According to an August 27 report from Dell SecureWorks Counter Threat Unit (CTU): “CTU researchers consider CryptoWall to be the largest and most destructive ransomware threat on the Internet as of this publication, and they expect this threat to continue growing.” More than 600,000 systems were infected between mid-March and August 24, with 5.25 billion files being encrypted. 1,683 victims (0.27%) paid a total $1,101,900 in ransom. Nearly 2/3 paid $500, but the amounts ranged from $200 to $10,000.
Koler.a: Launched in April, this police ransom Trojan infected around 200,000 Android users, 3⁄4 in the US, who were searching for porn and wound up downloading the software. Since Android requires permission to install any software, it is unknown how many people actually installed it after download. Users were required to pay $100 – $300 to remove it.
May 2014 - A multi-national team composed of government agencies managed to disable the Gameover ZeuS Botnet. The U.S. Department of Justice also issued an indictment against Evgeniy Bogachev who operated the botnet from his base on the Black Sea.
iDevice users in Australia and the U.S. started seeing a lock screen on their iPhones and iPads saying that it had been locked by “Oleg Pliss” and requiring payment of $50 to $100 to unlock. It is unknown how many people were affected, but in June the Russian police arrested two people responsible and reported how they operated. This didn’t involve installing any malware, but was simply a straight up con using people’s naiveté and features built into iOS. First people were scammed into signing up for a fake video service that required entering their Apple ID. Once they had the Apple ID, the hackers would create iCloud accounts using those ID’s and use the Find My Phone feature, which includes the ability to lock a stolen phone, to lock the owners out of their own devices.
July 2014 - The original Gameover ZeuS/CryptoLocker network resurfaced no longer requiring payment using a MoneyPak key in the GUI, but instead users must to install Tor or another layered encryption browser to pay them securely and directly. This allows malware authors to skip money mules and improve their bottom line.
Cryptoblocker – July 2014 Trend Micro reported a new ransomware that doesn’t encrypt files that are larger than 100MB and will skip anything in the C:\Windows, C:\Program Files and C:\Program Files (x86) folders. It uses AES rather than RSA encryption.
On July 23, Kaspersky reported that Koler had been taken down, but didn’t say by whom.
August 2014 - Symantec reports crypto-style ransomware has seen a 700 percent-plus increase year-over-year.
SynoLocker appeared in August 2014. Unlike the others which targeted end-user devices, this one was designed for Synology network attached storage devices. And unlike most encryption ransomware, SynoLocker encrypts the files one by one. Payment was 0.6 Bitcoins and the user has to go to an address on the Tor network to unlock the files.
This strain was discovered midsummer 2014 by Fedor Sinitsyn, a security researcher for Kaspersky. Early versions only had an English-language GUI, but Russian was added later. The first infections were mainly in Russia, so the developers were likely from an Eastern European country other than Russia, because the Russian security services quickly arrest and shut down any Russians hacking others in their own country.
Late 2014 - TorrentLocker – According to iSight Partners, TorrentLocker “is a new strain of ransomware that uses components of CryptoLockerand CryptoWall but with completely different code from these other two ransomware families.” It spreads through spam and uses the Rijndael algorithm for file encryption rather than RSA-2048. Ransom is paid by purchasing Bitcoins from specific Australian Bitcoin websites.
Early 2015 - CryptoWall takes off and replaces CryptoLocker as the leading ransomware infection.
April 2015 - CrytoLocker is now being localized for Asian countries. There are attacks in Korea, Malaysia and Japan.
May 2015 - It's heeere. Criminal ransomware-as-a-service has arrived. In short, you can now go to this TOR website "for criminals by criminals", roll your own ransomware for free, and the site takes a 20% kickback of every Bitcoin ransom payment. Also in May 2015 a new strain shows up that is called Locker and has been infecting employee's workstations but sat there silently until midnight May 25, 2015 when it woke up. Locker then started to wreak havoc in a massive way.
May 2015 - New "Breaking Bad-themed ransomware" gets spotted in the wild. Apart from the Breaking Bad theme, CryptoLocker.S is pretty generic ransomware. It is surprising how fast ransom Trojans have developed. A year ago every new strain was headline news, now it's on page 3. This version grabs a wide range of data files, encrypts it using a random AES key which then is encrypted using a public key.
June 2015 - SANS InfoSec forum notes that a new version of CryptoWall 3.0 is in the wild, using resumes of young women as a social engineering lure: "resume ransomware".
June 2015 - The FBI, through their Internet Crime Complaint Center (IC3), released an alert on June 23, 2015 that between April 2014 and June 2015, the IC3 received 992 CryptoWall-related complaints, with victims reporting losses totaling over $18 million. Ransomware gives cybercriminals almost 1,500% return on their money.
July 2015 - KnowBe4 released the first version of its Ransomware Hostage Rescue Manual, a 20-page manual packed with actionable information on how to prevent infections and what to do when you are hit with malware like this. It also includes a Ransomware Attack Response Checklist and a Prevention Checklist.
July 2015 - An Eastern European cybercrime gang has started a new TorrentLocker ransomware campaign where whole websites of energy companies, government organizations and large enterprises are being scraped and rebuilt from scratch to spread ransomware using Google Drive and Yandex Disk.
July 2015 - Security researcher Fedor Sinitsyn reported on the new TeslaCrypt V2.0. This family of ransomware is relatively new, it was first detected in February 2015. It's been dubbed the "curse" of computer gamers because it targets many game-related file types.
September 2015 - An aggressive Android ransomware strain is spreading in America. Security researchers at ESET discovered the first real example of malware that is capable to reset the PIN of your phone to permanently lock you out of your own device. They called it LockerPin, and it changes the infected device's lock screen PIN code and leaves victims with a locked mobile screen, demanding a $500 ransom.
September 2015 - The criminal gangs that live off ransomware infections are targeting Small Medium Business (SMB) instead of consumers, a new Trend Micro Analysis shows. The reason SMB is being targeted is that they generally do not have the same defenses in place of large enterprise, but are able to afford a 500 to 700 dollar payment to get access to their files back.
The Miami County Communication Center’s administrative computer network system was compromised with a CryptoWall 3.0 ransomware infection which locked down their 911 emergency center. They paid a 700 dollar Bitcoin ransom to unlock their files.
October 2015 - A new ransomware strain spreads using remote desktop and terminal services attacks. The ransomware is called LowLevel04and encrypts data using RSA-2048 encryption, the ransom is double from what is the normal $500 and demands four Bitcoin. Specifically nasty is how it gets installed: brute force attacks on machines that have Remote Desktop or Terminal Services installed and have weak passwords.
October 2015 - The nation’s top law enforcement agency is warning companies that they may not be able to get their data back from cyber criminals who use Cryptolocker, Cryptowall and other malware without paying a ransom. “The ransomware is that good,” said Joseph Bonavolonta, the Assistant Special Agent in Charge of the FBI’s CYBER and Counterintelligence Program in its Boston office. “To be honest, we often advise people just to pay the ransom.”
October 2015 - Staggering CryptoWall Ransomware Damage: 325 Million Dollars. A brand new report from Cyber Threat Alliance showed the damage caused by a single criminal Eastern European cyber mafia. The CTA is an industry group with big-name members like Intel, Palo Alto Networks, Fortinet and Symantec and was created last year to warn about emerging cyber threats.
November 2015 - CryptoWall v4.0 released and displays a redesigned ransom note, new filenames, and now encrypts a file's name along with its data. In summary, the new v4.0 release now encrypts file names to make it more difficult to determine important files, and has a new HTML ransom note that is even more arrogant than the last one. It also gets delivered with the Nuclear Exploit Kit, which causes drive-by infections without the user having to click a link or open an attachment (sic).
November 2015 - A ransomware news roundup reports a new strain with a very short 24-hour deadline, researchers crack the Linux.Encoder strain, and British Parliament computers get infected with ransomware.
December 2015 - Kaspersky reports that ransomware is doubling year over year, and Symantec reports that TeslaCrypt attacks moved from 200 to 1,800 a day.
January 2016 - A stupid and damaging new ransomware strain called 7ev3n encrypts your data and demands 13 bitcoins to decrypt your files. A 13 bitcoin [almost $5,000] ransom demand is the largest we have seen to date for this type of infection, but that is only just one of the problems with this ransomware. In addition to the large ransom demand, the 7ev3n crypto-ransom malware also does a great job trashing the Windows system that it was installed on. DarkReading reports on a Big Week In Ransomware.
February 2016 - Ransomware criminals infect thousands with a weird WordPress hack. An unexpectedly large number of WordPress websites have been mysteriously compromised and are delivering the TeslaCrypt ransomware to unwitting end-users. Antivirus is not catching this yet.
February 2016 - It's Here. New Ransomware Hidden In Infected Word Files. It was only a matter of time, but some miscreant finally did it. There is a new ransomware strain somewhat amateurishly called "Locky", but this is professional grade malware. The major headache is that this flavor starts out with a Microsoft Word attachment which has malicious macros in it, making it hard to filter out. Over 400,000 workstations were infected in just a few hours, data from Palo Alto Networks shows. Behind Locky is the deadly Dridex gang, the 800-pound gorilla in the banking Trojan racket.
March 2016 - MedStar receives a massive ransomware demand. The MedStar Hospital Chain was hit with ransomware and has received a digital ransom note. A Baltimore Sun reporter has seen a copy of the cybercriminal's demands. "The deal is this: Send 3 bitcoins — $1,250 at current exchange rates — for the digital key to unlock a single infected computer, or 45 bitcoins — about $18,500 — for keys to all of them."
April 2016 - News came out about a new type of ransomware that does not encrypt files but makes the whole hard disk inaccessible. As if encrypting files and holding them hostage is not enough, cybercriminals who create and spread crypto-ransomware are now resorting to causing blue screen of death (BSoD) and putting their ransom notes at system startup—as in, even before the operating system loads. It's called Petya and clearly Russian.
April 2016 - The Ransomware That Knows Where You Live. It's happening in the UK today, and you can expect it in America tomorrow [correction- it's already happening today]. The bad guys in Eastern Europe are often using the U.K. as their beta test area, and when a scam has been debugged, they go wide in the U.S. So here is what's happening: victims get a phishing email that claims they owe a lot of money, and it has their correct street address in the email. The phishing emails tell recipients that they owe money to British businesses and charities when they do not.
April 2016 - Hello mass spear phishing, meet ransomware! Ransomware is now one of the greatest threats on the internet. Also, a new ransomware strain called CryptoHost was discovered, which claims to encrypt your data and then demands a ransom of 0.33 bitcoins (~140 USD at the current exchange rate) to get your files back. These cybercriminals took a shortcut, though: your files are not encrypted but copied into a password-protected RAR archive.
April 2016 - The Future of Ransomware: CryptoWorms? Cisco's Talos Labs researchers had a look into the future and described how ransomware would evolve. It's a nightmare. They created a sophisticated framework for next-gen ransomware that will scare the pants off you. Also, a new strain of ransomware called Jigsaw starts deleting files if you do not pay the ransom.
April 2016 - Ransomware On Pace To Be A 2016 $1 Billion Dollar Business. CNN Money reports about new estimates from the FBI show that the costs from so-called ransomware have reached an all-time high. Cyber-criminals collected $209 million in the first three months of 2016 by extorting businesses and institutions to unlock computer servers. At that rate, ransomware is on pace to be a $1 billion a year crime this year.
Late April 2016 - Scary New CryptXXX Ransomware Also Steals Your Bitcoins. Now here's a new hybrid nasty that does a multitude of nefarious things. A few months ago the 800-pound Dridex cyber gang moved into ransomware with Locky, and now their competitor Reveton follows suit and tries to muscle into the ransomware racket with an even worse criminal malware multitool. At the moment CryptXXX spreads through the Angler Exploit Kit, which infects the machine with the Bedep Trojan, which in turn drops information stealers on the machine, and now adds professional-grade encryption, appending a .crypt extension to the filename. A graph created by the folks at Proofpoint illustrates the growth of new strains in Q1 2016.
Here is a blog post that looks at the first 4 month of 2016 and describes an explosion of new strains of ransomware.
May 2016 - The Petya Ransomware comes loaded with a double-barrel ransomware attack. If the initial overwriting of the master boot record does not work, they now have an installer that offers Petya and a backup "conventional" file-encrypting ransomware called Mischa. The ProofPoint Q1-16 threat report confirms that ransomware and CEO fraud dominate in 2016. A new Version 4 of DMA Locker comes out with weapons-grade encryption algorithms, and infects machines through drive-by downloads from compromised websites. In a surprising end to TeslaCrypt, the developers shut down their ransomware and released the master decryption key.
June 2016 - CryptXXX becomes UltraCrypter and targets data stored on unmapped network shares along with local HDD volumes, removable drives, and mapped network repositories. The Jigsaw strain morphs into new branding and now uses an Anonymous skin - asks for a very high $5,000 ransom. The RAA ransomware goes after Russian victims, which is rare considering that most cyber mafia are based there. A new strain called BART (duh!) locks files by archiving them, is a Locky spinoff, and gets spread by email attachments. The hybrid Satana strain both encrypts files and replaces the Master Boot Record (MBR) as Petya/Misha does. EduCrypt demonstrates what happens when employees open infected attachments. Tripwire has a more detailed write-up here. The upshot? Everyone and their cybercrime brother has jumped on the bandwagon. | <urn:uuid:42bae09f-4cb1-472f-8beb-ec2a90aa9aab> | CC-MAIN-2017-04 | https://www.knowbe4.com/ransomware | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947038 | 5,366 | 2.953125 | 3 |
Glowworms Produce 'Underground Stars' in New Zealand
/ April 23, 2013
In New Zealand -- about 120 miles south of Auckland -- are the Waitomo caves. In these caves live so many glowworms that visitors feel more like they're looking up at a night sky than at the inside of a cave.
The caves have been around for millions of years, and the starry night vibe is there thanks to the Arachnocampa luminosa, a glowworm species only found in New Zealand, GrindTV reported.
"In their larval stage, these glowworms go fishing with long silk threads called “snares,” creating sticky candelabras, according to the news outlet. "These drippy traps reflect and diffuse the glowworm’s native light, enhancing the starry-night effect for cave-bound sky gazers." | <urn:uuid:2c2ac62d-3432-428b-a599-c3ec66bb0ec0> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Glowworms-Produce-Underground-Stars-in-New-Zealand.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00224-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894982 | 182 | 2.734375 | 3 |
Research by USA Today reveals that the US power grid suffers some kind of physical or cyber attack every four days.
Of particular interest is the high incidence of cyberattacks, although the most recent ICS-CERT report (the Cyber Emergency Response Team covering all industries utilizing industrial control systems) suggests an even higher incidence of cyber attacks.
In the 2014 ICS-CERT report, 80% of the 245 recorded incidents were related to the Energy sector.
"More often than once a week, the physical and computerized security mechanisms intended to protect Americans from widespread power outages are affected by attacks, with less severe cyberattacks happening even more often"
Other incidents highlighted by the USA Today report range from vandals shooting at transformers through to individuals attempting to climb fences and enter power grid facilities.
With NERC CIP Version 5 now approved, security best practices to provide critical infrastructure protection from cyber attacks should already be implemented and operational. This report shows that the need for NERC compliance is more urgent than ever before.
Read the full USA Today article here | <urn:uuid:28be43e6-ef31-489f-bfdc-2cf5665d9643> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/nerc-cip-compliance-power-grid-under-attack-every-four-days-or-is-it-more-frequent.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00490-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934863 | 221 | 2.53125 | 3 |
Parallel Java Programming System Launched by University
A team of researchers at the University of Illinois' Universal Parallel Computing Research Center (UPCRC) has developed a parallel implementation of Java. With an emerging need for programs that take advantage of today's multicore and parallel systems, the University of Illinois has set out to deliver a parallel version of the Java language. Indeed, the University of Illinois at Urbana-Champaign has launched a project to deliver a Deterministic Parallel Java (DPJ) implementation under funding from the National Science Foundation (NSF), Intel and Microsoft.
In a press release on the new technology, Cheri Helregel, a spokeswoman for the Universal Parallel Computing Research Center (UPCRC) at the University of Illinois at Urbana-Champaign, said the new parallel language is the first to guarantee deterministic semantics without run-time checks for general-purpose, object-oriented programs. It's also the first language to use compile-time type checking for parallel operations on arrays of references ("pointers") to objects, and the first language to use regions and effects for flexible, nested data structures.
"The broad goal of our project is to provide deterministic-by-default semantics for an object-oriented, imperative parallel language, using primarily compile-time checking. 'Deterministic' means that the program produces the same visible output for a given input, in all executions. 'By default' means that deterministic behavior is guaranteed unless the programmer explicitly requests non-determinism. This is in contrast to today's shared-memory programming models (e.g., threads and locks), which are inherently nondeterministic and can even have undetected data races."The resulting DPJ implementation is a safe and modular parallel language that helps developers port parts of sequential Java applications to run on multicore systems. It also helps developers rewrite parts of parallel Java applications to simplify debugging, testing and long-term maintenance. DPJ-ported parallel code can co-exist with ordinary Java code within the same application, so that programs can be incrementally ported to DPJ, the UPCRC said. Moreover, DPJ simplifies debugging and testing of parallel software as all potential data races are caught at compile-time, the UPCRC press release said. Because DPJ programs have obvious sequential semantics, all debugging and testing of DPJ code can happen essentially like that for sequential programs. Maintenance becomes easier as DPJ encodes the programmer's knowledge of parallel data sharing patterns in DPJ annotations-simplifying the tasks of understanding, modifying and extending parallel DPJ software. And because DPJ features the same program annotations, each function or class can be understood and parallelized in a modular fashion, without knowing internal parallelism or synchronization details of other functions or classes. The University of Illinois researchers said this is especially important because modularity is crucial for creating large-scale software applications. Yet, they say modularity is severely compromised when using any of today's mainstream shared memory programming models. Adve and his group are also working with Intel to define a similar set of extensions to C++ (DPC++), which can be used to check similar properties for existing programming models such as Cilk, OpenMP and Threading Building Blocks (TBB). For its part, the UPCRC makes a distinction between concurrent programming and parallel programming. A page on programming on the UPCRC said:
"We distinguish between concurrent programming that focuses on problems where concurrency is part of the specification (reactive code such as an operating system, user interfaces, or on-line transaction processing, etc.), and parallel programming that focuses on problems where concurrent execution is used only for improving the performance of a transformational code. The prevalence of multicore platforms does not increase the need for concurrent programming and does not make it harder; it increases the need for parallel programming. It is our contention that parallel programming is much easier than concurrent programming; in particular, it is seldom necessary to use nondeterministic code." | <urn:uuid:549a3224-0dfc-4e86-a087-6bbb75169d9a> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/Parallel-Java-Programming-System-Launched-by-University-422351 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907554 | 817 | 2.546875 | 3 |
Lormee H., Ferrand Y., Bastat C., Coreau D., and 5 more authors (ONCFS DER CNERA Avifaune Migratrice)
Ringing and Migration | Year: 2013
We document the mortality of terrestrial bird species wintering in France as a result of the February 2012 cold spell. We describe the range of species affected and how some of them reacted to the cold spell in terms of movement and variation in body mass. Mortality records concerned 1,791 individuals from 42 species. Among terrestrial birds, Northern Lapwings Vanellus vanellus, Eurasian Woodcocks Scolopax rusticola and thrush species suffered the most from the cold spell. Among casualties, 56% of birds starved to death and 8.4% were predated. Collisions with vehicles accounted for 23.7% of deaths for all species, and reached 50% for Lapwings. Locations of mortality records suggested that Lapwings and Woodcocks moved en masse towards the south and southwest of France to escape from the cold spell. Body mass of thrushes, Lapwings and Woodcocks was rapidly depleted because birds could not access food resources. On average, birds that were 30% lighter than birds weighed at the same period during normal winters had reached a lethal body mass. The results of this enquiry highlight the impact of such cold-weather events and the need, in particular for game bird species, to promote standardised enquiries on mortality when severe winter events occur. © 2013 British Trust for Ornithology.
The Department of Energy’s Office of Science website currently offers as one of its feature articles a detailed look at how advances in high-performance computing have brought the power of simulation to bear on almost every facet of the scientific landscape. Dr. Steven E. Koonin, Under Secretary for Science, examines the link between computer simulation and scientific progress, citing a variety of real-world disciplines that have been enhanced by significant, sustained progress in the computational domain.
Koonin explains how the DOE makes supercomputing resources available for both scientific and industrial simulation endeavors. Last fall, Koonin’s office held a Simulations Summit in Washington, which brought together more than 70 leaders from academia, industry, government, and national research laboratories to discuss how science and technology policies affect the nation’s ability to compete on a global playing field. Keynote speaker Secretary Chu emphasized that “the DOE strategy should be to make simulation part of everyone’s toolbox.”
The Department of Energy’s Office of Science (SC) is addressing that need by pushing the boundaries of computing and simulation to advance key science, math, and engineering challenges facing the nation. SC makes advanced supercomputers available and supports high-fidelity simulations that give scientists the power to analyze theories and validate experiments that are dangerous, expensive or impossible to conduct. Scientific simulations are used to understand everything from stellar explosion mechanisms to the quarks and gluons that make up a proton. They can tell us how blood flows through the body and how to make a more efficient combustion engine. And they can do much more.
Koonin goes on to list some of the merits of a fully supported national supercomputing strategy:
Improvements in high-performance computing benefit all computer users, not just those who use these world-class machines. Hardware innovation to drive down the energy consumption of processors and memory for exascale machines will be directly applicable to commodity electronics, making portable computers and smart phones much more powerful. Private sector consumers of high-performance computing use simulation to accelerate and reduce the cost of innovation in the design and manufacturing of their products, in applications stretching from advanced materials for engines and airplane wings to advanced chemicals for household products to the design of newer and faster consumer electronics.
More and more, scientific breakthroughs are predicated on continued, steady progress in computing. As Koonin notes, the US still leads the world in computing. Today's supercomputers are one trillion times faster than their 1950s counterparts, and more than half of the TOP500 systems originate in the US. Koonin credits the actions of the Department of Energy for much of this progress, but warns that continued government support is necessary to sustain the current trajectory: "A golden moment has presented itself to continue U.S. leadership in simulations," Koonin remarks, "but concerted action and continued DOE leadership are necessary to turn this opportunity into reality."
Space-age technology to assist farmers
- By Sara Michael
- Jun 23, 2003
The Agriculture Department and NASA have forged a partnership to use the latest Earth science and mapping technology to help farmers increase their productivity.
The USDA will use NASA's satellite monitoring, mapping and systems engineering technology to improve farmers' yields by providing detailed information about soil and crop properties and weather predictions. In turn, NASA will receive information for an initiative to study the effect of farming on the Earth.
"The idea is to bring the high-level satellite program right down to the breakfast table," said Warren Clark, an agribusiness consultant in Chicago.
Agriculture Secretary Ann Veneman and NASA Administrator Sean O'Keefe signed a memorandum of understanding last month. The agreement launched a $1 million, three-year program to assess the geographic information and remote sensing needs of the agriculture community. Through geospatial extension programs, specialists will work closely with NASA and the USDA at land grant universities to address those needs.
"The ability to look at precise areas of land on a more frequent basis is leading to a whole host of technological advances for farmers, including monitors and maps that can detect and record changes in yields, soil attributes, or crop conditions," Veneman said in a statement on the memorandum signing.
The agencies will cooperate in five areas: carbon capture and storage for carbon management; environmental models for invasive species; water cycle science for water management; weather and climate prediction; and regional, national and international atmospheric prediction for air quality management.
The agencies will also share information through databases, information systems, classes and conferences to facilitate technology development.
NASA has 18 Earth observation satellites with 80 sensors measuring geophysical parameters to understand processes such as plate tectonics and water, atmospheric, carbon and energy cycles, said Ron Birk, director of the Applications Division for the Office of Earth Science at NASA.
The two agencies have been working together for more than 30 years, using technology to study, for example, soil moisture content and crop properties. NASA's growing capabilities created this latest, more advanced partnership. In the 1970s, NASA was limited to two satellites with two sensors. Since 1999, that number has been increasing and includes information on weather, climate and natural disasters, Birk said.
"It enables us to realize our vision and mission from research and development of technology and integrate our results with decision-support tools of other agencies," Birk said.
NASA is the only organization to offer this type of research, and although commercial sources of remote sensing technology exist, they offer different capabilities, Birk said. "It's NASA's mandate to conduct research in Earth sciences, and we're unique in that way," he said. "There isn't another source."
The use of satellite technology supports the trend toward precision agriculture, improving crop management and pesticide application. Enhanced predictability means a decrease in production cost, because farmers can pinpoint where to apply pesticides, and an increase in food quality and volume, Clark said.
The challenge for the Agriculture Department will be taking three terabytes of data per day and deciding which information is relevant and how to synthesize it, Birk said.
"We're working with USDA to identify the systems that are already serving the farming community that are looking for inputs that are predictors of weather or climate," Birk said. "It's a system-oriented approach, rather than a data-oriented approach."
Allen Dedrick, an associate deputy administrator at the USDA's Agricultural Research Service, recognized the data processing as the major challenge with such an endeavor. He said researchers and specialists were hard at work for a solution. "It always has to be done rapidly," he said. "The turn-around time is crucial, and that's always a challenge."
Once the data is processed, it must reach the farmers in the field. Consultants and geospatial extension specialists will likely work directly with the farmers through courses and demonstrations, Birk said. The data may also be relayed through conferences, classes, newsletters and state agriculture magazines.
Eventually, the data will be online, but that requires advanced hardware and software. "That's certainly where it's going," Birk said. "Anyone utilizing this technology probably is going to be very well equipped."
The vast amount of data has kept precision agriculture from catching on rapidly in the field. Although the theory is sound, the application is daunting, said Jim Mock, president of CropVerifeye LLC, which focuses on solutions for food traceability. "It's not a trivial matter to take all that information and make sense of it," he said. "It would seem to be a gargantuan effort."
The Agriculture Department and NASA have partnered to share NASA's satellite monitoring and mapping technology to increase farmers' productivity. The agreement includes the following technologies:
* Monitors and maps to detect and record changes in yields, soil properties and crop conditions.
* Sensors to vary the application rate and timing for seeds, fertilizers, pesticides and irrigation water.
* Vehicle guidance systems to provide sensing for weed and pest populations and to detect crop properties, such as protein content, during harvest. | <urn:uuid:e97f2c0f-962c-44c5-99d1-749db097959d> | CC-MAIN-2017-04 | https://fcw.com/Articles/2003/06/23/Spaceage-technology-to-assist-farmers.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00480-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939438 | 1,053 | 3.078125 | 3 |
RSA Encryption with padding as described in PKCS#1v1.5 has been known to be insecure since Bleichenbacher’s CRYPTO 98 paper revealed a chosen ciphertext attack. PKCS#1 version 2.0, published September 1998, proposed a new padding scheme based on OAEP and recommended the old scheme not be used in any new implementations.
This hasn’t stopped PKCS#1v1.5 padding from being used just about everywhere. Take a look at this table of Smart Card support for example.
One reason might be that Bleichenbacher's attack was thought to be impractical: the worst-case analysis of the attack in the paper tells us that 2^20 chosen ciphertexts are needed, which gave rise to the name "the million message attack".
In fact the median case is much easier than that. And like all attacks, the attack algorithm has only got better. Building on advances by Klima, Pokorny and Rosa in 2003, our work published at CRYPTO 2012 showed that the median case for the standard oracle requires less than 15 000 messages.
Still, there seems to be widespread belief that PKCS#1v1.5 is somehow ok if you use it carefully. For example, in this debate on the W3C crypto API bugzilla, one comment suggests that because TLS still uses RSA PKCS#1v1.5 then it must be possible to make secure protocols with it.
Let’s look more carefully at how TLS “fixes” the attack. PKCS#1v1.5 encryption is used to encrypt a seed for the final session key, known as the “pre-master secret” (PMS), when it is sent from the client to the server. If the behaviour of the server decryption reveals padding errors, we can make the attack and so learn the session key.
The fix, as adopted in the TLS specification (RFC 5246), is that if a padding error occurs, we ignore it, generate a random PMS, and carry on. TLS also requires that both the client and the server demonstrate knowledge of the value of the PMS in order to create a signature that concludes the key establishment handshake. This is where the trick is effective: whether the PMS is a real one obtained from a tampered ciphertext, or a random one created after a padding error, it is unknown to the attacker, so the protocol fails in the same way. It is this "poor man's plaintext awareness" that has allowed security proofs for this mode of operation of TLS to be constructed by Jonsson and Kaliski and more recently by Krawczyk, Paterson and Wee.
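As a rough sketch of that countermeasure in Java (the class and method names here are illustrative and not taken from any real TLS stack; a production implementation must also take care over timing and memory-access side channels, which this sketch ignores):

import java.security.PrivateKey;
import java.security.SecureRandom;
import javax.crypto.Cipher;

// Illustrative sketch of the TLS-style defence: never reveal a padding failure;
// substitute a random pre-master secret instead so the handshake fails later.
final class PreMasterSecretDecryptor {
    private static final SecureRandom RNG = new SecureRandom();

    static byte[] recoverPreMasterSecret(PrivateKey serverKey, byte[] encryptedPms) {
        byte[] fallback = new byte[48];   // TLS pre-master secrets are 48 bytes
        RNG.nextBytes(fallback);          // generated unconditionally, before decrypting
        try {
            Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            rsa.init(Cipher.DECRYPT_MODE, serverKey);
            byte[] pms = rsa.doFinal(encryptedPms);
            return (pms.length == 48) ? pms : fallback;  // bad length: treat like bad padding
        } catch (Exception e) {
            return fallback;              // padding error: do NOT report it to the peer
        }
    }
}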
Just because TLS does it, doesn’t make it right
The debate on including PKCS#1v1.5 in the W3C Crypto API centred on its use as a mechanism for the Unwrap command that receives a ciphertext containing an encrypted key, decrypts it, and creates a new key on the client ready for use. If we make a “TLS-style fix” and substitute a random key in the case of a padding error, this won’t necessarily prevent the attack: if the attacker can cause the resulting key to be used to, say, encrypt a known value, he can call the command twice and see if the resulting key is the same. The attack has been slowed down but not prevented.
It might be possible to make a secure protocol, by using some kind of hash construction to demonstrate plaintext-awareness – but leaving in an explicit PKCS#1v1.5 decryption step means you’re still open to an attack by some other side channel, for example by timing if you don’t generate the random PMS in the case of a correct decryption (see this fix in ocaml-TLS) or if you screw up your buffer sizes for accepting the result (see this April 2014 security update to the Java TLS library).
Since it’s so easy to get wrong, and since we have OAEP which does plaintext awareness properly anyway, doesn’t it make sense just to put OAEP in the API and leave out PKCS#1v1.5?
Matthew Green put it nicely: “PKCS#1v1.5 is awesome — if you’re teaching a class on how to attack cryptographic protocols. In all other circumstances it sucks.”
The IETF seems to agree: from version 1.3, even TLS is dropping support for RSA.
By the way, if you’re wondering whether your crypto infrastructure is using PKCS#1v1.5 and how secure it is, that’s one of the many things we test with our Java Crypto tool and our PKCS#11 security suite. | <urn:uuid:f5d88c73-cbbc-45e2-a21d-45513e8152fc> | CC-MAIN-2017-04 | https://cryptosense.com/why-pkcs1v1-5-encryption-should-be-put-out-of-our-misery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00142-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912763 | 1,016 | 2.84375 | 3 |
Homeland Security Tests 360-Degree Video CamThe surveillance system, which uses multiple cameras to provide high-resolution images in real time, is being pilot tested at Logan International Airport.
The Department of Homeland Security has developed new surveillance-camera technology that provides a 360-degree, high-resolution view by stitching together multiple images.
The technology, developed by Homeland Security's Science and Technology Directorate, is called the Imaging System for Immersive Surveillance (ISIS) and works by creating images from multiple cameras and turning them into a single view, according to Homeland Security.
Photographers have been putting together multiple still images to provide a panoramic view of a scene for some time, but that's typically assembled after images are taken. ISIS creates high-res images from multiple camera streams in real time.
ISIS has a resolution of 100 megapixels, according to Dr. John Fortune, program manager with the Directorate's Infrastructure and Geophysical Division. Images retain their detail even as investigators zoom in for a closer look at something.
The system, which looks like a bowl-shaped light fixture with multiple holes for camera lenses, is being used in a pilot test at Boston's Logan International Airport. Airports are among the first places that Homeland Security expects to use the technology, though it would be suitable for other environments as well.
Some ISIS capabilities were adapted from technology developed by the Massachusetts Institute of Technology's Lincoln Laboratory for military applications. In collaboration with Pacific Northwest National Laboratory, Lincoln Lab built the current system using commercial cameras, computers, image processing boards, and software.
Even as the first version of ISIS is being tested, Homeland Security is working on a next-generation model with custom sensors and video boards, longer-range cameras that take images at higher resolution, and a more efficient video format.
Longer-range plans include giving the technology infrared capability for night surveillance. | <urn:uuid:df1416a4-94e8-4c1f-b204-77bd66894951> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/homeland-security-tests-360-degree-video-cam/d/d-id/1088939 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00501-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940042 | 380 | 2.59375 | 3 |
The Internet Assigned Numbers Authority (IANA) handed out the LAST of the IPv4 address space in February 2011. The Internet's ability to function is predicated upon each device having a unique Internet Protocol (IP) address and thus a new address schema, called IP version 6 or IPv6, has been implemented so that the ever-growing number of "things" on the Internet can function properly.
Background: IPv4 – IPv6, What does this mean?
TCP/IP is the technology that devices use to interact online. What allows each device to get online and communicate is that each one has an unique IP address. IP addresses enable each device to interact with each other over the Global Internet. From desktops, to laptops, to PS3s, to cell phones, to airplanes, to IP enabled washers and dryers, most things will be connected online – this means we need a lot more addresses than are available today.
At the inception of the Internet, IP version 4 (IPv4) was, and currently still is, the most widespread protocol used to communicate. By their binary nature, IP addresses are a finite resource, and the protocol's designers, Vint Cerf and Bob Kahn, established, at the time, 2^32 unique IP addresses, or roughly 4.3 billion. While 4.3 billion might seem like a vast number, the growing amount of Internet participation has exhausted this supply; in fact, it has been predicted that by 2020 there will be more than 7 Internet-enabled devices for every man, woman, and child on planet Earth. In February 2011, the keeper of the free address pool, the Internet Assigned Numbers Authority (IANA), fully exhausted and allocated all of the IPv4 addresses.
To continue the operation of the Internet, Internet Protocol version 6 (IPv6) was created. The address space created in IPv6 is vast – 2^128 or more than 170 undecillion addresses – and unlikely to be depleted in the next 50 years. Everything online must transition to include both IPv6 and IPv4 and eventually transition entirely to the new IPv6 protocol.
IPv6 was created in the mid ’90s as a result of engineering efforts to keep the Internet growing. It is an entirely new protocol that is not “backwards compatible” with IPv4. However, both protocols can run simultaneously over the same “wires”. This means that there will be a progressive transition (picking up pace from this point forward) from IPv4 to IPv6 commencing with devices that support both protocols (also known as dual stacking). Eventually, IPv4 will cease to be supported and in the end, all IPv4 only devices will no longer be able to communicate with the IPv6 enabled Internet.
Thankfully, the transition to IPv6 has been underway for a while now. For example, all US Government public-facing servers are slated to be IPv6 compatible by September of 2012, and internal US Federal systems must be IPv6 ready by 2014. Companies, starting with Internet Service Providers like AT&T and Comcast, are well underway in their conversions. Furthermore, 256 out of 306 Top Level Domains (TLDs) – like .com or .net or .nl or .biz – are already enabled for IPv6. Those in the process of transitioning to IPv6 can see how this will all work (or not) on June 8th, 2011, which is designated as World IPv6 Day. World IPv6 Day is the first global 24-hour "test drive" of IPv6.
How the Internet is “Inter” connected.
To understand how we will be affected, it is helpful to understand how the Internet is actually “inter-connected”. The Internet is literally a “web” of networks all connected to each other. From our home network that has 2 or 3 computers to Internet Service Providers to online companies like Amazon & eBay.
In the middle of this diagram, the “Internet” is a collection of all the world’s networks interconnected together so that we, an end-user, can get from point A to point B across (or “routed” across) all of these networks. In the end, this means that everyone online and everyone who wants to be online will be undergoing the upgrade to IPv6 starting with getting a new IPv6 address.
At the end of the day, the biggest and most noticeable difference between IPv4 and IPv6 is the actual IP addresses being used. IPv4 uses 32-bit addresses, normally written as four decimal numbers separated by dots (for example, 192.0.2.1).
This “address” was a part of a pool of addresses managed by IANA as described earlier. As this address pool has been depleted, all new requests for addresses will only be able to get a v6 address. IPv6 addresses are quite a bit more complex – they are 128-bit addresses:
There are many advantages to this more complex address schema in addition to the fact that now every device will have it’s own unique identifier. Ironically, the longer address will actually help to improve end-user experience online as the Internet architecture will see improvements with respect to traffic congestion, application specificity, security and much more.
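Here is a small sketch of how the two formats look to application code, using Java's standard java.net classes. The addresses come from the reserved documentation ranges (192.0.2.0/24 and 2001:db8::/32) and simply stand in for whatever addresses an ISP would actually assign.

import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;

// Parses one documentation-range address of each family and reports its size.
public class AddressFormats {
    public static void main(String[] args) throws Exception {
        InetAddress v4 = InetAddress.getByName("192.0.2.1");    // IPv4 literal: 4 bytes
        InetAddress v6 = InetAddress.getByName("2001:db8::1");  // IPv6 literal: 16 bytes

        System.out.println(v4.getHostAddress() + " -> " + (v4.getAddress().length * 8)
                + "-bit address, IPv4? " + (v4 instanceof Inet4Address));
        System.out.println(v6.getHostAddress() + " -> " + (v6.getAddress().length * 8)
                + "-bit address, IPv6? " + (v6 instanceof Inet6Address));
    }
}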
We have established that every Internet-enable device must have a unique IP address. Now what does this mean for the various constituencies accessing the Internet?
For most end-users at home, this transition will happen automatically and will be mostly unnoticeable. They will get their current and updated addresses from their ISP; businesses will have their IT departments configure their own networks so that their customers (the business) will automatically get their addresses, etc. Therefore, those most concerned about this transformation are those that actually manage portions of the Internet: Internet Service Providers, I/PaaS providers, online Content & Application Service providers, and small to Enterprise businesses that run their own networks.
As we see from the above chart, most end-users and small businesses will really only be responsible for ensuring that they have purchased IPv6-enabled devices, including computers, wireless access points, smart phones, printers and game consoles. Most devices purchased after 2007 are in fact IPv6 enabled. For example, Microsoft operating systems have been IPv6 enabled since Windows XP SP1, as have Apple operating systems of the same vintage.
The heavy lifting will be shouldered by the ISPs, I/PaaS, Content/ASPs and businesses that manage their own networks.
There are approximately 66,000 registered Autonomous Systems (AS). These “networks” are run by ISPs, I/PaaS, ASP/Content as well as government & education organizations. All of these “networks” imply a level of self administration, hence Autonomous, and will require their Network Administrators to follow this simple review:
a. Assess the network for IPv4-only devices, dual stacked devices (IPv4 & IPv6), as well as IPv6-only devices (not many of these yet); a simple dual-stack check is sketched just after this list
b. Layout an IPv6 network architecture starting with an Address Schema (which entails sub-netting)
c. Determine your "stop gap" measures for IPv4-only devices – there are many "translation" scenarios that can be employed temporarily to ease the burden of the next step – however, one should note that, much like 8-track tapes, IPv4-only devices will degrade the Internet experience and over time will cease to operate
d. Provide a rip/replace plan for those things not capable of supporting IPv6
e. Commence upgrade
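As a starting point for step (a), a quick way to see whether a given host name is already dual stacked is to ask DNS for both its A (IPv4) and AAAA (IPv6) records. The sketch below uses Java's standard resolver API; the host name is only an example, and what the lookup returns also depends on the local resolver and operating-system configuration.

import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;

// Reports whether a DNS name currently resolves to IPv4, IPv6, or both.
public class DualStackCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "www.example.com";  // example target
        boolean hasV4 = false, hasV6 = false;
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            if (addr instanceof Inet4Address) hasV4 = true;
            if (addr instanceof Inet6Address) hasV6 = true;
            System.out.println(host + " -> " + addr.getHostAddress());
        }
        System.out.println("IPv4 (A) record present:    " + hasV4);
        System.out.println("IPv6 (AAAA) record present: " + hasV6);
    }
}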
These are certainly not trivial steps in transitioning to v6, however again, these are exclusive to service providers, those directly involved in managing networks. It does not preclude end users or SMB however, from being aware of this change and ensuring their own devices are compatible.
So hopefully this section has given a snapshot of the "Internet Infrastructure Ecosystem" and how each "vertical" will be affected by this transition. Furthermore, while it is not intended to cry wolf or claim that the Internet will die, we hope it has catalyzed those involved in upgrading their own networks to commence the transition. Now the next logical question – when do you really need to do this?
IPv6 Transition: When do we really need to start?
The transition to IPv6 is well under way and has been fueled by the IANA announcement and APNIC announcement (RIPE & ARIN will be next to run out and both should be out in 2011). We will also see a massive IPv6 "World Day" on June 8th, 2011, that will test the global Internet's IPv6 readiness as well as spotlight issues. So how does this translate into when you have to get yourself, your business or your organization ready?
While everyone should be AWARE that this transition is underway, the "service providers" are really the ones behind the 8-ball right now, as it is their job to provide Internet access or access to Internet infrastructure, which has to be IPv6 capable moving forward. Given the lack of backwards compatibility, this will require some education, hardware and software upgrades, and re-thinking about how to lay out a network. This is due to the fact that the IPv4 mind-set was one of "scarce resources" (we will run out of addresses). In an IPv6 world, you have nearly unlimited resources and can plan your network addressing very differently.
ISPs, I/PaaS, ASP/Content services providers should be in the midst of transition and if they are not, now is the time. Enterprises will have to assess their own network needs but is not of immediate urgency. And, finally, SMBs and End-Users will really only have to track their own ISPs steps to upgrade to IPv6 as well as be aware of existing and future tech purchases being IPv6 ready.
The entire Internet should run more smoothly and securely thanks to IPv6
The steps those undertaking this transition will need to make are also a GREAT OPPORTUNITY to automate many rote network processes. The general steps, and where automation can play a significant role are as follows:
In subsequent articles we will be diving into Software Tools to help Service providers in this transition, what some of the emerging best practices will be in the areas of IPv6 Automation, IPv6 Security, and IPv6 as it relates to Asset Tracking.
The Asia Pacific Regional Internet Registry, 1 of the 5 regional registries that report to IANA, is also fully depleted of IPv4 resources as of April 2011.
A reference to geo-location as a part of many end-users’ application experience.
End-users who have home networks that connect more than one machine will still need to ensure that all their devices can support both IPv4 & IPv6. For example, ensure your Linksys Wireless router can support both protocols.
Wikipedia AS Write-Up: http://en.wikipedia.org/wiki/Autonomous_system_(Internet)
Good examples of IPv6 “translation” stop gap measures: http://en.wikipedia.org/wiki/IPv6#Transition_mechanisms | <urn:uuid:5212e97b-3656-4a60-b89e-6c9959666c52> | CC-MAIN-2017-04 | https://www.6connect.com/resources/ipv6-and-the-transition-from-ipv4-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00409-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941821 | 2,316 | 3.515625 | 4 |
5.2.1 What are CAPIs?
A CAPI, or cryptographic application programming interface, is an interface to a library of functions software developers can call upon for security and cryptography services. The goal of a CAPI is to make it easy for developers to integrate cryptography into applications. Separating the cryptographic routines from the software may also allow the export of software without any security services implemented. The software can later be linked by the user to the local security services. CAPIs can be targeted at different levels of abstraction, ranging from cryptographic module interfaces to authentication service interfaces. The International Cryptography Experiment (ICE) is an informally structured program for testing the U.S. government's export restrictions (see Questions 6.2.2 and 6.2.3) on CAPIs. More information about this program can be obtained by e-mail to email@example.com. Some examples of CAPIs include RSA Laboratories' Cryptoki (PKCS #11; see Question 5.3.3), NSA's Fortezza (see Question 6.2.6), Internet GSS-API [Lin93], and GCS-API [OG96]. NSA has prepared a helpful report [NSA95] that surveys some of the current CAPIs.
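The pattern all of these CAPIs share is that application code requests a cryptographic service by name through a stable interface, while a separately installed implementation does the actual work. Java's provider-based crypto architecture (JCA) follows the same pattern and makes a compact illustration; the sketch below is generic JCA code, not Cryptoki or GSS-API.

import java.security.MessageDigest;
import java.security.Provider;
import java.security.Security;

// The application names the algorithm it wants; whichever installed provider
// (software library, smart card driver, etc.) implements it supplies the result.
public class CapiStyleExample {
    public static void main(String[] args) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest("hello".getBytes("UTF-8"));

        Provider p = sha256.getProvider();   // which implementation was actually used
        System.out.println("Digest length: " + digest.length + " bytes");
        System.out.println("Supplied by provider: " + p.getName());
        System.out.println("Providers installed: " + Security.getProviders().length);
    }
}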
COBSQL is an integrated preprocessor designed to work with COBOL precompilers supplied by relational database vendors. It is intended for use with:
You should use COBSQL if you are already using either of these precompilers with an earlier version of a MERANT Micro Focus COBOL product and want to migrate your application(s) to Server Express, or if you are creating applications that will be deployed on UNIX platforms and need to access either Oracle or Sybase relational databases.
For any other type of embedded SQL application development, we recommend that you use OpenESQL.
Note: The Oracle precompiler version 1.8 does not support nested programs. COBSQL does not support Object Oriented COBOL syntax (OO COBOL). If you want to use OO COBOL, therefore, you must use OpenESQL.
You can access the SQL functions offered by the Oracle, Sybase or Informix Database Management System (DBMS) by embedding SQL statements within your COBOL program in the form:
EXEC SQL SQL statement END-EXEC
and then using the Oracle, Sybase or Informix precompiler to process the embedded SQL before passing the program to the COBOL Compiler. The database precompiler replaces embedded SQL statements with the appropriate calls to database services. Other additions are made to the source code to bind COBOL host variables to the SQL variable names known to the database system.
The advantage of embedding SQL in this way is that you do not need to know the format of individual database routine calls. The disadvantage is that the source code that you see when you animate your program is that output by the precompiler and not the original embedded SQL. You can overcome this disadvantage by using COBSQL.
COBSQL provides an integrated interface between MERANT Micro Focus COBOL and the third-party standalone precompiler, enabling you to animate a program containing EXEC SQL statements and display your original source code rather than the code produced by the precompiler.
This chapter shows you how you can use COBSQL in conjunction with either the Oracle, Sybase or Informix precompiler to compile and animate your programs.
To use COBSQL, specify the PREPROCESS"COBSQL" Compiler directive when you compile your program. All directives following it are passed from the Compiler to COBSQL. You can specify Compiler directives by using $SET statements in your program or via the cob command line.
To terminate the directives to be passed to COBSQL, you must use the ENDP COBOL directive. You can do this by making the following changes to the directives:
C"preprocess(Cobsql) csqltype=oracle end-c comp5=yes endp"
When using Server Express, END-C and ENDP have the following effect: END-C marks the end of the directives intended for COBSQL itself, so any directives that follow it are passed through to the database precompiler; ENDP marks the end of the preprocessor directive set as a whole, so any directives that follow it are processed by the COBOL Compiler in the normal way.
You specify directives to COBSQL as if they were Compiler directives, but you must put them after the directive PREPROCESS"COBSQL".
It is also possible to add the Cobsql directives to the standard Server Express directives file cobol.dir.
Alternatively, you can put COBSQL and precompiler directives in a file, cobsql.dir. This file should reside either in the current directory or in a directory specified in $COBDIR. COBSQL searches the current directory and then along the COBDIR path for a cobsql.dir file. Once COBSQL finds a cobsql.dir file, it stops searching. So, if you have a cobsql.dir file in the current directory, the COBDIR path is not searched.
COBSQL processes cobsql.dir first and then any directives specified via the cob command line.
A number of the directives can be reversed by placing NO in front of them, for example, DISPLAY can be reversed using NODISPLAY. All the directives in the lists below that can have NO placed in front of them are marked with an asterisk. By default, the NO version of a directive is set.
You can specify shortened versions of some of the directives. If applicable, the shortened version of a directive is shown in the lists below, immediately after the full length version.
Some directives can be passed to COBSQL by the COBOL Compiler (see the section COBOL Directives below), removing the need to specify common directives more than once. Directives that can be retrieved from the COBOL Compiler are processed before COBSQL directives.
For example, in the following command line:
cob -V -k testprog.pco -C"p(cobsql) csqlt==ora makesyn end-c xref==yes mode==ansi endp omf(gnt) list()"
The following is a list of the COBSQL directives:
|COBSQLTYPE||Specifies which precompiler to use (ORACLE, SYBASE or INFORMIX-NEW); for example, COBSQLTYPE=ORACLE.|
|CSTOP||Forces COBSQL to load the stop run module that performs a rollback if the application terminates abnormally.|
|Creates a debug (.deb) file.|
|Displays precompiler statistics. Should only be used when initially verifying that COBSQL is correctly calling the standalone precompiler.|
|END-C||Signals the end of COBSQL directives; remaining directives, if any, are passed to the precompiler.|
|KEEPCBL||Saves precompiled source file (.cbl).|
|MAKESYN||Converts all COMP host variables to COMP-5 host variables. The default situation, if MAKESYN is not set, is that all variables (not just host variables) are converted from COMP to COMP-5.|
|NOMAKESYN||No conversion of COMP-5 variables or host variables is carried out.|
|SQLDEBUG||Creates a number of files that can be used by MERANT to debug COBSQL. These files include the output file from the precompiler (normally this has a .cbl extension), the listing file produced by the precompiler (this has a .lis extension), plus a COBSQL debug file which has a .sdb extension. SQLDEBUG will also turn on KEEPCBL and TRACE.|
|TRACE*||Creates a trace file (.trc).|
|VERBOSE||Displays all precompiler messages and gives status updates as the program is processed. You should only use this when initially verifying that COBSQL is calling the standalone precompiler correctly.|
The following is a list of the COBOL directives:
|BELL*||Controls whether COBSQL sounds the bell when an error occurs.|
|BRIEF*||Controls whether COBSQL shows SQL error text as well as the error number.|
|CONFIRM*||Displays accepted/rejected COBSQL directives.|
|LIST*||Saves the precompiler listing file (.lis).|
|WARNING*||Determines the lowest severity of SQL errors to report.|
The complete set of methods used within COBOL to manipulate copyfiles is not available with database precompilers and COBSQL itself cannot handle included copyfiles. These problems can be overcome, however, by using the MERANT Micro Focus Copyfile Preprocessor (CP).
CP is a preprocessor that has been written to provide other preprocessors, such as COBSQL, with a mechanism for handling copyfiles. CP follows the same rules as the COBOL Compiler for handling copyfiles so any copyfile-related Compiler directives are automatically picked up and copyfiles are searched for using the COBCPY environment variable. CP will also expand the following statements:
EXEC SQL INCLUDE ... END-EXEC
Oracle uses .pco and .cob extensions, Sybase uses .pco and .cbl extensions and Informix uses .eco, .cob and .mf2 extensions.
Oracle and Sybase
For CP to resolve copyfiles and include statements correctly, use the following COBOL Compiler directives for Sybase and Oracle:
copyext (pco,cbl,cpy,cob) osext(pco)
For Informix, use:
copyext (eco,mf2,cbl,cpy,cob) osext(eco)
COBSQL can call CP to expand copyfiles before the database precompiler is invoked. This means that all the copy-related commands are already resolved so that it appears to the database precompiler that a single source file is being used.
The other advantage of using CP is that it makes copyfiles visible when animating.
When CP sees an INCLUDE SQLCA statement, it expands the SQLCA using a copyfile called sqlca.cpy, located via the COBCPY path.
Note: Using the file sqlca.cpy can result in errors when the program is run.
You can specify the CP preprocessor's SY directive to prevent CP expanding the SQLCA include file, for example:
preprocess"cobsql" preprocess"cp" sy endp
You should always use CP's SY directive when processing Sybase code because Sybase expects to expand the SQLCA itself.
As Oracle can produce code with either COMP or COMP-5 variables, it has two sets of copyfiles. The standard sqlca.cob, oraca.cob and sqlda.cob all have COMP data items. The sqlca5.cob, oraca5.cob and sqlda5.cob files have COMP-5 data items. If you are using the comp5=yes Oracle directive, you must set the COBSQL directive MAKESYN to convert the COMP items in the SQLCA to COMP-5.
If CP produces errors when attempting to locate copyfiles, check to make sure that the OSEXT and COPYEXT Compiler directives are set correctly. COPYEXT should be set first and should include as its first entry the extension used for source files (.pco or .eco, for example).
If these are set correctly, ensure that the copyfile is either in the current directory or in a directory on the COBCPY path.
When using CP in conjunction with COBSQL, SQL errors inside included copyfiles will be reported correctly. Without CP, the line counts will be wrong, and the error will either go unreported or will appear on the wrong line.
COBSQL error messages can be displayed in different languages depending on the setting of the LANG environment variable. For full details on NLS and how to set the LANG environment variable, see the chapter Internationalization Support in your Programmer's Guide to Writing Programs. For details on the LANG environment variable see the appendix Micro Focus Environment Variables in your Server Express User's Guide.
The COBSQL error message cobsql.lng has been translated into a number of different languages and can be found with the COBOL NLS message files. If there is not an error message cobsql.lng for the current setting of LANG, then the default error message file is used.
Note: COBSQL does not translate any error messages produced by the database precompilers.
The following examples show, for the Oracle Sybase and Informix precompilers, command lines that you can enter at the Server Express Command Prompt to compile a program using COBSQL.
cob -a -v -k sample.pco -C "p(cobsql) cstop cobsqltype==ORACLE"
cob -a -v -P -k example1.pco -C"p(cobsql) csp CSQLT==syb"
cob -a -k demo1.eco -C "p(cobsql) cobsqltype==informix-new"
If you experience problems using COBSQL, first of all check the following:
Forget SQL, and determine whether the client and server are communicating. For TCP/IP, check whether you can ping the server from the workstation and vice versa. If host names don't work, try raw IP addresses.
Check that the SQL networking software is "talking" correctly to the network software. Many SQL vendors supply a ping utility which will show whether the SQL network is set up correctly.
If the SQL network is working correctly, try some interactive SQL. Most vendors supply a simple utility that allows you to enter SQL from the keyboard and view the results. Most vendors also supply a sample database that is useful for this purpose.
Verify that the standalone precompiler works. There may be an icon or a command line for the precompiler. Verify that it can produce COBOL code correctly. It is normal for some sample applications to be supplied with the precompiler.
Check that a preprocessed application runs correctly. Pass the expanded program through the COBOL Compiler and then try to run it.
Try COBSQL with minimal directives. Set up a project in Server Express, place the SQLCA copyfile into the directory with the sample program (prior to running the precompiler), and see if this works.
Then, if you still experience problems, please contact MERANT Technical Support. To help Technical Support locate the cause of the problem:
If you cannot locate the source of the problem, then check each of the following:
Ensure that you are using the latest version of all the products involved.
Check the vendor's documentation and example applications.
Check that environment variables, PATH and configuration file settings are set up correctly.
By default, COBSQL does not display the command line it passes to the database precompiler. Setting the SQLDEBUG directive enables the command line to be displayed (you will need to do this if the precompiler gives command line errors). Possible causes of command line errors are that the directives to be passed to the precompiler are incorrect or that the length of the precompiler command line has been exceeded.
COBSQL may display the following error because the database precompiler has terminated unexpectedly:
* CSQL-F-021: Precompiler did not complete -- Terminating
This may be because the operating system has run out of memory attempting to execute the database precompiler.
COBSQL may display the following errors because it cannot find the precompiler's output file. This may be because the precompiler did not produce an output file. The normal reason for this is that the precompiler hit a fatal error which meant it could not create the output file.
* CSQL-E-024: Encountered an I/O on file filename
* CSQL-E-023: File Status 3 / 5
where filename is the name of the file produced by the database precompiler.
If COBSQL reports the error "Premature end of expanded source", and the precompiler runs correctly, this indicates that COBSQL has not been able to match the original source lines with the lines produced by the database precompiler.
Another possible reason for COBSQL reporting this error is that the program does not contain any SQL. Generally, if the database precompiler does not come across any SQL it will abort the creation of its output file part way through, causing this error to be displayed.
You can use Oracle Pro*COBOL 1.8 or Oracle Pro*COBOL 8.x. The following sections describe the items to consider for each of these versions.
Pro*COBOL directives to be aware of include: DBMS, HOLD_CURSOR, MAXOPENCURSORS, MODE and RELEASE_CURSOR.
The use of arrays enables an application, for example, to fetch ten rows at a time instead of one at a time. Oracle supply an example program (normally called sample3.pco) that uses an array to fetch multiple rows. Arrays are documented in the Pro*Cobol Supplement to the ORACLE Precompilers Guide.
To get the maximum information from Pro*Cobol, set the Pro*Cobol directive xref=yes. You can add this directive to the Pro*Cobol configuration file $ORACLE_HOME\PROxx\pcbcfg.cfg where:
|xx||is the Pro*COBOL version (for example, for Oracle 8.0 this is PRO80)|
|$ORACLE_HOME||is the root directory for the Oracle insallation on your machine|
Support for Pro*COBOL 8.0 has been added to COBSQL which now works correctly with the Pro*COBOL 8.0 4.0 precompiler.
To use COBSQL with Oracle 8, you should use the following directives:
|This puts calls into the Oracle 8 specific support modules ora8prot and ora8lib. Both of these modules are built into csqlsupp.dll.|
|EXEC SQL preprocessor. Use the options ORACLE8 and ORA8 to use Pro*COBOL 8.x with COBSQL.|
If you are migrating programs from Pro*COBOL 1.x to 8.x, you should be aware of the following:
Define all inserted variables as GLOBAL including the data items inserted by COBSQL that support the EBCDIC to ASCII conversions.
If the Oracle directive DECLARE_SECTION=NO is set (the default), Oracle converts all COMP, BINARY or COMP-4 data items to COMP-5.
To limit the conversion of items to the declare section, set one of the following:
These data items are treated in the same way as
PIC 9(4) COMP / COMP-4 / BINARY / COMP-5
PIC 9(4) USAGE DISPLAY
PIC s9(4) USAGE DISPLAY SIGN TRAILING
PIC s9(4) USAGE DISPLAY SIGN TRAILING SEPARATE
PIC s9(4) USAGE DISPLAY SIGN LEADING
PIC s9(4) USAGE DISPLAY SIGN LEADING SEPARATE
This type was previously supported as the Oracle
Pro*COBOL 8.x rejects some MERANT Micro Focus COBOL language extensions, data definitions and section headings:
To overcome this you need to put these items into copyfiles which are not opened by Pro*COBOL. However, this does not work if you use CP which expands copyfiles before Pro*COBOL is invoked. This could cause a problem if you are using htmlpp which calls CP to expand copyfiles. You must therefore invoke htmlpp before COBSQL.
For example, the following compile line works:
COBOL PROG P(HTMLPP) PREPROCESS(COBSQL) CSQLT=ORACLE8
whereas this line does not:
COBOL PROG PREPROCESS(COBSQL) CSQLT=ORACLE8 P(HTMLPP)
You must define at least one variable within the Working-Storage Section for Pro*COBOL 8.0.4 to add its variables to the generated .cbl file.
If the default setting for the client operating system has been configured, but Sybase still reports national language support errors, use the LANG environment variable to override the setting in the locales.dat file.
For example if the aix client was causing problems and the locales.dat file contained the following setting for AIX:
[aix] locale = C, us_english, iso_1 locale = En_US, us_english, iso_1 locale = en_US, us_english, iso_1 locale = default, us_english, iso_1
then a possible LANG setting for US English would be:
where the parameters are:
|SYB||A string which indicates to COBSQL that this is a modified Sybase error message.|
|severity||Indicates the severity of the error; some of the Sybase messages are only warnings rather than normal or fatal errors.|
|number||A unique, four digit error number assigned to the Sybase error.|
|text||The original Sybase error message.|
For example, a typical entry in the esql.loc file might be:
9 = M_PRECLINE, "Warning(s) during check of query on line %1!."
and this would be changed to read:
9 = M_PRECLINE, "SYB-W-2009 Warning(s) during check of query on line %1!."
We recommend that you make a copy of esql.loc before altering it. Using the modified version, COBSQL can detect the full range of Sybase error messages.
The location of esql.loc is dependent on the language and code page used. This is defined in the locales.dat file. If the definition of the default language for the AIX platform was as follows:
[aix]
    locale = C, us_english, iso_1
    locale = En_US, us_english, iso_1
    locale = en_US, us_english, iso_1
    locale = default, us_english, iso_1
The default language would be us_english, using the iso_1 code page, so the copy of esql.loc that is to be used is:
where sybase home is the directory that the Sybase client is installed into.
For more information on how Sybase uses and locates the different error message files, refer to your Sybase Client Reference Manual.
Copyright © 2000 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law. | <urn:uuid:316c3a68-9c47-4660-9184-893de84da468> | CC-MAIN-2017-04 | https://supportline.microfocus.com/documentation/books/sx20books/dbcsql.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00097-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.802921 | 4,622 | 2.796875 | 3 |
The truth about the recently discovered Internet-worm
Cambridge, UK, January 16, 2001 - Kaspersky Lab Int., an international data-security
software-development company, during the past few days, has received many requests
from customers regarding the numerous publications in mass media about the recently
discovered, extremely dangerous Internet-worm "Davinia."
"Davinia" spreads via e-mail using the popular MS Outlook e-mail
program. The worm uses a very sophisticated way of penetrating into a user's
computer. This process consists of two parts: firstly, an e-mail message is
delivered to a target computer, with this message containing a script program
that automatically opens an additional Internet Explorer window after a message
is read, and initiates a connection to the hacker's Web site. The virus contains
another script program that opens a Word document, located on the same site,
and this document contains a macro-virus that, unbeknownst to the user, switches
off the MS Word built-in anti-virus protection; so the user sees no warning
about macros in the opened documents. To do this, the virus exploits the "Office
2000 UA Control Vulnerability" discovered earlier in May 2000.
Following this, the worm gains access to MS Outlook, enumerates the e-mail
addresses from the local address book, and sends out an e-mail message with
a link to the Web site as described above to all recipients.
Therefore, the virus part of the worm is presented only on the remote Web site,
while target computers receive only a link to this site.
"Davinia" has a very destructive payload: it replaces all the files
located on all local hard disks with a file that shows a dialogue box when started.
"At this time, we haven't received any reports of this worm being found
'in-the-wild.' Moreover, we are quite sure that 'Davinia' poses absolutely no
threat, simply because the Web site that is used to penetrate into a user's
computer is shut down right after the worm has been discovered," said Denis
Zenkin, Head of Corporate Communications for Kaspersky Lab.
However, it is possible other modifications of the worm may appear in the very
near future, using other Web sites for their malicious purpose. Thus, we recommend
users immediately install a patch for MS Office that remedies the described
breach exploited by the "Davinia" virus. You can download the patch
for free from the Microsoft Web site here.
"However, this incident shows a very alarming trend, when virus writers
often refuse to use the commonly exploited methods of penetrating into computers
by pretending to be a very interesting and useful utility, such as the 'MTX'
or 'Navidad' worms do. Today, we see more and more malicious code exploiting
security breaches in different applications and operating systems. This makes
timely installation of security patches crucial for both home and corporate
users," added Denis Zenkin.
Protection against the "Davinia" worm already has been added to the daily update
of Kaspersky Anti-Virus (AVP).
More details about the worm are available on Kaspersky's
Kaspersky Anti-Virus (AVP) can be purchased at the Kaspersky
Lab online store. | <urn:uuid:44117149-8c88-441a-9322-bee1c2bfb359> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2001/What_s_So_Special_About_Davinia_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906668 | 710 | 2.59375 | 3 |
Strategic Insights:Heart Valve Replacement Market - India
- 1 Introduction
- 2 Current Worldwide Market
- 3 Market in India
- 3.1 Strategic Outlook
- 3.2 Overview of market
- 3.3 Regulatory landscape
- 3.4 Reimbursement landscape
- 3.5 Technology landscape
- 3.6 Distribution landscape
- 3.7 Pricing landscape
- 3.8 Key hospitals & institutions
- 3.8.1 Narayana Hrudayala Hospitals
- 3.8.2 Sri Jayadeva Institute of Cardiovascular Sciences and Research
- 3.9 Key Opinion Leaders
- 3.10 Market Drivers
- 3.11 Recent Trends
- 3.12 Recent Developments
- 4 Key Players
- 5 Recommendations
- 6 Strategy formulation
For blood to go in only one direction, forward, it must pass through the heart valves, which function as one-way doors, opening and shutting with each beat of the heart. Just as there are four chambers to the heart, there are four heart valves. Blood must pass through one of these valves each time it leaves a chamber.
The Four Heart Valves:
- Tricuspid: The tricuspid valve is named because it has three leaflets. It is located between the right atrium and right ventricle.
- Pulmonary: The pulmonary valve is named because it is located below the pulmonary artery, between the right ventricle and the pulmonary artery.
- Mitral: The mitral valve is named because it looks like an upside down bishop's hat or mitre. It is the only heart valve with two leaflets; all of the others have three. It is located between the left atrium and left ventricle.
- Aortic: The aortic valve is named because it is located below the aorta, between the left ventricle and aorta.
The two valves located between the atria and ventricles, the tricuspid and mitral valves, are known as atrioventricular valves. The two other valves, the pulmonary and aortic, are sometimes called semilunar valves, because each of those valves has leaflets that are shaped like half-moons.
Types of heart valves
When someone has to have a heart valve replaced, there are a few things that are done to determine what type of valve the patient will receive. The patient could receive one of the following valves: mechanical valves, tissue valves, homograft valves, or allograft valves. These all have their advantages and disadvantages.
Tissue Valves: A tissue valve is another class of valve, taken from an animal and put into human hearts. These kinds of valves are chemically treated for safety and are prepared for the human heart (St. Jude Medical, Inc., 2007). Since these valves are weak, they are reinforced with a frame or stent to make them stronger and to support the valve. The valves that aren't reinforced are called stentless valves (St. Jude Medical, Inc., 2007).
These types of valves aren't a good choice for younger patients because they wear out quickly. They wear out because they stretch when the demand of blood flow increases (Aortic valve replacement, 2007). Another reason they aren't used often is that when these valves wear out, the patient will have to undergo another operation to get a new valve implanted to replace the previous one. These valves last, on average, 10-15 years in less active patients such as the elderly, while in younger and more active patients they wear out a lot faster (Aortic valve replacement, 2007).
Mechanical Valves: Mechanical valves are designed to mimic a real heart valve (St. Jude Medical, Inc., 2007) and to outlast the patient (Aortic valve replacement, 2007). All versions have a ring to support the leaflets (flaps), like a natural valve, and a thin polyester mesh cuff on the circumference of the valve for easier implantation (St. Jude Medical, Inc., 2007). These valves are not controlled electronically but naturally: as the heart beats, the mechanical valve opens and closes (St. Jude Medical, Inc., 2007). These valves have been proven by stress-testing to last several hundred years (Aortic valve replacement, 2007).
Homograft Valves: The third type of valve is a homograft valve. This is a valve that is taken from a human donor (St. Jude Medical, Inc., 2007; Encyclopedia of Medicine, 2006). These donor valves are only given to patients who will deteriorate rapidly because of a narrowing of the passageway between the aorta and the left ventricle (Encyclopedia of Medicine, 2006). This type of valve is better for pregnant women and children (St. Jude Medical, Inc., 2007). Unlike most valves, this type of valve does not require long-term anticoagulation therapy (St. Jude Medical, Inc., 2007). The durability of a homograft is approximately the same as that of a tissue valve (Aortic valve replacement, 2007). These valves are sometimes, but rarely, taken from the patient's own pulmonic valve (Encyclopedia of Medicine, 2006).
Allograft Valves: The fourth type of valve is an allograft valve. These valves are usually taken from a pig's aortic valve (Encyclopedia of Medicine, 2006). They are chemically treated before they are put into a human heart. The life span of one of these valves is about 7-15 years, depending on the patient (Encyclopedia of Medicine, 2006). Because of the short life span of this valve, it is generally given to older patients (Encyclopedia of Medicine, 2006).
Disorders treated by heart valves
- Valvular Stenosis: This occurs when a valve opening is smaller than normal due to stiff or fused leaflets. The narrowed opening may make the heart work very hard to pump blood through it. This can lead to heart failure and other symptoms (see below). All four valves can be stenotic (hardened, restricting blood flow); the conditions are called tricuspid stenosis, pulmonic stenosis, mitral stenosis or aortic stenosis.
- Valvular Regurgitation: This occurs when a valve does not close tightly. If the valves do not seal, some blood will leak backwards across the valve. As the leak worsens, the heart has to work harder to make up for the leaky valve, and less blood may flow to the rest of the body. Depending on which valve is affected, the condition is called tricuspid regurgitation, pulmonary regurgitation, mitral regurgitation or aortic regurgitation.
Heart valve procedures
Procedure of Heart Valve Surgery
Heart valve surgery means repair or replacement of the diseased valves. In repair surgery, valves are mended so that they can do their work properly. Replacement means removal of the diseased valve and its replacement with a new valve. The procedures of heart valve surgery are:
- Valve Repairing : In valve repair surgery, a ring is sewn around the opening of the valve to make it tighter. The surgeons may cut other parts or may separate and shorten them to help the valve open and close properly.
- Valve Replacement : Sometimes mending the valve cannot restore the unhealthy valve, and then replacement is required to get back its normal function. A prosthetic valve is used for the replacement. There are two types of prosthetic valves.
- Mechanical valves : These types of valves are made from man-made materials. When heart surgeons use this valve, lifetime therapy with an anticoagulant is prescribed to the patient.
- Biological (tissue) valves : The surgeons take biological valves from pig, cow or human donors. The longevity of biological valves is less than that of mechanical valves.
Heart valve procedures by technique
Transcatheter Aortic Valve Implantation (TAVI)
Description This technique involves insertion of a miniaturized valve through a catheter from the groin. The deployed valve is later inflated at the site of the aortic valve. The entire procedure is conducted under general anesthesia and takes about an hour. It is a non-surgical procedure. In TAVI inner organs are accessed via needle-puncture of the skin, rather than by using a scalpel.
Procedures in India TAVI is still in a nascent trial stage in India. In mid-March, a team of doctors at Delhi's Fortis Hospital headed by Dr Ashok Seth operated on three patients using TAVI.
Cost of surgery in India The cost of the procedure is USD 29,350, which includes the cost of the valve, approximately USD 21,500.
Prevalence After the age of 75 years, 5% of the population is at risk of developing a problem in their heart valve, of which 35% are not suitable for surgery. If not treated, 50% of them will not survive for more than two years.
The Ross Procedure is a type of specialized aortic valve surgery where the patient's diseased aortic valve is replaced with his or her own pulmonary valve. The pulmonary valve is then replaced with cryopreserved cadaveric pulmonary valve. In children and young adults, or older particularly active patients, this procedure offers several advantages over traditional aortic valve replacement with manufactured prostheses.
Source:University of Southern California
- 1,500 Ross procedures are performed annually on a global basis. In US, this number is around 1,000.
Current Worldwide Market
- Aortic Valve segment represents 55% of the overall market. However, with 35-50% of patients suffering from severe aortic stenosis considered at high risk for surgery, the current number of patients eligible for TAVI procedures is 200,000 worldwide.
- The TAVI segment thus represents a $2B market opportunity. According to various sources, this market size will be reached in 2014.
- The Brazilian, Russian, Indian, and Chinese (BRIC) heart valve device market—comprising sales of heart valve replacement (mechanical, tissue, and transcatheter aortic valve replacement [TAVR]) and heart valve repair (annuloplasty) devices—was valued at nearly $180 million in 2011 and will expand through 2016, driven primarily by rising heart valve procedure volumes.
- Rapid economic growth and an aging population, which are increasing both the prevalence of valvular heart disease and patients’ ability to pay for treatment, constitute the primary drivers of growth in the BRIC heart valve device market.
- The patient population will also expand as government funding for health care infrastructure improves the accessibility and affordability of the procedures for patients across all BRIC nations.
- Rising penetration of tissue heart valves will contribute further to market growth due to the premium price of these devices compared to mechanical heart valves.
|Apr 2012||St. Jude Medical||Japan||The new Trifecta aortic stented, pericardial tissue valve has been implanted in procedures at Osaka University Hospital and Saitama Medical University International Medical Center||Medcity News|
|Nov 2011||Edward Lifesciences||US||The Sapien Transcatheter Heart Valve will provide some people with this condition who can’t undergo open heart surgery with the option of valve replacement||Wallstreet Journal|
|July 2011||Sorin Group||Europe||Mitroflow Aortic Pericardial Heart Valve||Sorin|
|Jan 2011||Sorin Group||Europe||Innovative Self-Anchoring Aortic Heart Valve, Perceval™ S||Sorin|
|May 2010||CryoLife||US||Cryovalve SG Pulmonary Human Heart Valve (and Conduit)||FDA|
Market in India
Overview of market
- The Indian market for heart valves was about 30,000 valves a year, and a sizeable portion of that is being met by the TTK-Chitra valves.
Source:Senior Executive at TTK Chitra Hindu Article
- Indian government is working on a comprehensive regulatory framework for the medical device sector because it has lacked a formal regulatory system for many years. Medical devices are currently either regulated as drugs or simply left unregulated.
- In June 2009, the Drug Consultative Committee (DCC) and the Drug Technical Advisory Board (DTAB) approved new formal regulations for India's medical devices sector. The Health Ministry is set to issue the notification of these new regulations in the near future.
According to the final draft of the newly proposed regulations, all medical devices have been broadly classified into the following categories:
- Class A Devices: Low risk devices that include gloves and operating room utensils;
- Class B Devices: Low to medium risk devices such as needles, surgical knives, and syringes;
- Class C Devices: Moderate to high risk devices such as radiation equipment and heart–lung machines
- Class D Devices: Very high risk and life supporting devices such as implantable pacemakers and defibrillators.
Heart Valves notified as “drugs”: As per the notice dated 16/May/2005, the Ministry of Health and Family Welfare, Govt. of India has notified heart valve devices to be considered as drugs under Section 3, Clause (b), Sub-clause (iv) of the Drugs and Cosmetics Act, notification number s.o.1468 (E). CDSCO: Medicines in India are regulated by CDSCO (Central Drugs Standard Control Organization), under the Ministry of Health and Family Welfare and headed by the Directorate General of Health Services. CDSCO regulates pharmaceutical products through the DCGI (Drugs Controller General of India).
|Registration Certificates issued for the Heart Valves along with their manufacturing sites and Indian Authorized agents in since 2010|
|Date||Name of Indian Agent||Name of Manufacturer||Name of the Device||File No.||R. C. No.||Validity of the Registration Certificate|
|Jan. 2012 to Feb 2012||M/s. St. Jude Medical India Private Limited, Plot No. 18 & 19 Laxminagar, behind TB Hospital, Hyderabad-500038||M/s. St. Jude Medical Puerto Rico LLC, Lot 20-B, St. Cagaus Puerto Rico 00725||1. St. Jude Medical Mechanical Heart Valve 2.SJM Master Series (Rotatable)-Aortic + 9||31-28-MD/2006-DC (Re-Reg. 2_||MD-28||30-06-2015|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400094||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic ATS Medical Inc., 3905 Annapolis Lane, Suite 105 Minneapolis, MN – 5547, USA||2. Open Pivot Aortic Valved Graft (AVG)||31-892-MD/2010-DC||MD-893||31-12-2014|
|Jan. 2011 to 20th December 2011||M/s Edward Lifesciences (India) Pvt. Ltd., E.F. 201-204, Remi Biz Court, Plot No. 9, Off Veera Desai Road, Andheri West, Mumbai- 400058||M/s Edward Lifesciences LLC, One Edwards Way, Irvine CA, USA 92614-5686||1. Carpentier-Edwards Bioprosthetic Valved Conduit||31-93-MD/2006-DC (Re-Registration 2010) (End. 1)||MD- 93||31-01-2013|
|Jan. 2011 to 20th December 2011||M/s Edward Lifesciences (India) Pvt. Ltd., E.F. 201-204, Remi Biz Court, Plot No. 9, Off Veera Desai Road, Andheri West, Mumbai- 400059||M/s Edward Lifesciences LLC, One Edwards Way, Irvine CA, USA 92614-5687||2. Edwards MC Tricuspid Annuloplasty System||31-93-MD/2006-DC (Re-Registration 2010) (End. 1)||MD- 94||31-01-2014|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400093||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||1. Sprinter rapid Exchange Balloon Dilatation Catheter||31-381-MD/2007-DC (Re-Reg. 2010)||MD-381||14-02-2014|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400094||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||2. Sprinter legent RX Balloon Dilatation Catheter||31-381-MD/2007-DC (Re-Reg. 2010)||MD-382||14-02-2015|
|Jan. 2011 to 20th December 2011||M/s India Medtronic Pvt. Ltd., 1241, Solitaire Corporate Park, Building Number 12, 4th Floor, Anheri-Ghatkopar Link Road, Andheri (E), Mumbai- 400095||M/s Medtronic Inc., 710 Medtronic Parkway N. E. Minneapolis MN 55432 USA having manufacturing premises at M/s Medtronic Mexico S. de R.L de C.V. Avenida paseo del Cucapah 10510, Parque Industrial EI Lago, Tijuana, B.C. 22570< Mexico||3. Melody Transcatheter Pulmonary Valve||31-381-MD/2007-DC (Re-Reg. 2010)||MD-383||14-02-2016|
|Jan. 2011 to 20th December 2011||M/s. St. Jude Medical India Private Limited, A & B, 2nd Floor, Brij Tarang, Greenland, Begumpet, Hyderabad-500016||M/s. St. Jude Medical Cardiology Division Inc, DBA 177 County Rod, B East St. Paul, MN 55117, USA||Trifecta Valve Aortic (19mm-27mm)||31-26-MD/2006-DC (Re-Reg. 2009 (End 02)||MD-26||30-06-2012|
- Aarogyasri in Andhra Pradesh State
- Jeevandayi Yojana in Maharashtra State
- Kalignar's Insurance Scheme
- It is a flagship scheme of all health initiatives of the State Government with a mission to provide quality healthcare to the poor. The aim of the Government is to achieve "Health for All" in Andhra Pradesh state.
- In 2007, the Andhra Pradesh government launched ’Aarogyasri’, a community health insurance scheme for the poor (Under this scheme, the hospitals received a fixed amount for valve replacement operations).
- The Maharashtra state government provides financial assistance to people falling in the below poverty line (BPL) category for treatment of various diseases.
- Under this scheme, doctors perform major operations within an upper limit of Rs 1.5 lakh.
Kalignar's Insurance Scheme
- Tamil Nadu launched the “Kalaignar’s Insurance Scheme for Life Saving Treatments” for families with an annual income less than Rs. 72,000.
- Each family enjoys benefits of up to Rs. 1 lakh for certain procedures in private hospitals and pay wards in government hospitals.
- Private insurance company Star Insurance, contracted to implement the scheme, has entered into contracts with a number of private health care centres and hospitals throughout the State. There will be a minimum of six hospitals in each district and 15 hospitals in the major cities. The government will pay the premium of Rs. 500 per annum. A total of Rs. 517.30 crore is the allotment for the current financial year.
In India, the reimbursement rate for procedures varies from one insurance company to another. Typically, Heart Valve Replacement procedure falls under major illness category. The amount of reimbursement is typically dependent on the sum assured.
- United India Assurance - Pre and Post Hospitalisation expenses payable in respect of any illness shall be the actual expenses incurred subject to a maximum of 10% of the Sum Insured whichever is less. For major illnesses, the expenses are settled on a co-pay of 80:20 ratio. The co-pay of 20% will be charged as a total package applicable on the admissible claim amount.
- Life Insurance Corporation - For major cardiovascular surgical procedures like Valve replacement surgery, open heart surgery for vale repair and heart by-pass surgery, up to 100% of sum assured could be claimed.
- ICICI Lombard Insurance - For critical illnesses like coronary artery bypass graft surgery and heart valve replacement surgery, the insured is entitled to a lump-sum benefit of 100% of the sum insured. The insured sum may vary between $12,000 and $24,000.
- The reimbursement for medical devices like valve, stent, pacemaker etc. are evaluated on a case-to-case basis.
An artificial heart valve is a device implanted in the heart of a patient with heart valvular disease. Natural heart valves become dysfunctional for a variety of pathological causes. When one of the four heart valves malfunctions, the medical choice may be to replace the natural valve with an artificial valve.
There are two main types of artificial heart valves:
- Mechanical valves - prosthetics designed to replicate the function of the natural valves of the human heart.
- Biological valves - valves of animals, like pigs, which undergo several chemical procedures in order to make them suitable for implantation in the human heart.
One of the greatest biomedical engineering challenges today is to develop an implantable device that resists the natural conditions to which heart valves are subjected, without eliciting host reactions that would impair their function. Currently, no artificial heart valve device, either mechanical or tissue-derived, fulfills the required prerequisites for an ideal heart valve.
Regenerative medicine approaches to heart valve replacement:
Regenerative medicine is based on principle of using the patient’s own cells and extracellular matrix components to restore or replace tissues and organs that have failed. Modern approaches to heart valve regenerative medicine include several research methodologies with the most intensely researched approaches being:
- the use of decellularized tissues as scaffolds for in situ regeneration
- construction of tissue equivalents in the laboratory before implantation, and
- use of scaffolds preseeded with stem cells.
The regenerative medicine approach is however still in its nascent stages.
Future and perspectives:
Effective treatment of valvular disease continues to present multiple challenges. The exciting lines of investigation in this area are:
- Finding causes and developing nonsurgical therapy approaches for valvular disease
- Improvement of current artificial devices
- Regenerative medicine approaches
- This technique involves replacing diseased aortic and mitral valves with the patient's own pulmonary valve, with valves collected from cadavers replacing the pulmonary valve.
Arkalgud Sampath Kumar- Biography
Percutaneous Transcatheter Aortic Valve Implantation (TAVI)
- In TAVI, a replacement valve is passed through a hole in the groin by a puncture of the femoral artery and advanced up to the ascending aorta of the patient. It substitutes for a more invasive procedure in which the chest is opened. The survival is equivalent, but the risk of stroke is higher.
Dr. Ashok Seth, Fortis Healthcare, Delhi
Transcatheter Pulmonary Valve (TPV) Therapy
- Transcatheter pulmonary valve therapy or Percutaneous pulmonary valve implantation (PPVI) treats narrowed or leaking pulmonary valve conduits without open-heart surgery.
- With transcatheter pulmonary valve therapy, a catheter (a thin, hollow tube) holding an artificial heart valve is inserted into a vein in the leg and guided up to the heart. The heart valve is attached to a wire frame that expands with the help of balloons to deliver the valve. Once the new heart valve is in position, it begins to work immediately.
Value Chain- Heart Valves
Sales Force Structure
Distributors & Stockists
- The distribution channel generally consists of the company, the distributor and the hospital, with doctors being the key influencers regarding which valves should be procured. Patients are rarely aware of brands and generally go by the doctor's choice, in conjunction with their paying capacity and the needs of the surgery.
- Big private hospital chains sometimes bypass the distributor and deal directly with the manufacturing company. This is discussed in detail in the pricing section.
- Distributors are mostly concentrated in metropolitan cities like New Delhi, Chennai, Mumbai, Kolkata, Bangalore and Hyderabad, and also cater to nearby regions.
- Commercial activities are done by the stockist, while the company's sales force provides technical support to doctors, addresses all the issues faced in order to streamline the process, and takes feedback at every level in the supply chain.
Distribution channel at government hospitals:
- The hospital floats a tender on the basis of its requirements. The stockists quote prices for heart valves for six months or a year. Generally the authorized stockist gets a discount on the product from the company and is hence able to bag deals easily.
- Pricing: A rate contract is signed with the stockist and all the procedures are charged as per the “rate” mentioned in the contract. Procurement is done on a demand basis.
- Exceptions: If the doctor feels that he needs a heart valve which has new technology and is still not on the "rate contract" list, he can issue a local purchase for it; hence a new entrant with a unique offering can make waves.
- Hospitals keep an inventory of different valves from different manufacturers. The number of heart valves from each manufacturer is actually determined by the surgeons.
Procurement of heart valve at the time of surgery:
- When a valve is recommended for a patient, the patient has to go to the procurement dept. to get their valve. The procurement dept. then informs the vendor, and only after this does the vendor raise the invoice. This is unique because even if the valve is sitting in the hospital, the invoice is raised only when it is ready to be implanted into the patient. This is the same for every vendor.
- For insurance-related patients, the patients have to go to the credit cell of the hospital, and when the credit cell gives the green signal (after the formalities with the insurance vendor and patient), the procurement dept. asks the vendor to raise the invoice.
- Commission varies from company to company, but generally the distributor takes 20-25% and the hospital takes 20-25%.
- Doctors take about 10% and the perfusionist takes about 2%; however, the cut taken by doctors and perfusionists varies from nil to a few percent, in monetary or non-monetary terms, and this generally happens under the table.
The major cost of the surgery includes cost of the
- Medical devices
Medical device The heart valve typically costs from INR 22,000 ($420) to INR 200,000 ($3,800). The most cost-effective valve is manufactured by TTK; the valve is called TTK Chitra. Depending upon the type of valve (mechanical, tissue, percutaneous etc.), the price could go up to more than INR 10 lakhs ($18,800).
Procedure The hospitals generally charge a fixed amount of money for the whole procedure. In Tier 1 cities it is INR 2 lakhs ($3,800) in most of the hospitals.
The ratio of the cost of the device to the cost of the procedure is generally 30% to 70%.
Each hospital in the chain has a pricing committee which decides the price of valve based on factors such as :
- Handling charges
- Benefit to the patients
Hence price of same valve may be different in a hospital of same chain
- Some hospitals (generally a chain) have a central committee which directly negotiates with the company on the purchasing price of the Heart Valves
- Purchasing committee has a lot of bargaining power as they deal in huge volumes (Example – Fortis group)
Key hospitals & institutions
Narayana Hrudayala Hospitals
Narayana Hrudayalaya was founded by one of India's oldest construction companies, the “Shankar Narayana Construction Company”. The Narayana Hrudayalaya group currently has 5,000 beds in India and aims to have 30,000 beds in India in the next 5 years, to become one of the largest healthcare players in the country.
Narayana Hrudayalaya - Highlights
- The largest cancer hospital in the country at the Bangalore campus - 1,400-bed cancer and multispecialty hospital.
- Largest number of Pediatric Heart Surgeries in the world.
- Largest number of Heart Valve Replacements in the world for the year 2007.
- Over 32 heart surgeries performed in a day.
- World leader in endovascular interventions for aneurysm of aorta.
- First hospital in Asia to implant a 3rd generation artificial heart.
- Working on a mission to do a heart operation for US$800 from point of admission to point of discharge in next 3 years.
NH Institute of Cardiac Sciences, Bangalore
Narayana Hrudayalaya is located close to the Electronics City of Bangalore, covering 26 acres of land, with a building to accommodate 1000 beds, 26 operation theaters and infrastructure to perform 70 heart surgeries a day. Within the first 5 years of commissioning this institution, 25 heart surgeries are currently done on a daily basis; about 30% of them are on children with heart problems, and the rest are adult open-heart surgeries.
The institute is one of the world's largest pediatric heart hospitals. It is the brainchild of renowned cardiac surgeon Dr. Devi Shetty, who has performed over 15,000 heart operations.
Heart Valve Procedures
The Ross Procedure
The Ross Procedure, also known as pulmonary valve translocation, was developed by Donald Ross in 1967. This operation uses the patient's own pulmonary valve and part of the main pulmonary artery as a unit to replace the aortic valve and ascending aorta. A homograft valve harvested from a cadaver is then placed in the pulmonary position. The pulmonary valve is identical in shape and size to, and in fact stronger than, the aortic valve and is therefore an ideal replacement for the diseased aortic valve. Narayana Hrudayalaya has a fully fledged, functioning homograft heart valve bank for the benefit of needy patients. The surgeons of Narayana Hrudayalaya have extensive experience in successful valve replacements using homografts and Ross operations. These operations are done in only a very few centres in the country. Surgeons at Narayana Hrudayalaya have performed about 100 of these procedures with excellent results. They are perhaps among the most experienced surgeons in the world in performing operations like the Bentall Procedure for aortic aneurysm and aortic arch replacement surgery for dissecting aneurysm of the aorta.
Mitral Valve Repair in New Born Babies and Infants
Mitral valve leakage is a dreadful condition affecting a small percentage of children suffering from congenital heart disease. The only option for these children is repair of the valve, which is done on a regular basis at Narayana Hrudayalaya.
Ross's Procedure for Aortic Stenosis
The best treatment option for aortic stenosis is Ross's Procedure, in which the patient's own pulmonary valve is used to replace the aortic valve, and in the place of the pulmonary valve a homograft taken from a cadaver is placed.
Narayana Hrudayala uses economies of scale to keep the cost of treatment low.
- The procedure cost is around INR 110,000 for a fully paid heart surgery
- Narayana Hrudayala also offers free treatment for the few who cannot afford the procedure
- It has tie-ups with health foundations and offers them a discounted price of INR 60,000 to 70,000
Unlike other hospitals, the bulk of its profits come from the out-patients ward, where the cost to the patient is low but the margins are as high as 80 percent. The number of walk-in patients remains high because they know the cost of surgery will be subsidised should they need it.
Dr. Avery Mathew
Designation: Senior Consultant Cardiac Surgeon
Brief Profile: He has done his M.Ch (Cardiothoracic) at Kasturba Medical College, Mangalore. His forte lies in Aortic Aneurysms Surgery, besides Coronary Artery and Valve Surgery.
Dr. Binoy C, MCh
Designation: Consultant Cardiac Surgeon
Brief Profile: Dr Binoy completed his training in cardiac surgery at the prestigious Seth G.S Medical College and King Edward Memorial Hospital at Mumbai and The Royal Prince Alfred Hospital at Sydney, Australia. His fields of interest and expertise include Total Arterial Coronary Revascularization procedures using bilateral Internal Mammary Arteries, aortic surgeries and Pulmonary Thrombo Endarterectomy. He also leads the Extra Corporeal Membrane Oxygenation (ECMO) programme in the hospital.
Dr. Chinnaswamy Reddy H M, DNB(Gen. Sur.), DNB(CTS), FPCS
Designation: Senior Consultant Cardiac Surgeon
Brief Profile: He has done his M.Ch (Cardiothoracic and Vascular Surgery) at the Jayadeva Institute of Cardiology, Bangalore University. He specializes in the Bex-Nikaidoh operation, the REV operation, the Double-switch Ross operation and the latest Cone Reconstruction of the Tricuspid Valve in Ebstein's Anomaly.
Sri Jayadeva Institute of Cardiovascular Sciences and Research
Sri Jayadeva Institute of Cardiovascular Sciences & Research is a Government-owned autonomous institute offering super-specialty treatment to all cardiac patients. It has a 600-bed strength with state-of-the-art equipment in the form of 4 cath labs, 4 operation theaters, non-invasive laboratories and 24-hour ICU facilities. Presently, on average, 800-1000 patients visit this hospital every day and annually 21,500 in-patients are treated. About 2500 open heart surgeries, 8500 coronary angiograms and 3500 procedures including angioplasties and valvuloplasties are done in this hospital. The prevalence of heart attack, which was 2% in 1960, has increased to 12% in 2008. Unfortunately, heart attack and other related heart ailments are steadily increasing among poor people. 70% of the patients who come to the hospital are well below the poverty line. The consumables used for various procedures like open heart surgeries (valve replacement), angioplasty procedures and pacemaker procedures are becoming very expensive; however, quality treatment is given at affordable cost. Well-equipped special ward facilities with round-the-clock angioplasty services are also provided.
- URL:Hospital Website
- Location: Bangalore
Heart Valve Replacement Procedures
Cost of Valve Replacement Procedure (MVR / AVR / DVR ) INR Rupees
- Any additional Devices / Implants/ Drugs used shall be charged extra.
- Wherever the procedure rates are not listed in the CGHS website, SJICR Category rates shall be applicable.
- Deluxe Ward Charges – Rs.2500/day
- Special Ward Charges – Rs. 975/day
- Procedure/Investigation charges vary for CGHS, ESI, Yeshasvini and other boards. please contact SJIC for more details.
- SJIC shall have sole discretionary powers to modify tariff without notice.
Dr. C.N. Manjunath
M.B.B.S, M.D (Gen.Medicine), D.M (Cardiology)
Director and Prof. & HOD of Cardiology
|Degree||College||University||Year of passing|
|M.B.B.S||M.M.C||Mysore||1982|
|M.D. (Gen. Medicine)||B.M.C||Bangalore||1985|
|D.M. (Cardiology)||K.M.C||Mangalore||1988|
Marital Status : Married
Nationality : Indian
Designation : Professor & Head of Cardiology;Director
Sri Jayadeva Institute of Cardiovascular
Sciences & Research, 9th Block Jayanagar
Bannerghatta Road, Bangalore – 560069.
Phone: 080-22977422, 22977433,
Direct -080 – 22977456 fax: 26534477
Cell Phone: 9844006699
Residence: 26692155, 26697558
Key Opinion Leaders
The surgeons are the decision makers regarding the kind and make of the valve. In almost all hospitals, surgeons recommend the type and brand of the device. The Dolcera team performed a research exercise that involved interviewing doctors at top hospitals. During this exercise we found that the following factors are taken into account while taking a decision on selecting a heart valve:
- Indication of patient
- Paying capacity
- Quality of valve(durability)
- Supply/ Availability
- Need for anti-coagulation
- Haemodialysis dynamics
The Dolcera team found following insights from the discussions with surgeons:
- Patients are not aware of the brands available in the market.
- Sometimes patients ask for a foreign valve only.
- Patients usually have information about tissue or mechanical valves and want to know which one was used in the procedure and why.
Here is a list of few of the key influencers in the industry:
Please click on the names to get biographies of Physicians
- Dr. Vivek Jawali
- Dr. Ashok Seth
- Dr. Naresh Trehan
- Dr. Ajay Kaul
- Dr. Surendra Nath Khanna
- Dr. Z. S. Meharwal
- Dr. Sunil K Kaushal
- Dr. Y. K. Mishra
- Dr. Sanjay Gupta
- Dr. Vijay Dikshit
Rising middle-class and ageing population
India has a population close to 1.1 billion people, making it the second most populated country behind China, and 5% of them are over 65 years of age. And unlike China, India does not impose restrictions such as a 'one-child' policy upon its citizens. Over the next couple of decades, India is expected to surpass China as the world's most populous country. During the forecast period to 2015, India is expected to reach 1.3 billion in total population. And as the ageing population grows, the demand for healthcare services and products will also rise. The most important driver for India, however, is the rising middle-class population that will exceed 450 million by 2015. Although most of the population cannot afford premium healthcare, there are 100 million middle-class people with an annual income of over $5,000 who demand quality healthcare. While $5,000 may be a small amount in comparison to international standards, in terms of purchasing power parity (PPP), Indian citizens within this income bracket can enjoy premium health services on a par with people in developed nations.
Medical tourism has been gaining more attention resulting in an increased influx of foreign patients into India over the past seven to eight years. About 50% of specialized urban hospitals are actively focusing on tapping medical tourists to grow their business and gain international recognition.
Cost effectiveness against developed countries(Medical Tourism)
India is fast becoming a popular destinations for procedures like heart valve replacement surgeries primarily due to:
- Cost savings ranging from 70-80%
- Presence of highly educated, skilled and experienced surgeons to the same degree as United States.
- The patient may remain in hospital for a prolonged recovery period after the surgical procedure. A hospitalized recovery allows one to heal faster than if he/she were discharged to recover at home as is the practice in the United States.
The following table provides a snapshot of the comparative cost (in USD) for major heart procedures across 6 countries:
|Procedure||Country (cost in USD)|
|Heart Valve Replacement||18,000||21,500||11,500||15,500||12,000||170,000|
Transcatheter Aortic Valve Implantation (TAVI)
Description:This technique involves insertion of a miniaturized valve through a catheter from the groin. The deployed valve is later inflated at the site of the aortic valve
Procedures in India: TAVI is still in a nascent trial stage in India. In mid-March 2012, a team of doctors at Delhi's Fortis Hospital headed by Dr Ashok Seth operated on three patients using TAVI.
Cost of surgery in India: The cost of the procedure is USD 29,350, which includes the cost of the valve, approximately USD 21,500.
Regulatory affairs: Sources at the health ministry revealed that a dialogue is on between Drug Controller General of India and the manufacturing company. Cost is said to be the bone of contention.
TAVI Procedure – Insights from Doctors in India
- Dr Ashok Seth one of the most reputed cardiologist of India is Chairman, Cardiac Sciences, Fortis Escorts Heart Institute
"The valve is known to last up to 15 years but its efficacy for the Indian population is still being assessed.“
- Dr Vivek Gupta a senior interventional cardiologist in Apollo Hospital, Delhi
“Mass availability of the valve is also an issue, as there is only one company manufacturing it”
- Prof RK Saran, Head of Lari Cardiology at Chhatrapati Shahuji Maharaj Medical University
“The disease of AVS is on the rise in Indian population affecting close to 1 million elders every year.”
- March,2012 - Medical devices: Budget unveils moves to drive growth
- March,2012 - New technique offers alternative to vulnerable heart patients
- March,2012 - India’s First Successful Percutaneous TAVI Performed at Fortis Escorts
- November,2011 - India Medtronic Launches Pulmonary Valve Replacement Therapy for Congenital Heart Disease Patients
- March,2012 - Edwards Sapien Safely Replaces Aortic Valves at Two Years
- November,2011 - Colibri Heart Valve Will Present at Upcoming 23rd Annual Transcatheter Cardiovascular Therapeutics Scientific Symposium
- November,2011 - Less Invasive Heart Valve Replacement Is Approved
- March,2011 - Medtronic Announces Global Launch of New Heart Valve Repair Ring Designed to Adapt to Heart’s Natural Valve
- Medtronic has been present in India since 1979. It is headquartered in Mumbai, with offices all over the country and a total headcount of 318 employees spread across the country.
- Medtronic has sales offices in New Delhi, Kolkata, Bangaluru, Hyderabad, Chennai, Vadodara and Cochin.
- Medtronic is a leader in biological valves today.
- Medtronic has adapted well to the Indian market as far as price is concerned.
- Medtronic recently discontinued its blockbuster Hall model, and as a result sales have suffered drastically.
St Jude Medical
- St. Jude Medical company’s Indian operations have grown since it launched a wholly owned subsidiary in the local market six years ago.
- St. Jude Medical currently is headquartered and also has distribution center in Hyderabad
- Sales offices in New Delhi, Kolkata and Mumbai — three of the four major metros in India.
- Warehouses located in Delhi, Mumbai, Bangalore, Ahmedabad, Kolkata and Chennai.
- St Jude is today the undisputed leader of the mechanical heart valve market in revenue terms.
- St Jude has adapted well to the Indian market as far as price is concerned. St Jude started with high prices but then compromised on prices and sales went up very sharply; now they are the market leaders. That's because patients in emerging markets often contend with cost-related accessibility issues, which St Jude has addressed well.
Edwards Life sciences
- Edwards has been present in India for a long time, but its presence has not been significant when compared to St. Jude or Medtronic.
- The company is headquartered in Mumbai and has a sales force of 30 employees.
- Edwards products are good but are priced very high compared to the competition.
- Edwards is concentrating on percutaneous valves costing $30,000 in the US (about INR 15 lakhs in India), which have very few takers.
- Edwards has no presence in minimally invasive products (a huge market); it now has Port Access as well, but has not introduced these products in India.
- TTK Healthcare's most significant contribution to healthcare is the manufacture and distribution of India's first indigenous heart valve prosthesis - the tilting-disc TTK Chitra Heart Valve.
- This is the only Indian-made heart valve and is the most price-friendly in the world. So far, over 50,000 TTK Chitra Heart Valves have been successfully implanted in patients.
- TTK Chitra Heart Valves has been tested in various International Laboratories and the findings were published in leading journals. The results indicate that the performance of TTK Chitra Heart Valves is comparable with any other valves available in the market.
- Manufacturing facility is located at Kazhakottom in Trivandrum India
- TTK Chitra Heart Valves are being used in over 250 major cardiac centers in the country with a total of over 55,000 implants.
- The Chitra TTK valves are the leader in terms of the number of valves sold. In the year 2011-12, 10,425 Chitra TTK mechanical valves were sold (Source: annual report).
- Sorin & ATS have also been trying to venture into the Indian market but do not have any significant presence in India as of now. Recently they have been focusing on India by poaching the sales forces of existing players.
|Comparative Analysis||Medtronic||St. Jude||Edwards||TTK|
|Product||Product line is good, but untimely product recalls have caused losses||Huge portfolio expanded very rapidly||High end products serves only niche segments||Single product (Best Seller)|
|Pricing||Adapted to Indian market competitively priced||Adapted to Indian market competitively priced||Priced very high as compared to peers||Unique selling proposition. Priced lowest in world deal in huge volumes|
|Marketing||Specialized in introducing new technology & key opinion leader management||Focus on aggressively reaching out to a large set of surgeons as they deal low margins & high volumes||Not much focus||Tie up with Insurance schemes has benefited in a huge way|
|Training||Conduct training||Not much focus||Conducts training||Not much focus|
Institutional challenges in India (a special case)
India strategic outlook - Phase wise approach
- By targeting key hospital chains and a few key opinion leaders in selected cities, most of the Indian market can be covered, as all the procedures are concentrated at a few sites in exceptionally large volumes. A brief snapshot of India is given below.
Note:Fortis Healthcare has acquired Wockhardt Hospitals
Find hotspots - Attractive customer segments
- The customers can be segmented in India as follows. The local segment with global quality aspirations is growing at a very fast pace concentrated in most of the metropolitans.
- Bottom of the pyramid: TTK
- Local segment: St. Jude
- Local segment (global aspirations): Medtronic
- Global outlook: Medtronic, Edwards
Key opinion leader management
The Head of Department, who is generally the most experienced surgeon, would initially test a new heart valve with a few surgeries; depending upon his experience, he would then recommend the use of the new heart valve more generally. The surgeons need to be convinced about the safety and durability of the valve through training and through the various online illustrations and manuals that talk in detail about the procedure. Generally they are interested in a relative comparison with competitors. Also, the approval of a new heart valve is given by the respective head of dept. at the state / government owned institutions. The valves on the approved list are allowed to bid for the tender.
The hospital chains, especially, have key doctors that drive most of the business, and these doctors have a big team of cardiac surgeons under them. For example, Dr Vivek Jawali at Fortis Bangalore has under him a team of 17 cardiac surgeons which performs almost 25 procedures daily (not all heart valves). His opinion is regarded very highly in the industry. In private hospitals the procurement and pricing is generally handled by a separate committee, hence the doctors on that committee are also very crucial.
- Training conducted in the form of videos, literature and workshops is highly appreciated and well received as suggested by surgeons.
- The module of the current training and interactive programs is a concern for the doctors. They feel it is more marketing oriented and less knowledge oriented. In its present form there is no value add; surgeons thus avoid them.
- Surgeons want to be technological partners; having them in the whole technological framework will prove fruitful
Sales & distribution plan
- Focus on selected cities: As the most of the procedures are carried out in capital city of the state or some prominent town .Mostly the traffic is routed from other cities to the capital city due to availability of proper infrastructure in big cities only. The sales person can be recruited state wise so that they can cater to both Tier 1 &Tier 2 cities with focus initially on Tier 1 city. For example in state of Karnataka has 3500- 5000 procedures are conducted every year out of which 90% of demand is from Bangalore, the capital of the state. Hence focusing on key institutions in metropolitans would cover most of the market.
- The sales person should be allocated to a particular state with major focus on key accounts & key opinion leaders in the capital city and reach out to surgeons based on the priority list. The doctors should be classified in three categories as mentioned below. The top most should have the highest priority.
- Key opinion leaders: The most important surgeons; 4 visits a month should be conducted. These doctors are instrumental in bringing new technologies into vogue. They should be partnered with in various activities such as awareness programs, radio shows and new product launches. Their insight would help a lot in framing strategy for the company. Conducting special training programs can be of great help to both the surgeon and the firm.
- Heads of Departments & other key decision makers: They can drive immediate business, hence the need to follow up rigorously 3-4 times a month.
- Other surgeons: These are not decision makers, but as they also conduct a lot of procedures and are part of the team, they too require follow-ups twice a month.
- For Government organizations: These are tough to handle, as the approval channel is complex, there are more gatekeepers, and the organizations are highly bureaucratic in nature.
- Paucity of statistical data: There is no company like MAT & OMR that keeps records of patients/prescriptions as in the pharmaceutical industry; hence no statistics are available to ascertain the exact number of patients who do not undergo heart valve surgery. Interviewing experts in the field can therefore help in finding the unmet needs and the exact potential of the market.
- Underserved market: According to our survey, doctors/surgeons say that almost 35-40% of patients do not get the surgery done. The reasons are scarcity of funds, while a few at a later age also do not get operated on because they avoid taking the risk and do not find much utility in getting operated on.
- OT Register: The exact demand and log can be found in the operation theatre (OT) register, which contains details such as the name of the patient, the size of heart valve used, the company name, the model number and other details. These details are generally not accessible; however, some information can be fetched by having a good rapport with hospital staff. Building relationships with surgeons is the key to success in heart valve sales.
- Following steps should be taken at various fronts: | <urn:uuid:071b5d98-1e33-46aa-8859-aa87250ddb15> | CC-MAIN-2017-04 | http://dolcera.com/wiki/index.php?title=Strategic_Insights:Heart_Valve_Replacement_Market_-_India | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913108 | 11,251 | 3.015625 | 3 |
Grazed from ARN. Author: Mike Gee.
Firstly, there was just the Cloud; now we have to deal with the hybrid Cloud. So what on earth is it?...
There are variable definitions of hybrid cloud available, however, the major two seem to be:
1. A hybrid cloud is a composition of at least one private cloud and at least one public Cloud.
A hybrid cloud is typically offered in one of two ways: a vendor has a private cloud and forms a partnership with a public Cloud provider, or a public Cloud provider forms a partnership with a vendor that provides private Cloud platforms.
2. A hybrid Cloud is a Cloud computing environment in which an organization provides and manages some resources in-house and has others provided externally.
For example, an organisation might use a public Cloud service, such as Amazon Simple Storage Service (Amazon S3) for archived data but continue to maintain in-house storage for operational customer data.
Ideally, the hybrid approach allows a business to take advantage of the scalability and cost-effectiveness that a public Cloud computing environment offers without exposing mission-critical applications and data to third-party vulnerabilities. This type of hybrid Cloud is also referred to as hybrid IT. | <urn:uuid:9b367a97-ffd1-4916-9797-a37ab643853d> | CC-MAIN-2017-04 | http://www.cloudcow.com/content/what-hell-hybrid-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00363-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944077 | 252 | 2.640625 | 3 |
Joined: 13 Dec 2005 Posts: 154 Location: The Netherlands
The best way to explain the COND code is to take a look at the procedure IGYWCL, i.e. the procedure that is used to compile and link-edit your COBOL programs. If you look into the procedure, you will see that it contains two steps (the compile step, COBOL, and the link-edit step, LKED).
Now take a look at the COND statement that is coded on the LKED step.
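In outline it looks something like this (a sketch only: the exact return-code threshold and program name in your installation's IGYWCL may differ):

//LKED  EXEC PGM=IEWL,COND=(4,LT,COBOL)

COND=(4,LT,COBOL) reads as: bypass the LKED step if 4 is less than the return code of the step named COBOL.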
Suppose your COBOL step ends with an RC of 8 (say you have made some mistake in the syntax of the COBOL program).
Now RC.COBOL = 8, so
the check takes place like this:
Is 4 (hard-coded) less than 8 (RC.COBOL)?
the answer is YES 4 is less than 8 so the cond is satisfied , which will make the lked step to flush , only compile step would be run and the link edit step would skipped. If u refer cond parameter coded in IGYWCL proc. i am sure that u can master COND statement and play with it | <urn:uuid:00238927-356c-4ab4-9ead-78d25f6f6803> | CC-MAIN-2017-04 | http://ibmmainframes.com/about17432.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924909 | 214 | 2.671875 | 3 |
The world of IT continually creates new technologies and malicious software develops along with it. As we become more reliant on devices such as mobile phones, consoles and tablets, it is just a matter of time before hackers begin to focus on these technologies and develop tools that allow them to snoop on and steal confidential information, including critical information you might share with someone daily.
Spyware for mobile phones is actually quite common. So far these tools have been used for a few years by individuals, often in a relationship, to track the other person’s activity.
An enterprising hacker though could take this a lot further. Your phone’s microphone could be used to record your telephone conversations. Does that sound farfetched? It’s not. The FBI has already utilized software to do this. This was confirmed by a US judge who approved the practice in 2006.
Just think how valuable a conversation between two prominent individuals would be to an attacker. Even a basic conversation between a client and the bank could be enough for an attacker to obtain personal data that could be used for identity theft. Boardroom discussions can provide valuable insider information to competitors, or used for insider trading on the stock exchange. The opportunities for a focused and tech-savvy group of fraudsters are enormous. So is the risk to end users.
There are a few limitations at present that make these attacks a little less appealing to hackers. The malware would need to be widespread for a viable yield – and that would mean analyzing thousands of hours of audio every day to mine nuggets of valuable information. The volume of information transferred would also be huge.
These limitations could be mitigated by using good speech recognition technology. However, this technology, while being reasonably accurate, still cannot identify every single spoken word, even when a user is speaking clearly into a microphone. Accurately analyzing a conversation from a mobile phone sitting in one’s pocket, while possible, would be considerably more difficult because of all the noise distortion.
In recent years, the popularity of consoles has exploded and you can find one in almost every home. Today, consoles are used for much more than just gaming. They are, for example, increasingly used as media centers. Technologies like Kinect also increase the amount of hardware attached to these devices. Gaining access to the devices attached to the consoles, such as microphones and webcams, is certainly possible. An attacker with time on his/her hands can take pictures or record conversations, especially if there is an opportunity to blackmail the console's owner.
That said, this is not very likely to occur any time soon. Consoles are quite well protected, with inbuilt measures to prevent unsigned software from running on the system. However, as consoles evolve to incorporate new technologies such as email, vulnerabilities will appear that could be exploited by malware creators.
Tablets have similar exploitable points as mobile devices. Tablets are becoming the 'toy' of choice in business and more and more employees are taking their devices to the office. Now, tablets connected to a network can pose as much of a threat as a laptop or networked PC does. Tablets are used in board meetings, for corporate email and online banking, and they have confidential work-related data stored on them.
Although to date they have not proven to be a prime target for spyware creators, it does not mean it won’t happen. With millions of tablets sold monthly, that is one big ‘market’ for hackers to tap into! There is more potential in tablets for hackers than other devices because they are used more frequently for online transactions than, say, phones or consoles. People also tend to install a lot more software on their tablets than they do on their phone or console at home. The more software you install, the greater the risk of some form of spyware finding its way onto your device.
The same security and safety tips we have been talking about for over a decade still apply. Don’t install software from sources you don’t trust, and avoid falling for social engineering attacks that attempt to gain information or install spyware on your device. When it comes to mobile phones and tablets, however, there is another important safety precaution to take note of: never leave them unattended. It is very easy for someone to install a spyware package once they have access to your device – so don’t let it out of your sight; and if you protect your mobile device with a password, always beware of your surroundings to ensure you don’t fall victim to shoulder surfing attacks.
Be prudent; a bit of healthy paranoia always helps. The more mobile phones, tablets and consoles become part of our lives and our daily activities, the greater the chance that hackers will develop ways to try and gain access to them. Protect them as you would protect your PC.
Excerpt from Chapter 6 of Authentication.
Copyright © 2002 Addison-Wesley.
This paper was also published as an article in CSI’s Computer Security Journal, Summer 2002 (see Note 14).
Strong Password Policies
(I cheat and make all my computer accounts use the same password.)
– Donald A. Norman, The Design of Everyday Things
Since passwords were introduced in the 1960s, the notion of a “good” password has evolved in response to attacks against them. At first, there were no rules about passwords except that they should be remembered and kept secret. As attacks increased in sophistication, so did the rules for choosing good passwords. Each new rule had its justification and, when seen in context, each one made sense. People rarely had trouble with any particular rule: the problem was with their combined effect.
The opening quotation illustrates one well-known assumption about proper password usage: it’s “cheating” to use the same password for more than one thing. This is because passwords may be intercepted or guessed. If people routinely use a single password for everything, then attackers reap a huge benefit by intercepting a single password. So, our first rule for choosing passwords might be:
1. Each password you choose must be new and different.
An early and important source of password rules was the Department of Defense (DOD) Password Management Guideline (see Note 1). Published in 1985, the Guideline codified the state of the practice for passwords at that time. In addition to various technical recommendations for password implementation and management, the Guideline provided recommendations for how individuals should select and handle passwords. In particular, these recommendations yielded the following password rule:
2. Passwords must be memorized. If a password is written down, it must be locked up.
Password selection rules in the DOD Guideline were based on a simple rationale: attackers can find a password by trying all the possibilities. The DOD’s specific guidelines were formulated to prevent a successful attack based on systematic, trial-and-error guessing. The Guideline presented a simple model of a guessing attack that established parameters for password length and duration. This yielded two more password rules:
3. Passwords must be at least six characters long, and probably longer, depending on the size of the password’s character set.
4. Passwords must be replaced periodically.
The DOD Guideline included a worked example based on the goal of reducing the risk of a guessed password to one chance in a million over a one-year period. This produced the recommendation to change passwords at least once a year. Passwords must be nine characters long if they only consist of single-case letters, and may be only eight characters long if they also contain digits. Shorter passwords would increase the risk of guessing to more than one in a million, but that still provided good security for most applications. The DOD Guideline didn't actually mandate eight-character passwords or the one-in-a-million level of risk; these decisions were left to the individual sites and systems.
In fact, the chances of guessing were significantly greater than one in a million, even with eight- and nine-character passwords. This is because people tend to choose words for passwords; after all, they are told to choose a word, not a secret numeric code or some other arbitrary value. And there are indeed a finite number of words that people tend to choose. Dictionary attacks exploit this tendency. By the late 1980s, dictionary attacks caused so much worry that another password rule evolved:
5. Passwords must contain a mixture of letters (both upper- and lowercase), digits, and punctuation characters.
Now that we have these five rules in place, consider their combined effect. The evolving rules, and the corresponding increases in password complexity, have now left the users behind. None but the most compulsive can comply with such rules week after week, month after month. Ultimately, we can summarize classical password selection rules as follows:
The password must be impossible to remember and never written down.
The point isn’t that these rules are wrong. Every one of these rules has its proper role, but the rules must be applied in the light of practical human behavior and peoples’ motivations. Most people use computers because they help perform practical business tasks or provide entertainment. There’s nothing productive or entertaining about memorizing obscure passwords.
Passwords and Usability
Traditional password systems contain many design features intended to make trial-and-error attacks as hard as possible. Unfortunately, these features also make password systems hard to use. In fact, they violate most of the accepted usability standards for computer systems. Of the eight “Golden Rules” suggested by Ben Shneiderman for user interface design, password interactions break six of them (see Table 1). People can’t take shortcuts: the system won’t match the first few letters typed and fill in the rest. Most systems only report success or failure: they don’t say how close the password guess was, or even distinguish between a mistyped user name and a mistyped password. Many systems keep track of incorrect guesses and take some irreversible action (like locking the person’s account) if too many bad guesses take place. To complete the challenge, people rarely have a chance to see the password they type: they can’t detect repeated letters or accidental misspellings.
|Golden Rules of User Interface Design (see Note 2)||True for Passwords?|
|1. Strive for consistency||YES|
|2. Frequent users can use shortcuts||NO|
|3. Provide informative feedback||NO|
|4. Dialogs should yield closure||YES|
|5. Prevent errors and provide simple error handling||NO|
|6. Easy reversal of any action||NO|
|7. Put the user in charge||NO|
|8. Reduce short-term memory load||NO|
To appreciate another truly fundamental problem with passwords, consider what happens when changing a password. Imagine that a user named Tim needs to change his password, and he wishes to follow all of the rules. While it’s possible that he might have a particular password in mind to use the next time the occasion arises, many (perhaps most) people don’t think about passwords until they actually need to choose one. For example, Windows NT can force its users to immediately change a password during the logon process, usually because the existing password has become “too old.” If Tim hasn’t thought of another good password ahead of time, he must think of one, fix it permanently in his mind, and type it in twice without ever seeing it written.
This presents a significant mental challenge, especially if Tim tries to follow the classic password selection rules. He has to remember and apply the rules about length, reuse, and content. Then he must remember the password he chose. This is made especially hard since the system won’t display the password he chose: Tim must memorize it without the extra help of seeing its visual representation.
Human short-term memory can, on average, remember between five and nine things of a particular kind: letters, digits, words, or other well-recognized categories. The DOD Guideline spoke of eight- or nine-character passwords, which lie on the optimistic end of peoples’ ability to memorize. Moreover, Tim’s short-term memory will retain this new password for perhaps only a half minute, so he must immediately work at memorizing it. Studies show that if Tim is interrupted before he fully memorizes the password, then it will fall out of his working memory and be lost. If Tim was in a hurry when the system demanded a new password, he must sacrifice either the concentration he had on his critical task or the recollection of his new password. Or, he can violate a rule and write the password down on a piece of paper (see Note 3).
Passwords were originally words because it’s much easier for people to remember words than arbitrary strings of characters. Tim might not remember the password “rgbmrhuea,” but he can easily remember the same letters when they spell out “hamburger.” Tim more easily remembers a word as his password because it represents a single item in his memory. If Tim chooses an equally long sequence of arbitrary characters to be his password, he must mentally transform that sequence into a single item for him to remember. This is hard for people to do reliably. While there are techniques for improving one’s memory, they are difficult to learn and require constant practice to retain. Strong passwords simply aren’t practical if they require specialized training to use correctly. Later in this chapter we examine a few simple and practical memory techniques for producing memorable passwords. The techniques do not necessarily provide the strongest possible secrets, but they are within the reach of most peoples’ abilities (see Note 4).
Dictionary Attacks and Password Strength
Note to purists: This section doesn’t really appear in Chapter 6 of Authentication. It was added to explain the notion of the average attack space and provide enough context to fully appreciate the weakness of passwords. The material in this section came from Chapters 2 and 3.
In general, strong authentication techniques require a person to prove ownership of a hard-to-guess secret to the target computer. Traditionally, a user would transmit the password during the login operation, and the computer would verify that the password matched its internal records. More sophisticated systems require a cryptographic transformation that the user can only perform successfully if in possession of the appropriate secret data. Traditional challenge response authentication systems use symmetrically shared secrets for this, while systems based on public key cryptography will use the transform to verify that the user possesses the appropriate private key. In all cases, successful authentication depends on the user’s possession of a particular piece of secret information. In this discussion, that secret information is called the base secret.
A simple way to compare different authentication techniques is to look at the number of trial-and-error attempts they impose on an attacker. For example, an attacker faced with a four-digit combination lock has 10 times as hard of a job as one faced with a three-digit lock. In order to compare how well these locks resist trial-and-error attacks and to compare their strength against the strength of others, we can estimate the number of guesses, on average, the attacker must make to find the base secret. We call this metric the average attack space.
Many experts like to perform such comparisons by computing the length of time required, on average, to guess the base secret’s value. The problem with such estimates is that they are perishable. As time goes on, computers get faster, guessing rates increase, and the time to guess a base secret will decrease. The average attack space leaves out the time factor, allowing a comparison of the underlying mechanisms instead of comparing the computing hardware used in attacks.
Each item counted in an average attack space represents a single operation with a finite, somewhat predictable duration, like hashing a single password or performing a single attempt to log on. When we look for significant safety margins, like factors of thousands, millions, or more, we can ignore the time difference between two fixed operations like that.
If all possible values of a base secret are equally likely to occur, then a trial-and-error attack must, on average, try half of those possible values. Thus, an average attack space reflects the need to search half of the possible base secrets, not all of them.
In practice, people’s password choices are often biased in some way. If so, the average attack space should reflect the set of passwords people are likely to choose from. In the case of a four-digit luggage lock, we might want to represent the number of choices that reflect days of the year, since people find it easy to remember significant personal dates, and dates are easily encoded in four digits. This reduces the number of four-digit combinations an attacker must try from 10,000 to 366.
When we try to measure the number of likely combinations, we should also take into account the likelihood that people chose one of those combinations to use on their luggage. The average attack space, then, doesn’t estimate how many guesses it might take to guess a particular password or other secret. Instead, it estimates the likelihood that we can guess some base secret, if we pick it randomly from the user community.
Biases in password selection are the basis of dictionary attacks, and practical estimates of password strength must take dictionary attacks into account. In the classic dictionary attack, the attacker has intercepted some information that was derived cryptographically from the victim’s password. This may be a hashed version of the password that was stored in the host computer’s user database (i.e. /etc/passwd on classic Unix systems or the SAM database on Windows NT systems) or it may be a set of encrypted responses produced by a challenge response authentication protocol. The attacker reproduces the computation that should have produced the intercepted information, using successive words from the dictionary as candidates. If one of the candidates produces a matching result, the corresponding candidate matches the user’s password closely enough to be used to masquerade as that user. This whole process occurs off-line with respect to the user and computing system being targeted, so the potential victims can’t easily detect that the attack is taking place. Moreover, the speed of the search is limited primarily by the computing power being used and the size of the dictionary. In some cases, an attacker can precompile a dictionary of hashed passwords and use this dictionary to search user databases for passwords; while this approach is much more efficient, it can’t be applied in every situation.
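As a minimal sketch of that off-line process, the fragment below hashes candidate words and compares each result with an intercepted hash. The tiny word list, the unsalted MD5 hashing and the sample values are all simplifications chosen for illustration; real systems use other hashing schemes, and real cracking dictionaries run to millions of entries:

```python
import hashlib

def crack(intercepted_hash, dictionary):
    """Off-line dictionary attack: hash each candidate and compare."""
    for candidate in dictionary:
        if hashlib.md5(candidate.encode()).hexdigest() == intercepted_hash:
            return candidate      # a match is close enough to masquerade as the user
    return None                   # the password was not in the dictionary

wordlist = ["secret", "letmein", "hamburger", "password"]
stolen = hashlib.md5(b"hamburger").hexdigest()   # stand-in for an intercepted hash
print(crack(stolen, wordlist))                   # -> "hamburger"
```

Note that nothing in the loop touches the victim's system, which is why the targets cannot easily detect that the search is taking place.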
We can compute an estimate of password strength by looking at the practical properties of off-line dictionary attacks. In particular, we look at dictionary sizes and at statistics regarding the success rates of dictionary attacks. In this case, the success rate would reflect the number of passwords subjected to the dictionary attack and the number that were actually cracked that way. The 1988 Internet Worm provides us with an early, well-documented password cracking incident.
The Internet Worm tried to crack passwords by working through a whole series of word lists. First, it built a customized dictionary of words containing the user name, the person’s name (both taken from the Unix password file), and five permutations of them. If those failed, it used an internal dictionary of 432 common, Internet-oriented jargon words. If those failed, it used the Unix on-line dictionary of 24,474 words. The worm also checked for the “null” password. Some sites reported as many as 50% of their passwords were successfully cracked using this strategy (see Note 5).
Adding these all up, the worm searched a password space of 24,914 passwords. To compute the average attack space, we use the password space as the divisor, and we use the likelihood of finding a password from the space as the dividend. We use the constant value two to reflect the goal of searching until we find a password with a 50-50 chance, and we scale that by the 50% likelihood that the password being attacked does in fact appear in the dictionary. This yields the following computation:
24,914 / (2 x 0.5) = 24,914, or 2^15 average attack space
Since the most significant off-line trial-and-error attacks today are directed against cryptographic systems, and such systems measure sizes in terms of powers of two (or bits), we will represent average attack spaces as powers of two. When assessing average attack spaces, keep in mind that today's computing technology can easily perform an off-line trial-and-error attack involving 2^40 attempts. The successful attack on the Data Encryption Standard (DES) by Deep Crack (see Note 6) involved 2^54 attempts, on average, to attack its 56-bit key (we lose one bit when we take the property of complementation into account).
We can also use the average attack space to compute how long a successful attack might take, on average. If we know the guess rate (guesses per second) we simply divide the average attack space by the guess rate to find the average attack time. For example, if a Pentium P100 is able to perform 65,000 guesses per second, then the P100 can perform the Internet Worm’s dictionary attack in a half-second, on average.
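The arithmetic is simple enough to capture in a few lines. This sketch merely restates the formula and the Internet Worm figures from the text, using the P100 guess rate quoted above (the function names are mine, not standard terminology from any library):

```python
def average_attack_space(candidates, hit_rate):
    """Half the search space, scaled by the likelihood that the
    base secret actually lies in that space."""
    return candidates / (2 * hit_rate)

def average_attack_time(space, guesses_per_second):
    return space / guesses_per_second

worm_space = average_attack_space(24_914, 0.5)    # 24,914 guesses, roughly 2**15
print(worm_space)
print(average_attack_time(worm_space, 65_000))    # ~0.4 s on a Pentium P100
```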
The Worm’s 50% likelihood figure plays an important role in computing the average attack space: while users are not forced to choose passwords from dictionaries, they are statistically likely to do so. However, the 50% estimate is based solely on anecdotal evidence from the Internet Worm incident. We can develop a more convincing statistic by looking at other measurements of successful password cracking. The first truly comprehensive study of this was performed in 1990 by Daniel V. Klein (see Note 7).
To perform his study, Klein collected encrypted password files from numerous Unix systems, courtesy of friends and colleagues in the United States and the United Kingdom. This collection yielded approximately 15,000 different user account entries, each with its own password. Klein then constructed a set of password dictionaries and a set of mechanisms to systematically permute the dictionary into likely variations. To test his tool, Klein started by looking for “Joe accounts,” that is, accounts in which the user name was used as its password, and quickly cracked 368 passwords (2.7% of the collection).
Klein’s word selection strategies produced a basic dictionary of over 60,000 items. The list included names of people, places, fictional references, mythical references, specialized terms, biblical terms, words from Shakespeare, Yiddish, mnemonics, and so on. After applying strategies to permute the words in typical ways (capitalization, obvious substitutions, and transpositions) he produced a password space containing over 3.3 million possibilities (see Note 8). After systematically searching this space, Klein managed to crack 24.2% of all passwords in the collection of accounts. This yields the following average attack space:
3,300,000 / (2 x .242) = 2^23 average attack space
Klein’s results suggest that the reported Internet Worm experience underestimates the average attack space of Unix passwords by about 28. Still, a 223 attack space is not a serious impediment to a reasonably well-equipped attacker, especially when attacking an encrypted password file. The guess rate of a Pentium P100 can search that average attack space in less than two minutes.
The likelihood statistic tells us an important story because it shows how often people pick easy-to-crack passwords. Table 2 summarizes the results of several instances in which someone subjected a collection of passwords to a dictionary attack or other systematic search. Spafford's study at Purdue took place from 1991 to 1992, and produced a variety of statistics regarding people's password choices. Of particular interest here, the study tested the passwords against a few dictionaries and simple word lists, and found 20% of the passwords in those lists. Spafford also detected "Joe accounts" 3.9% of the time, a higher rate than Klein found (see Note 9).
|Report||When||Passwords Searched||Percentage Found|
|Internet Worm (note 5)||1988||thousands||~50%|
|Study by Klein (note 7)||1990||15,000||24.2%|
|Study by Spafford (note 9)||1992||13,787||20%|
|CERT Incident IN-98-03 (note 10)||1998||186,126||25.6%|
|Study by Yan et al. (note 11)||2000||195||35%|
The CERT statistic shown in Table 2 is based on a password cracking incident uncovered at an Internet site in 1998. The cracker had collected 186,126 user records, and had successfully guessed 47,642 of the passwords (see Note 10).
In 2000, a team of researchers at Cambridge University performed password usage experiments designed in accordance with the experimental standards of applied psychology. While the focus of the experiment was on techniques to strengthen passwords, it also examined 195 hashed passwords chosen by students in the experiment’s control group and in the general user population: 35% of their passwords were cracked (see Note 11). Although the statistics from the Internet Worm may be based on a lot of conjecture, the other statistics show that crackable passwords are indeed prevalent. If anything, the prevalence of weak passwords is increasing as more and more people use computers.
The average attack space lets us estimate the strength of a password system as affected by the threat of dictionary attacks and by people's measured behavior at choosing passwords. As shown in Table 3, we can also use the average attack space to compare password strength against other mechanisms such as public keys. In fact, we can compute average attack spaces for any trial-and-error attack, although the specific attacks shown here are divided into two types: off-line and interactive.
Off-line attacks involve trial-and-error by a computation, as seen in the dictionary attacks. Interactive attacks involve direct trial-and-error with the device that will recognize a correct guess. Properly designed systems can defeat interactive attacks, or at least limit their effectiveness, by responding slowly to incorrect guesses, by sounding an alarm when numerous incorrect guesses are made, and by “locking out” the target of the attack if too many incorrect guesses are made.
|Example||Style of Attack||Average Attack Space|
|Trial-and-error attack on 1024-bit public keys||Off-line||2^86|
|Trial-and-error attack on 56-bit DES encryption keys||Off-line||2^54|
|Dictionary attack on eight-character Unix passwords||Off-line||2^23|
|Trial-and-error attack on four-digit PINs||Interactive||2^13|
For an example of an interactive attack, recall the four-digit luggage lock. Its average attack space was reduced when we considered the possibility that people choose combinations that are dates instead of choosing purely random combinations. Even though a trial-and-error attack on such a lock is obviously feasible, it obviously reflects a different type of vulnerability than that of a password attacked with off-line cryptographic computations. The principal benefit of considering the different average attack spaces together is that they all provide insight into the likelihood with which an individual attack might succeed.
Forcing Functions and Mouse Pads
If strong security depends on strong passwords, then one strategy to achieve good security is to implement mechanisms that enforce the use of strong passwords. The mechanisms either generate appropriate passwords automatically or they critique the passwords selected by users. For example, NIST published a standard for automatic password generators. Mechanisms to enforce restrictions on the size and composition of passwords are very common in state-of-the-art operating systems, including Microsoft Windows NT and 2000 as well as major versions of Unix. While these approaches can have some value, they also have limitations. In terms of the user interface, the mechanisms generally work as forcing functions that try to control user password choices (see Note 12).
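A forcing function of this kind usually reduces to a short composition test at password-change time. The check below is a generic illustration in the spirit of rules 1, 3 and 5 from earlier in the chapter, not the actual policy mechanism of Windows NT/2000 or any particular Unix:

```python
import string

def acceptable(password, previous_passwords=()):
    """Toy composition check: length, all four character classes, no reuse."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    return (len(password) >= 8
            and all(any(ch in cls for ch in password) for cls in classes)
            and password not in previous_passwords)

print(acceptable("hamburger"))       # False: no uppercase, digits or punctuation
print(acceptable("hamBurger42!"))    # True: passes the composition test
```

As the rest of this section argues, passing such a test says nothing about whether the resulting password ends up memorized or written down under a mouse pad.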
Unfortunately, forcing functions do not necessarily solve the problem that motivated their implementation. The book Why Things Bite Back, by Edward Tenner, examines unintended consequences of various technological mechanisms. In particular, the book identifies several different patterns by which technology takes revenge on humanity when applied to a difficult problem. A common pattern, for example, is for the technological fix to simply “rearrange” things so that the original problem remains but in a different guise (see Note 13).
Forcing functions are prone to rearrangements. In the case of strong password enforcement, we set up intractable forces for collision. We can implement software that requires complicated, hard-to-remember passwords, but we can’t change individuals’ memorization skills. When people require computers to get work done, they will rearrange the problem themselves to reconcile the limits of their memory with the mandates of the password selection mechanism.
Coincidentally, mouse pads are shaped like miniature doormats. Just as some people hide house keys under doormats, some hide passwords under mouse pads (Figure 2). The author occasionally performs “mouse pad surveys” at companies using computer systems. The surveys look under mouse pads and superficially among other papers near workstations for written passwords. A significant number are found, at both high-tech and low-tech companies.
[Figure 2: a written password hidden under a mouse pad. Authentication © 2002, used by permission.]
People rarely include little notes with their passwords to explain why they chose to hide the password instead of memorize it. In some cases, several people might be sharing the password and the written copy is the simplest way to keep all users informed. Although many sites discourage such sharing, it often takes place, notably between senior managers and their administrative assistants. More often, people write down passwords because they have so much trouble remembering them. When asked about written passwords, poor memory is the typical excuse.
An interesting relationship noted in these surveys is that people hide written passwords near their workstations more often when the system requires users to periodically change them. In the author’s experience, the likelihood of finding written passwords near a workstation subjected to periodic password changes ranged from 16% to 39%, varying from site to site. At the same sites, however, the likelihood ranged from 4% to 9% for workstations connected to systems that did not enforce periodic password changes. In some cases, over a third of a system’s users rearranged the password problem to adapt to their inability to constantly memorize new passwords.
These surveys also suggest an obvious attack: the attacker can simply search around workstations in an office area for written passwords. This strategy appeared in the motion picture WarGames, in a scene in which a character found the password for the high school computer by looking in a desk. Interestingly, the password was clearly the latest entry in a list of words where the earlier entries were all crossed off. Most likely, the school was required to change its password periodically (for “security” reasons) and the users kept this list so they wouldn’t forget the latest password.
Using the statistics from mouse pad searches, we can estimate the average attack space for the corresponding attack. Table 4 compares the results with other average attack spaces. In the best case, the likelihood is 4%, or one in 25, so the attacker must, on average, search 12 or 13 desks to find a password. That yields an average attack space of 2^4. The worst case is 39%, which is less than one in three. Thus, the attacker must, on average, search one or two desks to find a written password.
|Example||Style of Attack||Average Attack Space|
|Trial-and-error attack on 56-bit DES encryption keys||Off-line||2^54|
|Dictionary attack on eight-character Unix passwords||Off-line||2^23|
|Trial-and-error attack on four-digit PINs||Interactive||2^13|
|Best-case result of a mouse pad search||Interactive||2^4|
|Worst-case result of a mouse pad search||Interactive||2^1|
The mouse pad problem shows that we can't always increase the average attack space simply by making passwords more complicated. If we overwhelm people's memories, we make certain attack risks worse, not better. The reason we want to discourage single-word passwords is that they're vulnerable to off-line dictionary attacks. Table 4 shows that such attacks involve a 2^23 attack space. We don't increase the average attack space if forgettable passwords move to the bottom of people's mouse pads.
If you are following the notes to see if they contain more technical details, don’t bother. The notes only provide sources for the information in the text. If you are interested in general in the sources, it’s best to postpone looking at the notes until you’ve read the entire paper. Then just read all of the notes.
1. See the DOD Password Management Guideline, produced by the NCSC (CSC-STD-002-85, Fort Meade, MD: National Computer Security Center, 12 April 1985).
2. See Chapter 2 of Designing the User Interface: Strategies for Effective Human-Computer Interaction by Ben Shneiderman (Reading, MA: Addison-Wesley, 1998). For a point of view more focused on usability and security, see the papers by Alma Whitten and J. D. Tygar: "Usability of Security: A Case Study" (CMU-CS-98-155, Pittsburgh, Pennsylvania: Carnegie Mellon University Computer Science Department, 18 December 1998), and "Why Johnny Can't Encrypt," (Proceedings of the 8th USENIX Security Symposium, USENIX Association, 1999).
6. The best description of attacks on DES is in Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, by the Electronic Frontier Foundation (Sebastopol, CA: O’Reilly & Associates, 1998).
13. Edward Tenner was inspired to write Why Things Bite Back: Technology and the Revenge of Unintended Consequences (New York: Alfred A. Knopf, 1996) after noticing how much more paper gets used in a modern “paperless” office. Tenner summarized his taxonomy of revenge effects in Chapter 1.
14. The CSI Journal actually published the article twice. The first time, in Spring 2002, the printer eliminated all exponents, so that 2^128 became 2128. The Summer 2002 version contains the correct text.
The vulnerability is caused by a flaw in the Windows operating system which allows hackers to exploit the "plug and play" capability of the Windows system. The vulnerability can be exploited by an infected machine creating a denial of service (DOS) attack on other vulnerable machines.
By leveraging a chat channel, the initiating hacker gains access to a host machine, then uses it to attack other networked machines.
To learn more about the Zotob and IRCbot worms visit the IMlogic IM and P2P Threat Center.
Initially rated a low risk by most security industry threat centers, the rapid propagation of the Zotob and IRCbot worms motivated most providers to increase the risk level.
The worm appears to lay quiet on an infected machine until prompted into action by the hacker. The messaging channel opened up by the worm appears to await direction prior to disrupting system activity or propagating itself on the network. | <urn:uuid:c4283daa-e35c-4e5a-a023-72b477ef6122> | CC-MAIN-2017-04 | http://www.cioupdate.com/news/article.php/3528106/Latest-Worms-Exploiting-IM.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00051-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903793 | 180 | 2.5625 | 3 |
"We have no problem of governance in cyberspace. We have a problem with governance. There isnt a special set of dilemmas that cyberspace will present; there are just the familiar dilemmas that modern governance confronts," wrote Lawrence Lessig, Professor of Law at the Harvard Law School, in an article on cyber governance.
Lessigs statement is especially true when it comes to the digital divide, an issue that is largely an old problem garbed in new circumstances.
The new circumstances are, of course, the rise of a global economy increasingly driven by innovation and new technology. And beyond that, the ever-expanding role that computers and the Internet play in the economic, political and social life of our nation.
So far, most efforts to bridge the digital divide have focused upon increasing access to computers and the Internet and the basic skills needed to use these.
The U.S. Commerce Department's report, "Falling Through the Net: Toward Digital Inclusion," notes that more than half of all households (51 percent) have computers, up from 42.1 percent in December 1998.
The report adds, "The rapid uptake of new technologies is occurring among most groups of Americans, regardless of income, education, race or ethnicity, location, age or gender, suggesting that digital inclusion is a realizable goal. Groups that have traditionally been digital have nots are now making dramatic gains."
Yet, the measure of success, as well as an accurate estimate of how far there is still to go, hinges largely upon how the digital divide is actually defined.
The U.S. Department of Education's National Literacy Survey suggests that nearly 25 percent of all adults in America are functionally illiterate. They may have basic literacy skills, but they can't apply them effectively in their day-to-day lives.
"Unless we're able to overcome basic as well as functional illiteracy, the digital divide will have no prospects of ever being solved," said Andy Carvin, senior associate at the Benton Foundation.
He goes on to define the digital divide in terms of everything from basic reading skills to cyber fluency -- the ability to utilize all the tools available, accurately access and interpret the content and create meaningful and relevant content.
Carvin suggests that any strategic digital divide initiative must look beyond simply giving people Internet access for the sake of giving them access. Community initiatives must focus upon developing a technological infrastructure that's appropriate for the community and upon creating the skills to use it in order to raise the quality of life for the citizenry.
Thinking in such terms, one realizes that the digital divide is going to be with us for some time and that it is a situation with no easy answers. Access alone will never solve it. For, at its heart, it is not really a new problem, but one we have been trying to solve for more than a century.
How do you incubate effective learning and the acquisition of skills and abilities that allow everyone to participate and prosper in a vibrant society? What can we do, in terms of education in and outside of our schools, to ensure that people are not left behind?
The challenge before us is not simply the rise of the Internet, but the fact that we live in an information society -- a society where people require a new level of professional competence in accessing, acquiring and using information.
Basic and technological literacy is a minimum requirement, not for the few, but for all. Beyond that, quality of life and the health of the society depends largely upon what people can do and how well they do it. This involves acquiring and using information in sophisticated ways.
Until we start defining the digital divide in the correct perspective, all efforts to solve the problem are going to fall short of the goal -- the inclusion of all in this New Economy. | <urn:uuid:cba33f38-3e3a-4ac5-8fef-f502ec605915> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100498179.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00502-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951841 | 776 | 2.84375 | 3 |
Twenty-eight partners from Estonia, Finland, France, Norway and Sweden are investigating new ways of tagging and tracking trees as they move from logs to sawmills to wood products. Inefficient management of European forests wastes 10 percent of the value of the wood, according to a white paper on the project, as logs suitable for lumber and other products are inefficiently chipped and converted into wood pulp used in paper-making.
Technologies being investigated include RFID -- even an RFID chip made from biodegradable natural fiber that is acceptable for papermaking. Swedish scientists are investigating computer readable ink that must survive extremes of temperature -- even log steaming in preparation for conversion into plywood. Lasers can read the codes through snow, dirt and ice, but costs have been high.
Embedded nanoparticles are the next technology to be tested. The European Union is funding nearly two-thirds of the 12 million Euro cost, with companies chipping in the remainder.
National Video Games Day (12 September)
The first video game that was developed for a bring-home console (the Magnavox Odyssey) was called “Table Tennis” and led to the massive success of the similarly-themed Atari game “Pong”. It was released in September 1972, was sold for $100.00 and included several other games. The Magnavox Odyssey was the first gaming device that could connect to a raster-scan video display (Television) and by the end of 1975 it had sold over 350,000 units.
Once Atari released “Pong” as a cabinet-style arcade game, a new era in gaming was ushered in, which sparked the beginning of game development careers and the unprecedented rise in the video game industry for not only arcade-style, but also console- and PC-based games. This is evident by the fact that there is now a National Video Games Day celebration on 12 September of each year.
Today there is a great demand for Game Developing professionals as the market for games on all platforms including PC, console and mobile devices has become massive, and only continues to grow at an astonishing rate given that it is an industry that reaches across all demographics, regardless of age group or race.
Game development career
A game development career will suit individuals that have a flair for the extravagant and a creative mindset coupled with a good set of technical skills and knowledge. Depending on the area of game development that you wish to pursue, such as Programmer, Designer, Artist or Animator, the typical duties that will be performed by a Game Developer may include creating code for the game to run smoothly and correctly, working audio, the in-game physics, artificial intelligence or the game’s graphics.
Given the above information, it becomes clear that a casual knowledge of how video games work and an entry-level skillset will not be enough to be proficient at programming or to progress your game development career. The best way to attain the skills and knowledge that you will need is to gain an up-to-date certification that will advance your skills quickly, but thoroughly. Those that are new to programming will need to become familiar with the common programming languages, such as C# and Python. Both languages are used by programming professionals on a daily basis and will quickly guide you to the mindset of a Programmer.
It is an industry that demands that deadlines are met and that the expected quality is attained. This could lead to long hours behind your computer and may often involve working overtime and on weekends. This is why you need to enter a field that you have a passion for and enjoy working on. Once the project is completed and the final product is ready to be released, the rewards are more than worth it not only on a personal level, but on a monetary level as well as many organisations will implement incentive bonuses for projects that are completed on time.
Game Development Earnings
A game development career can be very lucrative, depending on the amount of experience you have, your qualifications and, of course, the company that you are working for. Below are some examples of the average salaries that can be expected in the game development field:
|C# Game Developer||£52,500|
|Senior Game Artist||£32,500|
|Mobile Game Developer||£52,500|
*Source – ITJobswatch 2016
Become a gamer
This would seem obvious, but to truly understand how a game should look and function, it is important to spend time playing and scrutinising other games that have been successfully developed. Most individuals that opt for a game development career do so because they already have a passion for gaming, but others that are new to the field can greatly benefit from this.
Further Reading: Top 20 best video games for beginners
Never stop learning
Once you have gained a certification, you will have a fundamental understanding of game programming. You will continue to learn as your game development career progresses and you continue to work on different projects. No two games are exactly alike and you will have to learn new skills in order to attain what is demanded from each project.
New platforms are constantly being developed, new technologies will emerge and new styles in gameplay and graphics will become popular, so it is imperative that you evolve with the demands of the ever-changing gaming industry. This can be done by renewing your certification, using online tutorials for emerging technologies or reading magazines that are dedicated to the gaming and tech industries.
Further Reading: What’s the best games console?
There is also a lot to learn from attending gaming exhibitions and festivals in that the latest games and technologies will be on display, usually with informative presentations and explanations accompanying them. You will also very often be given the chance to experience these first hand via demos that are available to be played while attending.
Don’t be afraid to start your game development career in a junior position, such as Junior Programmer or Junior Graphics Designer. You will learn a lot from your more senior co-workers and the experience that you will accumulate while working your way up the programming ladder will see you become a seasoned professional with skills that will always be in demand, as long as they are kept up to date.
Creating a portfolio of the projects that you have worked on can drastically increase your chances at being hired. Whether it be work that you have done as an artist, animator, programmer or completed games, it all counts towards experience that you have accrued.
Other soft skills that will be helpful include:
• Good computer skills
• An imaginative mindset
• Being a problem-solver
• The ability to communicate effectively
• Being a team player
• Working well under pressure
• A willingness to learn
When looking to start your game development career, study courses that are applicable to the job role that you are pursuing. Courses in computer science, software engineering, interactive media, multimedia, graphic design and even maths or physics are examples of certifications that are applicable to a game development career.
Speak to our course and career advisors if you are looking to start a game development career. | <urn:uuid:364b8d94-c11e-47b2-8e78-a8ba943f3f79> | CC-MAIN-2017-04 | https://www.itonlinelearning.com/blog/national-video-games-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00162-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962711 | 1,254 | 2.84375 | 3 |
Being too abstract - Do we need a lexicon to understand your speech? Is your topic so abstract that the audience only hears words instead of seeing images? Most human beings retain information as images, sounds, or feelings. Rarely will they remember information as words or abstract concepts. In order for your audience to understand and remember what you say, you have to paint a picture in their minds. They need to be able to hear you and see a picture that accompanies your words. One of the best ways to do so is to give examples.
In an academic situation, theoretical concepts don't necessarily need an immediate practical application. But outside of academia, it's important to translate what you say into a sensory experience for your audience. When your topic is very abstract, take the time to illustrate it with concrete and specific examples. The examples will help cement the information and help with understanding.
A technical speech will lose its effectiveness and its usefulness if it is not properly presented. The five points above are some of the elements that can distract your audience and keep them from understanding the information that you present. These are points that can and should be taken into account during your preparation; prior to standing before your audience. By taking the necessary time for proper preparation, the speech will be better structured, more convincing, and more useful to your audience.
Laurent Duperval is the president of Duperval Consulting which helps individuals and companies improve people-focused communication processes. He may be reached at firstname.lastname@example.org or 514-902-0186. | <urn:uuid:0444baba-c799-4aa2-8c00-97e7027980f1> | CC-MAIN-2017-04 | http://www.cioupdate.com/insights/article.php/11049_3822231_2/Five-Mistakes-To-Avoid-During-a-Technical-Presentation.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00006-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950449 | 319 | 2.703125 | 3 |
Now that you've got the basics down of using smartphone and tablet touchscreens, Carnegie Mellon University researchers are ready to take you to the next level.
They will show off technology this week called TapSense that takes more advantage of all the touchy-feelyness of your fingers to better control computing devices such as iPhones and Android tablets. For example, using your fingernails can signal one thing to a device, whereas pressing the touchscreen with the pad of your fingertip or even a knuckle can send a different instruction (See video below for a better "feel" for the technology).
The technology involves the use of a microphone attached to the touchscreen that enables the CMU scientist to distinguish between a fingernail, fingertip or knuckle. A proof-of-concept system could distinguish between four types of finger outputs with 95% accuracy, according to CMU.
One goal of the TapSense team is to eliminate the need for buttons and other space-hogging conventions for taking action on a device and making better use of the sometimes limited screen size.
"TapSense basically doubles the input bandwidth for a touchscreen," said Chris Harrison, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII), in a statement. "This is particularly important for smaller touchscreens, where screen real estate is limited. If we can remove mode buttons from the screen, we can make room for more content or can make the remaining buttons larger."
The same technology could be used to distinguish between different tools, such as pens that write in different colors, used to write or draw on a tabletop touch surface.
Harrison developed TapSense with fellow Ph.D. student Julia Schwarz as well as with Scott Hudson, a CMU professor. Harrison is discussing the technology this week at the Association for Computing Machinery's Symposium on User Interface Software and Technology in Santa Barbara, Calif.
Things could get really interesting if TapSense ever gets integrated with OmniTouch, a wearable computer from CMU and Microsoft that can turn any surface into a touchscreen.
This story, "Researchers knuckle down and tap into super-sensitive touchscreens" was originally published by Network World. | <urn:uuid:894902e5-a941-4e4f-810a-5bd42277eb8b> | CC-MAIN-2017-04 | http://www.itworld.com/article/2736053/unified-communications/researchers-knuckle-down-and-tap-into-super-sensitive-touchscreens.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916564 | 483 | 2.953125 | 3 |
IBM and the University of Illinois's National Center for Supercomputing Applications (NCSA) have called a halt to a $308m (£188m) project to build one of the world's fastest supercomputers, citing unforeseen cost and complexity.
The partners had planned to build a petaflop-speed supercomputer capable of a thousand trillion floating point operations a second, and have been working on the project since 2008.
In addition to cost and complexity, pundits said the fact that new techniques with potentially lower cost and less complexity have been emerging since the project started could be another key factor in the decision to call a halt.
The proposed Blue Waters system would not be the world's fastest supercomputer because a computer in Japan known as the "K Computer" currently runs at a maximum speed of just over eight petaflops. However, the K Computer is capable of its top speed for only short periods. The target of Blue Waters was to run at a petaflop for sustained periods of time.
The decision to scrap the project was mutual, according to some US reports, but others have said it was IBM that decided to pull out.
According to Fox News, IBM dropped out of the project because it required too much financial and technical support, but that the NCSA still hopes to complete the supercomputer by the end of 2012.
This means the NCSA will have just a few weeks to find a new team to build the supercomputer and present a revised plan to the National Science Foundation (NSF), which is the main financier of the project.
But there is no guarantee that the project that was originally scheduled to go online this year will continue.
The NSF commissioned the supercomputer to study in new, much faster ways subjects such as the formation of galaxies and the effects of hurricane storm surges on land.
The boll weevil is one of the most destructive pests in American agriculture. A native of Mexico, it first appeared in Brownsville, Texas, around 1892. Since then weevil depredations to U.S. cotton crops have run into the billions of dollars. It was not until the recent decade that federal and state agencies and cotton growers combined forces and brought advanced technology to bear on the problem.
Texas was among the first to adapt geospatial technologies to the monitoring, decision-making and treatment processes involved in cotton production. The need was clear: Cotton is the state's number one cash crop, contributing over $1.3 billion annually to the Texas economy, even after losing 10 percent of crops to weevils each year. Losses would be upward of 20 percent had Texas, the federal government and cotton growers not taken action, according to Carl Anderson, agricultural economist and cotton marketing specialist at the Texas A&M Cooperative Extension Program.
In an effort to banish the weevil once and for all, the State Legislature in 1996 established the Texas Boll Weevil Eradication Foundation (TBWEF), a quasi-government entity funded by cotton growers, the state and the U.S. Department of Agriculture. Since 1999, the Legislature has appropriated $125 million in support of the foundation's eradication program.
At the time, TBWEF Program Director Osama El-Lissy, along with others, proposed using geospatial technologies in concert with proven labor-intensive monitoring and treatment methods as a practical approach to large-scale weevil eradication. El-Lissy said a combination of GIS, GPS and advanced database-management technologies could accelerate the foundation's eradication program.
"Based on GIS analysis of predefined biological, meteorological and operational parameters, such a system could indicate which fields to treat and when," El-Lissy said. "If the system is user friendly and practical to integrate into the Boll Weevil Eradication Program, fewer, less-experienced workers will be able to produce the same results as those achieved by many experienced personnel, but faster and more efficiently."
Role of Spatial Technology
In 1996, the TBWEF introduced the Boll Weevil Eradication Expert System (BWEES) to facilitate the eradication program. A GIS-based application developed by El-Lissy and the foundation's IT group, the BWEES incorporates data from a wide range of sources. Differential GPS point files of field coordinates, field shapes, acreage and weevil trap locations are downloaded to MapInfo Pro GIS and integrated into the base map of a cotton field and its surrounding environment. Grower data, planting dates, cotton variety, numbers of weevils found in the traps and related agricultural information are all stored in an Oracle database-management system and integrated into thematic maps of the respective cotton fields.
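To make that data flow concrete, the sketch below models, in simplified form, how field records and GPS-located traps might be tied together. It is an illustration only: the class names, attributes and in-memory "layer" are invented for this article and are not the actual BWEES schema, MapInfo layers or Oracle tables.

```python
# Simplified, hypothetical stand-in for the kind of integration described above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Trap:
    trap_id: str
    lon: float          # DGPS longitude of the trap
    lat: float          # DGPS latitude of the trap
    field_id: str       # cotton field the trap monitors

@dataclass
class CottonField:
    field_id: str
    grower: str
    acreage: float
    planting_date: str  # e.g. "2003-05-12"
    variety: str
    traps: List[Trap] = field(default_factory=list)

def build_field_layer(fields: List[CottonField], traps: List[Trap]) -> Dict[str, CottonField]:
    """Attach each GPS-located trap to its parent field, mimicking the join
    between trap point files and field records in the GIS base map."""
    layer = {f.field_id: f for f in fields}
    for t in traps:
        layer[t.field_id].traps.append(t)
    return layer

# Invented sample data for demonstration only.
fields = [CottonField("F-102", "Example Farms", 160.0, "2003-05-12", "Example variety")]
traps = [Trap("T-0417", -97.44, 26.19, "F-102")]
layer = build_field_layer(fields, traps)
```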
Trap data are collected with bar code scanners during weekly field inspections. The scanner automatically records date, time and trap number, and prompts the user for the number of weevils in the trap, the growth stage of the crop and related information. Data from the scanners are downloaded to the GIS and linked to the map location of each trap, enabling supervisors and producers to precisely locate weevil infestations in the field.
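A hedged sketch of that weekly workflow might look like the following, where a scanner download is parsed into per-trap observations that can be joined back to trap locations on the map. The record format, column names and sample values are assumptions made for illustration; the real scanners and their download format are not documented here.

```python
# Hypothetical sketch: turn a weekly scanner download into per-trap observations.
import csv
from io import StringIO

SCANNER_DOWNLOAD = """\
date,time,trap_id,weevil_count,crop_stage
2003-07-01,08:12,T-0417,3,squaring
2003-07-01,08:25,T-0418,0,squaring
"""

def load_scanner_records(raw_text: str):
    """Parse a scanner download into observations keyed by trap ID, so each
    record can be linked to a trap point already stored in the GIS layer."""
    reader = csv.DictReader(StringIO(raw_text))
    return [
        {
            "trap_id": row["trap_id"],
            "date": row["date"],
            "weevils": int(row["weevil_count"]),
            "crop_stage": row["crop_stage"],
        }
        for row in reader
    ]

observations = load_scanner_records(SCANNER_DOWNLOAD)
# Each observation's trap_id links back to a mapped trap location, which is
# what lets supervisors and producers pinpoint infestations in the field.
```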
MapInfo MapX compares these data against parameters established for cotton fields at various stages of crop growth and infestation. Based on the number of weevils caught in traps over time, MapX color-codes fields meeting various growth and treatment criteria. Data on fields marked for treatment are entered into a contractor's DGPS-based flight-tracking system, which is designed to trigger spraying only over the infested areas of the field. After treatment, the swath tracks and related data from the aerial applications are incorporated into the BWEES and used to assess the progress of eradication and monitor the health of the field.
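The decision rule itself can be illustrated with a minimal sketch: compare trap catches against stage-specific thresholds and assign each field a map color. The threshold values and color codes below are assumptions for demonstration, not the foundation's actual treatment parameters.

```python
# Illustrative decision rule: flag fields for treatment based on trap catches.
TREATMENT_THRESHOLDS = {      # assumed weevils per trap-week that trigger treatment
    "squaring": 1,
    "flowering": 2,
    "boll_set": 3,
}

def classify_field(crop_stage: str, weevils_per_trap_week: float) -> str:
    """Return a map color code for a field based on its trap catches."""
    threshold = TREATMENT_THRESHOLDS.get(crop_stage, 1)
    if weevils_per_trap_week >= threshold:
        return "red"      # schedule the field for aerial treatment
    if weevils_per_trap_week > 0:
        return "yellow"   # weevils present; continue monitoring
    return "green"        # no weevils trapped this week

print(classify_field("squaring", 2.0))   # -> "red"
```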
The foundation has also Web-enabled the BWEES. Cotton producers can now query a TBWEF site to find out whether weevils are present in their fields, where they were trapped, the degree of infestation and progress toward treatment and eradication. The same network links program offices across the state. Supervisors can query eradication operations in any part of the state. They can look at trap and field data for specific fields, and plot the migration and population densities of weevils.
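The kind of answer a producer might retrieve from such a query can be sketched as follows. The field identifiers, status wording and the dictionary standing in for the BWEES database are hypothetical, invented purely to show the shape of the report.

```python
# Hedged sketch of a producer-facing status query; data and wording are invented.
FIELD_DB = {
    "F-102": {
        "grower": "Example Farms",
        "traps": 6,
        "weevils_this_season": 4,
        "last_treatment": "2003-07-03",
    },
}

def field_status(field_id: str) -> str:
    """Summarize infestation and treatment status for one grower's field."""
    rec = FIELD_DB.get(field_id)
    if rec is None:
        return f"No record for field {field_id}."
    return (f"Field {field_id} ({rec['grower']}): {rec['weevils_this_season']} weevils "
            f"trapped this season across {rec['traps']} traps; "
            f"last treated {rec['last_treatment']}.")

print(field_status("F-102"))
```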
El-Lissy said the ability to produce data in near real time allows managers to carry out the eradication program more efficiently. "Timely information on weevil populations, and on when and how much to spray, translates into lower production costs for producers," he said.
TBWEF Executive Director Lindy Patton estimates that eradicating the boll weevil from all eleven cotton-growing zones in Texas will cost about $600 million. "Growers will be paying 70 percent to 75 percent of that; the state, probably 15 percent to 20 percent; and the federal government, probably 10 percent," he said. "Of course, those numbers can change, depending on what Congress and the State Legislature decide, and on the available funding."
Patton said the BWEES has already helped lower the cost of the eradication program for producers and the state.
Brian Murray, Texas Department of Agriculture special assistant of producer relations, said, "Elimination of the boll weevil means we will no longer need to appropriate eradication funds on the scale we have seen since 1999." He stressed that the goal is eradication. "We hope that one day this job is completed."
Anderson said the weevil eradication program is spreading rapidly in the production zones across Texas. "I believe some zones, such as San Angelo, have already been declared essentially boll weevil free," he said.
Since the program's initial development, the BWEES has also been adapted to weevil eradication in other cotton-growing states. A special module has already been developed for eradicating another cotton pest, the pink bollworm. The TBWEF has also helped train cotton producers and agriculture departments on the eradication program and the use of the BWEES in several states, including Arizona, New Mexico, Arkansas, Oklahoma and parts of Louisiana and Georgia. El-Lissy, who now heads up the USDA's National Cotton Pest Program, said only minor adjustments are required to adapt the BWEES to the eradication of other agricultural pests, such as the Mediterranean fruit fly and citrus canker. "The system can accommodate all of the biological, meteorological and operational parameters necessary for any of these applications," said El-Lissy.
The BWEES has proven highly effective in automating many of the monitoring, administrative and decision-making processes involved in boll weevil eradication. Anderson pointed out that as the program expands, it will result in other benefits as well. "For example, we know that in the long run this program is going to reduce the amount of insecticides currently used. It's also going to reduce the need to employ as many people as we have now. It takes a lot of people to check traps in every cotton field, conduct spraying operations and carry out administrative tasks. As the severity of the infestation diminishes, there will be no need to spray every acre or to employ as many people. It is pretty clear that regions without this program will not be able to compete very well with other regions in the state or with other states that have adopted it."
Susan Combs, Texas commissioner of agriculture, said the program has made eradication an achievable goal. "With 21st century technology and the hard work and commitment of Texas cotton producers and the Texas Boll Weevil Eradication Foundation Inc., we are beginning to win the war on one of the most devastating pests in American agriculture. We have already declared one zone to be functionally eradicated of boll weevils. Doubters are becoming supporters." | <urn:uuid:50635bf0-512a-483b-9667-146ccf60de8a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Pest-Patrol.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00336-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938926 | 1,629 | 3.640625 | 4 |