Dataset columns:
- text: string, length 234 to 589k
- id: string, length 47
- dump: string, 62 classes
- url: string, length 16 to 734
- date: string, length 20
- file_path: string, length 109 to 155
- language: string, 1 class
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
A CD/DVD image is a file that contains all the information necessary to make an exact duplicate, or clone, of a CD or DVD. Images are created by software that writes every bit of information contained on a CD or DVD into a file on your computer. This file can then be made available for download so that other people can make an exact duplicate of your original CD or DVD on their own computer.

Do not think that you can simply go out and buy a CD and then create an image of it for your friends to download and write to a CD. Most commercial CDs these days have copy protection on them that makes them difficult or next to impossible to duplicate. There are, though, certain CDs that you are allowed to freely distribute to friends. For example, almost all Linux distributions are available for download as a type of CD image known as an ISO. These ISOs tend to be bootable CDs that contain an image of the original master CD for the operating system. Once you download the ISO image, you can burn that image onto a blank CD and boot your computer from it. Whether you download a Linux operating system that you can run directly off the CD or use it to install the operating system onto your computer is up to you.

There are many different types of CD/DVD images available for download. The type of image is usually dependent on the CD writing software that was used to create it. A very common image format used on the Internet is an ISO image. This format can generally be read by almost any CD/DVD writing software on the market. The type of a CD image is usually determined by the extension of the filename. For example, if a CD image is called linux01.iso, then the image type for this file is most likely an ISO image.

The table below contains a list of common image formats and the software generally used to create them.

|Extension||Programs used to create the image|
|.ISO||Almost all commercial CD writing software|
|.NRG||Nero Burning Rom|

The next two sections will explain how to write a CD/DVD ISO image when using the free Windows Disc Image Burner and Windows 7 USB/DVD Download Tool.

In Windows Vista, Windows 7, and Windows 8, Microsoft includes a free program called Windows Disc Image Burner that you can use to burn ISO or IMG disc images onto a CD or DVD. To start the Windows Disc Image Burner, right-click on an ISO or IMG file and then select Burn disc image as shown in the image below. Once you click on the Burn disc image option, the Windows Disc Image Burner will open. Select the drive that corresponds to your DVD writer and make sure there is a blank DVD or CD inserted into the drive. Once you are ready to start burning the selected ISO image, click on the Burn button. Windows Disc Image Burner will now begin to burn the ISO image onto the selected media. When it has finished burning the disc, Windows Disc Image Burner will automatically eject the disc and then state that it has finished. You can then click on the Close button and use the DVD as needed.

If you are using Windows XP, or would rather have a full-featured DVD/CD burning program, then you can use CDBurnerXP. This is a free program that works on all versions of Windows and is a full-featured DVD and CD writing utility. To install CDBurnerXP, go to their homepage and click on their download link. Once you have downloaded the program, double-click on it to start the installation process. While you go through the steps to install it, it may prompt you to install a 3rd-party program.
At the time of this writing they were prompting you to install RealPlayer. If you do not wish to install this program, uncheck the check boxes that ask if you wish to install it, then continue with the install process. When it has finished, CDBurnerXP will automatically start.

As we want to burn an ISO image, click on the Burn ISO image option and then click on the OK button. You will now be shown the Burn ISO Image screen. Click on the Browse button to open a window where you can navigate to the ISO file you wish to burn. When you have selected the file, click on the Open button. You will now be at the same screen, but the ISO you wish to burn will now be selected. Make sure you have blank DVD/CD media inserted and that your target device is set to the correct drive. Then click on the Burn disc button. CDBurnerXP will now start to burn the ISO image to the inserted media. Please be patient as the image is burned. When it is finished, CDBurnerXP will automatically eject the burned media and state that it has finished. You can now close CDBurnerXP and use the media as necessary.
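As noted above, the image type is usually inferred from the filename extension. For a slightly more reliable check, here is a small illustrative Python sketch (not part of the original tutorial; the filename linux01.iso is just the example used above) that looks for the "CD001" signature that the ISO 9660 format stores at byte offset 32768 of a standard ISO image:

def looks_like_iso9660(path):
    # Volume descriptors start at sector 16 of 2048-byte sectors (offset 32768).
    with open(path, "rb") as f:
        f.seek(32768)
        descriptor = f.read(7)  # type byte + "CD001" + version byte
    return len(descriptor) == 7 and descriptor[1:6] == b"CD001"

print(looks_like_iso9660("linux01.iso"))

Note that some DVD images use the UDF filesystem only, so a missing signature does not always mean the file is not a valid disc image.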
<urn:uuid:7eba7427-85e1-4139-9dd0-a170774cde64>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/write-a-cd-dvd-image-or-iso/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00063-ip-10-171-10-70.ec2.internal.warc.gz
en
0.898492
1,442
3.09375
3
“He’ll be walking in here all tooled up.” “There’s gonna be a case of lead poisoning today.”

If you saw the above comments on a student’s computer, would you know what they mean by “tooled up” or “lead poisoning?” Turns out that “tooled up” is slang for carrying a weapon, and “lead poisoning” means death by a gun. Disturbing, right?

There are hundreds of slang words and phrases used to describe weapons and violent acts. You could argue that plenty of teachers or IT admins don’t know what these slang words mean or stand for. How could they keep up with the trending phrases? It’d be nearly impossible. Educators and administrators can’t do it alone. In addition to their true duties each day, how could they have time to keep track of all the URLs they need to block for their Internet-capable students? Blocking websites doesn’t really tell you what students are looking for when it comes to violent behavior or weapons. Instead, it’s these search keywords that give educators insight into what is going on. Thankfully, there is an intuitive way to handle the ever-changing landscape of the Internet and how students use it. It’s called behavior management and monitoring software, and it should be a teacher and IT admin’s best tool in keeping students safe.

What is monitoring and behavior management software?

Monitoring is an Internet safety feature in behavior management software that protects students online. Behavior management and monitoring software is fundamentally different from content blocking and filtering software. Filters merely allow or deny access to websites, while behavior management software uses categories — such as lists of words or phrases — to capture and identify inappropriate activity on PCs, laptops and other digital devices. Once captured, the software logs an automatic screenshot or video recording. This allows educators and administrators to identify the context of any questionable conversation – such as a screenshot identifying a concerning word or phrase, a logged-in user or an IP address. When students use certain keywords, the software alerts the teacher. This can identify violent behavior and present the teacher with a way to confront the situation. As new slang terms trend, keyword lists can be updated with new words, phrases and definitions.

Blocking vs. monitoring — what’s the difference?

Filtering and blocking Internet content is no longer sufficient when it comes to preventing violence in schools. Simply blocking Internet access not only closes off the opportunity to gain access to valuable learning resources, but it also removes the ability to identify students who need intervention. Monitoring online behavior, including social media, puts behavior management into the student’s hands. It also gives the teacher a window into what is going on in a student’s cyberworld.

How can you help students report weapons and violence?

Impero Education Pro behavior management software provides students with a confidential way of reporting any questionable online activities to authorities through its Confide function. Students can find comfort knowing that their submissions are anonymous. They can safely expose a potential violent act or an impending situation without fear of further harassment. They don’t have to “rat out” their peers. This gives students a voice when they previously felt like they had none.

What’s in the future of monitoring software?

Currently, Impero Education Pro software allows school officials to create custom keyword lists to monitor students.
This is the first step in keeping kids safe online and in classrooms. Soon, however, Impero will roll out a keyword library specifically geared toward the prevention of cyberbullying, weapons, violence, suicide and self-harm, eating disorders, child abuse and other harmful activities. This library will be free of charge for school systems that currently have the Education Pro product and for new purchasers of the software. For a more thorough explanation of behavior management software, Internet safety monitoring and Impero Education Pro, download our whitepaper here. To talk to our team of education experts call 877.883.4370, or email Impero now to arrange a call back. If you are a nonprofit organization that would like to partner with Impero to keep kids safe from weapons and violence, child abuse, bullying or other harmful acts, email us today!
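To make the keyword-list idea concrete, here is a small illustrative Python sketch of how a monitor might match a custom phrase list against captured text and raise an alert. It is only a toy example; the phrase list and the alert handling are assumptions for illustration, not Impero's actual product code.

KEYWORD_LIST = {
    "tooled up": "slang for carrying a weapon",
    "lead poisoning": "slang for death by a gun",
}

def scan_text(text, on_match):
    # Compare captured text against the keyword list and report any hits.
    lowered = text.lower()
    for phrase, meaning in KEYWORD_LIST.items():
        if phrase in lowered:
            on_match(phrase, meaning)  # e.g. log a screenshot and notify a teacher

def print_alert(phrase, meaning):
    print("ALERT: matched '%s' (%s)" % (phrase, meaning))

scan_text("He'll be walking in here all tooled up.", print_alert)

Because the list is plain data, new slang can be added as it trends without changing the monitoring code itself.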
<urn:uuid:45a301ee-d750-4538-9062-5039f890bebf>
CC-MAIN-2017-04
https://www.imperosoftware.com/monitoring-students-online-to-prevent-weapons-and-violence-in-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00063-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931682
896
2.921875
3
Definition: A sort algorithm in which the sorted items occupy the same storage as the original ones. These algorithms may use o(n) additional memory for bookkeeping, but at most a constant number of items are kept in auxiliary memory at any time.

Also known as sort in place.

Generalization (I am a kind of ...)

Specialization (... is a kind of me.): American flag sort, quicksort, insertion sort, selection sort, Shell sort, diminishing increment sort, J sort, gnome sort.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 15 March 2004.

Cite this as: Paul E. Black and Conrado Martinez, "in-place sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 15 March 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/inplacesort.html
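As a small illustration of the definition (not part of the dictionary entry), insertion sort rearranges the items within the same array and keeps only a constant amount of data (a single element and two indices) in auxiliary memory. A minimal Python sketch:

def insertion_sort(items):
    # Sorts the list in place; only O(1) auxiliary memory is used.
    for i in range(1, len(items)):
        key = items[i]               # the single item held outside the array slot
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift larger elements one position right
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]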
<urn:uuid:3729cf2c-2127-4744-b01b-1c794cbac190>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/inplacesort.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00457-ip-10-171-10-70.ec2.internal.warc.gz
en
0.820565
231
2.953125
3
Boston's subway system will soon be sprayed with non-hazardous bacteria to test new sensors for identifying biological attacks. The test is part of the U.S. Department of Homeland Security's Science and Technology Directorate's (S&T) Detect to Protect program, which is designed to identify a biological attack within minutes. In 2009 and in August this year, inert gasses were released in the Boston subway system as part of an initial study to determine how particulates flow through the subway and where the best locations for sensors were. Now that the sensors have been placed, killed Bacillus subtilis will be sprayed in small quantities throughout the subway tunnels. The bacterium being used is common and considered nontoxic to humans even while it's alive. “While there is no known threat of a biological attack on subway systems in the United States,” S&T Program Manager Anne Hultgren said, “the S&T testing will help determine whether the new sensors can quickly detect biological agents in order to trigger a public safety response as quickly as possible.” The released particles will dissipate quickly but will provide invaluable data for the project, according to a Homeland Security Department press release.
<urn:uuid:c06acc35-de64-475a-9560-b1d7b50ab8f9>
CC-MAIN-2017-04
http://www.govtech.com/public-safety/Boston-Tests-Subway-Bio-Sensors-With-Bacteria.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00145-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948228
245
2.8125
3
Machine-to-Machine (M2M) Software

Increasingly, device manufacturers are finding interesting opportunities to drive more value out of Machine-to-Machine (M2M) software and Internet of Things (IoT) initiatives. Common definitions of Machine-to-Machine (M2M) and the Internet of Things (IoT) are based on the concept of "connected devices" and may be described as follows:

Machine-to-Machine (M2M) refers to technologies or software that allow both wireless and wired systems to communicate with other devices of the same ability. M2M uses a device (such as a sensor or meter) to capture an event (such as temperature, inventory level, etc.), which is relayed through a network (wireless, wired or hybrid) to an application (software program) that translates the captured event into meaningful information. (Source: http://en.wikipedia.org/wiki/Machine-to-Machine)

Internet of Things (IoT) is a scenario in which objects are provided with unique identifiers and the ability to automatically transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS) and the Internet. (Source: http://whatis.techtarget.com/definition/Internet-of-Things)

The primary benefits of connecting devices have historically been the ability to better support them remotely, e.g., knowing when a pump is close to failure and needs to be replaced, when restocking is required in a vending machine or when a system is running old software. Additionally, this helps reduce the support costs associated with the maintenance of the device. Connected devices can circumvent the need for manual inspections, and can also speed up time to diagnosis and ultimately resolution. Customer satisfaction improves in this scenario as well, as customers continue to receive uninterrupted benefit from the device.

Moving beyond machine-to-machine and the Internet of Things

The breadth of use cases and benefits of M2M and IoT is expanding to include new ways to monetize the offerings from the device manufacturer. A wide range of opportunities opens up when the connection is used to share usage data from the device. More specifically, it enables new business models to be implemented – these can be business models that are more lucrative in general and/or more appropriate for specific sub-markets. For example, users of a device can be charged based on the number of discrete uses, usage levels during specific times of the day, a concurrent number of users within an enterprise, geographic location, use of specific features… or any number of parameters. This data can also be aggregated and analyzed for improved business intelligence. Monitoring patterns of usage consumption can lead to focusing new sales activity on the products or features that are most highly valued, and in the most receptive markets. It can lead to targeted marketing campaigns that up-sell users to new levels of functionality or renew them at the most opportune time. It could focus development on the most valuable features, on the most widely deployed platforms. It could be fully anonymized and provided to others to be leveraged in their databases, marketing activities, or support services. M2M has evolved to encompass and enable higher-order value propositions, driving increased revenue. Maturity models presented by Gartner, Sprint, and others at recent M2M conferences reflect this evolution.
They generally depict the major steps of connectivity, support enhancement, and then additional monetization.

Creating new revenue opportunities

M2M and its related software is a trend that is driving intelligent device manufacturers (IDMs) to use licensing and entitlement management to protect and monetize their IP, with the ability to diagnose, fix and upgrade products remotely, to offer scheduled preventative maintenance instead of emergency repair services, to roll out usage-based pricing models, and to get real-time feedback on product performance. Flexera Software's FlexNet Producer Suite for Intelligent Device Manufacturers is part of a strategic solution for Application Usage Management. FlexNet Producer Suite provides embedded software licensing to unlock new revenue streams, protect intellectual property and implement configure-to-order manufacturing processes that dramatically reduce inventory while enabling greater responsiveness to changing market conditions. It also delivers back-office software entitlement management solutions to streamline fulfillment, protect maintenance revenues, implement new revenue models quickly and easily and establish direct relationships with customers through multiple tiers of distribution.
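To make the usage-based pricing idea above concrete, here is a small illustrative Python sketch that charges device usage events by time-of-day tier. The tiers and rates are invented for illustration; this is not FlexNet code or any vendor's actual pricing logic.

from datetime import datetime

PEAK_HOURS = range(8, 18)                        # assumption: 08:00-17:59 is the peak tier
RATE_PER_USE = {"peak": 0.05, "off_peak": 0.02}  # assumed price per discrete use

def charge(usage_events):
    # Sum the price of each recorded use according to its time-of-day tier.
    total = 0.0
    for timestamp in usage_events:
        tier = "peak" if timestamp.hour in PEAK_HOURS else "off_peak"
        total += RATE_PER_USE[tier]
    return round(total, 2)

events = [datetime(2014, 6, 30, 9, 15), datetime(2014, 6, 30, 22, 40)]
print(charge(events))  # 0.07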
<urn:uuid:8e62fe44-4697-42cc-bd46-48b7c9b0698f>
CC-MAIN-2017-04
https://www.flexerasoftware.com/producer/solutions/challenge/machine-to-machine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00447-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926342
917
2.90625
3
I blogged last week on my favorite topic – that of productivity – with Politicians and Press are Not Fighting the Right Battle – Productivity Continues to Decline. In today’s US print edition of the Financial Times, there is a full-page article called “The Productivity Puzzle”. Correctly, the article calls out: “The rise of modern civilization rested on this trend: for each person to produce ever more”. The article continues: “The productivity question is of the greatest possible consequence for the US economy, affecting everything from when interest rates should rise to where they should peak, from the sustainability of US debt to what is the wisest level of investment for every business in the country.”

The issue is explained in the article. For the most recent period US productivity has fallen, and this is a key element to understanding our economic situation. However, it is not that simple.

- How is productivity measured?
- Why is US productivity so important at this moment in time?
- Why can’t economists agree how to improve it?
- Why is it that politicians don’t agree on its importance?

Most measures of productivity concern how GDP per hour worked is measured. This brings in an external variable, GDP. That is an interesting variable in its own right. See my blog Book Review: GDP – A Brief but Affectionate History, by Diane Coyle, 2014. The point is that through history a sustainable and solid growth in productivity has led to a growing economy and a developing society. It has not always been positive for everyone, I agree. But with such growth, many good things also happen. The issue is that when that growth is missing, virtually nothing of an economically positive nature is possible. Think of your household – you cannot live off debt forever. But this simple fact is missed by our politicians.

The originating factor that led to the growth in debt came about at Bretton Woods. When the US led the breakup of the link between the US dollar and gold, the US government was freed to create debt, period. From this point on, every government of any worthiness was able to promise its own currency to pay for its debt. The US dollar was already the reserve currency by this time. I wrote a book review of this important book on the subject, copied below as I cannot seem to find out if I posted the book review previously.

Anyway, the article explores the different elements of productivity and growth, across labor and technology, and of course the role of innovation. My blog last week looked at the falling number of “start-ups”, which is a proxy, perhaps, for innovation. The bad news is that most of the signs are negative: less innovation, fewer start-ups, a stagnant education system and so on. And from a growth perspective, restrictive employment laws (more so in other countries) and policies more focused on redistribution than growth. You cannot redistribute yourself out of a contracting pie. And a growing pie can help feed more folks, even if some gorge more than others. Anyway, well worth reading. And enough to get your blood boiling, hopefully then in search of entrepreneurial excitement!

Book Review: The Battle of Bretton Woods, Benn Steil, Princeton, 2013.

One of the most exciting reads I have come across that captures economics and politicking of great import.
During and after the end of WWII, two men supported by their respective economic teams (Harry White for the US, and John Maynard Keynes for Britain) ‘fought’ for supremacy of their ideas as the bankrupt British Empire fought for financial survival while America was vying for global economic relevancy. Steil seems to capture the moment, with in-depth background on the men and preparation for the famous meeting. He looks at several countries and the roles they played, and follows through after the meeting to discuss and explore the results, failings, and consequences of the Bretton Woods agreement.

The rub is that the gold standard had collapsed under the pressure brought to bear on sterling due to the costs of two world wars. The responsibility to police and progress global trade, assumed by the British Empire, was expensive. The costs to keep the Empire operational added to this charge. The facts were that by the end of WWII, Britain’s gold reserves were depleted, and debts galore were piling up. Britain could not simply print money since it had assured the Empire that the pound would remain at a fixed exchange rate as before the war. Any ‘quantitative easing’ of the day would thus lead to destabilizing the currency and possibly a run. This would then become self-sustaining and a crash would follow. Only the glut of dollars and the loans that came with that currency could help. But the UK needed dollars to pay those loans and there were few assets left to sell. All too quickly the Empire came crashing down (e.g. the Suez debacle of ’56, the Cold War starting up so quickly, and numerous countries seeking independence).

The book explains the founding goals of the IMF and the World Bank. It also exposes how the vision that White had sold at Bretton Woods never came about, primarily due to the speed of the Empire’s collapse and the new demands placed on America. The individual battles between Keynes and White are well worth reading; they explain so much about who we are today, since the economies of the world we see today started from Bretton Woods. You need to read this book if you have any interest in economics, global trade, or the US-UK relationship. 10 out of 10.
<urn:uuid:67c29fff-48e3-4e2e-8018-3d1cf441e52c>
CC-MAIN-2017-04
http://blogs.gartner.com/andrew_white/2014/06/30/the-productivity-puzzle-the-one-solution-that-can-negate-the-inequality-issue/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957621
1,276
2.53125
3
MySQL Database Security Best Practices

MySQL is one of the most popular open-source databases and runs on a variety of platforms. It is relatively easy to configure, simple, and shows good performance characteristics even under significant load, but it still has a wide variety of security-relevant configuration issues.

The very best practice of security management is to be paranoid and anticipate an attack at any minute from every direction, but if you adopt some precautionary measures, it won't be such hard work. The following guidelines will help you substantially reduce the surface of possible threats.

First of all, read the security guidelines at http://dev.mysql.com/doc/refman/5.7/en/security.html and check for updates regularly. Several serious vulnerabilities with freely available exploits have been found recently in the MySQL RDBMS. Take advantage of updates that add new features and, more importantly, fix security flaws. Regularly monitor vulnerability databases and always be aware of newly found threats to your system.

- Turn off unnecessary daemons and services. The fewer components attackers can access, the lower the chance of them finding a flaw that can be used to gain access to the system. By keeping the host configuration simple you reduce the effort needed to manage the system and mitigate the risk of security omissions.
- Ensure that MySQL users cannot access files outside of a limited set of directories. MySQL data files should not be readable by any users except for root or administrator accounts.
- Disable or restrict remote access. In case you need your MySQL server to be accessed remotely, configure the GRANT statement that is used to set up the user to require SSL.
- Make sure that no user other than the MySQL user can read the MySQL configuration and log files. The files my.cnf, my.ini and master.info commonly contain unencrypted usernames and passwords. If there is a query log file, it is likely to contain passwords as well. Some MySQL configuration files can also contain plaintext usernames and passwords. Ensure that these files are protected from unwanted users.
- Run MySQL with the --chroot option. It provides an excellent mitigation of the power of the FILE privilege. Chroot is used to restrict file access by a process to a given directory. Even with the chroot option, an attacker that gains the FILE privilege will be able to read all MySQL data and probably still be able to execute UDFs.
- Regularly clear your .mysql_history file or permanently link it to /dev/null. By default on Unix systems, you will find a .mysql_history file in your home directory. It contains a log of all the queries that you've typed into the MySQL command-line client. Perform the following command to clear the history:

cat /dev/null > ~/.mysql_history

After configuring the operating system, you need to build a privilege model and assign user accounts.

- Rename the root username and change the password, using a mix of numbers and characters. You can change the administrator's username with the following command in the MySQL console:

mysql> RENAME USER root TO new_name;

- Don't give accounts privileges that they don't really need, especially File_priv, Grant_priv, and Super_priv. Consider creating a separate MySQL account that your application can use for interaction with the filesystem within MySQL. Keep in mind that this user will have access to all MySQL data, including password hashes.
- If possible, create a MySQL user for each web application, or for each role within each web application, and assign each user privileges only for the required commands. It can seem tedious but makes sense when it comes to establishing a comprehensive security system.
- In case remote connections are enabled, specify REQUIRE SSL in the GRANT statement used to set up the user. Some exploit scripts will not work because they do not have SSL support. Moreover, the SSL protocol ensures the confidentiality of the password response sequence. You can also establish restrictions based on a client-side certificate that is used to authenticate with SSL; this is another helpful security measure, since knowledge of the password alone won't be enough and the specified certificate will also be required (see the connection sketch at the end of this article).
- Don't give anyone access to the mysql.user table (except for users with root privileges).
- Disable the LOAD DATA LOCAL INFILE command. It is a construction that helps to import local files into a table; it has a peculiarity that under certain circumstances can lead to retrieval of the /etc/passwd file content. An exploit for this has been freely available since 2013. Add the following to the my.cnf file:

set-variable=local-infile=0

- Get rid of any unused UDFs. UDFs also pose threats to database security. If you see unused UDFs in the mysql.func table, remove them.
- If you are using only local connections and there is no need for remote hosts to connect to MySQL, disable TCP/IP connections via the --skip-networking option.
- Remove the test database. There is a test database by default that can be accessed by anyone. Remove it or restrict privileges.
- Remove anonymous accounts and do not leave any users with blank passwords. You can find anonymous users with this command:

select * from mysql.user where user="";

- Make sure that MySQL traffic is encrypted.
- Enable logging via the --log option. According to the MySQL documentation, the "general query log" is a debugging feature, but you can also use it as a security measure: it logs successful connections and executed queries. By default the query log is disabled; you can turn it on using the --log option. Bear in mind that query logs and error logs are a source of information to an attacker as well, so ensure that the log file is visible only to the administrator or root account of the system. The general query log does not record the results of queries or the retrieved data, but there are special database activity monitoring solutions for that matter. Regularly monitor query logs and search for SQL injection attacks and use of the load_file, infile and outfile filesystem syntax.

Install antivirus and antispam software. Deploy a firewall to control incoming and outgoing network traffic and protect from attacks.

A comprehensive database security system is built by combining a huge number of unobtrusive configuration changes. Every detail is important. There is no way to guarantee 100% security, but the pursuit of maximum protection is a must in the era of cybercrime.
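As a small companion to the REQUIRE SSL and client-certificate advice above, here is a minimal Python sketch of a client connection that presents a certificate over SSL. It assumes the mysql-connector-python driver is installed; the host, user, database and certificate paths are placeholders, not values from this article.

import mysql.connector

conn = mysql.connector.connect(
    host="db.example.com",
    user="app_user",
    password="change_me",
    database="app_db",
    ssl_ca="/etc/mysql/ssl/ca.pem",            # CA that signed the server certificate
    ssl_cert="/etc/mysql/ssl/client-cert.pem", # client certificate required by the account
    ssl_key="/etc/mysql/ssl/client-key.pem",
    ssl_verify_cert=True,                      # refuse to connect to an unverified server
)
conn.close()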
<urn:uuid:52345754-1336-4da7-bdde-92ce762c29d8>
CC-MAIN-2017-04
https://www.datasunrise.com/blog/mysql-database-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.888542
1,364
2.59375
3
Pesticides constitute the largest category within the market for crop protection chemicals, with biopesticides accounting for a comparatively minute share. Global consumption of synthetic insecticides is projected to reach 833.32 thousand tons by volume and USD 19.6 billion by value by 2020, at respective CAGRs of 4.9% and 5.4% between 2015 and 2020. Factors driving the markets for pesticides include decreasing arable land, increasing population and the requirement of improving crop yields. On the other hand, regulatory authorities such as the EPA (Environmental Protection Agency) frequently come up with stringent laws aimed at curbing pesticide use to alleviate environmental damage, and increasing consumer awareness about pesticide consumption is expected to be instrumental in slowing down growth in demand for herbicides.

Synthetic insecticides have been largely responsible for restricting consumption of synthetic pesticides following the European Commission's two-year ban on using three neonicotinoid insecticides, including clothianidin, imidacloprid and thiamethoxam, starting April 2013. These chemicals have been directly implicated in causing harm to honeybees, in addition to the European Food Safety Authority finding that the use of these insecticides resulted in "high acute risks." Volume consumption of synthetic insecticides is comparatively higher in Asia-Pacific as a result of higher infestation of insects, though North America continues to retain its highest share. Demand for insecticides in Asia-Pacific is expected to be driven by established products with a greater emphasis on using less toxic insecticides, which would hamper growth to some extent. Latin America and other parts of the world are also influenced by this scenario.

The report also analyzes the global market for insecticides by application area, including crop-based applications (grains & cereals, oilseeds and fruits & vegetables) and non-crop-based applications (turf & ornamental grass and other applications). By application area, crop-based end-uses of insecticides are likely to maintain the fastest growth in terms of volume consumed and value demand during the same period, and to retain the leading ranking as the largest application area.

Major companies covered in the report include American Vanguard, Arysta LifeScience, BASF SE, Bayer CropScience, BioWorks, Cheminova, Chemtura Corp, Chr Hansen, Dow AgroSciences, DuPont, FMC Corp, Isagro SpA, Ishihara Sangyo Kaisha, Makhteshim Agan, Marrone Bio Innovations, Monsanto, Natural Industries, Novozymes A/S, Nufarm Ltd, Sumitomo Chemical, Syngenta AG and Valent Biosciences. The key strategies used by companies in this market are new product registrations and acquisitions to enter new markets. The focus in this industry should be on integrated pest management techniques and sustainable practices for an improved yield without harming the environment.
<urn:uuid:ad97003f-8f64-43b4-b182-44c3bbf4486f>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/global-bioinsecticides-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926029
612
2.5625
3
“We have seen an increased emphasis on technology as a critical component of the global effort to end hunger,” said Julia Duncan and Robert Domaingue, officials who serve in the Office of Global Food Security at the U.S. Department of State.

Feed the Future is a Federal initiative that uses open data and technology to encourage food-insecure families to find a pathway in which they can prosper. In Fiscal Year 2015, Feed the Future helped more than 9 million farmers get access to resources like high-yielding seeds, fertilizer application tools, and better soil conservation and water management methods.

Project 8, led by the Demand Institute, is a cloud-based platform where people can share and discuss open data from the World Food Program (WFP), the Food and Agricultural Organization of the United Nations (FAO), the U.S. Department of Agriculture (USDA), and the International Food Policy Research Institute (IFPRI). “Project 8 is an exciting opportunity to bring together existing open data for governments, practitioners and NGOs to help us all make a more comprehensive analysis of evolving human needs,” said Duncan and Domaingue.

The United States is a founding member of the Global Open Data for Agriculture and Nutrition (GODAN) network, which has 354 partners that seek to make agriculture and nutrition data available and accessible for anyone to use. The group focuses on creating policy and partnerships between the public and private sectors to support open data without duplicating resources that are already available. “It is clear that open data can promote sustainable development by improving the access to information that leads to economic opportunities for the hungry,” said Duncan and Domaingue. “Evidence also demonstrates that open data allows for greater innovation and better decision making.”

The State Department said that these open data initiatives will also contribute to the United Nations’ Sustainable Development Strategy, which works to end poverty and protect the environment by 2030. “Open access to research and the publication of data can help identify where food insecurity and nutritional challenges exist,” said Duncan and Domaingue. The State Department said open data will help researchers and advocates to understand the challenge of world hunger in a more comprehensive way. “As we continue to work with our partners around the world toward solutions to help #endhunger, increasing open data on food security, nutrition, and agriculture will be critical to our ability to set goals, generate plans, and measure our collective progress,” said Duncan and Domaingue.
<urn:uuid:2a8fed73-dc01-4ce5-a191-909dfb0ad83f>
CC-MAIN-2017-04
https://www.meritalk.com/articles/state-department-uses-open-data-to-end-world-hunger/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00191-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921619
522
2.828125
3
City officials in Chicago have dubbed a two-mile stretch of Cermak Road “the greenest street in America.” It gets this name for many reasons, one of which is a pavement that reportedly reduces air pollution -- the first such use of this technology in the U.S. The street also was upgraded with various green technologies as part of a $14 million project to explore how sustainability in infrastructure can help solve larger environmental problems. "Sustainability is critical for us," Karen Weigert, chief sustainability officer for the city of Chicago, told AFP. "We think of it as a part of quality of life, about economic opportunity in terms of what kinds of jobs we attract and about stewardship of tax dollars."

The pavement used for the top layer of Cermak Road was developed in Italy after the Vatican began searching for a material that would stay white amid the pollution of Rome, Phys.org reported. Cement manufacturer Italcementi developed a pavement with titanium dioxide to solve the problem. The chemical reaction caused by sunlight and titanium dioxide in the pavement reportedly speeds up the decomposition process, keeping the surface clean. However, it was soon discovered that it wasn’t just the surface of the church being kept clean; the eight feet of air above the structure’s roof was also measuring cleaner.

The project also uses solar-powered street lights and bioswales with drought-resistant plants that withstand hot seasons without the need for more water. The intelligent use of bioswales and landscape elements placed to displace silt and pollution will help the city manage the large volume of stormwater flowing through the city’s sewers. About 60 percent of the project’s construction waste was reportedly recycled, and about 23 percent of the materials used for the project came from recycled sources. It is the combination of all these green elements, city officials said, that will make a project like this successful in the long term. "These infrastructure projects last for 50, 100 years," Project Manager Janet Attarian told Phys.org, "so you can't afford to redo them again when you finally figure them out."
<urn:uuid:beb3892b-b679-47b2-871c-e52038e95ad1>
CC-MAIN-2017-04
http://www.govtech.com/transportation/Greenest-Street-in-America-Eats-Smog.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00099-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954646
440
2.953125
3
An Outbreak of Anthrax on the Internet?

16 Oct 2001

Is this just the beginning? Kaspersky Lab, an international data-security software developer, reports that two new Internet worms are making the rounds, trying to spread under the guise of important information about anthrax. It is obvious that the malefactor(s) have callously taken advantage of the recent events surrounding this dangerous biological agent. Detailed analysis of the worms' code has revealed that fatal bugs keep both worms from effectively propagating. However, it is highly possible that similar worms, with a more capable malicious program posing as the aforementioned subject, could appear in the future. Due to this fact, Kaspersky Lab recommends that users not open any attached files in which "anthrax" (or "antrax" in Spanish) is mentioned.

These worms were created utilizing the virus generator "VBSWG," and are simply another modification of the "Lee" family of script-viruses. The infamous malicious program "Anna Kournikova" also was written with the help of VBSWG. Both worms can be delivered to computers via IRC channels (possibly under the client names mIRC or pIRCh). In all cases, the infected files have the names ANTRAXINFO.VBS or ANTRAX.JPG.VBS.

The received e-mail appears as follows:

Subject: Informacion Sobre El Antrax (or: Antrax Info)

E-mail Body: Como ahorita esta muy de moda hablar sobre el antrax aqui te mando la foto de un enfermo terminal, para que veas como se ponen o si no sabes que es el antrax o cuales son sus efectos aqui te mando una foto para que veas los efectos que tiene. Nota: la foto esta un poco fuerte. (Roughly: "Since anthrax is such a hot topic right now, I'm sending you a photo of a terminally ill patient so you can see what it does to people; or, if you don't know what anthrax is or what its effects are, here is a photo so you can see them. Note: the photo is a bit graphic.")

Upon start-up of an infected file, the worms become system resident and attempt to send copies of themselves to all recipients in the victim's Microsoft Outlook address book. The worms destroy all files on a computer with the VBS and VBE extensions, writing their copies there instead. Kaspersky Anti-Virus efficiently and effectively protects against this malicious code thanks to its built-in heuristic analyzer, not requiring any additional anti-virus database updates.
<urn:uuid:9d278ae4-227a-41ec-8605-3d4d7fb282a3>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2001/An_Outbreak_of_Anthrax_on_the_Internet_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.831739
508
2.875
3
Security on websites is based on session management. When a user connects to a secure website, they present credentials that testify to their identity, usually in the form of a username and password. Because the HTTP protocol is “stateless,” the web server has no way of knowing that a particular user has already logged in as they browse from page to page. Session management allows the web-based system to create a ‘session’ so that the user will not have to re-authenticate every time they wish to perform a new action or browse to a new page. In essence, session management ensures that the client currently connected is the same person who originally logged in.

Unfortunately, however, sessions are an obvious target for a malicious user, because they may be able to get access to a web server without needing to authenticate. A typical scenario would involve a user logging on to an online service. Once the user is authenticated, the web server presents this user with a “session id.” This session ID is stored by the browser and is presented wherever authentication is necessary. This avoids repeating the login/password process over and over. It all happens in the background and is transparent to the user, making the browsing experience much more pleasant in general. Imagine having to enter your username and password every time you browsed to a new page!

The session ID itself is simply a string of characters or numbers. The server remembers that the session ID (SID) was given to the user and allows access when it is presented. As a result, the session ID is of great value, and malicious users have, for years, searched for ways to compromise it and use it to circumvent authentication mechanisms. Session management is all about protecting this session ID, and in modern interactive web applications this becomes critical.

So how do you get your hands on a Session ID?

There are a number of techniques attackers use to compromise a Session ID. The most obvious is to attack the server. The server often stores the session ID somewhere, and more worryingly, the server sometimes stores the session ID in a world-readable location. For example, PHP stores its session variables in the temporary /tmp directory on Unix. This location is world-readable, meaning that any user on that system can easily view the session IDs with basic utilities that are part of the Unix API. This is a serious risk, particularly on shared hosts, since many users will be active on the system. This issue has since been addressed but it is just one example.

Another method is to attack the client. Microsoft Internet Explorer, for example, has had numerous flaws that allowed web sites to read cookies (often used to store the session ID) to which they did not belong. Ideally, only the site that created the cookie should have access to it. Unfortunately, this is not always the case, and there are many instances of cookies being accessible to anyone. On top of this, a browser’s cache is often accessible to anyone with access to that computer. It may be a hacker who has compromised the computer using some other attack, or a publicly accessible computer in an Internet café or kiosk. Either way, a cookie persistently stored in the browser cache is a tempting target.

Unencrypted transmissions are all too common and allow communication to be observed by an attacker. Unless the HTTPS protocol is used, a session ID could be intercepted in transit and re-used. In fact, it is possible to mark cookies as ‘secure’ so they will only be transmitted over HTTPS.
This is something I have rarely seen developers do. Such a simple thing can go such a long way.

Another way to compromise a Session ID is to attempt to predict it. Prediction occurs when an attacker realises that a pattern exists between session IDs. For example, some web-based systems increment the session ID each time a user logs on. Knowing one session ID allows malicious users to identify the previous and next ones. Others use a brute force attack. This is a simple yet potentially effective method for determining a session identifier. A brute force attack occurs when a malicious user repeatedly tries numerous session identifiers until they happen upon a valid one. Although it is not complicated, it can be highly effective.

So what can you do to mitigate these attacks?

1. Always use strong encryption during transmission. Failure to encrypt the session identifier could render the online system insecure. In addition, for cookie-based sessions, set the SSL-only attribute to “true” for a little added security. This will reduce the chance that an XSS attack could capture the session ID because the pages on the unencrypted section of the site will not be able to read the cookie.

2. Expire sessions quickly. Force the user to log out after a short period of inactivity. This way, an abandoned session will only be live for a short duration, which reduces the chance that an attacker could happen upon an active session. It is also wise to avoid persistent logins. Persistent logins typically leave a session identifier (or worse, login and password information) in a cookie that resides in the user’s cache. This substantially increases the opportunity that an attacker has to get a valid SID.

3. Never make the Session ID viewable. This is a major problem with the GET method. GET variables are always present in the path string of the browser. Use the POST or cookie method instead, or cycle the SID out with a new one frequently.

4. Always select a strong session identifier. Many attacks occur because the SID is too short or easily predicted. The identifier should be pseudo-random, retrieved from a seeded random number generator. For example, using a 32-character session identifier that contains the letters A-Z, a-z and 0-9 would give 2.27e57 possible IDs, which is equivalent to a 190-bit password and is sufficiently strong for most web applications in use today (see the sketch at the end of this article).

5. Always double-check critical operations. The server should re-authenticate any time the user attempts to perform a critical operation. For example, if a user wishes to change their password, they should be forced to provide their original password first.

6. Always log out the user securely. Perform the logout operation such that the server state will inactivate the session, as opposed to relying on the client to delete session information. Delete the session ID on logout. Some applications even force the browser to close down completely, thus stripping down the session and ensuring the deletion of the session ID.

7. Always prevent client-side page caching on pages that display sensitive information. Use HTTP headers to set the page expiration such that the page is not cached. Setting a page expiration that is in the past will cause the browser to discard the page contents from the cache.

8. Always require that users re-authenticate themselves after a specified period, even if their session is still active. This will place an upper limit on the length of time that a successful session hijack can last. Otherwise, an attacker could keep a connection open for an extremely long time after a successful attack occurs.

9. It is possible to perform other kinds of sanity checking. For example, use web client string analysis, SSL client certificate checks and some level of IP address checking to provide basic assurance that clients are who they say they are.

All in all, web applications rely on good session management to stay secure. If you follow some of the steps outlined in this article and are aware of the risks, you are well on your way to leveraging the full benefits of web applications.
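As a companion to point 4 above, here is a minimal Python sketch (not from the original article) of generating a 32-character session identifier from the letters A-Z, a-z and 0-9 using a cryptographically secure random source:

import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 possible characters per position

def new_session_id(length=32):
    # 62**32 is roughly 2.27e57 possibilities, about 190 bits of entropy.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_session_id())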
<urn:uuid:572d111b-71c8-462e-865f-b3aab1c2e6e2>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2006/06/20/security-for-websites---breaking-sessions-to-hack-into-a-machine/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00493-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91885
1,582
3.1875
3
It's afternoon, the air is warm and a grayish haze tints the horizon in the distance. Through your windshield you see the rear end of a sedan -- the first in a long line of vehicles inching their way down the highway. A look out your side windows and in the rear-view mirror shows that you're surrounded by other drivers. You sigh and wait for traffic to move. Slowly, maddeningly, it does. You finally arrive at a city-owned parking lot, but soon realize you are no better off. All of the parking spaces are occupied. This is a story people in many large and mid-sized cities may identify with. Congested streets, rush-hour stagnation, hapless drivers -- all are unpleasant byproducts of modern metropolitan living. At least for now. Public-sector forces in the San Francisco Bay Area are working to alleviate the problem by deploying wireless parking technology that informs people of parking space availability while they're driving or even before they get in their vehicles. These high-tech parking experiments are conducted with a few prominent goals in mind, including making it easier for drivers to hunt down spaces in today's urban jungles. One example is a lengthy test conducted at the Rockridge Bay Area Rapid Transit (BART) station in Oakland, Calif., from December 2004 to spring 2006. This collaborative endeavor, which included the California Department of Transportation and researchers from the University of California (UC) at Berkeley, used high-tech gadgets to create a smart-parking management field test. Smart-parking devices help people find and pay for spaces. People used the technology to navigate an area of about 50 spaces at the Rockridge station as part of the test. "We enabled people to make reservations via the Internet prior to the parking event," said Susan Shaheen, a researcher from California Partners for Advanced Transit and Highways at UC Berkeley. "We also encouraged people to get off the highway on their way to work by providing them with real-time availability information via changeable message signs during peak commute hours." Ground-mounted wireless sensors collected vehicular information in the parking lot through magnetic-imaging technology. This information was transmitted in real time to an electronic information network over the Internet that allowed the data to be viewed by drivers on their cell phones, computers, ground-mounted changeable message signs or other devices. Drivers could also use cell phones or PDAs to reserve spaces. Users could also reserve spaces through an interactive voice response system, known as Kate. "The core technology is the parking information network, and we've developed that," said Rick Warner, CEO of ParkingCarma, the company that supplied the project's network that collected vehicular information from wireless sensors and made it publicly available on the Web. "Built it, scaled it and it's a patented technology that we're bringing to bear in a constructive way." Project managers gauged the test's success by conducting 177 surveys of 35.8 percent of participants in February and March 2006. The surveys yielded interesting results. Data from a June 2008 PATH research report includes: The report also states that, while smart-parking management systems have been implemented in European and Japanese cities, the Oakland BART project was the first transit-based, smart-parking system implemented in the United States. Similar systems have followed at public transit stations in Maryland and Illinois. 
Similar technology will support San Francisco's SFpark project, planned to start fall 2008 and end summer 2010. It's an ambitious undertaking -- in April 2008, the San Francisco Chronicle reported the city would be the first in the country to deploy smart-parking technology so broadly. The project's scope encompasses about 25 percent of all metered spaces -- about 6,000 -- and about 11,500 spaces in parking garages. At the meters, wireless sensors will detect changes in magnetic fields created by parked cars and transmit the information to an electronic information network. These "smart" meters will replace existing ones and accept more forms of payment -- coin, credit, debit or smart card -- and they'll also transmit the payment information to the network. In the garages, parking information will be collected at the gates and transmitted to the network. Currently San Franciscans have serious trouble finding legal spaces in the city. A chief goal for SFpark is to provide both city and federal government crucial data to support further expansion of smart-parking programs, if other public-sector entities want to follow suit. In the past, the city has used technology from Streetline Networks, a local parking management technology company. In 2006, the company contracted with the city to help manage hundreds of on-street parking spaces at the Port of San Francisco. Streetline's wireless sensors monitored activity and told the city how often and when certain spaces were occupied. The data was collected over a wireless mesh network and transmitted over the Internet. The intent was to help San Francisco determine if parking price adjustments were necessary. "In 2007, San Francisco as a region was selected to receive some federal funding as part of the Urban Partnership Program to test innovative ways to manage congestion," said Jay Primus, a manager of the San Francisco Municipal Transportation Agency. The project's budget is $23 million, with the U.S. Department of Transportation footing $18.4 million and the city handling the rest. "So that funding is providing most of the funding for the pilot projects and really accelerated their timeline and made them a little larger." When Primus said "projects," he was pluralizing the various locations in which SFpark will take place. It will comprise major commercial areas like Fisherman's Wharf and Fillmore, Chestnut and Lombard streets. He and his colleagues expect other cities to learn from SFpark's experiences. One of the most talked-about aspects of the project is how smart-parking technology will affect parking space pricing. The city can use the parking data to increase prices at peak commute times. "Our plan is to gradually and periodically adjust prices up or down to help us achieve our availability targets," Primus said. He also said not all the mechanisms for drivers to receive parking information have been worked out yet. "But what is envisioned [is] that from your BlackBerry, you could access information via a map." Planned distribution channels for parking information include: changeable message signs, static signs, the Internet and text messages. Primus said the text messaging is strictly for parking garages, where people could text a number to find out availability. They won't receive the texts automatically. Primus and his colleagues expect this data network to help citizens make informed travel choices and find parking spaces more easily. The potential benefits are considerable. 
If drivers can find parking spaces fast it can lead to less congestion, frustration and pollution. The city would be better able to manage the public-parking system and its revenue stream. Consumers, however, might balk at above-average parking prices at peak commute times. But if so discouraged by that and low availability of spaces, they could use public transportation more often, which would also reduce congestion, pollution and frustration. "There is a real emphasis upon evaluation for the project, so we're planning to gather the data we need to evaluate the different expectations for the project upon its effects," Primus said. "For example, congestion, reliability, greenhouse gas emissions and so on." There will likely be more chances in the future for cities and transit authorities to test smart-parking technology, if California's endeavor is any indication. Bob Justice is the project manager at the state Department of Transportation for the current phase of the smart-parking project deployed at the Rockridge BART station. He said similar technology is being deployed at five train station lots in San Diego. "I would say it's becoming more accepted," said Justice. "I would still say it's in the early stages, but I foresee it over time, expanding and eventually being a viable service."
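As a rough illustration of the demand-responsive pricing approach Primus describes, where meter rates are nudged up or down until a block reaches an occupancy target, here is a minimal sketch. The target band, step size and rate limits are invented for the example and are not SFpark's actual parameters.

```python
# Illustrative only: a demand-responsive rate adjustment in the spirit of
# "adjust prices up or down to help us achieve our availability targets".
# Target band, step size and price bounds are assumptions, not SFpark policy.

def adjust_hourly_rate(current_rate, occupancy, target_low=0.60, target_high=0.80,
                       step=0.25, min_rate=0.25, max_rate=6.00):
    """Nudge the hourly meter rate toward an occupancy target band."""
    if occupancy > target_high:      # block too full: raise the price
        current_rate += step
    elif occupancy < target_low:     # block too empty: lower the price
        current_rate -= step
    return round(min(max(current_rate, min_rate), max_rate), 2)

print(adjust_hourly_rate(2.00, 0.92))  # 2.25, a 25-cent increase at the next adjustment
print(adjust_hourly_rate(2.00, 0.45))  # 1.75, a 25-cent decrease
```

A real system would run a rule like this per block and per time-of-day band, driven by the occupancy reported by the in-street sensors, rather than on single readings.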
The Web is about to have its big bang. About 1,000 new generic top-level domain names, or gTLDs (the last bit of an internet address, such as the com in qz.com or Nextov.com) will come into existence this year. On Feb. 4, anybody will be able to create and start running a website on the first of the new domains. The number of alphabets in which you can create a web address will go from one—Latin—to at least a dozen including Chinese and Arabic. Hundreds of millions of dollars will be made. And our conception of the web will change entirely. You may not have heard about this. That’s unsurprising. The infrastructure of the internet is rarely a sexy subject, except when it breaks spectacularly. New standards are constantly being adopted in the background. Who can keep track? The coming deluge of new domains is different. It is highly visible, and will affect everybody who uses the web. What’s less certain is whether it is strictly necessary. Proponents argue that it will benefit people and businesses (small ones especially) by giving them more addresses to choose from. Critics call it a massive land grab by both entrepreneurs and some of the world’s most powerful internet companies. First, a little background Domain names, properly known as Uniform Resource Locators (URLs) and more commonly known as web addresses, are overseen by the Internet Corporation for Assigned Names and Numbers (ICANN). They follow a hierarchy, much like physical addresses. If the web were a country, then a generic top-level domain like .com might be the state or province, and a second-level domain, like google.com, would be a city. Neighborhoods within the city can be found in either a suffix (google.com/images) or a prefix (images.google.com). Until 2013, there were only 22 functioning gTLDs. The most familiar predated the creation of ICANN: .com, .net, .org, .edu, .gov, and .mil. Another seven came into existence in 2000, and a further eight in 2004. Most of the new domains in these two waves never really took off. You will sometimes spot .biz or .info in the wild, but more niche ones such as .mobi (aimed at mobile sites) and .xxx (for porn) got little attention from the markets they were aimed at. In addition, countries get their own top-level domains, called ccTLDs. Familiar examples include .de (Germany), .ca (Canada) or .co (Colombia, now used mostly for other purposes). These form another huge chunk of the internet.
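To make the hierarchy concrete, here is a minimal sketch that splits a hostname into its labels, reading the top-level and second-level domains from the right. It is deliberately naive: real parsers consult the Public Suffix List, since country-code suffixes such as .co.uk span more than one label.

```python
# Naive illustration of the domain hierarchy described above.
# Real-world parsing should use the Public Suffix List.

def split_domain(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    return {
        "top_level": labels[-1],                                   # the "state": com, org, info
        "second_level": labels[-2] if len(labels) > 1 else None,   # the "city": qz, google
        "subdomains": labels[:-2],                                 # the "neighborhood": images, www
    }

print(split_domain("images.google.com"))
# {'top_level': 'com', 'second_level': 'google', 'subdomains': ['images']}
```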
On May 24th, 2018, the EU Data Protection Directive will be updated for the first time since 1995. The directive is now becoming a regulation – a comprehensive, enforceable law – and the drastic changes will affect organisations everywhere. The new set of guidelines is known as the European General Data Protection Regulation, or EU GDPR, and here are 8 key differences:

1. One Set of Rules Across the EU

The EU GDPR is a regulation, not a directive. A directive is a set of rules presented to the entire EU that can then be interpreted and implemented differently by each of the 28 countries within the union. The new regulation, on the other hand, creates a unified digital economy across the EU, and will be implemented uniformly by one supervisory authority across the entire union.

2. Personal Data Redefined

Under the current directive, each of the 28 countries developed their own interpretation of what constituted personal data. The EU GDPR enforces a strict and broad definition of personal data, referring to any information that could be used, on its own or in conjunction with other data, to identify an individual. This may mean, for example, that even a phone number stored on its own without an associated name or address falls under EU GDPR guidelines and needs to be properly protected.

3. New Individual Rights

Built into the EU GDPR is a strong focus on citizen rights. Organisations will have to disclose the intended use and duration of storage of the data acquired, and re-solicit permissions each time a new use of the data is proposed. EU citizens will have to explicitly opt in to the storage, use, and management of their personal data, and will have the right to access, amend, or request the deletion of their personal data. Additionally, they will be able to object to certain types of processing – profiling for marketing purposes, for example.

4. Mandatory Breach Notification

The EU GDPR requires organisations to report data breaches to the individuals whose data was lost, and to a supervisory authority within 72 hours. The data breached, and the preventative security measures in place at the time of the breach, must then be evaluated to assess repercussions and ensure future compliance.

5. Financial Repercussions

To ensure compliance with the new regulation, steep fines are being put in place. If violations occur, organisations could be charged either 4% of their global turnover or 20,000,000 EUR, whichever is higher.

6. Joint Responsibility

The regulation defines data controllers as organisations who acquire EU citizens' data, and data processors as organisations who may manage, modify, store, or analyse that data on behalf of or in conjunction with the controllers. Under the regulation, both parties are jointly responsible for complying with the new rules. This means that if an organisation outsources data entry or analysis to a third party, or processes data on behalf of another organisation, both parties are liable.

7. Information Governance

Under the EU GDPR, organisations are required to actively track how and where data are stored and used through the supply chain. This means adopting risk management tools and building security and privacy into their operations by design. Any organisation directly involved with the processing of data, or with more than 250 employees, must also appoint a Data Protection Officer.

8. Truly Global Impact

Even though the regulation is being rolled out by the European Union, it has a global impact.
Organisations based outside of the EU must comply if they handle, store, manage, or process EU citizens' personal data. Any company in the world that sells to European companies or receives data from EU citizens, for example, will be affected.

Want to Learn More?

For more information on the new regulation, including a detailed overview and an in-depth Q&A session, check out our recent EU GDPR webinar. In this recorded open discussion, you'll hear from Andrew Dyson of DLA Piper UK LLP, and Jennifer Sand, CloudLock's VP of Product Management.
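As a simple illustration of the penalty ceiling described in point 5 above (the greater of 20,000,000 EUR or 4% of global turnover), the arithmetic looks like this. The function name and example figures are purely illustrative; actual fines are set by supervisory authorities and may be far lower than the cap.

```python
# Illustration of the fine cap described in point 5: the greater of
# EUR 20,000,000 or 4% of global annual turnover. This is only the upper
# bound stated in the text, not a prediction of any actual fine.

def gdpr_fine_cap(global_annual_turnover_eur):
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

print(gdpr_fine_cap(100_000_000))    # 20000000   -> the 20M EUR floor applies
print(gdpr_fine_cap(2_000_000_000))  # 80000000.0 -> 4% of turnover is higher
```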
Congress Greenlights Nanotechnology R&D Measure

Feds budget a few billion bucks for nanotechnology R&D. Nanoparticles could help produce smaller, faster microprocessors.

WASHINGTON -- Trying to tie up loose ends before heading home for the year, Congress this week passed legislation dedicating nearly $3.7 billion for nanotechnology research and development. Following Senate approval earlier in the week, the House of Representatives Thursday approved the 21st Century Nanotechnology Research and Development Act, which names the National Science Foundation as the lead R&D agency. The Department of Energy, Department of Commerce, NASA, the EPA and other agencies will participate in the nanotechnology project.

The manufacturing industry lobbied for the measure as a means of igniting economic growth, arguing that Europe and Japan were positioned to outpace the United States in nano-science advancement absent a stronger federal role. Scientists have said that nanoparticles can improve and strengthen traditional manufacturing materials such as steel and will help produce smaller, faster electronic devices.
A new on-line map makes it possible, for the first time, to track disease outbreaks around the world that threaten the health of wildlife, domestic animals, and people. Updated daily, the map displays pushpins marking stories of wildlife diseases such as West Nile virus, avian influenza, chronic wasting disease, and monkeypox. Users can browse the latest reports of nearly 50 diseases and other health conditions, such as pesticide and lead poisoning, by geographic location. Filters make it easy to focus on different disease types, affected species, countries, and dates. The map is a product of the Wildlife Disease Information Node (WDIN), a five-year-old collaboration between UW-Madison and two federal agencies, the National Wildlife Health Center and the National Biological Information Infrastructure, that are part of the U.S. Geological Survey (USGS). WDIN is housed within the university's Nelson Institute for Environmental Studies and the USGS. A powerful feature of the wildlife disease map is its ability to tap into the WDIN's large and growing electronic library of information from around the globe. "If you click on the name of a particular disease, it takes you to our main Web site and does a quick search of everything that we have on that topic," said Cris Marsh, a librarian who oversees the wildlife disease news services for the WDIN. State and federal wildlife managers, animal disease specialists, veterinarians, medical professionals, educators, and private citizens will all find the new map useful for monitoring wildlife disease, adds Marsh. Ultimately, the WDIN seeks to provide a comprehensive online wildlife disease information warehouse, according to project leader Josh Dein, a veterinarian with the Madison-based USGS National Wildlife Health Center. "People who collect data about wildlife diseases don't currently have an established communication network, which is something we're working to improve," said Dein. "But just seeing what's attracting attention in the news gives us a much better picture of what's out there than we've ever had before." Concerns about the emergence and spread of diseases that can pass between species have forged new links in recent years between wildlife health, human health, and domestic animal health professionals. "It all ties in together, the 'One-World, One-Health' idea," said Marsh. "The West Nile virus acted as one of the catalysts for that connection. People in different areas in the eastern U.S. began to see isolated incidences of dead and dying crows that seemed abnormally high, but nobody knew other areas were experiencing the same thing." Because West Nile virus also affects humans and other mammals, it became apparent to scientists that disease outbreaks of this kind need to be addressed as quickly as possible, explains Marsh. Outbreaks of monkeypox and highly pathogenic avian influenza soon afterward underscored the importance of linking information about emerging diseases across all species. "If scientists share with one another the information they're collecting on the patterns of diseases like these, we can respond to outbreaks much more efficiently," says Marsh. Besides providing news services, WDIN collaborates with a wide variety of public and private entities to gather and provide access to important wildlife disease data. Because of the global significance of these diseases, WDIN encourages others to become involved with the project. "The more information we can link," said Marsh, "the more robust our service becomes."
A new study by University of California at Davis researchers shows that workers with low wages are at a higher risk for hypertension than higher-paid workers. Further, the link between low wages and hypertension was strongest among women and workers between the ages of 25 and 44. "We were surprised that low wages were such a strong risk factor for two populations not typically associated with hypertension, which is more often linked with being older and male," J. Paul Leigh, senior author of the study and professor of public health sciences at UC Davis, said in a statement. "Our outcome shows that women and younger employees working at the lowest pay scales should be screened regularly for hypertension as well." The UC Davis study was published in the December issue of the European Journal of Public Health. (You can read the abstract here, but have to pay to see the entire study.) Hypertension can be a killer, frequently leading to heart disease and stroke. The U.S. Centers for Disease Control and Prevention estimates that hypertension afflicts one out of three adults in the country and costs $90 billion annually in health care, medication and absences from work. Researchers studied more than 5,600 working adults between ages 25 and 65 over three time periods -- 1999 to 2001, 2001 to 2003 and 2003 to 2005. Using regression analysis, the team determined that doubling wages was linked to an overall 16% decrease in the risk of a hypertension diagnosis. Wage hikes had an even more dramatic impact on women and younger workers. Doubling their wages led to a 25% to 30% decrease in the risk of a hypertension diagnosis among younger workers and a 30% to 35% decrease among women. "Wages are also a part of the employment environment that easily can be changed," Leigh said. "Policymakers can raise the minimum wage, which tends to increase wages overall and could have significant public-health benefits." Careful, doctor. Talk like that is sure to raise the blood pressure of our nation's job creators.
This year's record holders for damage caused are certainly Mydoom.a (February 2004) and Sasser.a (May 2004). The most important changes in the malware world include the criminalization of the Internet with malicious code writers and hackers migrating to the creation of bot networks to support spammers. On the other hand antivirus companies have become more responsive, while law enforcement agencies worldwide have finally focused efforts on cyber crime. 2004 was a record year for arrests of cyber criminals. - AdWare (advertising systems) becomes one of the biggest security headaches; - Email traffic is clogged with spam, and in most cases it is impossible to work with email without anti-spam filters; - Successful attacks on Internet banks; - Numerous cases of Internet racket (DDoS attacks with consequent extortion); - Antivirus companies include protection from AdWare in their products; - The fastest response to new malware threats becomes the main criteria for evaluating antivirus vendors; - A lot of different anti-spam solutions appear; using such solutions is de-facto standard for mail service providers; - Successful investigations and arrests (about 100 hackers arrested, three of whom were on the FBI top 20 most wanted list). Malware developments in 2004 Each generation of malware (malicious software) writers stands on the shoulders of the previous one. It's no surprise, therefore, that the seeds of development in malware that have come to fruition in 2004 were actually sown in the previous year. Lovesan 'popularized' the use of system exploits to infect vulnerable machines directly over the Internet and included in its 'back pocket' a Distributed-Denial-of-Service (DDoS) attack (on the Windows update server). Sobig.f broke all previous records (at its height, one in ten emails were infected with Sobig) by using spam techniques to spread. It also pioneered the 'slow burn': each new variant of the worm created a network of infected machines that were used as a platform for a later epidemic. When Swen appeared in September 2003, it seemed to be just another mass-mailer. However, it succeeded through 'social engineering'. Social engineering is just a fancy way of describing a non-technical breach of security that relies on human interaction: in the case of viruses and worms, it means tricking unsuspecting users into running an infected attachment. Swen masqueraded as a cumulative Microsoft patch designed to patch all vulnerabilities, manipulating users' growing awareness of the need to secure their operating system from attack. These techniques have been continued, and further developed, by successive threats in 2004. The use of system exploits to get a foothold in the corporate network and spread rapidly has now become commonplace, as writers of malicious code have woken up to the potential 'helping hand' provided by vulnerabilities in common applications and operating systems. Some threats in 2004, like Sasser, Padobot and Bobax, have used the system exploit as their sole attack mechanism, spreading directly over the Internet from machine to machine, avoiding the use of 'traditional' virus techniques altogether. Others, among them Plexus and the numerous Bagle, Netsky and Mydoom variants, have combined the use of system exploits with other infection methods (for example, mass-mailing and the use of network resources, including P2P networking). 
Many of today's most successful threats (successful from the author's perspective that is) are a composite 'bundle' that includes different kinds of threat. And increasingly this 'bundle' includes a Trojan of one kind or another. Typically Trojans are dropped onto the system by a virus or worm. Since Trojans don't have their own on-board replication capability, they're often perceived as being less dangerous than viruses or worms. Yet their effects can be dangerous and far-reaching. They're not only becoming more sophisticated. They're being put to an increasing number of malicious uses. The 2004 New Year celebrations had hardly ended before the appearance of the Trojan proxy Mitgleider set the scene for the coming year. Thousands of ICQ users received a message with a link directing them to a web site containing this Trojan. Mitgleider used one of two Microsoft Internet Explorer vulnerabilities to install and launch a proxy server on the victim machine without the user's knowledge. It then opened a port on the machine, allowing it to send and receive email. The result was that victim machines were turned into an army of spam-spewing 'zombies'. Mitgleider established Trojan proxies as a separate class of malware closely linked to the distribution of spam. It also set a trend with the mass-mailing of links to infected web sites. Most of the significant threats that followed Mitgleider have made use of Trojans. Bagle, a worm that seems to have been written by the same coders that produced Mitglieder, either installed a Trojan proxy or downloaded it from the Internet. In any case, the worm was simply an improved version of Mitgleider that included propagation by email. Bagle was distributed from machines infected by Mitgleider. This highlights another important feature of 2004 threats: the use of Trojan programs to 'seed' computers in the field as a platform for a later epidemic. This technique was used to great success not only by Bagle, but also by Netsky, Mydoom and other significant threats. As each successive variant of these worms was released, it increased the number of infected machines: once 'critical mass' was reached, there was a new epidemic. This was the principal factor behind the success of Mydoom, which outdid Sobig to become the biggest epidemic that we've seen to-date. Mydoom is also a good illustration of the point made earlier about malware 'bundles'. It used effective a clever piece of social engineering, set up a DDoS attack on 'www.sco.com' that crashed the SCO site for months and dropped a backdoor Trojan onto victim machines that was used by many copycat threats that followed in its wake. 2004 witnessed a battle between rival malicious code writers. Netsky didn't simply infect victim machines; it deleted any existing infection by Mydoom, Bagle and Mimail worms. On top of this, the authors of Netsky instigated a war of words with rival authors of Bagle. At its height, several new variants of both worms appeared daily, complete with insults embedded within the code. Bagle and Netsky authors also pioneered the use of password protection for infected attachments, in a clear attempt to make them difficult to detect. The body of the email contained the password, either in plain text or as graphics, so users had all they needed to launch the infected attachments. The technique of mass-mailing an infected attachment, so successful since it was first used by Melissa in March 1999, has been used by many of the major threats since then. However, there are alternative methods. 
One we've already discussed: Internet worms like Lovesan, Welchia and Sasser infect directly, using system exploits. One important alternative that has become common in 2004 is the use of links to direct users to a web site containing malicious code. The Mitgleider Trojan proxy, discussed earlier, is not the only threat that has used this technique: it has also been used by a number of worms. Netsky, for example, spread by sending an email containing links to previously infected machines. It was followed by Bizex, the first ICQ worm. Bizex penetrated machines via ICQ sending all the ICQ contacts found on the newly infected machine links to a site where the body of the worm was located. Once users clicked on the links, the body of the worm was downloaded from the infected web site and the cycle was initiated all over again. Snapper and Wallon later used the same technique, but used it to download Trojans that the author had placed on the web sites. So far, emails containing links have not been treated with suspicion by recipients, many of whom are much more likely to follow a link than they are to double-click an attachment. In addition, this method effectively 'skips over' the perimeter defenses deployed at the Internet gateway by many enterprises: they're used to blocking suspect extensions (EXE, SCR, etc.), but emails containing links slip through unnoticed. Undoubtedly, this method will continue to be used until users learn to treat links sent via email with the same caution that many now show email attachments. We've seen a significant increase in the numbers of Trojan spies, designed to steal confidential financial data. Dozens of new variants appear every week, often different in both form and function. Some of them are simple keystroke loggers that use email to send all keystrokes to the author or controller of the Trojan. The more elaborate Trojan spies provide total control over victim machines, sending data streams to remote servers and receiving further commands from these servers. This total control over victim machines is often the goal for Trojan writers. Infected machines are frequently combined into 'bot' networks, often using IRC channels or web sites where the author puts new commands. The more complex Trojans, like many Agobot variants, combine all infected machines into a single P2P network. Once these bot networks have been constructed, they are leased out for spam distribution, or used in DDoS attacks (like those carried out by Wallon, Plexus, Zafi and Mydoom). We're also seeing large numbers of Trojan droppers and Trojan downloaders. Both have one goal: to install an additional piece of malware on the victim machine, whether it's a virus, a worm or another Trojan. They simply use different methods to achieve their goal. Virus writers often use downloaders in the same way as droppers, although they can be more useful to them than droppers. First, downloaders are much smaller than droppers. Second, they can be used to download endless new versions of the targeted malware. Like droppers, downloaders are usually written in script languages such as VBS and JS, but they also often exploit Microsoft Internet Explorer (IE) vulnerabilities. Droppers and downloaders are used not only to install other malicious code. They're also used to install non-viral adware or pornware programs without the knowledge or consent of the user. Adware refers to programs that show advertisements, often banners, independently of user activity. 
Pornware refers to dialers installed without the knowledge or consent of the user that dial pornographic pay-to view sites automatically. The use of Trojan programs to steal passwords, to access confidential data, to launch DDoS attacks and to distribute spam email highlights a key change in the nature of the threat landscape, its increasing commercialization. It's clear that the computer underground has realized the potential for making money from their creations in a wired world. This includes the use of 'zombie' machines leased to the highest bidder as a platform for spam distribution. Or the use of extortion, where the same 'zombie' machines are used to launch a 'demonstration' DDoS attacks on a victim as a way of extorting money [pay up or we'll take down your site with a full-scale DDoS attack]. In addition, there's theft of login information. And the use of 'phishing' scams to trick users into providing their bank details (username, password, PIN number, etc.). 2004 has also seen the launch of a series of threats specifically targeting wireless devices. Cabir, the first virus for mobile phones appeared in June. This was a proof-of-concept virus produced by the virus-writing group 29A, although the virus was later reported in the field in the Far East. This was followed by the Duts virus in July (another creation of 29A) and the Trojan Brador in August, both aimed at Pocket PC. The number of wireless devices used within the corporate world is increasing. In particular, the use of handheld devices - PDAs and smart phones - is growing significantly and with it the use of wireless technology of one sort or another (802.11b, Bluetooth, etc.). These devices are quite sophisticated. They run IP services, offer web access and are hooked up to corporate networks. They also provide users with the ability to connect remotely to other devices and networks. Unfortunately, they're intrinsically less secure, operating outside the reach of traditional network security safeguards. And as they start to carry more and more valuable corporate data, wireless devices and wireless networks are likely to become a more attractive target for the writers of malicious code. Furthemore, 2004 has also been significant for the number of arrests of malicious code writers. In February, the Belgian virus writer Gigabyte was arrested. In May, two virus writers were arrested in Germany. The first was Sven Jaschen, who admitted to writing Sasser and some Netsky variants. A second coder was arrested for created the numerous Agobot/Phatbot worm families. These arrests followed the announcement by Microsoft of bounties for information leading to the arrest of virus writers. In July, a Hungarian teenager, 'Laszlo K', was found guilty of distributing the Magold.a worm that became widespread in Hungary during May 2003. He was sentenced to two years probation and ordered to pay court costs of $2,400. In the same month, a computer engineer from Spain was arrested and tried for distributing the Cabrotor Trojan: Oscar Lopez Hinarejos was sentenced to two years in prison. There were other arrests in the same month In Taiwan, Canada and Romania. In August, Jeffrey Lee Parson, a teenager from Minnesota, pleaded guilty to damaging computers by creating the Lovesan.b worm. The fast spread of viruses and worms during the last few years has clearly demonstrated the global nature of the threat. 
Increasingly, however, law enforcement is becoming a global phenomenon, with government authorities from various countries collaborating to bring to justice malicious coders. One example of how successful such joint operations can be is the arrest of 28 people in October in connection with identity theft in six countries. The operation involved the US Secret Service, the UK National Hi-Tech Crime Unit, the Vancouver Police Department's Financial Crimes Section (Canada), the Royal Mounted Police (Canada), Europol and police agencies in Belarus, Poland, Sweden, The Netherlands and Ukraine. More recently, a Russian phisher was arrested in Boston and charged with multiple counts of fraud, identity theft and the use of credit card scanning devices. So, what does the future hold? Well, we're likely to see 'more of the same'. As long as the techniques outlined above prove successful in attacking PC users, the writers of malicious code will continue to use them. This includes tried-and-trusted methods like mass-mailing and the use of system exploits to attack vulnerable computers, the widespread use of Trojans to steal data or as a platform for DDoS attacks or spam distribution. It also includes techniques pioneered this year, like the use of links in emails to download malicious code from a web site. The key factor is that these methods have proved successful, for the writers of malicious code and those who pay them to create code that can be used to make money illegally. Of course, they will continue to tweak their creations, adding new features to make them even more effective, or new 'self-defence' mechanisms to make them less easy to detect and remove. As in the past, some malware authors will continue to break new ground. In particular, they're likely to target the growing numbers of wireless devices that are increasingly used by enterprises and users alike. For the latest in malware developments, check out viruslist.com.
Open-source software is an increasingly popular software development and distribution model that may spread further in the face of financial constraints in our current economy. With publicly available source code generally offered without charge, it is tempting to look to open source for potentially significant cost savings in this time of need. But not so fast. While proponents of open source proclaim the benefits of "free code," it might better be compared to the free puppy offered to a good home. The "puppy" may come at no initial cost, but the ongoing maintenance and undisclosed hidden dangers may create unforeseen hassles in your corporate home.

Open source has complex legal restrictions that can create copyright and patent compliance issues and corporate transaction challenges for companies that rely heavily on customized software or that distribute software to partners or customers. In 2004, the Federal Reserve, FDIC and other federal financial regulatory agencies outlined various strategic and legal risks of using open-source software in the jointly issued guidance notice "Risk Management of Free and Open Source Software." Public company disclosure statements also demonstrate open-source issues. In their annual reports, many public companies note their use of open source as a risk factor to their businesses, while others go as far as to highlight their lack of open source as a positive factor. Private companies seeking to be acquired have seen their valuation drop, or have seen acquisitions fail altogether, as a result of open-source software discovered during the due diligence process.

To read more on this topic, see: Wall Street Software Scandal: When Does Open Source Become Proprietary Code and Open Source - Dirty Code, Licenses and Open Source.

Any doubts about the enforceability of open-source software licensing restrictions in practice have been put to rest by recent court decisions. At the same time, the use of open-source software is expanding rapidly, and even commercial software companies often provide open-source licensing options and opportunities.

Open-Source Licenses Are Complex

The Open Source Initiative standards group has approved nearly 70 open-source licenses, each with different terms. These licenses typically fall into one of two categories. The first is described as an attribution-type license, and it generally imposes few obligations beyond requiring that an acknowledgement of the authorship of the software be included in some manner, such as in source code comments and help files. The second, more common and more demanding type of open-source license is the reciprocal-type license, also known as a "viral" or "copyleft" license.

Reciprocal-type open-source license terms can be complex and ambiguous. Generally, any company that uses open source and either modifies or distributes it will need to have a thorough program in place to ensure compliance with the applicable licensing requirements. Typical features of reciprocal-type licenses include requirements to make source code generally available, prohibitions on using the software for commercial purposes, and implied or express patent license grants. These licenses may also lack authorization for the rights to transfer or assign the software.

One example of a reciprocal-type license is the GNU General Public License (GPL). When a company includes GPL-licensed software in its own software, that company must then allow its software to be made available and licensed to all third parties under the same GPL terms.
That means competitors can examine--and in some circumstances copy, distribute and develop derivative works of--what could otherwise be proprietary source code. Know the Risks Failing to comply with open-source license terms is not merely a breach of contract. Noncompliant use of open-source software also can result in copyright infringement, with increased possibilities for injunctive relief that may force product recalls or expensive alternative software development. It can also lead to enhanced damages and a fixed penalty of up to $150,000 per work infringed, as well as liability for the other party's attorneys' fees. This is not a hypothetical threat. In 2008, the Federal Circuit Court of Appeals issued a decision that upheld the enforceability of open-source licenses. The court ruled that as a result of the defendant's failure to comply with the notice and attribution requirements in the open-source license for software it had used, the defendant did not have a license and potentially was subject to a preliminary injunction to stop his alleged copyright infringement as well as liability for copyright damages. Another risk that arises from using open source is that its pedigree often is unknown and always is uncertified. Using open-source software may expose a company to claims that it has infringed the intellectual property rights of others. Open-source licenses provide no warranties or other guarantees that contributors to the source code did not copy the protected work of others, nor do these licenses provide any indemnification to protect against third-party claims for such infringement. No one stands behind the software. Again, the threat is not hypothetical; open-source distributors have been sued for patent infringement, and end users can be liable as well. For example, in October, Red Bend Software sued Google for patent infringement with respect to functionality included in Google's Chrome browser. Manage Your Exposure Companies preparing to be acquired should know the risks of open source upfront, since most buyers will conduct a sophisticated and rigorous evaluation of open-source software use. The representations and warranties in an acquisition agreement generally will require disclosure of open-source use and distribution. Additionally, an acquiring company will want a general understanding of the origin of all of the software used and distributed by the target company. Part of that exercise involves understanding open-source use and which license requirements apply. Target companies that use "not for commercial use" open-source software for commercial purposes will need to obtain a different and generally more costly commercial license, if such a license is even available. Depending upon the structure of the acquisition, third-party consent for assignment may be needed for continued use of the software. Additionally, if company employees have contributed software in any collaborative open-source projects, their participation may require corollary contribution of company intellectual property or a promise not to assert intellectual property rights to the code or software developed in the project. Many acquirers require target companies to undergo an expert technical assessment to determine the use and applicable license terms of open-source software, with the commitment to proceed with the acquisition contingent on satisfactory results. 
Software licensed under a reciprocal-type license may need to be replaced with newly written software, commercially licensed software or perhaps open-source software licensed under an attribution-type license. This replacement or remediation effort can be substantial and may delay closing or, in the worst-case scenario, terminate the transaction. The Sarbanes-Oxley Act (SOX) requires executives of a public company to certify that the company has procedures in place to provide accurate financial statements and has the related internal controls necessary to produce those statements. Such controls include being able to verify ownership of material assets. Failing to establish procedures to ensure compliance with open-source licenses may indicate a lack of procedures necessary to verify ownership and use of intellectual property. At a minimum, risk factors associated with compliance with reciprocal-type licenses--which may require that a company make its intellectual property assets publicly available without charge--may need to be disclosed. Penalties for falsely certifying a SOX-required statement are severe, including substantial fines and possible imprisonment. If you know open source has been used by your IT staff or external developers, get the details on use, modifications and compliance. Your organization should have policies for oversight and control of all software acquisition by employees. If your open-source use is extensive, you also may want to check with a consultant that specializes in open-source compliance. Finally, once you have a clear understanding of your company's open-source use and the corresponding licensing requirements, get a jump-start on remediation by thinking through your options with your financial and legal advisers. This is especially important if you are attempting to go public or are involved in merger, acquisition or other investment discussions, so that these matters can be addressed early in the process. Mark H. Wittow and Jessica C. Pearlman are partners in the Seattle office of the law firm K&L Gates. Wittow focuses on intellectual property and technology transactions and litigation. Pearlman focuses on corporate securities, and mergers and acquisitions.
A new high altitude IoT research project is set to launch that will begin sending data to the cloud from beyond the clouds. A helium balloon fitted with seven radios, 38 sensors, and six cameras will be sent 100,000 feet into the air so that it can stream real-time telemetry and live flight video to Microsoft's Azure IoT platform.

The project has been named Pegasus II and is scheduled for a test flight over the next few days, following a cancelled launch back in February due to a malfunctioning air pressure sensor. The upcoming test will prove critical to ensure that communications with the payload, ground station and field gateways, as well as any associated software, are all working correctly. If successful, a new launch window will be declared for later this month.

"The Pegasus Mission is all about experimentation and the search for new ideas to achieve something that is not currently possible," explains the project website. "This is not our day job, it's our passion for experimentation. High Altitude Science provides an interesting proving ground for this, where it takes literally a mission to get a craft into the upper atmosphere, 20 miles above the surface of the Earth."

IoT in space

The Pegasus II project is not the first effort that has looked to take IoT solutions into the stratosphere. Airbus Defence and Space began work on the MUSTANG project last year, aiming to create a global IoT network that relies on both terrestrial and satellite terminals. The vast quantities of data provided by extra-terrestrial sources – NASA collects hundreds of terabytes every hour – also present possible IoT opportunities.

The team at Pegasus II will be hoping that the tests scheduled to take place this weekend go off without issue. Anyone who wants to follow the team's progress can access live feeds from the project website or download the Pegasus Mission smartphone app.

Want to stay up-to-date with the latest IoT news and analysis? Sign up for our weekly newsletter here!
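The article does not show the mission's flight software, but the general pattern it describes (periodically sampling sensors and packaging the readings as timestamped telemetry messages for a cloud IoT service) can be sketched roughly as follows. All field names and values are invented for illustration and are not taken from the Pegasus II project.

```python
# Generic sketch only: package sensor readings as JSON telemetry messages
# before handing them to whatever cloud IoT client the project actually uses.
import json
import time
import random

def read_sensors():
    # Stand-in for the balloon's real sensor bus; values are random placeholders.
    return {
        "altitude_ft": random.uniform(0, 100_000),
        "temperature_c": random.uniform(-60, 25),
        "pressure_hpa": random.uniform(10, 1013),
    }

def build_telemetry(device_id="pegasus-ii"):
    payload = {"device": device_id, "ts": time.time(), "readings": read_sensors()}
    return json.dumps(payload)

print(build_telemetry())  # one telemetry message, ready to publish to an IoT hub
```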
A new study by Norton reveals the staggering prevalence of cybercrime: 65% of Internet users globally, and 73% of U.S. Web surfers have fallen victim to cybercrimes, including computer viruses, online credit card fraud and identity theft. As the most victimized nations, America ranks third, after China (83%) and Brazil and India (76%). The first study to examine the emotional impact of cybercrime, it shows that victims’ strongest reactions are feeling angry (58%), annoyed (51%) and cheated (40%), and in many cases, they blame themselves for being attacked. Only 3% don’t think it will happen to them, and nearly 80% do not expect cybercriminals to be brought to justice— resulting in an ironic reluctance to take action and a sense of helplessness. “We accept cybercrime because of a ‘learned helplessness’,” said Joseph LaBrie, PhD, associate professor of psychology at Loyola Marymount University. “It’s like getting ripped off at a garage – if you don’t know enough about cars, you don’t argue with the mechanic. People just accept a situation, even if it feels bad.” Despite the emotional burden, the universal threat, and incidents of cybercrime, people still aren’t changing their behaviors – with only half (51%) of adults saying they would change their behavior if they became a victim. Even scarier, fewer than half (44%) reported the crime to the police. Cybercrime victim Todd Vinson of Chicago explained, “I was emotionally and financially unprepared because I never thought I would be a victim of such a crime. I felt violated, as if someone had actually come inside my home to gather this information, and as if my entire family was exposed to this criminal act. Now I can’t help but wonder if other information has been illegally acquired and just sitting in the wrong people’s hands, waiting for an opportunity to be used.” Solving cybercrime can be highly frustrating: According to the report, it takes an average of 28 days to resolve a cybercrime, and the average cost to resolve that crime is $334. Twenty-eight percent of respondents said the biggest hassle they faced when dealing with cybercrime was the time it took to solve. But despite the hassle, reporting a cybercrime is critical. “We all pay for cybercrime, either directly or through pass-along costs from our financial institutions,” said Adam Palmer, Norton lead cyber security advisor. “Cybercriminals purposely steal small amounts to remain undetected, but all of these add up. If you fail to report a loss, you may actually be helping the criminal stay under the radar.” The “human impact” aspect of the report delves further into the little crimes or white lies consumers perpetrate against friends, family, loved ones and businesses. Nearly half of respondents think it’s legal to download a single music track, album or movie without paying. Twenty-four percent believe it’s legal or perfectly okay to secretly view someone else’s e-mails or browser history. Some of these behaviors, such as downloading files, open people up to additional security threats.
Question 1) CompTIA’s A+ Core Hardware Exam Objective: Installation, Configuration, and Upgrading SubObjective: Identify the names, purpose, and characteristics, of system modules. Recognize these modules by sight or definition Single Answer Multiple Choice What is standard VGA resolution? A. 640 x 350 B. 640 x 200 C. 720 x 350 D. 640 x 480 D. 640 x 480 Standard VGA (Video Graphics Array) was introduced in 1987, and the VGA definition has not changed. VGA systems can display 640 x 480 pixels using 16 colors in graphics mode and 720 x 400 pixels using 16 colors in text mode. Improved VGA systems that are capable of displaying at higher resolutions and using more colors are referred to as Super VGA. The minimum requirement for SVGA compatibility is 640 x 480 and 256 colors. Typical SVGA systems operate at 800 x 600 or better, with thousands or millions of colors. The SVGA definition continues to expand. Video controllers capable of 2048 x 1152 screen width and 128-bit color are available. 640 x 350 is the resolution of Enhanced Graphics Adapter (EGA). EGA is an older standard that was first introduced in 1984. 640 x 200 is the resolution of Color Graphics Adapter (CGA). CGA is an even older technology, introduced in 1981. 720 x 350 is the resolution of Monochrome Display Adapter (MDA). MDA was also introduced in 1981. It does not support color or graphics. 1. A+ Training Guide – Basic Terms and Concepts – Inside the System Unit – Adapter Cards – Video Adapter Cards These questions are derived from the Self Test Software Practice Test for CompTIA Exam #220-301: A+ 2003 Core Hardware.
dynamic programming

Definition: Solve an optimization problem by caching subproblem solutions (memoization) rather than recomputing them.

Aggregate parent (I am a part of or used in ...): Smith-Waterman algorithm.

Solves these problems: matrix-chain multiplication problem, longest common substring, longest common subsequence.

See also: greedy algorithm, principle of optimality.

Note: From Algorithms and Theory of Computation Handbook, page 1-26, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "dynamic programming", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 23 March 2015. Available from: http://www.nist.gov/dads/HTML/dynamicprog.html
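A minimal sketch of the idea, using one of the problems listed above (longest common subsequence): each (i, j) subproblem is cached the first time it is solved, so it is never recomputed.

```python
# Memoized longest common subsequence: illustrates caching subproblem
# solutions rather than recomputing them.
from functools import lru_cache

def lcs_length(a, b):
    @lru_cache(maxsize=None)          # memoization: each (i, j) subproblem solved once
    def solve(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + solve(i + 1, j + 1)
        return max(solve(i + 1, j), solve(i, j + 1))
    return solve(0, 0)

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. the subsequence "BCBA"
```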
Get started with Raspberry Pi

We're big fans of the Raspberry Pi here at BetaNews. The super-cheap, credit-card-sized computer was created to help get kids back into programming, just as they did in schools in the 1980s and 1990s, but the ARM GNU/Linux board has found an appreciative audience outside of the education system, with over 3 million Pis sold since 2012.

Getting started with the device is easy enough, and there's plenty of help and advice available on the Raspberry Pi Foundation's website, but if you want a simple, straightforward guide then Manchester-based NeoMam Studios has put together an infographic covering setting up, getting started and more.

The guide is easy to follow, and there's also a selection of fun Raspberry Pi projects available for you to try, sorted by difficulty level. If you're a beginner, you could consider creating an IR remote for XBMC Media Center, a solar powered Pi, or a web server. Intermediate users can choose from creating a gaming device/emulator, Raspberry Pi internet radio or a weather station. Finally, advanced users have the chance to program a game, build a digital camera, or design a mobile robot.

There is one problem with the guide: it doesn't mention the new Model B+ that was introduced a month ago, which is a shame. Check out the infographic below.
Almost one-third of U.S. small businesses surveyed by the Ponemon Institute had a cyber attack in the previous year, and nearly three-quarters of those businesses were not able to fully restore their company’s computer data. The primary causes of cyber attacks on small businesses were computer viruses, worms and Trojans (61 percent) and unspecified malware (22 percent), the Ponemon Institute reported. Following the cyber attacks, 72 percent were not able to fully restore their company’s data. The survey found that 29 percent of the small businesses experienced a computer-based attack. The consequences of those attacks included managing potential damage to their reputations (59 percent); theft of business information (49 percent); the loss of angry or worried customers (48 percent) and network and data center downtime (48 percent). In recently released findings on data breaches, the Ponemon Institute surveyed the same small businesses, health care providers and professionals around the U.S. and found that 53 percent had experienced a data breach and 55 percent of those businesses had multiple breaches.
As more and more of our daily life happens online, the issue of online privacy should be of prime importance to each of us. Unfortunately, it’s not. Most users are not worried enough to scour the Internet for information about the latest privacy-killing features pushed out by social networks, online services and app makers, and even those who are often find it difficult and too time-consuming to keep abreast of the changes. What we need is a way of getting all the relevant privacy information in a timely, applicable and focused fashion. Arvind Narayanan, Assistant Professor of Computer Science at Princeton, proposes a “privacy alert” system that would know the users’ usual privacy choices and notify them of appropriate measures they should take to tackle potential privacy pitfalls. In his mind, the system should consist of two modules. “The first is a privacy ‘vulnerability tracker’ similar to well-established security vulnerability trackers. Each privacy threat is tagged with severity, products or demographics affected, and includes a list of steps users can take,” he explained in a blog post. “The second component is a user-facing privacy tool that knows the user’s product choices, overall privacy preferences, etc., and uses this to filter the vulnerability database and generate alerts tailored to the user.” This would allow users to keep on top of things, but also prevent them from being overwhelmed with unnecessary and impractical information. “The ideas in this post aren’t fundamentally new, but by describing how the tool could work I hope to encourage people to work on it,” he admitted, and offered to collaborate with someone who is interested in creating it. He also mentioned a few additional “bells and whistles” such a tool might incorporate, such as the possibility of crowdsourcing relevant information, an open API, and the option of connecting the tool to the users’ browsing history and other personal information. This last option, he says, would work only if users trust the creator of the tool.
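A minimal sketch of the user-facing filtering module described above might look like the following. The advisory fields and example entries are assumptions made for illustration; they are not part of Narayanan's proposal.

```python
# Sketch of the second module: filter a feed of privacy "vulnerability" entries
# down to alerts relevant to one user's products and preferences.

ADVISORIES = [
    {"id": 1, "severity": "high", "products": {"social-network-x"},
     "summary": "Default audience for old posts changed to public",
     "action": "Review the post-audience setting"},
    {"id": 2, "severity": "low", "products": {"photo-app-y"},
     "summary": "Location metadata now shared with partners",
     "action": "Disable location tagging"},
]

def alerts_for(user_products, min_severity="low"):
    rank = {"low": 0, "medium": 1, "high": 2}
    return [a for a in ADVISORIES
            if a["products"] & set(user_products)
            and rank[a["severity"]] >= rank[min_severity]]

for alert in alerts_for({"social-network-x"}, min_severity="medium"):
    print(alert["summary"], "->", alert["action"])
```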
As you will see here, there are two kinds of IPv6 address autoconfiguration. One of them is the old, well-known way of automatically configuring an IP address from the IPv4 world: DHCP. The other way of doing autoconfiguration in the IPv6 world is new and really interesting, because it gives hosts the ability to configure themselves without needing to communicate with any other system. IPv6 is meant for various purposes, but one main purpose it serves is to make the lives of network administrators easier, especially when it comes to dealing with the vast address space that IPv6 provides compared to IPv4. To meet this need, automatic address configuration, or autoconfiguration, was created. As a result, an IPv6 host can configure all or part of its address automatically, depending on the type and method it uses for autoconfiguration. The method types include:
- Stateful autoconfiguration
- Stateless autoconfiguration using the EUI-64 addressing process (SLAAC)
Stateful autoconfiguration is a method in which a host or router is assigned its entire 128-bit IPv6 address with the help of DHCP. Stateless autoconfiguration, or SLAAC, is a method in which the host or router interface is assigned a 64-bit prefix, and the last 64 bits of its address are then derived by the host or router with the help of the EUI-64 process. This process is fairly simple, and there is another article on this site, at this link, that will make it easier to understand, so we will skip that part here. SLAAC uses the NDP protocol to work, and NDP is something I have written about in an earlier article, so you can read more about NDP here. After reading those articles, it will hopefully be clear how this technology enables every host on an IPv6 network to have its own globally unique IPv6 address without needing someone else to configure it. If you think there is no better way to do things than to do them yourself, then you will be convinced that there is no better way to configure the IP address on a host than to give the host the ability to do it by itself. There are no other servers, routers or anything else that can fail and leave the host without an address. From my perspective, this is the reason the SLAAC technology is brilliant.
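Since the EUI-64 details are skipped above, here is a minimal sketch of how a host could derive the lower 64 bits of its address from a 48-bit MAC address: split the MAC in half, insert FF:FE in the middle, and flip the universal/local bit (0x02) in the first octet. The prefix and MAC used below are made-up example values.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a 48-bit MAC address."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    assert len(octets) == 6, "expected a 48-bit MAC address"
    octets[0] ^= 0x02                                  # flip the universal/local (U/L) bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert FF:FE in the middle
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

def slaac_address(prefix: str, mac: str) -> str:
    """Combine an advertised /64 prefix with the EUI-64 interface identifier."""
    return prefix.rstrip(":") + ":" + eui64_interface_id(mac)

# Example: a router advertises the (made-up) prefix 2001:db8:1:2::/64
print(slaac_address("2001:db8:1:2", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8:1:2:021a:2bff:fe3c:4d5e
```

In a real deployment the prefix comes from the Router Advertisement that NDP delivers, and many modern hosts use randomized (privacy) interface identifiers instead of EUI-64, but the mechanics above show why no server is needed.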
<urn:uuid:fea719c5-c260-406d-b512-0bbe9d1dcfed>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2013/slaac
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00008-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948717
494
2.734375
3
The debate over responsible disclosure of vulnerabilities has been going on for years, but has recently been reignited by Microsoft’s decision to end its public advanced notification system, as well as Google’s decision to publish details for a vulnerability found in Windows the day before Microsoft was set to make the patch available. It begs the question once vulnerabilities are discovered, should one disclose them? If so, what’s the appropriate amount of time? Do we as a security community, need to re-examine the process in which we disclose vulnerabilities? From my perspective, there are two types of disclosure used today by security researchers — full disclosure and responsible disclosure. Full disclosure is the practice of publishing the details of the vulnerability as early as possible and making the information available to everyone without restriction, which typically includes publicly releasing information through online forums or websites. The primary argument for full disclosure is that ethically the potential victim of attacks against the previously unknown vulnerability should be as knowledgeable as those who attack them. Alternatively, responsible disclosure requires that the security researcher not disclose the vulnerability until a fix is available. The argument for responsible disclosure is that blackhats — cyber criminals — can typically exploit the vulnerability when publicly disclosed much quicker than those who are attacked can fix the issue. As such, it is important that a fix is ready and widely available once the vulnerability is made widely known. Responsible disclosure basically requires: - The security researcher who found the vulnerability to confidentially report it to the impacted company. - The security researcher and company work in good faith to establish an agreed upon period of time for the vulnerability to be patched. - Once the agreed upon time period expires and the vulnerability is patched or the patch is available for installation by the users of the software, the security researcher can publicly disclose the vulnerability. Several companies such as Google, Microsoft, and Facebook have also instituted bug bounty programs. Bug bounty programs are similar to responsible disclosure, with the exception that the security researcher is compensated for reporting the vulnerability. Given the number of significant vulnerabilities being found in software we use on a daily basis, it’s clear that this is a debate that should be revisited. I would love to hear your thoughts on how we should define responsible disclosure, please feel free to leave a comment below.
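As a small illustration of the responsible-disclosure workflow described above, the sketch below tracks a report against an agreed embargo period and only allows publication once a patch has shipped or the deadline has passed. The 90-day window and the field names are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DisclosureCase:
    """Tracks one privately reported vulnerability through responsible disclosure."""
    reported_on: date
    agreed_embargo_days: int = 90        # negotiated with the vendor; 90 is just an example
    patch_released: bool = False

    def embargo_ends(self) -> date:
        return self.reported_on + timedelta(days=self.agreed_embargo_days)

    def may_publish(self, today: date) -> bool:
        # Publish once a fix is available, or once the agreed period has expired.
        return self.patch_released or today >= self.embargo_ends()

case = DisclosureCase(reported_on=date(2015, 1, 5))
print(case.embargo_ends())                   # 2015-04-05
print(case.may_publish(date(2015, 2, 1)))    # False: no patch yet, embargo still running
case.patch_released = True
print(case.may_publish(date(2015, 2, 1)))    # True: a fix is available to users
```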
<urn:uuid:1bbe8e31-b03d-443b-b027-a435b6498c19>
CC-MAIN-2017-04
http://www.csoonline.com/article/2889357/security0/responsible-disclosure-cyber-security-ethics.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00494-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958313
465
2.5625
3
A career path that began with studying infectious diseases and led to analyzing terabytes of game data may seem a circuitous route. For Brendan Burke, though, the applied math skills he picked up as an undergraduate biology and political science major, the programming skills he added as a bioengineering graduate student, and his use of the two as a research scientist led to a job in the booming IT field of data science. "A lot of the skill set I developed very specifically for biology could be applied in very commercially viable ways," says Burke, who earned both of his degrees from Stanford University and worked at the California school as a scientist. As head of player science at Playnomics, a Silicon Valley company that uses game data to develop player analytics, the math and computer science skills he used to determine how many touch points a virus requires to spread across a population now help him understand how people interact with games. "Something in data science gets your creative juices flowing when you see something that you built for an entirely different purpose can be used in all of these other ways," he says. Data science also excites companies that want to use the data they've amassed to make strategic decisions that will benefit the bottom line. A range of industries are using data to guide business decisions and bring in revenue, says Laura Kelley, a vice president at technology staffing firm Modis. "Companies are using this information to launch products and services. Whether it's what customers are buying, what products or services get the better ROI, [data] comes into strategic decisions." Businesses, though, are struggling to find employees to handle big data, the term assigned to gathering and analyzing massive quantities of information. This field is relatively new to enterprise IT and although many companies are exploring data science programs, the necessary talent is still maturing, say technology and staffing executives. Related: Big data, big jobs? This places people with applicable skills in demand now and in the future, say hiring experts. The U.S. faces a substantial shortage of workers with data science skills, according to a much-talked about report published last year by consulting firm McKinsey and Company. The report predicted that by 2018 the country will lack 1.5 million analysts who can make strategic decisions using big data and between 140,000 to 190,000 workers with the proper data-processing technology skills. "There [will be] more career opportunities in the future for this type of strategic analysis," says Kelley, who has seen the business intelligence analyst job change into a data scientist position in the last 18 months. "We've always used information but not to this level. With the amount of data companies are capturing on everything and everybody it's just amazing what can be done with that." Colleges have realized the need to train people for those careers and are developing degree and certification programs targeting undergraduate and graduate students as well as IT professionals. To address immediate data-science staffing needs, which include technical and business roles, companies have adopted assorted tactics. To handle the more than 100TB of data processed each week by BrightEdge, a San Mateo, California, startup that helps companies manage their search engine rankings, CEO and founder Jim Yu wants data workers who grasp the entire scope of big data processing. 
People know how to query databases, but there is "an extra layer of understanding" when handling large data sets, which at BrightEdge includes tracking data on more than 150 billion URLs. Experience working with traditional SQL relational databases helps, but big data's scale requires a different processing mindset, he said.
<urn:uuid:2eb4e607-c32d-4675-83e8-9e413af19b66>
CC-MAIN-2017-04
http://www.computerworld.com/article/2492208/it-careers/big-data-worker-shortage-demands-job-candidates-with-diverse-backgrounds.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00313-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960916
724
2.671875
3
The space agency today began accepting solicitations for these space exploration opportunities and will ultimately pick one of them to begin pursuing in 2009, with a launch date targeted at 2018. The solicitations and ultimate expedition are part of NASA's New Frontiers program, whose main objective is to explore the solar system with medium-class spacecraft missions that will conduct high-quality, focused scientific investigations, NASA said. The first New Frontiers mission was selected in 2003 and will result in the launch of Juno, a Jupiter polar orbiter mission set to blast off in 2011. Ultimately, New Frontiers missions should launch at an average rate of one every 36 months, NASA said. The description and list of potential new expeditions gleaned from NASA's announcement letter look like this:
* The Moon's South Pole-Aitken Basin Sample Return: The surface of the South Pole-Aitken basin, located in the southern polar region of the Moon's far side, is likely to contain some fraction of the mineralogy of the Moon's lower crust. Samples of these ancient materials are highly desirable to further understand the history of Earth's Moon. The return of at least 1 kg of sampled materials is expected, NASA said.
* Venus In Situ Explorer: Although the exploration of the surface and lower atmosphere of Venus presents a major technical challenge, the scientific rewards are substantial. Venus is Earth's sister planet, yet its tectonics, volcanism, surface-atmospheric processes, atmospheric dynamics, and chemistry are all remarkably different from Earth's, which has resulted in remarkably different end states for its surface crust and atmosphere. While returning physical samples of its surface and/or atmosphere may not be possible within the New Frontiers cost cap, innovative approaches might achieve program goals, including better understanding the properties of Venus' atmosphere down to the surface through meteorological measurements, NASA said.
* Comet Surface Sample Return: Detailed study of comets promises the possibility of understanding the physical condition and constituents of the very early solar system, including the early history of water and the biogenic elements and the compounds containing them, NASA states. The choice of target comet is left to the proposer; however, that choice of target must be justified in the proposal by how well it supports attaining the New Frontiers science objectives, including the ability to measure the elemental, isotopic, organic, and mineralogical composition of the comet, NASA said.
* Network Science: The interiors of Mercury, Venus, and Mars are poorly characterized, and geophysical network missions to these bodies are needed to learn what is inside them, NASA said.
* Trojan/Centaur Reconnaissance: The Trojans, known to number well over a thousand, are aggregated along Jupiter's orbit, NASA said. These objects, initially discovered in the early 20th century, are thought to be primitive leftovers from early solar system formation, possibly captured during giant planet formation. The Centaurs occupy positions farther from the Sun. NASA said it wants to, among many other things, determine the mass, size and density of a Trojan and a Centaur, but it does not prescribe how the mission should actually be accomplished.
* Jupiter Io Observer: Tidal heating, a process that can greatly expand the habitable zones in the solar system and elsewhere, is best studied at Io because it provides the most extreme example of this process in the solar system. Io provides the best place in the solar system, beyond Earth, to study volcanism, a process of fundamental importance on many planetary bodies. Among other things, the mission should help scientists understand the eruption mechanisms for Io's lavas and plumes and their implications for volcanic processes on Earth, NASA said.
* Ganymede Observer: The large icy satellites hold the key to answering many outstanding fundamental questions about the solar system, and Jupiter's largest moon Ganymede is of particular interest because of its unique internal magnetic field and its interaction with that of Jupiter. Ganymede is the only icy body in the solar system known to generate its own magnetic field, which provides a unique window into Ganymede's interior and could shed light on the generation of internal magnetic fields elsewhere in the solar system. The objectives of this mission include measuring its magnetic fields and how they're generated, as well as how they interact with Jupiter's magnetic field, NASA said.
In related NASA news, the agency this week said it had successfully tested its deep space communications network modeled on the Internet. Specifically, the space agency used its Disruption-Tolerant Networking, or DTN, technology to send dozens of space images to and from NASA's Epoxi spacecraft, located about 20 million miles from Earth. NASA's DSN is made up of myriad systems. It includes an international network of antennas that supports interplanetary spacecraft missions as well as radio and radar astronomy observations for the exploration of the solar system and the universe.
<urn:uuid:8d5ca865-5036-43a4-b20f-9a5b43d3f3ab>
CC-MAIN-2017-04
http://www.networkworld.com/article/2347339/data-center/nasa-exploring-8-new-space-expeditions.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930791
1,008
2.84375
3
Sometimes when you’re streaming a movie on Netflix, it’ll start pixelating. The quality drops and your crisp HD stream melds into blocky fuzz. It might even drop out altogether. It’s incredibly frustrating – it happens to all of us – and it’s the result of network congestion. Not too many years ago, barely any video worked online. It’s astounding how far we’ve come thanks to adaptive bit rates and better routing techniques that help avoid bottlenecks. When the system does fail, congestion can be sourced from one of four places: Either it’s a failure in the home network (like your Wi-Fi router), in the last mile network (like your ISP), at core interconnection points (like in a CDN), or it’s a failure with the edge provider (like Netflix). It can be difficult to tell which piece of the puzzle is causing the problem, but if we connect the dots, the source of the congestion becomes clearer. Last week, three different reports were released that contain technical data to help better understand the source of streaming performance .The first was the FCC’s 2014 Measuring Broadband America report. The report revealed how well broadband providers are delivering the advertised speeds in their last-mile networks. The FCC found that, on average, almost all ISPs are meeting or beating advertised speeds. So even though peak periods can experience some fluctuations, the congestion is probably not caused by your ISP. The second was an MIT preliminary report measuring Internet congestion. In their report, MIT data revealed that there was not widespread congestion among the U.S. providers at their interconnection points in the core of the network. So that rules out systemic interconnection failure. The third report was from a consulting company, NetForecast, which released a report that looked at the Netflix ISP Speed Index – a report from Netflix that analyzes the performance of ISPs. NetForecast concluded that the Netflix ISP Speed Index, which many have used to suggest ISPs are responsible for degradation in streaming quality and chronic congestion, was actually factoring in things ISPs had no control over. Things like choices made by the end-user, available capacity or performance of the Netflix servers, and the performance of the network path between Netflix’s own CDN and the last mile ISPs. So what do these reports reveal? The FCC shows ISPs are generally over-delivering on speeds in the last-mile. MIT says the core interconnection points are not congested, except those related to Netflix. And Netflix’s own report implicating ISPs turns out to calculate things ISPs have no control over. So if we assume your home network is functioning properly (you can call your ISP to check) these reports confirm that edge providers, in this case Netflix, are a source of persistent congestion which can lead to buffering and a less-than-HD experience. But don’t take our word for it. Listen to the technical experts.
<urn:uuid:cefbd3a4-673b-4369-a309-9f0720cb46bb>
CC-MAIN-2017-04
https://www.ncta.com/platform/broadband-internet/connecting-the-dots-on-internet-congestion/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00211-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947149
617
2.703125
3
The federal government is awash in data. And it's expanding at rates faster than chief information officers can count. No one knows exactly how much information agencies have stored in their far-flung databases, but experts say it's a lot. Consider this: By 2015, the world will generate the equivalent of almost 93 million Libraries of Congress--in just one year, according to Cisco's Internet Business Solutions Group. The government is a big player in that information explosion, although how big is not certain. The cost to store and manage the growing mound of data is rising and eating up scarce information technology resources. It's no surprise the next big IT investment agencies will make in the coming years, if they haven't already started, is in something called virtualized storage, which uses software to connect multiple devices to create what simulates a single pool of storage capacity that can be controlled from a central console. The console makes it easier to back up, archive and retrieve data. With agencies creating more data, storage virtualization is an inevitable part of their IT future. Many operations--from the Congressional Budget Office to the State Department to the U.S. military--are looking for ways to squeeze more efficiency out of their storage systems and drive down costs. The Census Bureau is looking to virtualize storage to help it manage the 2.5 petabytes of data that ebbs and flows as it conducts the decennial census and vast economic surveys. The data, which amounts to more than the entire collection in all U.S. academic research libraries, is contained in a variety of storage platforms that multiple vendors supply. But maintaining so many disparate systems is driving up the cost of operating the data centers that house the information. "We have a very diverse storage architecture, and that diversity doesn't lend itself nicely to be highly efficient from a cost perspective," says Brian McGrath, CIO at the Census Bureau. He says virtualization would create storage platforms that could be shared throughout the bureau to minimize unused capacity and lower operating costs. Five years ago, the first wave of data center efficiency began with server virtualization. Agencies were able to consolidate 10 or more servers into one, increasing use of available computing power from about 30 percent to as much as 80 percent. But that placed demands on storage and backup systems, which require a lot of server capacity. "Backing up a virtual server infrastructure becomes a big burden on data centers and their resources," says Fadi Albatal, vice president of marketing with FalconStor Software. "When server utilization rates were 20 percent, servers still had 80 percent available for heavy load processing such as backups. Now that servers have utilization rates of 80 percent it means there's only 20 percent left for all my backup processes." The Next Big Wave Now agencies are turning to virtualization not only to process their data but also to store it, driving down the cost of purchasing and maintaining storage devices. The savings potential is substantial. Storage accounts for about half of what an agency spends on hardware. And less equipment means less power usage, which can drop by as much as 50 percent with storage virtualization, Albatal says. Other savings come from freeing up IT employees to work on other data center projects. The shift to storage virtualization is picking up momentum because it coincides with several Obama administration IT initiatives. 
The Federal Data Center Consolidation Initiative that federal CIO Vivek Kundra outlined in February requires agencies to come up with plans to combine the government's 1,100 data centers. The goal is to reduce energy consumption and operating costs by making better use of hardware. The approach is key to cloud computing, another Kundra initiative. By virtualizing servers and storage systems, agencies create shared data center platforms that can host applications and provide Web-based services similar to cloud computing models. "The opportunity is there to take a holistic view of the entire enterprise, with servers, storage and backups, and to create an agile and responsive data center that will enable private and public cloud computing in the near future," says Michael Voss, lead associate with Booz Allen Hamilton on federal data center consulting projects. 'Teeny' Agency, Tons of Data The Congressional Budget Office was driven to storage virtualization for cost and power savings, which are critical for the 250-person agency located in the aging Ford House Office Building on Capitol Hill. The agency provides lawmakers with myriad economic reports, analysis and statistics that inform federal budget decisions. "We're a teeny agency, and we're dealing with huge amounts of data. This is not data that we're generating. This is data that we're taking in to analyze. It's a process that's beyond our control," says CIO Jim Johnson. "I want to spend as little on storing that data as I can and still have it readily available." CBO has 15 virtualized servers containing anywhere from 3 to 5 terabytes of data, depending on their workload. One terabyte is the equivalent of all the X-ray films in a large high-tech hospital. The agency has been able to buy the storage it needs incrementally, and it has been able to reduce downtime because it can get backup servers and storage devices up and running faster. "For our primary storage--that's the corporate storage that we live and breathe on every day--we're going to buy high-end storage solutions. . . . But for the replica, for the mirror copies that we keep . . . that we hope we'll never have to use, we use lower-end storage," Johnson says. "This allowed us to spend our funds more appropriately based on our requirements." Johnson estimates CBO spent about $300,000 on its virtualized storage platform during the past four years. But, he says, the agency saved money by not having to maintain identical storage platforms for primary and backup copies of data. "Storage virtualization makes it easier for me, particularly as a smaller agency with a smaller budget, to be able to manage my storage requirements . . . and not be held hostage by a single vendor," Johnson says. Capacity on Demand The State Department is a leader in data center consolidation, deciding back in 2002 to start reining in the computer rooms and servers that were popping up throughout its facilities. The department is shifting to server and storage virtualization at its three data centers--on the East Coast, West Coast, and one operated by a commercial vendor. "We have 3,000 systems, and about 21 percent of them are virtualized," says Ray Brow, division chief for enterprise data center consolidation at State. "Our goal is to get to 90 percent. We think that's doable, while 100 percent may not be." Next up for State is modernizing the systems it uses to store more than 10 petabytes of data, including e-mail, files and electronic forms that used to be filled out and stored on paper. 
"At the Department of State, we have online all the visas that have been granted since 1992. It's all stored on disk," Brow says. The department has migrated from tape to disk for backups, with only one mainframe application left to transition, because tapes are easily lost and take longer to replicate. "We were buying tapes and more tapes," Brow says. "We were able to justify moving entirely to Data Domain disk backup systems just on the cost of new tapes, new tape drives and maintenance costs alone." According to Brow, the biggest benefit of virtualization is centralizing storage management. The department's storage area network software and disk arrays can provide mirror images of systems and snapshots of stored information for faster data recovery. "The snapshots are very efficient in that they are only keeping track of changed data. . . . When someone needs a file restored, most of the users can figure out how to get back to the snapshot," he says. The biggest challenge for State is managing the growth of its storage requirements and their associated costs. Brow estimates storage and backup represent 35 percent of the department's IT purchases. State already limits the size of employees' mailboxes and home directories, and maintains only six months of backups. State expects to reap additional savings through a process known as thin provisioning, which allocates available server capacity as needed rather than committing blocks of storage space upfront. "Our [storage area network] growth is pretty badly out of control," Brow says, adding he hopes thin provisioning will improve the situation. "We're trying to project our storage needs so that thin provisioning and acquisition go hand in hand." Weeding Out Duplication When the Defense Department merges two of its Washington area hospitals--Walter Reed Army Medical Center and the National Naval Medical Center--the new facility in Bethesda, Md., will consolidate IT as well. Due to open in September 2011, the National Military Medical Center will feature a new 5,000-square-foot data center with the latest in storage virtualization technology. "We need 24-by-7 availability and reliability," says Lt. Cmdr. Cayetano "Tony" Thornton, CIO at the National Naval Medical Center. "We are the president's hospital. In addition to that, our primary customers are the men and women who support the nation. Once these guys leave the battlefield, they land at Andrews Air Force Base and they come over to Bethesda. We need a robust backbone and a robust data store to allow our providers to give them seamless health care." Data center efficiency is key to the merger, says Thornton, who notes the services had duplication across the board. "Walter Reed had over 400 applications and systems, and now we've brought that down to around 150," Thornton says. "Within each clinical area, we are looking at what the Army, Air Force and Navy are using, and we're choosing the best systems and applications so that we can deliver the best health care possible. "We are becoming more and more dependent on storage because of all the different scans that we do--CAT scans, PET scans, MRIs--that we're looking to store in one central repository," he says. The Defense Department's electronic medical records system is demanding an ever-increasing amount of storage capacity, according to Thornton. Walter Reed has more than 100 terabytes of storage, while the Naval Medical Center has about 40 terabytes. 
"Our health care providers are looking to scan and to digitize all of that information and to make it available across the military health care enterprise in the National Capital Region," Thornton says. "That means we need a very robust data store and servers." The new data center also will feature deduplication software that will automatically remove multiple copies of data, freeing up storage space and speeding backup processes. According to Thornton, the center aims to reduce duplication by about 50 percent. "You really can't get rid of medical data from a historic perspective. It's very complex, and it's not an IT decision. It has to be a clinical health care decision," he says. "What we try to do is manage the data we have as efficiently as possible." Defense has spent about $10 million for the hardware and software for the new data center, with storage components accounting for 30 percent to 40 percent of the cost. All the servers and storage devices will be virtualized for better data management, accessibility and reliability. Just Getting Started From census and spending tallies to passport and health care records, storage virtualization has unleashed the potential to free up computer, energy and staff resources for agency missions. Virtualized storage also promises to improve the speed at which agencies are able to respond to disasters, from hurricanes and floods to terrorist attacks. "Once we virtualize the storage infrastructure, we can do continuous backups or replicate to another location and have a standby," says FalconStor's Albatal. Agencies have been adopting storage virtualization for several years, but have a long way to go before they have optimized their data center storage environments. "We probably haven't virtualized 10 percent or 20 percent of federal storage platforms," says Mark Weber, vice president and general manager of the U.S. public sector business for NetApp. "The market is still in front of us." Carolyn Duffy Marsan is a high-tech businessreporter based in Indianapolis who has covered the federal IT market since 1987.
<urn:uuid:d893ed8e-1e6b-4640-b976-8f54a735c567>
CC-MAIN-2017-04
http://www.nextgov.com/cloud-computing/2010/08/data-deluge/47440/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00119-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957479
2,525
2.59375
3
I’m not an expert on transportation history, but I think I have enough of the facts straight to make this analogy work. So please bear with me if I stretch the facts in order to make them fit my purpose. In the early days of this country, transportation was more or less based on the individual; you rode your horse or wagon to get where you needed to go, and to move goods from one place to another. Then came the railroads. These traveled along fixed lines, and were able to carry more people and more cargo at a much greater speed and reliability and at lower cost. And businesses and population centers sprang up along these routes, and many of them prospered as a result of their connection to the railroad lines. In time however, our horses and wagons were replaced by vehicles with internal combustion engines, and we were freed from the limits of the railroad lines. And we built a network of roads and highways to encourage the movement of people and goods at even greater speed and efficiency. Now, here’s the big difference between these two systems. The railroads were closely controlled by their owners (at least initially), and folks like the Vanderbilts and Goulds made fortunes from them. The roads were publicly owned (though there have been exceptions), typically supported by public funds from tolls or taxes. With railroads, you had to go where they took you, and the content that they delivered was controlled by their owners. The highways existed simply to let people and goods move from here to there in a network of points, and the public entities that owned them didn’t much care what was moved or where it went. I see parallels in the systems we have now. Cable television and the traditional phone service are like the railroads of old; they are purpose-built for a specific service. The service provider maintains both the physical system and the service, all for the profit of the owners. In contrast, the Internet is like our highway network; built and maintained by a range of entities for the sole purpose of facilitating the movement of data from here to there. The difference from the highway system is that the majority of the entities involved in the Internet are also looking to profit from their activity. But neither network cares much where the content comes from or ends up, or what the content is. And here lies the problem for the cable companies. They are transitioning into becoming the providers of broadband Internet access. For now, they are in a conflicted position where they are trying to protect the access to their content (the TV programming that they provide) while trying to offer their customers high-speed access to all the content that the Internet has to offer. In short, they are trying to run access to a highway system as if it were a railroad. In my opinion, they won’t succeed. Before long, the content will become completely separated from the delivery medium, and consumers will be free to choose which physical (or wireless) network they want to connect to in order to gain access to the Internet and its content. They will also have even more choices for free (Hulu, YouTube) and fee-based (Netflix, Hulu Plus, Amazon Video on Demand) video and movie content that they can access over this connection. The cable company will become a conduit for information as a utility, just like water or electricity. 
(And keep in mind that the electric company could also become a competitor to provide broadband connections; after all, they also have a physical infrastructure already in place that reaches almost every home.) This is not a change that will take place overnight, but I believe that it will happen and that the signs of this shift are already present.
<urn:uuid:8f916619-5be8-40ba-afe4-eaa0fcbb5075>
CC-MAIN-2017-04
https://hdtvprofessor.com/HDTVAlmanac/?p=1476
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00083-ip-10-171-10-70.ec2.internal.warc.gz
en
0.982694
752
2.6875
3
21st century skills: Changing the way a whole new generation thinks. Business demands are rapidly changing and the pace of change is accelerating. The top 10 in-demand jobs today did not even exist five years ago. We are preparing for careers that have yet to evolve. We are gearing-up for technologies that are still to be designed. A significant proportion (some say over 50%) of today's school kids will end up at jobs that haven't yet been invented. To prepare for the jobs of the future, students need to become proficient at what are increasingly being termed as “21st century skills." What are 21st century skills? These are the skills that will help us conquer the 21st century. They fall into three broad categories: Learning and innovation skills: This includes critical thinking and problem solving, communications and collaboration, creativity and innovation. Digital literacy skills: This covers information literacy, media literacy, and information and communication technologies (ICT) literacy. Career and life skills: Here students are taught flexibility and adaptability, initiative and self-direction, social and cross-cultural interaction, productivity and accountability. We believe that children across social strata and regardless of economic background have the innate ability to think intuitively, reason logically, analyze creatively and do complex problem-solving. All they need is the opportunity and support to help develop and hone these abilities in order to grow and succeed in a rapidly evolving digital world. We put this theory to the test at a recent Genpact-sponsored competition – and the young contestants came out with flying colors! Armed and ready to learn Globally, schools and colleges are looking for ways in which to adopt these new learnings into their curriculum and though in India we may still have a little way to go, the process has certainly begun. At Genpact, we are pushing for a change in every way possible – be it through our tie-ups with Ashoka University, our LEADearthSHIP program, or our Reach Higher program. The most heartening thing is that young children, regardless of their social and economic backgrounds, are innately equipped with the intuitive skills and thinking – all they need is the opportunity to develop them. Putting it to the test Genpact, along with its partner Thinkstations, ran a competition in the Delhi/NCR region for children from 32 schools (of which 15 were under-privileged schools). Thinkstations had conducted a training program for students of the 15 underprivileged schools, prior to the competition. The objective of the training was to familiarize the students with the format of the competition, and to teach them some basic skills, including research and presentation skills. There were many rounds and many kinds of tests and challenges. The final activity required the teams to understand greenhouse gases, demonstrate it in an experiment, prepare an action plan and present their plan to the audience. An overwhelming achievement! Out of the shortlisted 8 schools, the winner was Sarvodaya Kanya Vidyalaya, one of the participating under-privileged schools! They were followed by Shri Ram School and Lotus Valley School. What an incredible achievement for this brilliant, all-girl team from Sarvodaya Kanya Vidyalaya! And what an amazing attitude these five young women had! When asked about their opinion on the competition, one of them said: “We didn't look at it as a competition; we came to learn." 
Their level of collaboration, clarity of thought and attitude towards mastering the concept – all prime 21st Century skills – is what truly set them apart. What goes around, comes around (And I say this in the best way possible!) Before the competition, we had been afraid that the underprivileged schools would need special 'innovation' awards to encourage them if they lost, so imagine our delight when one of them emerged winners! For us this was a revelation, and it was compounded by the fact that the Sarvodaya Kanya Vidyalaya team came from a class taught by a fellow from Teach For India – and that fellow had been sponsored by Genpact as part of our on-going relationship with Teach For India. We had come full circle, in a way, and in doing so, were able to prove what we had believed all along: that there is so much latent talent and potential in these children – all they need is the opportunity and support to truly tap it and put it to use. It makes everything we do that much more meaningful. Even the fact that this was an all-girls' school was so heartwarming in the context of what we are trying to do to help empower women. And so, as companies continue to disrupt and transform their industries, we will continue to find ways in which to help equip a new generation of talent with the skills that will be needed to meet their demands. I have only optimism and excitement for the future – and for how we are going to help these young people change the world!
<urn:uuid:0eb72e76-0f61-450e-bcb4-39e321b1f758>
CC-MAIN-2017-04
http://www.genpact.com/home/blogs/bloginner?Title=It+takes+skill+to+change+the+world&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+GenpactBlogs+%28Genpact+Blogs%29
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00505-ip-10-171-10-70.ec2.internal.warc.gz
en
0.973294
1,030
2.828125
3
By Parul Oswal, Senior Research Analyst, RFID, and Michelle Foong, Senior Research Analyst, Smart Cards, Industrial Technologies, Frost & Sullivan Asia Pacific. The debate between RFID and smart card technology is an ongoing one. There is no single, clear definition of RFID and smart cards, and at times the two terms are used interchangeably due to a lack of awareness, which creates confusion about the differences between them. The confusion is especially strong between contactless smart cards and RFID. The key issue that has given rise to this debate is that both use a contactless interface, and a radio frequency (RF) one at that. Both contactless smart cards and RFID use radio frequencies to communicate between the card and the reader. The applications for which RF is used, however, can differ between RFID and smart cards. RFID is mainly meant for applications within the supply chain, for track and trace. Contactless smart cards, on the other hand, are mainly meant for payments/banking, mass transit, government and ID, and access control. This article aims to clear up the confusion between the two technology definitions. The following chart depicts the various applications of contactless smart cards and RFID, along with their level of information security. RFID and smart cards can both be used in transit applications, and most of the time they are used together to provide increased convenience to end users. An example of this is the "Touch n Go" cards used on toll ways in Malaysia. The Touch n Go card is a contactless smart card, but it can be purchased with an additional RFID transponder (into which the smart card is inserted) so that the toll booth reader can read the card from a greater distance than the 10 cm limit imposed by smart card standards. Without the additional RFID transponder, the contactless Touch n Go smart card can still be used, but the driver needs to roll down the window and tap the card on the reader instead of simply driving through while the RFID transponder is detected by the reader above the toll booth at a greater distance.
<urn:uuid:bd05d5e2-38aa-4cc9-8d1c-4a45c55f5e28>
CC-MAIN-2017-04
http://www.frost.com/sublib/display-market-insight-top.do?id=83467478
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00285-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946021
423
3
3
Presentation about Threats to Boreal Forest
Good afternoon, everyone. Welcome to my presentation; my topic today is threats to the boreal forest ecosystem. To begin, let's talk about the climate, global distribution and species of this system. For climate, look at the picture here: the taiga (a nickname for the boreal forest) is normally very cold, with an average annual temperature between 5 degrees Celsius and negative 5 degrees Celsius. Average annual precipitation ranges from about 20 cm to over 200 cm per year (data from http://w3.marietta.edu/~biol/biomes/boreal.htm). Something worth noting is that much of the precipitation in the taiga falls as snow because of the cold temperatures. "The winters in the boreal forest system are extremely cold and long; summers are relatively cool and short." (http://w3.marietta.edu/~biol/biomes/boreal.htm). Hence the growing seasons in the taiga are really short, usually less than three months. Global distribution: in this image, Russia has a wide range of boreal forest in its southern area, and a little of it is distributed in northeastern China, mostly in Heilongjiang province. Looking at the Americas, southern Canada has many taiga forests, and there is some in Alaska in the USA. Okay, now we go to the next part: organisms. Most plants in the taiga can withstand low temperatures thanks to their special niches. For example, pine trees grow narrow, needle-like leaves. So why such leaves? This leaf shape helps pine trees reduce water loss and resist cold because of the small surface area. The picture on the right side shows how pine cones survive in winter: they can open their bodies in warm conditions and close them in low temperatures. When closed, their surface area decreases, much like the pine needles. Then we will look at some other plants in this system: the pine trees we have talked about, plus moss and lichen (both grow everywhere in the forest). With regard to animals, I have put some typical animals here. The bobcat, as we all know, can jump into the snow and hear tiny sounds around it; thanks to this skill, it is able to catch mice that hide in the snow. The grizzly bear! Do you still remember it? They rub their backs on the trees in spring. The snowshoe hare normally keeps a brown coat; when autumn comes, it begins to shed its brown coat and replace it with white fur that helps it hide in the winter snow. The last one is the moose, which looks like a deer but actually is not. Look at this picture: the grizzly bear is on the top level, as a tertiary consumer. The bobcat is on the second level, called a secondary consumer. The snowshoe hare and moose are both primary consumers, which eat plants. Threats: I will introduce three main threats to the taiga ecosystem. First, logging. In Canada, millions of acres of trees are harvested yearly, because timber is an important building material and can also be made into paper that people use almost every day. Next, oil and gas exploitation. Oil and gas are both fossil fuels that provide energy for humans, and most of our energy comes from fossil fuels, so they are important for our daily life.
However, while people exploit fossil fuels, they also create hazards for the forest. Last, desertification (also referred to here as deforestation). Normally, there are lakes and rivers next to a forest that provide water for the forest system. Sometimes people build dams on the rivers to produce electricity, and this can cause some areas of the forest to dry up. Why should we protect the boreal forest ecosystem (what functions does this ecosystem have)? Producing oxygen (if we don't have oxygen, we will die). Keeping the Earth within a certain temperature range (the forest is rich in water, and water has a high specific heat capacity, so it must absorb or release more heat to change temperature). Solution to logging: 1. Cutting trees dispersively 2. Limiting the number of trees cut per hectare 3. Planting new trees after cutting to keep the ecosystem sustainable. Solution to oil and gas exploitation: Developing renewable energy; Restoring the original environment after exploitation. Solution to deforestation: Don't build dams next to the boreal forest; Plant more trees; Reduce exploitation of underground water. Reference: http://water.usgs.gov/edu/heat-capacity.html
<urn:uuid:122f8a89-3eeb-4f69-bad1-87b4f44d1dde>
CC-MAIN-2017-04
https://docs.com/lee-will-1/2049/presentation-about-threats-to-boreal-forest
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926001
1,193
3.578125
4
Manufacturing Breakthrough Blog Thursday July 2, 2015 What Are the Thinking Processes? In a nutshell, the Thinking Processes comprise a suite of logic diagrams as well as a set of logic rules. As I've written about in previous posts, there are three basic questions we should always ask when trying to improve our organization.
- What to change?
- What to change to?
- How to cause the change to happen?
The TOC Thinking Processes are designed to answer these three questions in a very systematic and logical way by exploring and communicating information and assumptions about the current reality, the future reality, and how to get there. Each Thinking Process diagram relies on a particular type of logic: some use necessity-based logic and some use sufficiency-based logic. So before we delve into each of these tools, let's first explain the difference between these two types of logic. Necessity-Based versus Sufficiency-Based Logic Necessity-based logic diagrams are those that identify conditions that are necessary for a particular effect to exist. A sufficiency-based logic diagram is one that identifies all of the conditions that are necessary and sufficient to cause a particular effect. While both sufficiency- and necessity-based logic rely on cause/effect relationships, there is a difference between the two. When testing for sufficiency we ask: "If A occurs, then is this sufficient to cause B?" That is, are the entities within the logic tree complete and valid? On the other hand, necessity-based logic looks for those things that must be done to overcome potential obstacles to achieving a particular outcome. These injections (actions or ideas) then become minimum mandatory requirements for the predicted outcome to happen. Necessity-based logic is triggered by asking: "In order to have A, we must have B, because C." For example: in order to minimize downtime at the system constraint, we must plan 100 percent of all equipment services (e.g. scheduled maintenance) and repairs (e.g. unscheduled maintenance), because planning reduces wait time and therefore minimizes the mean time to repair. Examples of Necessity and Sufficiency Based Logic To say that A is sufficient to cause B means that if A exists, then it guarantees the presence of B. Using maintenance as an example, if we were to say that "correctly trained maintenance employees" are required to have a "highly reliable manufacturing plant," then we are suggesting that correctly trained maintenance employees (A) will guarantee that we have a highly reliable plant (B). This implies that correctly trained maintenance employees would be a sufficient condition for a highly reliable plant. To test whether this sufficiency statement is actually true, we need to ask this question: "Is there a situation where A is present but B is not?" We know that in reality there are organizations that do have correctly trained maintenance employees but still have low plant reliability due to many other factors, like old equipment, incorrect maintenance strategies, or unavailable replacement parts. Because of this, our original statement doesn't pass the sufficiency-based logic test. There are many other conditions that must also exist, like correctly trained operators or enough maintenance personnel, to have a highly reliable plant.
Therefore the statement "properly trained maintenance employees are needed for a highly reliable plant" is not sufficient by itself to guarantee a highly reliable plant. The logic of necessity is to identify the minimum mandatory requirements to achieve the intended objective. Necessary conditions seek to remove ambiguity and be unequivocal. The primary test for necessity asks: "Is it true that the stated requirement must exist in order for the subsequent outcome to occur?" For example, in order to have a fire, we must have fuel, matches and air. If one of these is removed, then a fire cannot happen, so it is a necessity that all three exist. The Thinking Process Diagrams Dr. Eli Goldratt is the man responsible for the creation of the first five logic trees listed below, and we will discuss the basics of each one in future posts. The Goal Tree was developed by H. William Dettmer and, for me, it was a clear breakthrough in logical decision making. I will spend more time on this logic diagram because it's the easiest to learn and, in my opinion, will help you the most.
- Current Reality Tree
- Evaporating Cloud (EC)
- Future Reality Tree (FRT)
- Prerequisite Tree (PRT)
- Transition Tree
- Goal Tree
A Question to Ponder Let's say we want to improve the quality of the product we manufacture. We want to achieve this effect: "The defect rate of our manufacturing operation is less than five percent." Right now the defect rate is nine percent, and our control chart shows that our manufacturing system is in a state of statistical control. If you were to devise an improvement plan, which type of logic would you use to develop the plan? In my next post we'll begin discussing the Thinking Process tools and the correct sequence in which to use them. As always, if you have any questions or comments about any of my posts, leave a message and I will respond. Until next time.
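As a toy illustration of the two kinds of tests described above (just a sketch, not part of the TOC literature), necessity can be modeled as "every required condition must be present," while sufficiency is rejected by a single counterexample where A holds but B does not. The condition names are taken from the fire and maintenance examples.

```python
def necessity_holds(required, present):
    """Necessity test: every minimum mandatory requirement must be present."""
    missing = set(required) - set(present)
    return len(missing) == 0, missing

# "In order to have a fire, we must have fuel, matches and air."
required_for_fire = {"fuel", "matches", "air"}
ok, missing = necessity_holds(required_for_fire, {"fuel", "air"})
print(ok, missing)   # False, {'matches'} -> remove one condition and the fire cannot happen

# Sufficiency test: "Is there a situation where A is present but B is not?"
# A single counterexample is enough to reject the claim that A alone guarantees B.
observations = [
    {"trained_maintenance": True, "reliable_plant": True},
    {"trained_maintenance": True, "reliable_plant": False},  # old equipment, missing parts, ...
]
a_without_b = any(o["trained_maintenance"] and not o["reliable_plant"] for o in observations)
print("A is sufficient for B:", not a_without_b)   # False: the claim fails the sufficiency test
```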
<urn:uuid:641591e8-a405-40cd-8d4d-6d6cd28253c0>
CC-MAIN-2017-04
http://manufacturing.ecisolutions.com/blog/posts/2015/july/the-thinking-processes-part-1.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00403-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939329
1,108
2.859375
3
How to Find BSOD (Blue Screen) Error Messages This guide will explain how to locate and analyze BSOD error reports. There are 4 places (by default) where Windows presents this information. If you've disabled the Error Reporting Service or the Event Viewer, then I'm afraid that you're just SOL. The Blue Screen of Death (also known as the BSOD) is a screen that Windows shows you when it shuts down your computer in order to prevent damage to it. It's also known as a STOP error or as a BugCheck Code. It is a hardware error by definition - but this doesn't mean that it's caused by faulty hardware. Viruses, corrupt drivers, and even poorly written programs can cause it. Here's an example of the screen with some notations on what to look for: Finally, a note on shorthand. A STOP 0x0000007a error is referred to (in shorthand) as a STOP 0x7a error. It's just a way of not having to write all those zeros out each time you refer to it.
How To Disable Automatic Restarts
How To Use The Event Viewer
How To Debug Memory Dumps
- The first place to get the information is from the Blue Screen itself. Write down all of the long numbers, the description that's in all caps with underscores ( _ ) between the words, and any file names that may be mentioned (be sure to note in your post if there wasn't a filename). A more in-depth look at this is included in the second reference ( How To Use The Event Viewer ). In the event that the BSOD flashes by too fast to read, use the first reference to disable the Automatically Restart function ( How To Disable Automatic Restarts ).
- The next place to find the information is in the Event Viewer. Use the mini-guide in the second reference to see how to do this ( How To Use The Event Viewer ).
- The last place to find the information is on your hard drive. Search your hard drive for files ending in .dmp and .mdmp. You're looking for the most recent file (or the one closest to the last BSOD that you experienced). Once you find it, use the third reference ( How To Debug Memory Dumps ) to perform an analysis of the memory dump. Be sure to use the !analyze -v command at the bottom of the Debugger's window before closing out your session. Then copy and paste the results into your next post. Someone will have a look at it to see if we can figure out what's gone wrong.
- Sometimes, when Error Reporting is enabled, the dump files will be stored temporarily on your system and are erased once the report is sent. To save this info, you'll have to copy the dump file before sending the report. To do this, just click on the "Details" link in the error report and you'll see some file locations listed. Choose the one that ends in .dmp or .mdmp, locate it in Windows Explorer, and copy it to your Desktop (you'll have to enable viewing of hidden files to do this). Here's an example of the Details: ERROR REPORT CONTENTS Following files will be included in this error report C:\DOCUME~1\Owner\LOCALS~1\Temp\WER7fde.dir00\Mini112706-02.dmp
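If you'd rather not hunt for the dump files by hand, a small script can do the search described above and list the newest ones first. The starting path below is just an example; minidumps are commonly found under C:\Windows\Minidump, but the exact location can vary by system and Windows version.

```python
import os

def find_dump_files(root=r"C:\Windows"):
    """Walk the tree and collect crash dump files with their modification times."""
    dumps = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".dmp", ".mdmp")):
                path = os.path.join(dirpath, name)
                try:
                    dumps.append((os.path.getmtime(path), path))
                except OSError:
                    pass            # skip files we are not allowed to read
    return sorted(dumps, reverse=True)   # newest first

# The newest file is usually the one that matches your most recent BSOD.
for mtime, path in find_dump_files()[:5]:
    print(path)
```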
<urn:uuid:6e445185-6b09-4aa0-9b7b-4dd41c861797>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/74712/how-to-find-bsod-error-messages/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00038-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91403
721
2.78125
3
Photo of the Week -- Predicting Floods From Space / July 8, 2014
Researchers are looking to space to predict which rivers are most at risk of flooding, using satellite data to measure how much water is stored in a river basin months ahead of the spring flood season. "Just like a bucket can only hold so much water, the same concept applies to river basins," lead study author J.T. Reager, an earth scientist at the University of California, Irvine, told the Christian Science Monitor. When the ground is saturated, conditions are ripe for flooding. Looking back in time using data from NASA's twin GRACE satellites, Reager and his colleagues measured how much water was soaking the ground before the 2011 Missouri River floods -- as the satellites circle the Earth, changes in gravity slightly disrupt their orbits, and these disruptions are proportional to changes in mass, such as a buildup of water and snow. The researchers' statistical model strongly predicted this major flood event five months in advance. The researchers say the prediction window could be extended to 11 months in advance, though with less reliability. Reager told the Christian Science Monitor that he hopes his new method will eventually help forecasters prepare reliable flood warnings several months earlier. "It would be amazing if this could have a positive effect and potentially save lives," he told the media outlet.
<urn:uuid:850337f5-8bcd-4b79-920d-81ab60fea09d>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Predicting-Floods-From-Space.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00488-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95915
267
3.90625
4
Speaking at SyScan360 in Singapore, Rob Miller, senior security consultant at MWR, today detailed how to build LoRa systems that are provably secure against cyber-attack. Despite LoRa being a relatively new standard, its low-power, long-range credentials make it a perfect candidate for a number of cutting-edge applications, meaning adoption is now accelerating at a significant pace. With industries now scrambling to take advantage of this emerging protocol, MWR noticed a lack of practical LoRa security guidance available and sought to close the gap with a new whitepaper.
Explaining the use case for LoRa, Miller said: "Long-range radio protocols, like GSM and WiFi, draw a lot of power, making them unsuitable for smaller or remote devices, while in contrast low-power solutions, like ZigBee or BTLE, are limited in range to tens of meters. So there is a need for a long-range solution that only sends occasional, small amounts of data and could run off a battery for years. LoRa, and its primary protocol LoRaWAN, addresses this gap in the market. It is intended for systems that require the ability to send and receive low amounts of data over a wide range without high power costs."
In conducting the research, Miller noted that whilst several effective security features are designed into LoRa, companies should not consider the protocol secure out of the box: "Simply stating that a technology 'uses AES-128 encryption' does not mean that solutions using this technology are therefore secure. It should be clear to all developers of LoRa solutions that using LoRa does not guarantee security. Instead they should build LoRa solutions with the potential attacks in mind.
"Given that LoRa will form part of a complex IT solution, security vulnerabilities are a likely occurrence during development. Similarly, given that LoRa solutions are being used in systems ranging from home security through to monitoring and controlling infrastructure, attacks and the development of exploits against these systems are also likely."
Miller concludes in the whitepaper that "Secure systems can be developed by understanding LoRa's security features, as long as developers accept that they are not a silver bullet for security. A secure solution can be developed by considering cyber-security at every stage. Knowing the different ways that an LPWAN solution can be attacked allows a system to be built to defend against, detect and respond to cyber-attacks."
The MWR Labs whitepaper with guidance on securing this protocol is available here.
<urn:uuid:017728f7-4a22-4f2e-9447-72e14e425dcf>
CC-MAIN-2017-04
https://www.mwrinfosecurity.com/news/mwr-issues-advice-to-build-a-secure-lora-solution/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00414-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947259
522
2.578125
3
The first building blocks of the network that could replace the internet were laid this week. The first prototype GENI (Global Environment for Network Innovations) core network nodes were installed in two Internet2 backbone sites, and are starting shakedown and trial operations, says the GENI organisation.
GENI is a virtual laboratory funded by the US National Science Foundation for exploring future internets. The aim is to support large-scale experiments on shared, heterogeneous, highly instrumented infrastructure. The nodes were deployed in McLean, Virginia, and Salt Lake City, Utah, and provide sliceable, programmable network elements for working end-to-end GENI prototypes. Funders hope GENI will prompt and promote innovations in network science, security, technologies, services and applications.
The new GENI nodes enable OSI Layer 2 (physical addressing) network experiments. The Mid-Atlantic Crossroads (MAX) consortium was the first outside organisation to connect to the new core nodes to support programmable connections up to 10 Gbps for GENI researchers in Washington DC.
Photo courtesy of Chris Tracy, MAX
The nodes were created by the ProtoGENI team at the University of Utah and the Internet Scale Overlay Hosting team at Washington University in St Louis.
<urn:uuid:391825e2-af21-4c3d-9545-0628b23a0492>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280090788/GENI-internet-replacement-undergoes-testing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00230-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923603
285
2.734375
3
Three of the hottest topics in IT are the cloud (let someone else do all the work while you reap the results), security (protect yourself from anyone who would damage your equipment, reputation or pocketbook) and privacy (make sure others mind their own business, not yours). But as users hand over more of their computing, storage and networking tasks to ever-larger companies, can security and privacy survive? The Cloud: Centralization Versus Decentralization One of the beauties of the Internet (and other computing technologies) is that it is a decentralizing force. Consider the case of music: the connectivity of the Internet means that musicians no longer need recording companies to deliver their art to the offices and living rooms of listeners. Fans can thus connect directly with musicians without going through the middle man of mega corporations. The same trend is influencing other areas as well, such as education. Even government schools are beginning to rely on resources such as the Khan Academy, and other online knowledge bases abound (such as MIT’s OpenCourseWare). Contrast this decentralization with the results of an important economic driver of the cloud: larger scale means lower cost. Thus, the cloud relies heavily (although certainly not exclusively, at this point) on huge data centers run by large companies. Although the Internet is a decentralizing force, the ownership of the means of that decentralization is increasingly centralized. And mobile computing is a major driver of the cloud. As users demand the ability to access their data from anywhere and at any time, the old desktop computing model increasingly fails to meet user requirements. Mobile devices usually run off of batteries, meaning they must conserve energy as much as possible to provide maximum operating time. So why not offload battery-draining tasks to someone else (i.e., to the cloud)? Furthermore, cloud-based data storage makes accessing user information easier: no longer does that access require physical presence at a particular device (such as a desktop computer), nor does it even require a particular device be on and connected to the Internet. Someone else (whoever that might be) stores the data and sends it to whatever device the user is operating at the time. The cloud is also a means of data backup, practical pay-as-you-go access to vast compute resources and expanded networking capabilities. Although neither centralization nor decentralization per se is ideal, centralization poses certain dangers to both privacy and security that cannot be ignored. For instance, when the critical data of a large number of users is stored in one place (whether physically, logically or corporately), any party that wants to access that data has only one target, not many (which would be the case if everybody kept private data on their own desktop computers). Such parties might be hackers, IP thieves or even governments. On the other hand, centralized data repositories, owing to their scale, can implement stronger security measures. But one thing the history of computing technology has shown is that regardless of the security system, someone will eventually figure out a way around it. Social networking sites—Facebook in particular—are goldmines of private information. And if you don’t think more individuals and organizations will try to exploit it, you’re not reading the news. 
For instance, ZDNet notes in a recent article (“Teacher’s aide fired for refusing to hand over Facebook password”) that increasingly, employers are reviewing Facebook pages as a means to learn about employees and potential employees—but even worse, some are demanding access to private (i.e., shared only among friends/contacts or even limited to the user alone) Facebook information. Next time you apply for a job, you may be asked to hand over your Facebook password. And lest you think that your government will protect you (it may or may not pass some kind of law, and it may or may not follow that law once it passes), the offender cited in the ZDNet article is an elementary school—a branch of government. You could probably construct a really good conspiracy theory around Facebook. The CIA/NSA/name-your-alphabet-soup-federal-agency need not plant bugs in every home—it can just log on to Facebook to find out what you’re doing every minute. It has essentially outsourced to the population the task of spying on themselves and on each other. Maybe that’s farfetched, but the practical results are the same. The centralized, cloud-based resource (Facebook, in this case)—even though it connects people and provides a service in high demand—creates a great temptation to those who want to discover private information. And this quest need not even require attempts to breach security measures, particularly when users willingly hand over their passwords! The same problems with centralization abound in other areas, such as medical data. Yes, arguments abound that making electronic medical records is important to saving lives; it’s easy to conceive of scenarios where fast access to records at another health-care provider, for example, can be the difference between life and death. Equally conceivable is that governments are intent on implementing electronic records for their own (probably less than savory) reasons. Measures, Countermeasures and Counter-Countermeasures Centralization of user data in large cloud data centers means that hackers (or any other party) have fewer targets in seeking to obtain the sensitive information of numerous users. Why steal wallets for credit card numbers when you can just hack PayPal or some similar site? Although such companies have the means to implement stronger security, someone will always—given enough time and motivation—find a means to circumvent those measures. In terms of security, centralization of compute and storage resources in the cloud isn’t necessarily superior to decentralization (into millions of desktop computers, for instance); each poses its own challenges. Centralization and decentralization are two forces that are most beneficial to society when properly balanced. The Internet is a force for decentralization (of a variety of resources and goods) in an overly centralized society. But the cloud also raises some concerns, as large amounts of private data are stored by companies, creating prime targets for hackers, governments and others in search of confidential information. The increasing push on the part of employers to gain access to potential employees’ private Facebook accounts (not public posts and other information—private information protected by a password) illustrates this danger. Of course, one might argue that employers have the right to ask, employees have the right to refuse and it’s ultimately no one’s business. In some sense, that may be true. But similar dilemmas can arise in the case of less voluntary interactions. 
What happens when the district attorney demands your Facebook password, or else you will face a slew of charges? (And who cares if they’re valid or not—you still have to pay a lawyer to defend yourself, so the DA at minimum has financial leverage over you.) Part of the problem is that not only does access to your Facebook account reveal your private data, it reveals the private data of your friends as well—meaning the DA can get to someone else through you. In light of such frightening scenarios, care is needed with regard to balancing the convenience and benefits of the cloud with the dangers that it poses via a centralization of resources. And striking the right balance may simply require years of trial and error. Photo courtesy of opensourceway
<urn:uuid:bf2f2e26-8a12-4560-932d-5e70bc1e957a>
CC-MAIN-2017-04
http://www.datacenterjournal.com/security-privacy-and-the-cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00138-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93634
1,519
2.890625
3
Every company needs a backup – this is a fact no one would deny. As the saying goes, there are two types of people in the IT world – those who back up and those who are yet to back up. In the event of a system malfunction, hacker attack, computer virus infection, malicious user activity, or simply a human error (accidental deletion of data), a backup copy of our data is the only way to get back on track. Backup copies are simply an essential part of an organization's IT infrastructure.
Simply put, a backup is a full copy of all relevant computer data in the company. However, nowadays the amount of that data is so large that creating backups may take a considerable amount of time. That is why new methods were invented – the differential backup and the incremental backup.
When a company operates on a large amount of data that is constantly growing or changing on a daily basis, a daily full backup is simply impractical. The process is time-consuming and usually takes a lot of storage space. To address this problem, the differential backup method was invented. A differential backup takes a copy of all items that were changed since the last full backup. For example, a full backup was performed on Sunday. Next, on Monday a differential backup job takes a copy of items that were changed or added since Sunday. On Tuesday, the job again takes a copy only of the data changed since Sunday, and so on. The cycle is repeated until the next full backup is performed.
- The process is much quicker than a full backup since it only takes a copy of what was changed.
- The backup copy itself takes far less storage space than when a full copy is created each day.
- The size of the data differences grows with each cycle. If the cycle is long (e.g. the full backup is performed once a month and the differential is taken every day), at the end of it the size of the archive might be quite big and the process itself pretty lengthy.
The main difference with an incremental backup is that it takes a copy of items changed or added since the last backup job of any kind, whether full or incremental. For example, the full backup was performed, as before, on Sunday. On Monday, the incremental job kicks in and takes a snapshot of all data that was changed since Sunday. On Tuesday, the job takes a copy of all changes since Monday, on Wednesday it backs up everything changed since Tuesday, and so on. In other words, the process works as a chain, each run copying the data modified or added since the last backup job.
- The backup process is even faster than the differential job, not to mention the full backup. It is, in fact, so fast that it can be performed every hour or even every minute.
- Each iteration of the backup job copies just the data that was changed. Therefore, only a small amount of storage is required each time.
- In some cases, the backup software requires all iterations of the incremental backup for data restoration. If one of the pieces is missing, the restore is impossible.
- The restore process might take some time, as the software needs to rebuild data from the separate incremental pieces as well as the last full backup.
In the age of the Cloud, it seems that the incremental backup is the best choice – it is fast, pulls down a small amount of data and can be performed even in real time. If you add advanced data versioning and sophisticated recovery processes that rebuild lost information much quicker even without all the incremental pieces, you'll get a robust tool that has you covered all the time.
CodeTwo Backup for Office 365 is an example of such software – not only does it back up entire mailboxes (all types of items) from Microsoft Office 365, but it also gives you all the benefits of the incremental backup with a pinch of what’s best in the differential type of the process.
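To make the distinction concrete, here is a minimal, illustrative Python sketch (not taken from any backup product) of the selection logic behind the two strategies, based purely on file modification times:

```python
# A differential job copies everything changed since the last FULL backup;
# an incremental job copies everything changed since the last backup of ANY kind.
import os

def files_changed_since(root, cutoff_timestamp):
    """Return paths under `root` modified after `cutoff_timestamp` (epoch seconds)."""
    changed = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff_timestamp:
                    changed.append(path)
            except OSError:
                pass  # skip files that vanish or cannot be read
    return changed

def differential_candidates(root, last_full_time):
    # Grows over the cycle: always measured against the last full backup.
    return files_changed_since(root, last_full_time)

def incremental_candidates(root, last_backup_time):
    # Stays small: measured against the most recent backup, full or incremental.
    return files_changed_since(root, last_backup_time)
```

Restoring from a differential set needs only the last full backup plus the latest differential, while restoring from incrementals needs the full backup plus every incremental piece in the chain - which is exactly the trade-off described above.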
<urn:uuid:2c7fefd9-108e-4aaa-b1ae-b99bd78e7c31>
CC-MAIN-2017-04
https://www.codetwo.com/admins-blog/difference-differential-incremental-backup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962159
794
2.609375
3
Twice a year, in step with the biannual TOP500 list, the Green500 list ranks the most powerful systems in the world based on energy efficiency. Published Wednesday evening at SC13, this year's Green500 list continues a trend from previous years: the rise of heterogeneous supercomputing. The latest list shows that the top 10 greenest systems are powered by NVIDIA Tesla GPUs, specifically the Kepler parts. The only other architecture to have accomplished a clean sweep of the 10 top spots on the list is IBM's BlueGene system. In fact, the top 20 spots on the June 2012 list were all occupied by IBM Blue Gene/Q supercomputers.
A heterogeneous computing system employs two or more types of processing technologies, such as traditional processors (CPUs), graphics processing units (GPUs), and coprocessors. The age of mixed-processor systems can be traced back to Roadrunner, which paired 12 thousand IBM PowerXCell 8i coprocessors with six thousand standard dual-core x86 CPUs. In June 2008, Roadrunner became the first supercomputer to deliver over a Linpack petaflop, earning it a number-one spot on the TOP500 list and a number-three spot on the Green500.
This year's most energy-efficient supercomputer is the Tsubame-KFC system, installed at the Tokyo Institute of Technology. Boasting a record 4.5 gigaflops per watt, Tsubame-KFC is about 25 percent more efficient than the runner-up, Cambridge University's Wilkes supercomputer, which can process 3.6 gigaflops per watt. In third place is the HA-PACS TCA system at the University of Tsukuba, delivering 3.5 gigaflops per watt. Many of the greenest systems of late have been relatively small, raising the question of whether energy-efficient techniques have scaling power. So it's promising news that this latest top ten grouping includes two petaflop systems. Piz Daint, the 6.27 petaflop system at the Swiss National Supercomputing Center, comes in at number four, delivering 3.2 gigaflops per watt, while TSUBAME 2.5, a 2.8 petaflopper at Tokyo Institute of Technology, sits in sixth position, with 3.07 gigaflops per watt. The current fastest supercomputer, China's Tianhe-2, relies on a heterogeneous design for its record-breaking 33.86 Linpack petaflops. Equipped with Intel Xeon Phi coprocessors, Tianhe-2 achieves an efficiency of 1.9 gigaflops/watt, for a not-too-shabby number 40 Green500 ranking.
In an official blog post, an NVIDIA rep characterizes the company's presence on the Green500 as ascendant. Six months ago, on the previous Green500 list, there were two systems in the top 10 with GPU parts. "At the heart of this trend is the spread of NVIDIA Tesla GPU accelerators based on our Kepler architecture," remarks the graphics chipmaker. "Launched last year, they are three times more energy efficient than the Fermi-based family of processors they succeeded."
NVIDIA points to energy efficiency as a crucial consideration for supercomputing systems going forward. The largest supercomputers require many megawatts of power, and the cost can run into millions of dollars each year. Reaching exascale means boosting speeds by 50-100 times while keeping power relatively static. These power constraints need to be addressed for exascale computing to be a reality. In a talk at SC13, the University of Tennessee's Jack Dongarra said that achieving this feat will require a tenfold increase in efficiency, i.e., systems capable of 50 gigaflops per watt. It's a challenge that can sound daunting, but Green500 figures show a better than tenfold gain in efficiency over the last six years.
On the November 2007 list, England's Daresbury Laboratory system, an 11.1-teraflop (Rmax) Blue Gene/P, operated at 0.3 gigaflops per watt.
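As a quick, illustrative check of these figures (a sketch using only the numbers quoted above, not Green500 source data; the 20 MW exascale power target is a common community assumption, not stated in the article):

```python
# Back-of-the-envelope checks using the figures quoted in the article.
# efficiency (gigaflops per watt) = performance (gigaflops) / power (watts)

# Tianhe-2: 33.86 petaflops at roughly 1.9 GF/W implies its power draw.
tianhe2_power_mw = (33.86e6 / 1.9) / 1e6     # ~17.8 megawatts
# Improvement from Daresbury's Blue Gene/P (2007) to Tsubame-KFC (2013).
improvement = 4.5 / 0.3                      # 15x in six years
# Dongarra's 50 GF/W target corresponds to an exaflop in about 20 MW (assumed target).
exaflop_power_mw = (1e9 / 50.0) / 1e6        # 20 megawatts

print(f"Tianhe-2 implied power draw: {tianhe2_power_mw:.1f} MW")
print(f"Efficiency gain, 2007 to 2013: {improvement:.0f}x")
print(f"Exaflop at 50 GF/W: {exaflop_power_mw:.0f} MW")
```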
<urn:uuid:e148319e-87b1-45fd-9289-ad625e0e1be0>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/22/nvidia-kepler-parts-top-green500/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00286-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887644
896
2.546875
3
There are legions of them. Mormon crickets crawl. They leap. They destroy everything in their path. Nothing can halt the attack of these armored masses. Treatment 1: Radio-tagged crickets were released into the band. Studies were done on how safety in numbers affects the survival of crickets in the band. No, this isn't a B-grade sci-fi flick. It's not a biblical plague, although early Mormon settlers in Utah thought as much when hordes of Anabrus simplex Haldeman -- the scientific name for this two-inch, shield-backed, short-winged katydid -- descended on them in 1848, devouring their crops. Desperate for salvation from the pestilence they believed God sent them, the settlers prayed to rid themselves of what they called "Mormon crickets." According to church legend, their prayers were answered when a flock of seagulls swooped down to feast on the insects. If burgeoning populations of Mormon crickets in recent years are any indicator, ravenous bands could be poised to march across the western United States and Canada. Idaho, Utah, Colorado and Wyoming are typically hardest hit, spending millions of dollars to control the cricket migrations and the damage they do. During a 1937 outbreak, crop damage amounted to $500,000 in Montana and $383,000 in Wyoming. In 2004, Congress made a special appropriation of $20 million for Mormon cricket control. Three researchers are studying the crickets' migration by attaching tiny radio transmitters to them that chart their migration path. The goal is determining if better ways exist to stop the migration from hitting certain states, either through killing the crickets or diverting their migration path with concentrated and targeted pesticide application. Containing the Swarm "Little is known about what causes increases in population size," said Patrick Lorch of the University of North Carolina's biology department. "We know extended drought, early spring snow thaw and overgrazing all seem to favor high cricket densities. They lay eggs in the soil, and the eggs can sit for several years, hatching when conditions are most favorable." Mormon crickets' culinary tastes lean toward succulent forbs, or broad-leaved flowering plants, but they'll graze on desert grasses before moving to greener pastures. Insatiable, the insects engulf rangelands, laying waste to cultivated crops such as wheat, barley, alfalfa and clover. Experts say swarms of the crickets can cover a mile a day and eat everything in their path. Some packs stretch several miles wide and 10 miles long. "A farmer might not see a single cricket one day but end up facing millions the next day because they move in such large groups," explained Gregory Sword, a USDA Northern Plains Agricultural Research Laboratory research ecologist in Sidney, Mont. "They can potentially eat everything in the field." As unpredictable and destructive as a tornado, the ominous black band of crickets inexplicably shifts direction, decimating one field and sparing the next. In a moveable feast, the band can overrun communities, consuming ornamentals and stripping vegetable gardens bare. There have even been accounts of them chewing wood siding off homes. In addition to the crop damage they do, the crickets also pose a threat to public safety. "When their bands cross roads, they tend to mass together and cannibalize the crushed dead bodies of other insects," Sword elaborated. "These in turn get crushed by more passing vehicles, leading to large, messy 'oil slicks' of crushed crickets." 
Until recently, when cricket bands were on the run, no one could predict where or how far they would travel. A study of Mormon crickets conducted by Lorch, Sword and Darryl Gwynne, a zoology professor at the University of Toronto, sheds new light on accurately tracking the Mormon cricket's migration habits. Together, these scientists devised a way to bug the pests that have been bugging humans for more than 2,000 years. Radio transmitters about the size of a dime and weighing 0.5 grams were hot-glued onto the backs of adult female crickets. Directed rearward on the insect, these tiny devices send signals detectable in brush or grass to a distance of 500 meters. "We use an antenna and a receiver to hear the 'pings' sent out by the radios," Lorch said. "Each radio transmits at a different frequency, so we can follow many individuals at once. Using the directional antennas, we physically track down each cricket." Tagged females were recaptured at intervals over 24 to 48 hours to estimate their position, then released again. "Trying to follow an individual in that band and finding out where 'Joe Cricket' is today or a few hours down the road would be impossible without some kind of radio device," said Gwynne, who began studying mating habits of the katydids in the 1980s. "You can go back and get the same cricket every day, recover that cricket, and figure out where it is using GPS information." Lorch said they will use a dual-processor G5 Macintosh when they begin running models. "We may find that parallel computers will be necessary," he said, adding that they don't have huge storage needs. "We hope that by learning more about Mormon crickets -- what motivates them, what directs them, etc. -- we will be able to help with control efforts. More generally, what we discover should help with other insect outbreaks, particularly ones that involve mass migration like African locusts." Such migrations in underdeveloped countries cause widespread famine. The Mormon crickets were studied in a series of three treatments, which included three replicates of six crickets each. Treatment 2: Solitary tagged crickets were released where there was no band. Lone crickets do not form "selfish herds" and are lighter in color than their more gregarious dark-colored cousins, which amass in troops three miles deep and one mile across. Bands can travel up to 50 miles in a season. Treatment 3: A tagged cricket was transported a short distance by vehicle then released back into the band. The crickets were relocated once a day for five days. "We recorded their position. We knew what direction they traveled and how they traveled," Gwynne explained. "I don't know if we can ever get to this point, but the information we're going to get is going to at least go toward understanding their direction movement and their distance movement as well." If you know where the band is going and when it might arrive, he said, you know where to drop the bait. "I'm anti-insecticide," Gwynne acknowledged, "but the most effective way of controlling these things is with bait. They just roll out sacks of poisoned food, the insects come along, eat it and die." The poison used to control crickets is not harmful to livestock. In Brazil, where residents prefer not to use chemical insecticides, which can contaminate milk and meat, there has been some success in controlling locust damage by introducing various parasites and pathogens. 
These methods haven't been used successfully to control Mormon crickets yet, according to Sword, but he said researchers at Utah State University are studying cricket diseases. The universal theme of those B-grade sci-fi flicks is that to effectively conquer the invader, you must first understand it. The same is true of Mormon crickets. "We hope the predictive movement models we produce will help farmers, ranchers, state, federal and other land managers by improving the efficiency of existing control measures, and reducing the amount of pesticide and manpower used to treat Mormon crickets," Sword said. "Our work in the United States understanding how and why Mormon cricket migratory bands form, as well as the ability to predict their movement patterns, can potentially provide insight into the management of migratory pests around the world."
<urn:uuid:f803435f-958a-4de4-9d08-55e97c79da0e>
CC-MAIN-2017-04
http://www.govtech.com/products/Bugging-Crickets.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00194-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963388
1,652
3.15625
3
Information security and data protection is crucial for every business, as it is likely that your company’s most prized asset is its data. Failure to secure your company’s sensitive information could be damaging to its brand in addition to leading to possible legal ramifications. This quick guide, to IT security, includes news, tips, and articles on tackling cyber-attacks, data protection, malware and computer viruses. Table of contents: A cyber-attack involves vulnerable computers being targeted and deliberately made to malfunction. This can include the disruption of data flow, to disable businesses/government organisations, or targeting applications and databases with the intent of theft. Cyber security according to Steve Robinson Being able to access data electronically might be convenient; however it also introduces new security challenges. According to Robinson, businesses need to address these complexities if they want to remain competitive. How to keep on top of today’s growing cyber-threats With cyber-attacks on the increase, organisations are aligning their security methods across all areas of operations, instead of just IT. In this report, from PwC, CEOs need to step up and take the lead in protecting their organisations. Dealing with security cloud providers and data backup Backing up expanding data is a growing issue for all businesses. Many are opting to work with cloud providers, to automate backups and improve data protection, but only if it’s at a reasonable cost. Keep cyber-threats in proportion – IT infrastructure risks Governments have been warned to keep cyber-threats in proportion and not to entrust the military with the defence of critical national infrastructure. Read the details of the report published by the Organisation for Economic Co-operation and Development (OECD). Data protection refers to the safeguarding of individuals personal data. The Data Protection Act 1998 is a UK Act of Parliament which outlines the UK law of how the data of living people should be processed. The Act itself does not mention privacy; however it offers a way for individuals to regulate personal information about themselves. Under data protection laws students have the right to request marked A-level exam papers According to the Information Commissioner's Office (ICO), students have the right to request their marked exam papers, to see the examiners comments. The loss of 26,000 housing records paints a picture of poor data protection in the UK The Information Commissioner's Office (ICO), finding two London housing bodies in breach of the Data Protection Act, highlights a poor state of UK data protection. Boeing takes to cloud computing to address security concerns Aviation firm Boeing has opted for security first, when designing its cloud computing strategy. Hospitals deliver test data via mobile BI Doctors in three UK hospitals are using mobile BI to access patient test results. Taking place in Accident and Emergency departments, the aim is to further improve patient care. How to create an information sharing policy and protect your business from data leakage Few companies have an information sharing policy in place, to ensure the exchange of information is secure. Learn how to create an information sharing policy in this tip. 243 police officers convicted of Data Protection Act violations in the last three years Concerns continue to mount as a Free of Information request reveals that over the last three years 243 police officers have been found in breach of the Data Protection Act. 
Short for malicious software, malware is designed to disrupt either the operation of a computer of to gather sensitive data from an organisation or individual. Malware includes computer viruses, spyware, worms, bugs and Trojan horses, to name but a few. Microsoft Internet Explorer 9 comes out on top in socially engineered malware blocking The most common security threat facing users of the internet remains to be socially-engineered malware. Recent studies show internet users are four times more likely to be tricked into a malicious download. House of Commons science and technology committee calls for submissions on impact of malware The House of Commons science and technology committee has put out a request for submissions detailing the impact of malware, for example how much cyber-crime is connected with malware, and where it comes from. Ramnit worm becomes serious threat to banking industry Previously perceived as a low level concern, the Ramnit worm has now become a serious threat to businesses within the banking industry. A computer virus refers to a computer program which can replicate itself, before spreading to other computers. To increase its chances of infection, the virus will infect files on a network file system that is frequently accessed by more than one computer. The term ‘computer virus’ is often used as an umbrella for all types of malware, even those that cannot replicate themselves. 4.5 million computers at risk of TDL-4 virus Cyber criminals have deployed the TDL-4 virus, which is usually spread through traps on pornographic sites, video and file storage and bootleg websites. Scareware cyber criminals beware as twelve nations join forces to take action The twelve nations put their heads together to shut down international cyber-thieves, as victims continue to be sent fake virus warnings to trick them into handing over their credit or debit card details.
<urn:uuid:ced39672-5da4-4670-8962-c9afe6adddd1>
CC-MAIN-2017-04
http://www.computerweekly.com/guides/A-quick-guide-to-IT-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00315-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929103
1,051
2.609375
3
Cisco on Cisco Optical Networking Case Study: How Cisco IT Used CWDM to Interconnect Japanese Data Center Sites
Cisco Systems® maintains two sales offices in Tokyo, Japan. One is located in the government district of the city known as Akasaka and the other in a commercial area known as Shinjuku. As is typical for enterprises in the region, Cisco® Japan IT colocated data centers in each of the offices to support the IT needs of sales, Cisco Technical Assistance Center (TAC), and engineering staff within the facilities, but needed to connect these two locations together. In Japan, many service providers offer managed services like Gigabit Ethernet service to end customers. Cisco leased a Gigabit Ethernet circuit from a service provider at a cost of one million yen per month (about US$9000 at an exchange rate of 111 yen per dollar) to tie the two data centers (and offices) together.
Although these sites had been in place for some time, it is common for sales offices to change locations - to relocate to larger facilities, move closer to customers, and so on. Reestablishing a colocated data center can be costly and disruptive. In 2003, Cisco Japan IT considered relocating the two data centers to a single, dedicated facility. A more permanent facility would allow Cisco Japan IT to engineer a higher level of availability with more robust redundancy. Situating the data center outside the expensive city center also would reduce leasing costs. Although a dedicated data center appeared to be a good solution, the cost of providing reliable connectivity between the data center and the two sales offices seemed prohibitive. To provide an acceptable level of reliability and redundancy, a circuit would be needed from the data center to Akasaka, from Akasaka to Shinjuku, and from Shinjuku back to the data center. With three circuits required, the one-million-yen-per-month Gigabit Ethernet lease cost would triple to approximately $27,000.
As early as 2001, Cisco Japan IT had investigated leasing dark fiber from carriers as an alternative to the costly Gigabit Ethernet service. Because the great majority of the cost of laying fiber optic cable is labor, not the fiber, telecom carriers typically install more fiber strands than they need. In many areas of the world, private enterprises sometimes can lease these "dark" unused strands from carriers at low rates to connect company sites in metropolitan areas. They had abandoned this solution in 2002, however, because carriers in Tokyo were not offering dark fiber to enterprises. Furthermore, the dense wavelength-division multiplexing technology that enabled multiple channels over a pair of optical fibers, a critical requirement for Cisco's future bandwidth needs, was expensive and difficult to manage. To provide viable connectivity between the new data center and the two sales offices, Cisco Japan IT needed a high-speed, low-cost solution that would be reliable and easy to manage.
In 2003, with the construction of a dedicated data center under serious consideration, Cisco Japan IT again investigated dark fiber. In Japan, Class 1 carriers can provision and lease fiber, but can lease it only to Class 2 carriers and service providers. Class 2 carriers can in turn lease it to corporations. This time Cisco Japan IT found that there were several carriers in Tokyo who were receptive to leasing dark fiber, and at prices far lower than the current managed Gigabit Ethernet services.
In addition, the Cisco coarse wavelength-division multiplexing (CWDM) gigabit interface converter (GBIC) solution was now available, which could provide economical optical bandwidth scalability with little or no management requirements. CWDM employs multiple light wavelengths to transmit signals over a single optical fiber. CWDM technology is a crucial component of Ethernet LAN and MAN networks because it maximizes the use of installed fiber infrastructure at an attractive price point. The dark fiber and CWDM GBIC solution offered another benefit. "If we had chosen to lease Gigabit Ethernet circuits from a service provider, we would have a single provider with the risk of a single point of failure," says Zhengming Zhang, Cisco IT network engineer. "With dark fiber, however, we have the ability to select circuits from different providers, ensuring physically diverse routes, which was an important requirement for us." A building was located for the new data center about 25 kilometers from Akasaka and 30 kilometers from Shinjuku. Efforts to lease the dark fiber links began in October 2003. The task of relocating nearly 50 racks of equipment from Akasaka and 30 from Shinjuku to the new data center began on February 28, 2004, with the first move on March 6. Although Cisco IT uses CWDM for network access in other locations, the Tokyo Internet data center (IDC) is the first place in which Cisco takes advantage of CWDM to interconnect an IDC with multiple Cisco offices. The advantage of CWDM technology is that it can transmit and receive signals over a single strand of fiber. With Cisco CWDM GBICs, a maximum of four channels can be multiplexed over the single fiber. CWDM provides bandwidth for growth and secure traffic separation with only one single fiber. The Tokyo IDC currently utilizes one channel, which is used for a single gigabit circuit. Several dark fiber providers are located in Tokyo, and Cisco Japan IT included the dark fiber vendor selection process as part of the overall data center evaluation process. Seven venders were sent Requests for Proposal (RFPs) for the data center project, and Cisco Japan IT selected three separate fiber providers to ensure redundant paths to all three sites. A single provider also was responsible for supporting service-level agreements (SLAs) on all three fibers and for terminating the fibers in each of the sites. "Installation was very simple," says Greg Duncan, Cisco IT Manager. "They pulled the dark fiber into our racks and attached the SC connector, and that was it." Although the fiber providers had estimated eight to nine weeks for completion, they installed the first circuit in less than four weeks and the remaining circuits within another week. Because the CWDM equipment is passive, it does not amplify the signal traveling through the fiber. The signal naturally weakens, or attenuates, over the length of the fiber based on factors such as the quality of the fiber, distance, and number of splices. If too much loss occurs, the signal at the receiving end will be too weak to detect and could cause packets to be dropped. Testing carried out by Cisco Japan IT showed that the CWDM equipment could tolerate a loss of 30 dB with no packets dropped. The fiber provider SLA guaranteed that these fibers would not exceed 24 dB. "This is one of those few instances where everything has gone exactly to plan," says Duncan. "Our fiber provider installed every fiber link with less loss (less than 16 dB for all three paths) and in less time than what they promised. 
It went very, very smoothly." Connecting the fiber to the LAN environment at each of the three locations is the Cisco CWDM GBIC solution. The primary components of the CWDM GBIC solution are the Cisco CWDM GBIC and Cisco CWDM optical add/drop multiplexer (OADM) modules. The Cisco CWDM GBICs are active components that convert Gigabit Ethernet electrical signals into an optical single-mode fiber (SMF) interface. The CWDM GBIC plugs into standard GBIC ports on Cisco switches and routers. No dedicated or additional routers were required for the deployment. "The CWDM solution is very cost effective," says Zhengming Zhang. "You can use existing routers or switches as long as the hardware has the Gigabit Ethernet module." At the Akasaka and Shinjuku offices, existing Cisco 7603 routers were used, as shown in Figure 1. A Cisco 7603 Router also was used at the data center. Figure 1. CWDM Network Diagram The CWDM OADM modules used in this deployment (CWDM-MUX-4-SFx) are passive optical components that multiplex multiple wavelengths from multiple SMF pairs into one SMF strand. Other CWDM OADM modules are designed to multiplex multiple wavelengths into a pair of SMF fibers where a dual-fiber topology is used. The CWDM OADM modules are connected to the CWDM GBICs with SMF using dual SC connectors. Because they are passive devices, no power is required. Neither the CWDM GBIC nor OADM modules require any configuration. The technicians simply matched the GBIC color with the color of the channel interface on the respective OADM module. As with a Gigabit or Fast Ethernet interface, an IP address must be configured for the GBIC interface if it is used as a Layer 3 router port. If a Layer 2 switch is used, spanning tree configuration may be required. The CWDM (CWDM-MUX-4-SFx) solution supports up to four channels over a single fiber. When additional channels are needed, technicians simply plug another CWDM GBIC into a GBIC port on the Cisco 7603 Router, as shown in Figure 2. No new fibers or changes to the dark fibers are required. The technician simply plugs the second CWDM GBIC into the Cisco 7603 Router and connects it to the OADM with a pair of single mode fibers. "Adding a Gigabit takes only about five seconds and costs about $750," says Zhengming Zhang. Figure 2. Adding a Second CWDM GBIC The Cisco CWDM GBIC solution supports point-to-point, ring, hub-and-spoke, and mesh network topologies. The solution offers both path protection (using two fiber paths for the same wavelength) and client protection at the channel endpoints through the CWDM GBICs. Availability redundancy schemes such as EtherChannel technology, Spanning Tree Protocol, and Hot Standby Router Protocol (HSRP) can be used to provide redundancy. Cisco Japan IT chose a multisite point-to-point topology for the Tokyo network deployment because of its simplicity and cost. Using Enhanced Interior Gateway Routing Protocol (EIGRP), the network detects a failure in one of the links and automatically reroutes traffic to the redundant path. A full-mesh solution would have required two fibers between each location, more extensive hardware, and greater management requirements, such as Spanning Tree Protocol. "If two fibers were as inexpensive as one, we might have chosen a full mesh solution, but this solution is also good," says Zhengming Zhang. The Tokyo IDC hosts all the regional mission-critical services and applications and supports all WAN connectivity for Cisco Japan offices. 
Some of these services include Internet access, extranet connections, VPN concentrators for site-to-site and user-based IPSec VPN connections, content networking and IP/TV® streaming video broadcasts, CallManagers, storage filers, printing servers, and many more. high performance of the network in IDC makes all services available to users as if the resources are located nearby. In addition, critical user and application data in Japan data is replicated to our Hong Kong IDC, which provides redundancy in the event of a critical hardware failure. Cisco quality of service technology allows for near-real-time replication without consuming other critical services such as Web, video, and voice services. Circuit diversity was an essential factor for building a highly available IDC in Tokyo. In this case both physical circuit routes and carriers are diversified. Normally, circuit backup is sufficient and physical diversity of circuit paths is valuable but sometimes hard to achieve; carrier diversity is even more valuable but usually too difficult to achieve. Carrier diversity is valuable because there are rare instances of multiple outages on a single carrier's network (for example, due to a port module board failure that causes multiple circuits to fail at the same time). In Tokyo, Cisco IT was able to select a unique carrier for each circuit. This was fortunate because high availability is crucial. In addition to hosting all mission-critical services and applications for Cisco Japan, the IDC connects Cisco Japan's large locations to the rest of the network, and to services and applications located in other regions. This diversity helps ensure that mission-critical services and applications are always available no matter which circuit or carrier fails. Before being relocated to Tokyo IDC, all mission-critical devices were hosted in either the Shinjuku office or Akasaka office. Every year each building conducted a necessary electricity maintenance process that would bring down entire power supplies for 48 hours. During this maintenance, all the customer devices in the building lost power and none of the applications or services were available. Cisco IT used uninterruptible power supplies (UPSs) to provide backup for a few hours, but battery backup for 48 hours was not realistic, since it would require a huge number of batteries, and the cost, weight, space, and safety factors made this too expensive to consider. During the 96 hours of power outage (different dates for each building outage), users were unable to use file and printing servers, DHCP, DNS, ACS, DC (directory service), and local VPN concentrators. Some field sales offices in Japan were unable to connect to the corporate network through site-to-site-based IPSec VPN. Users would have to connect to the VPN concentrators in San Jose to access corporate resources and the Internet, and the long distances between Japan and the United States made VPN performance slow. Relocating all servers and network equipment to a single Tokyo IDC has resolved all these problems. The Tokyo IDC has three power generators, each one with an independent power source. The Tokyo IDC supplies power to each server rack with at least two separate power feeds (and some have three separate power feeds). The N+1 redundancy has greatly improved service availability. Had Cisco Japan IT chosen to lease Gigabit Ethernet circuits from a service provider, the cost would have been approximately 3 million yen (about $27,000) per month. 
Instead, the three dark fibers cost approximately 1.1 million yen (about $9900)-a saving of more than 60 percent. And by adding relatively inexpensive CWDM GBICs and OADM modules to the existing infrastructure, bandwidth can be doubled, tripled, or even quadrupled without additional monthly fiber leasing expense. Total route diversity has eliminated single points of failure and ensures high availability between sites. The network has been in operation since March 2004 with no problems. On May 15, Cisco Japan IT took offline one of the Cisco 7603 routers to replace line modules. The network rerouted automatically and connectivity between sites was never affected. Improvements in other areas were achieved as well: More usable space available at the two Tokyo offices: Relocation of shared services to the Tokyo IDC allows us to reuse the expensive downtown Tokyo office space for services showcasing, labs, and customer support staff. When new Cisco technology or products go to market, the existing spaces can be used for sales and marketing purposes. Removed duplicate hardware costs: Some shared services and applications were duplicated in the Shinjuku and Akasaka offices (for example, storage filers and printing servers). When new services were deployed, the same hardware and software had to be installed in both offices. With the Tokyo IDC, existing services are combined into fewer, higher-capacity hardware devices, which perform the same tasks at a lower price per task. Simpler management: With the centralized colocation of services in the Tokyo IDC, management, troubleshooting, and maintenance are easier than when equipment was located in two separate offices. Reduced IT labor: Cisco IT Japan used to spend a significant amount of time on cabling, mounting hardware, simple hardware replacement, and circuit installation. These tasks are now handled by the Tokyo IDC support staff, and Cisco IT Japan can concentrate on new service design and implementation. Unlike most deployments, the CWDM project went exactly as planned. "I can't think of a single instance during the CWDM deployment that trapped us or caused us to reconsider our plans," says Duncan. Several factors made this deployment simple and trouble-free. Among them was the willingness of different service providers that, even as direct competitors in the same market, were willing to share cable path information to provide diversified routing. And the fiber vendor performed as promised. "They lived up to their word without exception, even beating their schedule," notes Duncan. And finally, the CWDM equipment offered no surprises. "I think the lesson learned for me is, it was as easy to do as what the product information said it would be," says Duncan. Cisco Japan IT plans to use the Tokyo CWDM solution for several new applications over the next year. Prior to CWDM, separate access paths had to be provided for labs that needed direct demilitarized zone (DMZ) access, resulting in additional Internet access points distributed throughout the different labs. One of the four channels on the CWDM already is being used to carry secure, segregated lab traffic into the DMZ, located at the new data center, from the existing engineering lab at the Shinjuku office. Without CWDM, a dedicated leased line, which might cost at least 300,000Yen ($2700) monthly, would have been required to connect the DMZ lab in Shinjuku to DMZ backbone in the Tokyo IDC. Other labs will follow, replacing their separate Internet trunks with a channel on the CWDM. 
If more circuits between the same sites are required in the future, Cisco IT Japan can easily add CWDM GBIC modules to expand bandwidths to 2 Gbps, 3 Gbps, or 4 Gbps. Because each channel is utilizing a different wavelength to transmit and receive, each one can be treated as an independent physical circuit. This allows us to use another channel of CWDM to interconnect the DMZ lab in the Shinjuku office to the DMZ backbone in the Tokyo IDC over the same dark fiber without compromising security. Another advantage of the CWDM dark fiber solution is its ability to support technology demonstrations without negatively affecting production traffic. Before customers spend a large sum of money for a Cisco solution, they want to see that it works. Cisco sales and engineering teams set up demos for different solutions. Often, the customer might be at the Akasaka sales office while the servers and critical resources that make the demo work are in Shinjuku. Engineers would have to connect the two sites but they could not use the existing Gigabit Ethernet circuit because of existing IT policies and information security policies, which caused them to find another circuit. With CWDM, they will be able to use a separate fiber channel without raising security concerns. In addition, the extra bandwidth will allow Cisco IT Japan labs and sales locations to interconnect servers to storage using SAN iSCSI, FCIP, or other applications. Cisco TAC and engineering groups currently occupy a sizeable portion of the leased space at the Shinjuku facility. At some point within the next year, those groups probably will be relocated to lower-cost facilities outside of Shinjuku. "That's going to be a lot simpler for us to do because we just need to hook them into the new data center and extend another dark fiber to their new location," says Zhengming Zhang.
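The link-budget arithmetic behind the dark-fiber decision is simple enough to sketch. In the illustrative Python example below, the 30 dB receiver tolerance, 24 dB SLA ceiling and sub-16 dB measured losses are the figures quoted in this case study; the per-kilometer, per-splice and per-connector loss values are generic assumptions, not Cisco data:

```python
# Illustrative passive CWDM loss-budget check. Only the 30/24/16 dB figures
# come from the case study; the component loss values below are assumptions.
FIBER_LOSS_DB_PER_KM = 0.35   # typical single-mode attenuation (assumed)
SPLICE_LOSS_DB = 0.1          # per fusion splice (assumed)
CONNECTOR_LOSS_DB = 0.5       # per connector pair (assumed)

RECEIVER_TOLERANCE_DB = 30.0  # loss at which packets start dropping (per Cisco IT testing)
SLA_CEILING_DB = 24.0         # maximum loss the fiber provider guaranteed

def link_loss_db(length_km, splices, connectors):
    """Estimate end-to-end attenuation of a passive CWDM span."""
    return (length_km * FIBER_LOSS_DB_PER_KM
            + splices * SPLICE_LOSS_DB
            + connectors * CONNECTOR_LOSS_DB)

# Roughly the 25 km data center-to-Akasaka span, with assumed splice/connector counts:
estimate = link_loss_db(length_km=25, splices=10, connectors=4)
print(f"Estimated loss: {estimate:.1f} dB")
print("Within SLA" if estimate <= SLA_CEILING_DB else "Exceeds SLA")
print("Within receiver tolerance" if estimate <= RECEIVER_TOLERANCE_DB else "Link would drop packets")
```

With assumptions in that range the estimate comes out near 12 dB, which is consistent with the sub-16 dB losses the provider actually delivered on all three paths.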
<urn:uuid:d0e091bb-e25d-448d-8186-96611cb31c92>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/cisco-on-cisco/enterprise-networks/cwdm-japanese-data-centers-web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00223-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953312
3,989
2.53125
3
XFP is short for 10 Gigabit small form factor pluggable. It is a standard for transceivers for high-speed computer network and telecommunication links that use optical fiber. It is protocol-independent and fully compliant with the following standards: 10G Ethernet, 10G Fibre Channel, SONET OC-192, SDH STM-64 and OTN G.709, supporting bit rates from 9.95G through 11.3G, along with its interface to other electrical components, which is called XFI. The 10-Gigabit XFP transceiver module is a hot-swappable I/O device that plugs into 10-Gigabit ports. The XFP transceiver module connects the electrical circuitry of the system with the optical network.
The XFI electrical interface specification is a 10 gigabit per second chip-to-chip electrical interface defined as part of the XFP multi-source agreement and developed by the XFP MSA group. XFI provides a single lane running at 10.3125 Gbit/s when using a 64B/66B encoding scheme. A serializer/deserializer is often used to convert from a wider interface such as XAUI, which has four lanes running at 3.125 Gbit/s using 8B/10B encoding. XFI is sometimes pronounced as "X" "F" "I" and other times as "ziffie".
XFP transceivers comply with the XFP multi-source agreement developed by several leading companies in this industry. Typical XFP types include the SR, LR, ER and ZR. The XFP SR works over a maximum distance of 300 meters with OM3 10-gigabit multimode optical fiber. The other three types work with single-mode fiber: XFP LR reaches up to 10 km, XFP ER up to 40 km, and XFP ZR spans up to 80 km over SMF. XFP is regarded as the new-generation 10G solution after the Xenpak and X2 transceivers, and many companies have developed XFP 10G transceivers.
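The 10.3125 Gbit/s XFI lane rate and the 3.125 Gbit/s XAUI lane rate mentioned above follow directly from the payload rate and the encoding overhead. A small illustrative calculation:

```python
# How the quoted line rates follow from the 10G payload rate and encoding overhead.
PAYLOAD_GBPS = 10.0  # 10 Gigabit Ethernet payload rate

# XFI: one serial lane with 64B/66B encoding (66 bits on the wire per 64 data bits).
xfi_lane_rate = PAYLOAD_GBPS * 66 / 64        # = 10.3125 Gbit/s
# XAUI: four parallel lanes with 8B/10B encoding (10 bits per 8 data bits).
xaui_lane_rate = (PAYLOAD_GBPS / 4) * 10 / 8  # = 3.125 Gbit/s per lane

print(f"XFI lane rate:  {xfi_lane_rate:.4f} Gbit/s")
print(f"XAUI lane rate: {xaui_lane_rate:.3f} Gbit/s")
```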
<urn:uuid:7877cd14-0c6f-4b87-bbea-94466e5a5c0f>
CC-MAIN-2017-04
http://www.fs.com/blog/xfp-transceiver-module.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00341-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948613
459
2.84375
3
The world's fastest supercomputer, Titan, located at Oak Ridge National Laboratory (ORNL) in Tennessee, is deploying a new storage system from DataDirect Networks (DDN) to support its research into climate change impacts and alternative fuels. DDN's SFA12K-40 storage appliance will form the backbone of a new storage system called Spider II, which is capable of ingesting, storing, processing and distributing research data at one terabyte per second - ten times faster than comparable scale-out NAS systems. It is also designed with 40 petabytes of raw storage, which is enough to hold all the information in more than 227,000 miles of stacked books. This means that ORNL can dramatically increase Titan's computational efficiency and deliver more accurate predictive models than ever before. The ORNL Spider II configuration from DDN includes 36 DDN SFA12K-40 systems (each with 1.12 petabytes of raw storage capacity), and 20,000 disk drives. It runs an open source file system called Lustre. The combination of DDN's and ORNL's experience of scaling Lustre in production environments will enable Titan to perform approximately six times faster with three times the capacity of its predecessor, Spider. "When building the world's fastest system for data intensive computing, we carefully considered all aspects of high-throughput I/O infrastructure and how efficient storage platforms can complement our supercomputer's efficiency," said Buddy Bland, project director for the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory. "The ORNL and DDN teams have worked together to architect a file system designed to enhance the performance of our Titan supercomputer and enable our users to achieve unprecedented simulations and big data insights through massively scalable computing." Titan was named the world's fastest supercomputer in November 2012, leapfrogging the previous champion IBM's Sequoia. It runs 560,640 processors, including almost 300,000 AMD Opteron 6200 series cores and over 261,000 Nvidia K20x accelerator cores. Titan is designed to deliver a peak capability of over 27,000 trillion calculations per second, or 27 petaflops. It is used to help develop more energy-efficient engines for vehicles, model climate change and research biofuels, and can also be rented to third parties. "The world's toughest questions demand the toughest storage and the fastest technology to drive new levels of scientific insight," said Jean-Luc Chatelain, chief technology officer at DDN. "We're honored to continue our long-standing partnership with ORNL today and to be part of the future of Big Data and exascale computing tomorrow." This story, "Titan supercomputer gets new 40-petabyte storage system" was originally published by Techworld.com.
<urn:uuid:4e3dcb09-b29a-4732-a53e-9f4988595e4e>
CC-MAIN-2017-04
http://www.networkworld.com/article/2165407/network-storage/titan-supercomputer-gets-new-40-petabyte-storage-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00341-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927034
586
2.875
3
A developer's tools control how he builds applications. Sure, anyone can hand-code software to do something precisely, starting with assembly language if necessary. But good development tools make some features easier to implement, integrate debugging and other process-related tools into the environment, and generally make the developer's life easier. One word summarizes all of this: wizard. But with so many technologies, languages and frameworks, the situation can often become more complex rather than get simpler. Jochen Krause, CEO of Innoopract (the company behind Eclipse RAP), says, "The Java tool stack and runtime stack—this is incredibly complicated. There are at least 50 acronyms. Even with the best tooling, if you have such a complicated technology or stack of technology, it will always remain very difficult to build apps." Krause and others expect changes to occur in the development tools space, particularly in the languages and frameworks adopted: "That's why languages like Ruby, PHP, etc., are so helpful," he says. As a result, says Bob Brewin, Sun's software CTO, doing Ajax is really painful, "like building an aircraft carrier by hand." Hand-coded Ajax development today requires a large skill set, so several interesting technologies have materialized to simplify it. That trend—to develop a new capability and then find ways to simplify development—mirrors what happened in desktop computing. But Brewin believes the improvements will happen faster, because techniques can be borrowed from desktop development. "We invented it and now just have to copy it," he says. It's up to the tools to make the task easier. As Alex Russell, project lead for the open-source Dojo Toolkit, says, "My job is to intercede on developers' behalf to the browser gods." Because the Web cut everyone off in visual design and user design, he says, "We've all, on every front, been rebuilding the tool chains, and how we think about those problems." You Can Create Apps Faster. Can't You? Developers have always been under pressure to create software faster, but Tim Bray, director of Web technologies at Sun Microsystems, feels that increasing focus on time to market will require developers to choose tools that streamline the process. He expects the advent of new frameworks like Rails to enable projects to be done in days rather than months, in months rather than years. For more about Rails, see "Why Ruby on Rails Succeeded." The experts also expect more adoption of software methodologies such as Agile. "A quick time to market is good at meeting business objectives, and better for developing software," says Bray. "Get a few features right away; get feedback; get continuous improvement." Brian Goldfarb, Microsoft group product manager, UX platform and tools strategy, also expects test-driven development to become more common, and built into the tools. "The embers are there to begin the fire," he says. In two or three years, Goldfarb expects, after a lot of businesses are burned by difficult-to-maintain systems, they'll decide, "The next wave of technology is the ability to maintain a project." According to Goldfarb, companies need to ask, "How do we build software by building tests first?" Today, building a rich Internet app is "like building a ship in a bottle," says David Temkin, Laszlo Systems CTO. "You spend a lot of time optimizing, because it's a memory pig." We need a faster virtual machine environment, he believes, to address application speed.
Important things for toolmakers to address, Temkin feels, are programming languages and the (related) execution speed of the client. It used to be that the key application bottleneck was bandwidth: the connection between the Web application and the server (remember dial-up?). But now, Temkin points out, "It's the limits of execution speed on the client. These languages aren't designed to produce full-out GUIs." Existing programming languages may not generate code that's fast enough to run on the eventual client. If you're writing browser apps, Temkin says, you're living in a virtual 1990. Previous generation languages like C and C++ are appropriate for lower-level software, according to Temkin, but not for UIs, because "they require too much detail knowledge." Every few years, programming languages ascend to a new level, he believes. "We're going to move up a new level in that stack." Will that stack be fixed? Sure. "All that stuff is going to get cleaned up one way or another, but probably not by way of the W3C," he says. Greater Reliance on Components and Mashups One defining characteristic of Web 2.0 applications is the use of mashups, combining content from multiple sources (usually by means of an API or Web service) into an integrated experience. One current example pulls together Google maps and rental property listings on Craigslist. Mashups are only the beginning, say these development experts. Jean-François Abramatic, chief product officer of ILOG, believes that mashups will become even more significant in application development: "We ain't seen anything yet," he says. Using mashups, says Abramatic, means that developers have to do less and less from scratch, and they can leverage existing applications. "This will become mainstream in the next few years," he says. Mashups are still primarily a consumer tool. According to Abramatic, "The concepts are good, but you need a more robust platform in the enterprise." To use mashups, corporate IT developers will need to watch for service availability. What happens if one data source is offline? However, the security questions about mashups are plentiful. "There's lots of ways to test the water on the current architecture," says Russell. The industry needs to determine how to consider security, at what level of granularity, but it can't be so complex that it doesn't get done. Plus, he points out, the applications have to ship this info down the wire; that gets really expensive. Mashups are "new" in the sense of using publicly provided data (or, less commonly, a company-internal data stream), but experienced object-oriented developers are familiar with the concept: separating both data and functionality into components and containers. David Intersimone, CodeGear vice president of developer relations, suggests that instead of thinking about tooling for the browser, developers realize, "We don't have to care about the browser." The important lesson—which applies today, not just two years from now—is to separate the user interface from the object. Objects are all about abstraction layers. "In the browser we have the document model and Java objects, etc.—but how do all these objects work together?" he asks. Microsoft's Guthrie expects software development to "just naturally gravitate" to using more software services (whether called mashups or something else) in a wider range of categories, "whether they're PayPal or commerce transactions or social networking." 
This won't necessarily be complex, he says; after all, RSS turns out to be a useful way to transfer information. "It's the simple stuff that usually works the best," says Guthrie. The goal of developing modular apps has been around for 20 years, says Krause, and it's part of the philosophy behind Eclipse. "Now you can develop really modular Web apps, and the components can be really reused," he says. In a few years' time, chat components will be common across applications, Krause says, and it will be the same functionality whether it's a Web app or a desktop app. One side effect of the increase in mashups and service/component-based development: It will become cheaper to launch a new application. It already has. "Seven years ago," says Guthrie, "You'd have to invest hundreds of thousands of dollars to launch a mapping app. Now anybody can do it for free or super-low cost." It's not just the cost of writing an app from scratch; what used to require huge data centers no longer does, and it's possible to start up a business relying entirely on online ads for income. The startup costs will continue to drop, according to Guthrie. "We're seeing that right now with the consumer space; we will also see it in the enterprise space in the next couple of years." Another side effect from more component-based applications: how we deploy, evaluate and consider enterprise software could return to the days when we chose single-use applications that each did one thing well. The computer industry pendulum may swing back to the era of "small tiny shrink-wraps"—some of which are open source—that IT manages and installs, Temkin says. You can buy components within the enterprise today, Temkin points out, such as the collaboration tool Basecamp, which is cheap enough to be included on an expense report without prior approval. "All these possibilities are now open," Temkin claims. Sounds great. But what about the platform the applications will run on, the Web browser? Can it take the load? That's our next subject, in Are Web Browsers Ready for the Next Generation of Internet Applications?.
<urn:uuid:ed1a6d1a-95aa-4078-892c-007228af3c58>
CC-MAIN-2017-04
http://www.cio.com/article/2437563/developer/making-development-less-difficult--interceding-with-the-browser-gods.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00425-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947661
1,938
2.671875
3
World War 1 and 2 - Causes and Consequences A war is called a world war when it affects the majority of the world's most powerful and populous nations. History is proof that no war has ever proved to be good for humanity. All wars come to an end some day after causing destruction on a huge scale. World wars span multiple countries on multiple continents. Both world wars were a curse on the face of humanity. You might be wondering how many people died in World War 1 and 2. It is difficult to know the exact figures, but tens of millions of people, soldiers and civilians alike, died in these two world wars. Read on to find out the causes and consequences of World War 1 and 2. World War 1 (August 4, 1914 to November 11, 1918) Causes of World War 1: German rivalry with the other European powers proved to be the main cause of World War 1. Main Contestants of World War 1 - Central Powers comprising Germany, Austria-Hungary, Turkey and Bulgaria on the one hand, and - Allied Powers comprising England, France, Belgium, Serbia and Russia, which were joined by Italy and the United States in 1915 and 1917, respectively. How the First World War Broke Out When Austria attacked Serbia, one month after Archduke Franz Ferdinand's assassination, it drew Russia towards Serbia. Germany entered the fray to support Austria because it had vested interests in Turkey and was committed to support Austria. One by one, France, England and the other countries entered the war. Results & Consequences of World War 1 - The Central Powers were defeated. - About 50 lakh (5 million) Allied soldiers were killed and 1 crore and 10 lakh (11 million) wounded. - Bulgaria, Turkey and Austria surrendered. - Germany signed the Armistice Treaty on November 11, 1918 and World War I ended. - In 1919 the Treaty of Versailles was signed, which curbed the powers of the German empire, further humiliating and weakening it. World War 2 (September 3, 1939 to August 14, 1945) Causes of World War 2: An unjust Treaty of Versailles, the improper behaviour of France, the rise of expansionist policies, and the imperialism of England and France were some of the causes behind World War 2. Main Contestants of World War 2 - Axis Powers, which included Germany, Italy and Japan. - Allied Powers - Britain, France, Russia, the US, Poland and the Benelux countries. Results of World War 2 Hitler, who was responsible for this war, was initially very successful but later met with strong resistance when he attacked Russia in 1941, and was forced to retreat to Berlin. On learning that Germany had collapsed, he committed suicide on April 30, 1945 in Berlin. - Germany was divided into two parts - East Germany under Russia and West Germany under the control of England, France and America (the Allies). - Russia emerged as the single biggest power in the world. - It was at this time that the struggle for freedom in colonies under European control, such as India, Myanmar (Burma), Sri Lanka (Ceylon), Malaya (Malaysia) and Egypt, caught on. - The British Empire thus rapidly lost its leadership as more and more colonies won independence. - The UNO was then established in 1945. When Japan did not agree to the demands of the Allied powers to surrender, the first atom bomb was dropped on Hiroshima on August 6, 1945 and the second on Nagasaki on August 9, 1945. Japan then surrendered unconditionally on August 14, 1945 and World War 2 ended.
<urn:uuid:2becc12b-8f9d-40f1-b2bd-0584a1003bba>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-681.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00269-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960849
822
3.515625
4
Why do we need to declare a group item and an elementary item? You mean this? 02 ws-num3 pic s9(6)v99. You don't have to. You can always do the following: 01 ws-num3 pic s9(6)v99. or 77 ws-num3 pic s9(6)v99. As I said previously, group items are perceived as alphanumeric. While generating assembly code for a numeric move between variables of different types, the COBOL compiler generates code to 'align' the two variables (unpacking/packing) so as to have correct computations between numbers with different declarations. move ws-num1 to ws-num2 is considered an alphanumeric move, so it is giving you unexpected results. This "ws-num2 is having abnormal value" is incorrect. "ws-num2" has the correct value. What you tried to do was a character display of a packed-decimal field. What you "got" is the alphanumeric characters represented by the packed value. Rather than spending time trying to find ways to do this, it is far better to use numeric fields for numeric data.
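For readers without a mainframe at hand, the rough Python sketch below (illustrative only, not COBOL, and with a simplified sign convention) shows why a character display of a packed-decimal (COMP-3) field looks like junk: the value is stored two digits per byte, so the raw bytes do not line up with printable digit characters.

```python
# Pack 1234.56 as it might sit in a PIC S9(6)V99 COMP-3 field:
# digits two per byte, padded to an odd digit count, sign in the last nibble.
digits = "00123456"            # 001234.56 with an implied decimal point
nibbles = "0" + digits + "C"   # leading pad digit, trailing sign nibble (0xC = positive)
packed = bytes(int(nibbles[i:i + 2], 16) for i in range(0, len(nibbles), 2))

print(packed.hex())            # 000123456c  -- 5 bytes instead of 8 display characters
# Dumping those same bytes as characters produces "garbage"; on a real
# mainframe the exact junk depends on the EBCDIC code page, not Latin-1.
print(packed.decode("latin-1"))
```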
<urn:uuid:0aa2612c-3ace-4028-b5b9-8880c2d25ffa>
CC-MAIN-2017-04
http://ibmmainframes.com/about19621.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00479-ip-10-171-10-70.ec2.internal.warc.gz
en
0.873687
277
2.53125
3
Military Drones Present And Future: Visual Tour The Pentagon's growing fleet of unmanned aerial vehicles ranges from hand-launched machines to the Air Force's experimental X-37B space plane. Boeing's liquid-hydrogen powered Phantom Eye completed its first test flight in June 2012 at Edwards Air Force Base in California. With its 150-foot wingspan, the UAV climbed to just over 4,000 feet at a speed of 62 knots. Phantom Eye's environmentally friendly propulsion system (its "exhaust" is water) will let it stay aloft 10 miles high for up to four days. But watch out below: Upon landing, the vehicle's landing gear dug into the lake bed and was damaged. Image credit: Boeing
<urn:uuid:a318eebe-60fa-4382-bcbc-3504c152841a>
CC-MAIN-2017-04
http://www.darkreading.com/risk-management/military-drones-present-and-future-visual-tour/d/d-id/1107839?page_number=9
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00479-ip-10-171-10-70.ec2.internal.warc.gz
en
0.840318
259
2.8125
3
Peering for the Win Finding the Business Benefits of Intelligent Interconnects It's common knowledge that "the Internet" is actually a set of networks belonging to a diverse range of independent organizations such as content providers, ISPs, corporations, and universities. These networks create the Internet by interconnecting. Without these interconnections there would be no path for data originating in one network to travel to a destination in another network. But the fact that traffic can get from any Internet location to any other doesn't mean that all networks are directly connected. Instead, each network operator chooses which other networks to connect with. With that in mind, it's worth thinking a bit about the business and technical considerations involved when networks interconnect. Transit and Peering The first thing to know about a relationship between two networks is the form of interconnection. Two main types are common: - Transit: The networks interconnect so that one (usually an ISP, telco, or carrier) can provide reachability to the entire Internet for the other, which is typically an "endpoint" entity (e.g. enterprise, content or application provider, residential broadband provider, etc.). There is almost always an accompanying commercial relationship, meaning that the endpoint entity pays the ISP to carry traffic to and from the rest of the Internet. - Peering: The networks interconnect to exchange only traffic that originates or terminates within their own networks (or perhaps the networks of their direct customers). Peering is usually between — not surprisingly — peers, meaning entities that are comparable. A wholesale carrier whose primary business is selling transit is not going to agree to peer with an endpoint content provider who would typically be a customer. Compared to transit connections, peering can be advantageous to networks on both business and technical levels. Let's suppose, for example, that you've been going through network B to get traffic to and from network C, and you then discover that it would be possible to connect with network C directly. Why would you want to do so? The benefits of peering typically boil down to three primary areas: - Reduced Cost: Peering shifts traffic between the two parties onto a direct link between their two networks. Both parties benefit because now neither of them have to pay a "middleman" ISP to carry that traffic. So peering with network C would reduce your costs by eliminating the transit fees that you were paying network B to exchange traffic with network C. - Improved Performance: Bypassing intervening networks (like network B) reduces the number of hops between the two networks. That means less latency and fewer potential points of failure. - Resiliency: Peering links also act as a redundant path between the two networks. If the peering link fails, traffic can still flow via transit, and vice versa. A residential broadband provider might peer with large content networks (Google, Facebook, Netflix, etc.). Their users could still reach those top destinations via peering links if the transit links were congested — by a large DDoS attack, for example (assuming that the attack traffic doesn't originate from within the content providers' networks). Applying Network Analytics So now we understand why you might want to peer. But it doesn't make sense to peer with just anyone; you have to find a network with whom peering would be mutually beneficial. How do you do that? It turns out that when network flow records (e.g. 
NetFlow, IPFIX, sFlow, etc.) are correlated with BGP routing data in a datastore that's optimized for traffic analytics it's relatively easy to discover the best peering opportunities for your network. Presented within a well-designed query and visualization interface, BGP analytics will help you see your prime peering candidates, which are the remote ASNs that terminate or originate the majority of the traffic flowing into and out of your network. An added benefit of applying correlated Flow-BGP analytics is that you can find additional insights that don't fall squarely into the category of peering: - Transit Planning: Analytics might reveal that much of your traffic through an existing transit provider is actually being handed off to another transit provider before reaching its final destination. If the second provider sells transit for less, making a direct transit interconnection could cut your costs. It would also ensure that you avoid the relatively common problem of congestion-related disputes between "top tier" and "low cost" providers over who should pay for additional capacity at interconnection points. - Uncovering Sales Opportunities: If you're a transit provider, correlated Flow-BGP analytics can uncover leads for your sales team. Looking at top destination (or pass-through) ASNs who are not currently direct connections can reveal entities that receive a significant volume of traffic from your network, and who could benefit (in terms of cost or performance) by buying some transit from you. - Customer Cost Analytics: Transit providers can get a leg up on the competition by better understanding the routes taken by their customers' traffic. There's a lot more room to negotiate with a potential customer whose traffic gets delivered mostly via no-cost domestic peering links than with a customer who has a lot of traffic being delivered via high-cost international transit. Big Data, Big Benefits The common thread of the examples above is that better understanding — based on flow and BGP analytics — leads to better business and technical outcomes. And the key to better understanding is to recognize that flow data plus BGP data makes Big Data. It's not uncommon for a multi-homed network to generate billions of flow records per day. Until recently, however, traffic analysis solutions were severely limited in compute and storage capacity. That meant that they could provide summary reports, but not the kind of deep, path-aware analyses that offer the insights outlined above. Only a big data solution can handle the required data at the required scale. Kentik has introduced the industry's first purpose-built big data engine, built around a distributed post-Hadoop core, for network traffic and BGP analytics. Offered as a cost-effective SaaS, Kentik Detect includes key features such as real-time ad-hoc querying, alerting and DDoS detection, and intuitive, multi-dimensional flow visualizations. To learn more about how BGP and flow analysis has developed over time, check out our two-part post on The Evolution of BGP NetFlow Analysis. If you're ready to start taking advantage right now of the insights offered by big data-based network intelligence, contact us or sign up for a free trial.
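As a rough sketch of the kind of analysis described above, suppose each flow record has already been correlated with BGP data so that it carries the remote ASN and a byte count (the field names and values below are hypothetical). Ranking traffic volume by ASN surfaces the prime peering candidates:

```python
from collections import Counter

# Hypothetical flow records, already enriched with the destination ASN
# learned from BGP; real deployments would stream billions of these per day.
flows = [
    {"dst_asn": 15169, "bytes": 9_200_000},
    {"dst_asn": 32934, "bytes": 4_100_000},
    {"dst_asn": 15169, "bytes": 7_500_000},
]

# Aggregate traffic per remote ASN; the heaviest ASNs that are not already
# direct peers or customers are the best candidates for a peering link.
volume_by_asn = Counter()
for flow in flows:
    volume_by_asn[flow["dst_asn"]] += flow["bytes"]

for asn, total_bytes in volume_by_asn.most_common(10):
    print(f"AS{asn}: {total_bytes / 1e6:.1f} MB")
```

The same aggregation run on source ASNs, or on the pass-through ASNs in the BGP path, supports the transit-planning and sales-lead use cases mentioned above.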
<urn:uuid:584fec04-f523-406a-b885-7c7e1cefe745>
CC-MAIN-2017-04
https://www.kentik.com/peering-for-the-win/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00295-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951936
1,348
2.640625
3
Cuijpers P.,VU University Amsterdam | Smit F.,Trimbos institute | Patel V.,London School of Hygiene and Tropical Medicine | Patel V.,Goa Medical College Sangath | And 3 more authors. PsyCh Journal | Year: 2015 Prevention of depressive disorders is one of the most important challenges for health care in coming decades. Depressive disorders in all age groups have a high disease burden and are associated with huge economic costs, and current treatments are only capable of taking away one-third of the (nonfatal) disease burden of depression under optimal conditions. Prevention may be one alternative strategy that may help in further reducing the disease burden of depression. Because of the worldwide increase in the number of older adults, the number of depressed older adults will also increase considerably in the next few decades, making prevention of depression an important priority for research. Identifying the high-risk target groups for preventive interventions is complicated because most risk indicators have a low specificity, indicating that most people from these groups will not develop the disorder despite increased risk levels. We describe one promising method to identify the best target groups, based on the principle that the high-risk group should be as small as possible, should be responsible for as many new cases of depression as possible, and that intervention be as effective as possible. The number of trials examining the possibility to prevent the onset of depressive disorders in those who do not (yet) meet diagnostic criteria for depression is increasing rapidly. A recent meta-analysis identified more than 30 randomized trials and these studies showed that the incidence of depressive disorders was 21% lower in the prevention groups compared with the control groups who did not receive the preventive intervention. Most of these trials are aimed at adolescents and younger adults. Only six trials were specifically aimed at older adults. The development of evidence-based preventive interventions for major depression and other mental disorders should be an important scientific and public health objective for the 21st century. © 2015 Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd. Source
<urn:uuid:a9b255ec-8103-482f-868a-ddcbbc8b85e6>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/goa-medical-college-255502/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00021-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937754
417
2.6875
3
More than 400 new U.S. cases of West Nile virus [infection] emerged in the last week in an outbreak that remains the 2nd worst on record but has begun to show signs of slowing. So far this year , 3545 cases have been reported to federal health officials as of 25 September 2012, up from 3142 reported the week before, the CDC said in its weekly update of outbreak data. About 38 per cent of all cases have been reported in Texas. Other states with large numbers of cases include Mississippi, Michigan, South Dakota, Louisiana, Oklahoma, and California. A total of 147 people have died from the disease, compared with 134 reported one week ago. Just over half of the cases reported to the CDC this year have been of the severe neuroinvasive form of the disease, which can lead to meningitis and encephalitis. The milder form of the disease causes flu-like symptoms and is rarely lethal. Experts believe the disease originated in Africa and was 1st detected in New York City in 1999. Outbreaks tend to be unpredictable. Hot temperatures, rainfall amounts and ecological factors such as bird and mosquito populations have to align just right to trigger an outbreak such as the one this year . The CDC said the number of cases this year is the highest reported to federal health officials through the last week in September since 2003, the year with the most cases. People residing in the areas where cases have occurred are well advised to take measures to avoid mosquito bites, and owners of equine animals should have their animals vaccinated. One hopes that the incidence of WNV is indeed beginning to decline.
<urn:uuid:e3a73d7a-162a-48ae-9a7f-ef1b26bd63db>
CC-MAIN-2017-04
https://ems-solutionsinc.com/blog/for-those-of-you-keeping-scorethis-year-still-is-the-second-worse-on-record-for-west-nile-be-diligent-it-is-still-out-there/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968587
332
3.40625
3
(03-Aug-11) Earlier this year, there were a smattering of media reports about the Internet running out of addresses. People who don’t spend their days immersed in the Web world could be forgiven for wondering if this was another looming technology “crisis” like the Y2K bug. The answer is no, but there is a transition coming – one that might be better compared to the switch from analog to digital television, or the implementation of 10-digit telephone dialling. So don’t panic, but take the time to understand how you will adapt to this change. Basically, it involves a new protocol called IP Version 6, or IPv6, that allows for more addresses. Internet Protocol, or IP, is a way to address all the devices connected to the Internet. The current version of IP, Version 4, was designed in the 1970s to be able to handle 4.3 billion distinct addresses. Vinton Cerf, known as the father of the Internet, has said publicly he considered this “enough for an experiment. The problem is, the experiment never ended.” The Internet has grown beyond its creators’ expectations. In February, the last blocks of IPv4 addresses were handed out to Internet service providers. That doesn’t mean they’re all gone – virtually all service providers still have individual addresses that are unassigned – but supplies are dwindling.
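To put the numbers in perspective, a couple of lines of Python (purely illustrative) show how much larger the IPv6 address space is than the roughly 4.3 billion addresses IPv4 allows:

```python
ipv4_addresses = 2 ** 32    # 32-bit IPv4 addresses
ipv6_addresses = 2 ** 128   # 128-bit IPv6 addresses

print(f"{ipv4_addresses:,}")    # 4,294,967,296 -- about 4.3 billion
print(f"{ipv6_addresses:.2e}")  # 3.40e+38 -- vastly more than one per device
```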
<urn:uuid:293cb8e9-e7b9-4b49-aa14-04507c4145c3>
CC-MAIN-2017-04
https://www.infotech.com/research/it-the-globe-and-mail-when-the-internet-runs-out-of-addresses
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951784
294
2.953125
3
By Bill Scott, PE, PMP, PgMP I. Project Baselines Stakeholders measure projects by how well they are executed within the project constraints or baselines. A baseline is an approved plan for a portion of a project (+/- changes). It is used to compare actual performance to planned performance and to determine if project performance is within acceptable guidelines. Every project has at least four project baselines. There may be others, depending on the project and definitions used. Schedule and Budget are the focus of this paper and the terms activity and work elements are synonymous. Schedule and cost (budget) are two of the major legs of the project constraint polygon. Without the schedule and budget baselines plans, one does not know where the project stands relative to planned schedule progress or planned budget performance. The schedule and budget baselines, along with other baselines, are developed in the planning phase of the project. The project plan is approved prior to execution by the project sponsor or an appropriate senior level manager. The project plan includes the budget and schedule. Schedule determines when work elements (activities) are to be completed, milestones achieved, and when the project should be completed. The budget determines how much each work element should cost, the cost of each level of the work breakdown schedule (WBS), and how much the total project should cost. Actual performance can be compared to these plans to determine how well the project is progressing or finished. Schedules and budgets are interlocked, and most likely an increase in one causes an increase in the other. II. Project Budget The project budget is a financial plan for all project expenditures (cost). Success in project budget management depends on, amongst other things, the creation of a comprehensive, consistent, and reliable project budget. Some people want to use the term "accurate" in the above definition. But, the word "accurate" has no place in the project world. Reliable and consistent are the terms that should be used. By definition, the project budget cannot be accurate as it is an estimate. Normal ranges of project budget variability depends on the project, the organization, type of business (and many other factors) but usually falls within +/- 10%. A. How to Develop a Project Budget In the Project Management Body of Knowledge Guide® world, there are two processes to developing a project budget. The first process is Estimate Cost, which is often confused with the Determine Budget process. Both of these processes are normally preceded by a project management team planning process, which is executed as part of the Develop Project Management Plan. This planning process is known as the Project Cost Management or the Cost Management Plan. The Cost Management Plan outlines the processes involved in determining organizational cost categories, estimating, budgeting, and controlling cost, so that the project can be executed within the approved budget. The Estimate Cost process is not only confused with Determine Budget but is also widely misunderstood. Many think that this process estimates the total cost of the project. But this is not correct, at least not directly. The Estimate Cost process estimates the cost for each of the work elements and records the basis of that cost. That is as far as Estimate Cost goes! 
The second of the three processes in Project Cost Management is the Determine Budget process, which rolls work element cost upward, applies cost aggregation, applies project contingency, makes a cash flow estimate, and now you have a budget for the various levels of the WBS and the total project. B. Why a Project Budget is Important Based on the work above, we now have a budget for: - Individual Activities - Work Packages - The Total Project This level of detail allows a project manager (PM) to evaluate the budget performance of the project from the top down or from the bottom up. If a work package is running over or is in danger of overrunning the budget, the project manager can drill down until he/she finds the problem or potential problem. The drill down can be by the PM or in conjunction with the assigned team member. One other very powerful tool that helps in the analysis of project budget performance is the Earned Value Method (EVM). EVM can assist you in evaluating project budget performance (what are you accomplishing for the funds you are expending) and in calculating a Cost Performance Index (CPI), which is a representation of the effectiveness of your spending. EVM can calculate a Cost Variance (CV), which is the difference between the value of the work completed and the amount of funds expended to accomplish that work. This will tell you the magnitude of the over- or under-run or if you are on budget. EVM can be applied down to the work element level, if the appropriate level of detail exists. Variance analysis is another tool to help the PM understand why work elements (or above) are over- or underbudget. The Cost Management Plan probably sets thresholds for overruns (say 10%), a different threshold for underruns (say 15%), to trigger your attention. Understanding why work elements are overrunning will assist the PM to develop solutions (action plans) to bring the project back within acceptable ranges. Understanding why work elements are significantly under budget assists the PM in feeding this information forward to new project budget development. Regardless of experience, care, or execution effort, project budget variances will occur. This is just a fact of the project world. While they cannot all be eliminated, they can be reduced for future projects. Some (not many) projects will finish very close to the budget. More projects will finish within acceptable ranges (+/-10%). Others (we hope not many) will finish well outside the acceptable range (>10% over or under). Using the techniques outlined here will reduce the number of projects in this category and reduce the size of the overruns. C. Tips on How to Successfully Manage a Project Budget - Capture all of the scope (scope statement, WBS, and WBS dictionary). If you do not capture the total project scope correctly, there is little hope that the project can be executed for the budget or schedule. - Insist on input from all stakeholders. Penetrate through stated needs and include implied needs. - Determine the various cost categories used at the organization. - Develop Project Team and Project Management Team trust. - Develop a reliable, consistent, sufficiently detailed WBS and time decomposition structure. Estimate Cost and Determine Budget. - Stop scope and grade creep. Eliminate gold plating. None of these adds value to the project. Your team is your first line of defense. - Perform EVM, variance, and trend analysis. 
- Continuously communicate to stakeholders on project status, project direction, and what the project will look like at completion. - Use organizational process assets (OPA) to develop, analyze, and challenge. - Avoid the pitfalls in Section IV. - Take action when indicated! Sooner rather than later. III. Project Schedule The project schedule is a document that, if properly prepared, is usable for planning, execution, monitoring/controlling, and communicating the delivery of the scope to the stakeholders. The main purpose of a project schedule is to represent the plan to deliver the project scope over time. A project schedule, in its simplest form, could be a chart of work elements with associated schedule dates of when work elements and milestones (usually the completion of a deliverable) are planned to occur. In addition to guiding the work, the project schedule is used to communicate to all stakeholders when certain work elements and project events are expected to be accomplished. The project schedule is also the tool that links the project elements of work to the resources needed to accomplish that work. As a minimum, the project schedule includes the following components: - All activities - A planned start date for the project - Planned start dates for each activity - Planned finish dates for each activity - Planned finish date for the project - Resource assignments - Calendar based - Activity durations - The "flow" (sequence) of the various activities - The relationships of activities - An identified critical path(s) - Total and free float A. How to Develop a Project Schedule PMI® has a Develop Schedule process and the main output is the project schedule. This is the result of four previous processes plus the work of up to eight tools and techniques for the Develop Schedule process. The previous processes are: - Define Activities (work elements) - Sequence Activities - Estimate Activity Resources - Estimate Activity Duration The tools and techniques available to develop the schedule are: - Schedule network analysis - Critical Path Method - Critical Chain Method - Resource leveling - What-if scenarios - Leads and lags - Schedule compression - Scheduling tools B. Why a Project Schedule is Important Based on the work above, we now have a schedule for: - Individual Activities - Work Packages - The Total Project This level of detail allows a project manager to evaluate the schedule performance of the project from the top down or from the bottom up. If a deliverable is slipping or is in danger of slipping, the project manager can drill down until he/she finds the problem or potential problem. One other very powerful tool that will help in this analysis is the Earned Value Method (EVM). EVM can assist you in evaluating project schedule performance (what you have accomplished relative to the plan) and in calculating a Schedule Performance Index (SPI), which is a representation of the effectiveness of accomplishing your planned schedule. EVM can also calculate a Schedule Variance (SV), which is the difference between the value of the work completed and the value of the planned work. This will tell you how far behind or ahead of schedule you are; if it is zero, you are on schedule. EVM can be applied down to the work element level, if the appropriate level of detail exists. A small worked example of these calculations appears at the end of this paper. EVM does have several drawbacks, but there are solutions to the drawbacks: 1. EVM ignores the critical path. There are two things we can do to solve this problem. a. Perform a separate CP analysis. b. 
Strip out all non-CP work elements and perform a second EVM analysis. 2. As the project nears completion, EVM breaks down for schedule analysis. This is because as the project nears completion, EV approaches PV, and in fact reaches PV at project completion. SV and SPI lose their meaning. Variance analysis is another tool to help the project manager understand why work elements (or above) are behind or ahead of schedule. The Time Management Plan probably sets thresholds for behind schedule (say 5%), a different threshold for ahead of schedule (say 10%), to trigger your attention. Understanding why work elements are behind schedule will assist the project manager in developing solutions (action plans) to bring the project back within acceptable ranges. Understanding why work elements are significantly ahead of schedule will assist the project manager in feeding this information forward to new project schedule development. Regardless of care or execution, project schedule slippages will occur. This is just another fact of the project world. While they cannot all be eliminated, they can be reduced for future projects. Some (not many) projects will finish very close to the schedule date. More projects will finish within acceptable ranges (+/-5%). Others (we hope not many) will finish well outside the acceptable range (>>10% behind or ahead). Using the techniques outlined here will reduce the number of projects in this category and reduce the size of the schedule variances. C. Tips on How to Successfully Manage a Project Schedule - Use all of the tips from "Successfully Managing a Project Budget". - Avoid the pitfalls in Section IV. - Get reports, even if you have to have them customized, from your scheduling software that tell YOU what is going on with the project and schedule accomplishment. - When work elements slip, analyze the cause and impact. Take action as necessary. These things will not fix themselves. - When resources do not materialize as planned and agreed, estimate the impact, and communicate this to management! - When things go wrong, analyze why, estimate the impact, communicate with stakeholders and take action to bring the schedule accomplishment back within acceptable ranges. IV. Why Projects are Late and Over Budget A. Omissions - Leaving out work that must be done. Examples could include documentation, interfaces with the PMO, or interfaces with other projects. B. Merging - The point at which several project schedule paths meet. Successor work cannot begin until predecessor work is complete. The time to get through this point tends to increase with the number of activities being merged. C. Errors - Mistakes or work not done at all. Errors, even though expected, are rarely provided for in the planning phase. D. Rework - Work not completed in accordance with company standards that must be redone. There is a natural human reluctance to report bad news. E. Failure to understand the complexity of the project - Our inability (especially true of technical people) to understand the complexity of the work planned. This causes us to underestimate both budgets and durations. F. Queuing - Improper allocation of resources to critical path activities. G. Multitasking - Most companies nowadays demand that their people multitask. This causes project activities to wait on resources and suffer efficiency losses from task switching, as well as delays in the network as each activity is extended. H. Student Syndrome - Team members waiting to start work till there is schedule pressure. 
This will ultimately affect the critical path and the project completion date. I. Policy - Company policy about merits, bonuses, performance reviews, or other rewards can drive people to do the wrong thing. Rewards or punishments give out for being early, on time, late, over budget, on budget, or under budget will drive people to protect themselves and do things not to generate rewards but to insure no pain. J. Level of Effort - Work that is not related to activities, but extends for the project duration will increase with schedule extensions. About the Author As an instructor and consultant for Global Knowledge, Mr. Scott shares his experience and expertise as a Professional Engineer (PE), a certified Program Management Professional (PgMP) and Project Management Professional (PMP) who specialized in large, complex, long-term, and constructed environmental project work for electric utilities and heavy industry. Mr. Scott has over 30 years experience in engineering, managing projects, training and as a professional skills consultant. In addition to program and project management experience, Mr. Scott was a certified Arbitrator with the American Arbitration Association for 10 years specializing in construction and commercial disputes. He has authored numerous papers on project management, and developed the curriculum for many project management courses. Mr. Scott has BS in Electrical Engineering from the University of Alabama, a BS in Mechanical Engineering from the University of Alabama, a Certificate in International Operations from the Stockholm School of Economics, and has attended numerous short courses on Engineering, Business, Project and Program Management. Mr. Scott is a member of National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronic Engineers (IEEE), and the Project Management Institute (PMI).
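As a quick illustration of the EVM arithmetic described in Sections II.B and III.B, the short Python sketch below uses made-up values for Planned Value (PV), Earned Value (EV), and Actual Cost (AC):

```python
pv = 100_000   # Planned Value: budgeted cost of the work scheduled to date
ev = 90_000    # Earned Value: budgeted cost of the work actually performed
ac = 110_000   # Actual Cost: what has been spent to date

cv = ev - ac       # Cost Variance: negative means over budget
cpi = ev / ac      # Cost Performance Index: below 1.0 means poor cost efficiency
sv = ev - pv       # Schedule Variance: negative means behind schedule
spi = ev / pv      # Schedule Performance Index: below 1.0 means behind schedule

print(f"CV={cv:+,}  CPI={cpi:.2f}  SV={sv:+,}  SPI={spi:.2f}")
# CV=-20,000  CPI=0.82  SV=-10,000  SPI=0.90
```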
<urn:uuid:0e21ff1f-0aa3-48e5-89f7-ee0755caa9fa>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/content/articles/importance-of-schedule-and-cost-control/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00040-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913606
3,141
3.359375
3
In the past several decades, technologies have evolved almost immeasurably, certainly including the development of data storage. Humankind has always tried to find ways to store information. People have become accustomed to technological terminology, such as CD-ROM, USB Key, and DVD. But today, the most advanced storage solution may be cloud computing. As for how to reach the "cloud", some people say that optical fiber is the key to cloud computing. So, what is cloud computing and why do we need optical fiber to get there? Today, we are going to the "cloud" to find out the answer. Though the term "cloud computing" is everywhere and closely linked with our lives, we do not really know what it is, just as with many other technical terms. However, unlike other terms, we are more interested in cloud computing because of its attractive features, applications or maybe the interesting name. Why is it called the "cloud" but not "rain" or "snow"? The simplest explanation is that we usually use a "cloud" to represent the network. The term "cloud" describes an image of the complex infrastructure, which covers all the technical details. Obviously, cloud computing has nothing to do with clouds in the weather sense; the word is just an analogy that gives the idea a shape we can picture. In fact, cloud computing is a model for transforming computing. In this model, data and computation are handled somewhere in a "cloud", which is some collection of data centers owned and maintained by a third party. This enables ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. There are public clouds, private clouds and hybrid clouds. When a cloud is made available in a pay-as-you-go manner to the general public, we call it a public cloud. And when the cloud infrastructure is operated solely for a business or an organization, it is called a private cloud. A composition of public and private cloud is called a hybrid cloud. A hybrid cloud integrates the advantages of public cloud and private cloud, where the private cloud is able to maintain high service availability by scaling up its system with externally provisioned resources from a public cloud when there are rapid workload fluctuations or hardware failures. Generally, cloud computing may be considered to include the following layers of service: IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service). The implementation of cloud computing depends on high bandwidth. Without enough bandwidth, cloud computing is impossible. In the "cloud", users' terminals are simplified into pure, single devices with only input and output functions, while utilizing the powerful computing and processing functions of the "cloud". This means that the terminal must have a very fast connection, because a simple terminal places higher requirements on the network and the platform behind it, that is, on the "pipes". Thus, fiber is the ideal "pipe" for cloud computing. In fact, increasingly more computer applications, software and even file storage now reside on the Internet or in the "cloud". Yet another driving force is mobile Internet traffic, which relies heavily on cloud computing. It is said that there is over 1 Exabyte (i.e. 1,073,741,824 Gigabytes) of data currently stored in the cloud. And this number is growing exponentially every day. 
The greatest thing that will limit your ability to work seamlessly in the "cloud" is your Internet connection. Thus, to access these tremendous amounts of data we need fiber networks that can carry Terabits—one trillion bits per second. Optical fiber can offer more available bandwidth and speed, which meets the demands of the "cloud". Obviously, no technology is more effective at meeting that challenge than fiber at present. When talking about optical fiber, FTTH (Fiber to the Home) may be the hot topic. FTTH infrastructure is expected to be a solution to the growing demands for high bandwidth. It brings fiber-optic connections directly into homes, allowing for delivery speeds up to a possible 100 Mbps, or even more. These speeds open the door to a variety of new services and applications for residential, business and public service markets. The relationship between FTTH and cloud computing is subtle. FTTH, with its benefits, will encourage growth in cloud computing. And the growth of cloud computing may drive the development of FTTH. Cloud computing is seen by many as the next generation of information technology. The abundant supply of information technology capabilities offers many benefits to our lives. However, like any new technology advancement, cloud computing also faces many challenges, e.g. cloud security. Though there are many unknown factors in the "cloud" waiting for us to explore, there is no doubt that we need optical fiber in order to better reach the "cloud". Now, with the benefits of optical fiber, cloud computing is developing rapidly. Will it automatically work out better and cheaper for you in the long term? What's your opinion?
<urn:uuid:408f950f-0585-40c5-8677-f53133cb1f0c>
CC-MAIN-2017-04
http://www.fs.com/blog/why-do-we-need-optical-fiber-to-get-to-the-cloud.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946039
1,085
3.515625
4
#include <screen/screen.h> int screen_flush_blits( screen_context_t ctx, int flags ) - ctx: A connection to screen (acquired with screen_create_context()). - flags: A flag that controls whether the call blocks. Specify SCREEN_WAIT_IDLE if the function should block until all the blits have been completed. Calling screen_flush_blits() flushes all delayed blits and fills since the last screen_post_window() or screen_flush_blits(). The blits will start executing shortly after you call the function. The blits may not be complete when the function returns, unless the SCREEN_WAIT_IDLE flag is set. This function has no effect on other non-blit delayed calls. The screen_post_window() function does an implicit flush of any pending blits, because the content presented by that call is most likely the result of pending blit operations completing. If the function succeeds, it returns 0 and the blit buffer is flushed. Otherwise, the function returns -1 and errno is set.
<urn:uuid:5e2d7012-feca-4312-848d-24884fcfa3b4>
CC-MAIN-2017-04
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.screen.lib_ref/topic/rscreen_flush_blits.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00004-ip-10-171-10-70.ec2.internal.warc.gz
en
0.685448
246
2.546875
3
SQL Injection is the manipulation of web based user input in order to gain direct access to a database or its functions. Read on through this SQL injection tutorial to understand how this popular attack vector is exploited. The majority of modern web applications and sites use some form of dynamic content. This content can be in the form of articles, blog posts, comments, guest books, shopping carts, product lists, photo galleries, personal details, usernames, passwords the list goes on. Whether the web server is Apache on Linux or IIS on Windows, if its running a server side scripting language such as PHP, ASP, JSP, CFM it is likely there is a database in the background storing all this dynamic content. SQL Injection involves bypassing the normal methods of accessing the database content and injecting SQL queries and statements directly to the database through the web application in order to steal, manipulate or delete the content. System access is even possible in many instances where the database is able to gain access to system resources, this can end up with entire system compromise and attackers in your network (not only stealing all your data). Have you looked closely at the full URL of the websites you visit? Notice the ?itemid=944 ... this is a parameter that is sent via the web application to the database in order to retrieve the content you are looking at. Through HTTP GET based SQL injection we can manipulate these parameters to send unintended statements into the Database. For example; Instead of retrieving article number 1, why don't you show me article number 1 AND all the users and passwords in your database.... The online sql injection test from HackerTarget.com will test each parameter on the url for possible SQL injection using the excellent tool SQLmap. The only data obtained with this test if a vulnerable parameter is found is the database version. Sqlmap can also be used to show the results of much more devastating requests such as retrieving all the data / specific tables of data from the database or even the insertion of code execution commands and shells. SQL Injection Vulnerabilities are also very prevalent in the form fields of web applications. Form based sql injection is conceptually the same, the only difference being the rogue SQL statements are inserted via a POST request on the form submit rather than the HTTP GET parameter. Username / Password forms are a well known point of attack. One type of attack allows the bypassing of the password part of the login. This tells the database to not worry about the rest of the SQL query (the password part) and just perform the function of "if username = googleadmin and a=a --" (then give the user access to the system). Oops! SQL Injection can also be used to attack other points of web applications, even cookie parameters - however HTTP GET and HTTP POST requests are the most common vectors. So how can it be fixed? It is simple in theory, not so easy in practice as can be seen by the on going attacks with SQL injection based compromises resulting in literally millions of database records lost. All user editable points of input into a web application must have the input's sanitised to prevent the execution of unauthorised SQL code. The OWASP site has some excellent information if you are looking for more detailed technical resources.
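To make the fix concrete, here is a minimal sketch in Python with SQLite (purely illustrative; the table, column, and account names are hypothetical). It contrasts a query built by string concatenation, which the classic quote-and-comment payload bypasses, with a parameterised query that treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('googleadmin', 'secret')")

supplied_user = "googleadmin' AND 'a'='a' --"   # attacker-controlled form input
supplied_pass = "wrong"

# Vulnerable: the input is concatenated straight into the SQL statement, so the
# quote and comment characters change the meaning of the query.
query = ("SELECT * FROM users WHERE username = '" + supplied_user +
         "' AND password = '" + supplied_pass + "'")
print(conn.execute(query).fetchall())   # returns the admin row: login bypassed

# Safer: placeholders keep the input as data, never as SQL syntax.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, (supplied_user, supplied_pass)).fetchall())  # []
```

The same idea applies to the HTTP GET parameter case above: whatever arrives in ?itemid= should reach the database only as a bound parameter, never by being pasted into the query text.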
<urn:uuid:ca7711d8-e68b-4965-8c72-4b3d82471395>
CC-MAIN-2017-04
https://hackertarget.com/sql-injection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00490-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884449
675
3.4375
3
We’ve seen or heard about drivers on cell phones causing accidents. But new research from the University of Utah also shows that such drivers are also responsible for slowing down traffic flows. Those talking on cell phones tend to drive more slowly on freeways, pass slowgoing vehicles less frequently and generally take longer to get from one point to another, the researchers found. This can cost society in terms of lost productivity, fuel costs and more, the researchers concluded. “At the end of the day, the average person’s commute is longer because of that person who is on the cell phone right in front of them,” says University of Utah psychology Professor Dave Strayer, leader of the research team, in a statement. “That SOB on the cell phone is slowing you down and making you late.” The research, based on a PatrolSim driving simulator, is being presented in Washington, D.C., on Jan. 16 during the Transportation Research Board’s annual meeting. Strayer’s research group has issued past studies comparing the impairment of cell phone wielding drivers to that of drunk drivers and showing that hands-free cell phones are no less dangerous than handheld ones since it is the conversation that is the distraction. Much research effort has gone into exploring the various safety and social ramifications of using cell phones in recent years. Johns Hopkins University researchers earlier this year found that people using cell phones or text messaging in mid-conversation or during an appointment or meeting cracked its Terrible 10 Rude Behaviors List. Cell phone users have even confessed to being a bunch of dangerous, rude liars, according to a Pew Research study . Carriers, meanwhile, have issued research countering other research about the safety of cell phone transmissions. A four-year long study of cellular telephone base stations out of Japan found their transmissions pose no risk to human health. And of course, this picture from Russia really emphasizes the dangers of mixing driving and cell phones. Several states, including California , have also banned driving while holding a cell phone. Speaking of cell phone dangers, my blogging colleague's latest brush with Mr. Phone-in-his-Ear .
<urn:uuid:b22a1c63-13db-4073-8db6-cbb63a356499>
CC-MAIN-2017-04
http://www.networkworld.com/article/2350222/data-center/now-you-can-blame-cell-phone-wielding-drivers-for-causing-traffic-jams--not-just-acciden.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952395
452
3.046875
3
Private Browsing, released by Apple in October 2011 as part of iOS 5, is a mode for the iPhone's Safari browser that disables many of the usual tracking and information-collection mechanisms familiar from other browsers. Private browsing is a privacy tool, available in several browsers, that stops the recording of browsing history and the web page cache. This lets a person browse the internet without leaving local information that could be recovered later. Private Browsing will also halt the collection of information in cookies (Flash or otherwise). This protection applies only to the local device, as websites can still recognize visitors on the server side, for example by their IP address.

Enabling Private Browsing stops sites from placing cookies on iOS devices. While cookies can be used by websites to follow visitors for advertising purposes, they also allow websites to remember user data that can be used to log a user back in automatically, or to fill in particular details automatically. For instance, if you go on Amazon.com and enter your own information, the site will remember you when you return to the website later. Without these cookies, you would have to enter your details and log in to Amazon every time. Users will also find that enabling Private Browsing stops Safari from keeping web page history, search history or automatically filled form data.

Private Browsing does not offer protection from viruses on the iOS device, from phishing or other attacks, or from financial or identity theft. While one is using Private Browsing, one will still be seen by the server or site, but this activity will not be recorded on the iOS device.

Some users might never need to use Private Browsing, and cookies can most of the time improve a user's experience on the Web as much as they help websites gauge user traffic. But for people who want zero history of their browsing information, Private Browsing can prove to be very useful. One can make use of Private Browsing when banking online from an iDevice, when shopping online from a family member's iDevice, or when viewing content that you do not want in the device's history, for example adult sites. Other common uses of Private Browsing include running searches that are not influenced by previous browser history or caches, which might otherwise weight particular results more heavily than others. Private Browsing is also commonly used to prevent login IDs from being unintentionally saved to user accounts, to log in to many accounts at the same time, and for testing sites.

The Mozilla Foundation performed an investigation into what users do when private browsing is turned on and how long such sessions last. The results stated that many sessions lasted only about 10 minutes, and that there was an increase in activity between 11 am and 2 pm. A slight peak was also recorded for about an hour or two after 12 am.
<urn:uuid:5a1de851-ac7a-4fb4-8290-23c650624f1e>
CC-MAIN-2017-04
http://www.hackersnewsbulletin.com/2015/03/iphone-private-browsing-private-browsing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00296-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947268
601
2.734375
3
A paper published in Nature this week shows that human creativity trumps computer software, at least in the protein folding arena. A protein folding game, called Foldit, that allows people to puzzle together protein structures, is proving to be quite successful. The game was conceived by scientists at the University of Washington after they got the idea that mere mortals could use their intuition to tweak protein structures in novel ways. Protein folding has been the domain of supercomputing for some time, given that it’s basically an exercise in molecular dynamics. But the FLOPS needed to simulate proteins of any size is considerable, so even petascale machines have to take some computational short-cuts to predict the molecular structures. Humans, though, can use creativity as a short-cut when problem solving. From the University of Washington announcement: It turns out that people can, indeed, compete with supercomputers in this arena. Analysis shows that players bested the computers on problems that required radical moves, risks and long-term vision – the kinds of qualities that computers do not possess. Ars Technica does a deeper dive into why the protein folding software often comes up short when it starts crunching on really big structures: It sounds simple, but with anything more than a short chain of amino acids, there are a tremendous number of potential configurations to be sampled in 3D space, which can bring powerful computers to their knees. The Rosetta algorithm handles the huge energy landscape it needs to scan by taking big leaps between different configurations, then attempting to minimize the energy by making smaller tweaks. This lets it sample large portions of the structural landscape, but sometimes leaves it stuck: the path between its current location and an energy minimum may take it through a high energy state, which would keep Rosetta from finding the solution. But it may be only a matter of time before the supers regain the upper hand. The University of Washington researchers are trying to analyze the approaches used by more successful Foldit players, with the idea of trying to replicate those strategies in software.
<urn:uuid:aa74ced4-e6af-4ae7-b19a-9a03be1b5df5>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/08/04/humans_out-compute_supers_at_protein_folding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00022-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953342
415
3.140625
3
Have you ever wondered about all the different steps you have to go through when creating an SSL certificate? I did too, so I researched it. This page will go over the basic steps for creating a certificate, as well as give an overview of what each step is actually doing.
- A CSR is a block of encoded text generated on the server that the certificate will be used on.
- It contains the org name, common name (domain name), locality, country, and the public key.
- A certificate authority will use the CSR to create the certificate.
- Most CSRs are base-64 encoded and are in the PEM format.
- You can create a CSR with the command openssl req -new -keyout server.key -out server.csr
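If you prefer to generate the key and CSR from code instead of the openssl command line, the sketch below is one way to do it with the third-party Python cryptography package (an assumption on my part: the page itself only mentions openssl). The subject fields and file names are placeholders.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the private key that will stay on the server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build the CSR: subject details plus the public key, signed with the private key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# Write both out; the .csr file is the base-64 (PEM) text sent to the CA.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```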
<urn:uuid:fc422e09-f797-4ab9-a0f2-4f57d7a33eca>
CC-MAIN-2017-04
https://danielmiessler.com/study/certificates/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00536-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934569
157
3.0625
3
Father's Day is a day of commemoration and celebration of Dad. It is a day to not only honor your father, but all men who have acted as a father figure in your life whether as Stepfathers, Uncles, Grandfathers or Big Brothers. Father Day History In the United States, the first modern Father's Day celebration was held on July 5, 1908, in Fairmont, West Virginia. It was first celebrated as a church service at Williams Memorial Methodist Episcopal Church South, now known as Central United Methodist Church. Grace Golden Clayton, who is believed to have suggested the service to the pastor, is believed to have been inspired to celebrate fathers after the deadly mine explosion in nearby Monongah the prior December. This explosion killed 361 men, many of them fathers and recent immigrants to the United States from Italy. Another possible inspiration for the service was Mother's Day, which had recently been celebrated for the first time in Grafton, West Virginia, a town about 15 miles away. Another driving force behind the establishment of the integration of Father's Day was Mrs. Sonora Smart Dodd, born in Creston, Washington. Her father, the Civil War veteran William Jackson Smart, as a single parent reared his six children in Spokane, Washington. She was inspired by Anna Jarvis's efforts to establish Mother's Day. Although she initially suggested June 5, the anniversary of her father's death, she did not provide the organizers with enough time to make arrangements, and the celebration was deferred to the third Sunday of June. The first June Father's Day was celebrated on June 19, 1910, in Spokane, WA. Unofficial support from such figures as William Jennings Bryan was immediate and widespread. President Woodrow Wilson was personally feted by his family in 1916. President Calvin Coolidge recommended it as a national holiday in 1924. In 1966, President Lyndon Johnson made Father's Day a holiday to be celebrated on the third Sunday of June. The holiday was not officially recognized until 1972, during the presidency of Richard Nixon. In recent years, retailers have adapted to the holiday by promoting male-oriented gifts such as electronics, tools and greeting cards. Schools and other children's programs commonly have activities to make Father's Day gifts. So, In short, Happy Fathers Day to all of you Dad's out there! Have a fun-filled day with your families and friends! Fathers Day is a primarily secular holiday inaugurated in the early 20th century to complement Mother's Day in celebrating fatherhood and parenting by males, and to honor and commemorate fathers and forefathers. Father's Day is celebrated on a variety of dates worldwide, and typically involves gift-giving to fathers and family-oriented activities. The officially recognized date of Father's Day varies from country to country.
<urn:uuid:e9dc3e0b-3203-485c-a962-043c7baeeda0>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/question.php?ID=340
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz
en
0.979305
566
3.53125
4
Bitcoin is a digital currency whose creation and transfer are based on an open-source cryptographic protocol. There are many benefits to using it (no transaction fees, anonymous payments, etc.), but there are also risks involved. The first Bitcoins were created in 2009, and their initial value was set by individuals. Since then, a lot of people, organizations and businesses have expressed interest in the currency and have begun "mining" and using it. The value / price of Bitcoins has risen greatly over the years, and in 2013 especially, attracting speculators and criminals. The question is: if you create / buy / use Bitcoins, what can go wrong?

The biggest danger comes from malware. Most people keep their Bitcoins in digital wallets, and malware such as Infostealer.Coinbit is able to search the infected computer for the Bitcoin wallet.dat file and send it to the criminal(s). This can be prevented by encrypting your wallet with a strong password so that criminals can't brute-force it open.

Malware that uses the victim's computer's CPU and other resources to mine new Bitcoins is a danger both to those who use the currency and to those who don't and have no idea what it is. The victims often do not get robbed of the Bitcoins they might own, but they get stuck with massive electricity bills and their computers work overtime, which increases the chances of them breaking down. Also, the speed with which the affected computers process the other tasks given to them by their legitimate users slows down, affecting the work for which they are paid or which they do in their spare time. Finally, Bitcoin mining is often only one of the things that a particular piece of malware is able to do (see the ZeroAccess Trojan), which creates additional risks for the users. And if you thought that Mac users are safe from such malware, the DevilRobber Trojan will prove you wrong. Also, if you believe that being careful about what you download online will keep your computer safe from software that will harness its resources to create Bitcoins, you have only to read about the latest discovery of Bitcoin-mining code in a popular gaming client, courtesy of a greedy insider at the E-Sports Entertainment Association.

Online Bitcoin exchanges have recently been plagued with strong DDoS attacks and breaches. Mt.Gox, the world's largest one, was downed a little over a week ago by a strong DDoS attack. Even though it was quickly brought online again, the disruptions affected its overall functioning for a while and transactions were suspended. All this influenced the price of Bitcoins, and it is believed that the attackers might have profited (and users lost) from the unexpected up and down swings. Bitcoin exchange service BitInstant suffered a breach in March that resulted in the loss of nearly $12,500 in Bitcoins. I'm sure that its users were not affected by the loss, but they might have been if the attackers had managed to steal bigger amounts.

Let me end all this by pointing you towards an overview of the functioning of a very successful Bitcoin-mining botnet that went undiscovered for more than six months, and whose use of Tor for internal communication and of Hidden Services for protecting the backend infrastructure has made it practically impervious to takedowns. I'm sure it's not the last one.
<urn:uuid:d2b2a0ca-6392-433b-9c46-30c7d12e9939>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/05/02/a-primer-on-bitcoin-risks-and-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963102
693
2.84375
3
Design with Multiplexers

Consider the following design, taken from the 5th edition of my textbook. It is a correct implementation of the Carry–Out of a Full Adder. In terms of Boolean expressions, this is F(X, Y, Z) = S(3, 5, 6, 7). We try this with a common circuit emulator, such as Multi–Media Logic, and find that there is more we need to think about.

An Eight–to–One MUX in Multi–Media Logic

Here is the circuit element selected in the Multi–Media Logic tool. It is an 8–to–1 MUX with inputs labeled 7 through 0, or equivalently X7 through X0. This is expected. The selector (control) lines are also as expected: 2 through 0. In my notes, I use M for the output of the multiplexer. This figure uses the symbol Y (not a problem) and notes that real multiplexers also output the complement. The only issue here is the enable. Note that the MUX is enabled low; this signal must be set to ground in order for the multiplexer to function as advertised.

Carry–Out of a Full Adder

Here is a screen shot of my implementation of F(X, Y, Z) = S(3, 5, 6, 7). NOTE: Show simulation here.

Gray Codes: Minimal Effort Testing

The above circuit has three basic inputs: S2, S1, S0. How can one test all possible inputs with minimum switching? One good answer is to use Gray codes for the input. Here are the 2–bit and 3–bit codes. To generate an (N + 1)–bit code set from an N–bit code set:
1. Write out the N–bit codes with 0 as a prefix, then
2. Write out the N–bit codes in reverse order with 1 as a prefix.
Thus 00, 01, 11, 10 becomes 000, 001, 011, 010, 110, 111, 101, and 100.

Testing the Carry–Out Circuit

If the Enable switch is set to 1, the output is always 0 (Y' = 1). Set the Enable switch to 0 and generate the following sequence. Start with S2 = 0, S1 = 0, S0 = 0.
0 0 0
Click S0 to get 0 0 1
Click S1 to get 0 1 1
Click S0 to get 0 1 0
Click S2 to get 1 1 0
Click S0 to get 1 1 1
Click S1 to get 1 0 1
Click S0 to get 1 0 0

Design with Decoders

We now look at another circuit from my textbook. This shows the implementation of a Full Adder with an active-high decoder and two OR gates. The outputs are F1 (the sum) and F2 (the Carry–Out):
F1(A, B, C) = S(1, 2, 4, 7) = P(0, 3, 5, 6)
F2(A, B, C) = S(3, 5, 6, 7) = P(0, 1, 2, 4)
PROBLEM: Almost all commercial decoders are active low.

Active Low Decoders

Let's use 3–to–8 decoders to describe the difference between active high and active low. In the active–high decoder, the active output is set to +5 volts (logic 1), while the other outputs are set to 0 volts (logic 0). In the active–low decoder, the active output is set to 0 volts (logic 0), while the other outputs are set to +5 volts (logic 1).

Enabled Low, Active Low Decoders

All commercial decoders have an enable input; most are enabled low. When a decoder is enabled low and the input signal E' = 1, none of the decoder outputs are active. Since the decoder is active low, this means that all of the outputs are set to logic 1 (+5 volts). When the input signal E' = 0, the decoder is enabled and the selected output is active. Since the decoder is active low, this means that the selected output is set to logic 0, and all other outputs are set to logic 1.

Why Active Low / Enabled Low?

This is a conjecture, but it makes sense to me. The active–high decoder is providing power to the device it enables. The active–low decoder is just providing a path to ground for the device it enables. It is likely that this approach yields a faster circuit.

Back To Active High: A Look At F2

Seeking a gate that outputs 1 if at least one of its inputs is 1, we are led to the OR gate.
Active Low: F2(X, Y, Z) = P(0, 1, 2, 4) is 1 if and only if none of the outputs Y0, Y1, Y2, or Y4 are selected. Each of those outputs must then be a logic 1. This leads to an AND gate implementation.

Full Adder Implemented with a 3–to–8 Decoder

The sum is at the top: F(X, Y, Z) = P(0, 3, 5, 6). The carry–out is at the bottom: F(X, Y, Z) = P(0, 1, 2, 4).

Where are the Decoders?

One will note that the Multi–Media Logic tool does not provide a decoder circuit. Fortunately, a 1–to–2^N demultiplexer can be made into an N–to–2^N decoder. Look at the circuit to the left. The control signals C1, C0 select the output to receive the input. This is exactly equivalent to a decoder. In the circuit at right, the selected output gets the input, now called "Enable". For the demultiplexers we use, the other outputs get a logic 1. We can therefore fabricate an active-low decoder.

The MUX as an Active–Low Decoder

Here is the 2–to–4 demultiplexer used as a 2–to–4 active-low decoder. Here also is an answer to one of the homework problems: use a 2–to–4 decoder for XOR. The function is either S(1, 2) or P(0, 3).
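The following short Python sketch is not part of the original notes; it simply checks that the 8–to–1 multiplexer wiring described above really produces the carry–out F(X, Y, Z) = S(3, 5, 6, 7), stepping through the inputs in the same Gray-code order used for manual testing.

```python
# Data inputs are hard-wired to 1 only for minterms 3, 5, 6 and 7.
DATA_INPUTS = [1 if m in (3, 5, 6, 7) else 0 for m in range(8)]

def mux8(select, data, enable_low=0):
    """8-to-1 MUX with an active-low enable, as in the Multi-Media Logic part."""
    if enable_low == 1:          # disabled: output forced to 0
        return 0
    return data[select]

def gray_codes(n):
    """Reflect-and-prefix construction of the n-bit Gray code sequence."""
    codes = ["0", "1"]
    for _ in range(n - 1):
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

# Walk the inputs in Gray-code order (one switch changes per step) and compare
# against the carry-out of a full adder computed directly.
for code in gray_codes(3):
    x, y, z = (int(b) for b in code)
    expected = 1 if (x + y + z) >= 2 else 0      # carry-out of X + Y + Z
    assert mux8((x << 2) | (y << 1) | z, DATA_INPUTS) == expected
print("MUX implementation matches the carry-out for all eight inputs.")
```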
<urn:uuid:c8ae6c9d-9ac1-4820-b2ca-002829d235b5>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155_Slides/Chapter05/DesignWithRealDevices.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00225-ip-10-171-10-70.ec2.internal.warc.gz
en
0.858534
1,406
3.265625
3
System migration is a method of installing a system at a different version that is different from its current version. The IBM AIX installation provides various methods to install a system at different versions. You can choose the following methods: - A new and complete overwrite installation overwrites all data on the selected hard disk. - A preservative installation preserves old user data, which is in the root volume group. - A migration installation upgrades AIX to a different version or release while preserving the root volume group. This article focuses on the AIX system migration installation method. It provides step-by-step instructions for migrating a system from one AIX version to another AIX version using a Network Installation Management (NIM) server. A migration installation attempts to preserve all user configurations, while moving the operating system from one AIX version to another AIX version. During a migration installation, the installation process determines which optional software products are installed on the existing version of the operating system. The main advantage with the migration installation compared to a new and complete overwrite is that most of the file sets and data is preserved on the system. It keeps all the directories such as /home, /usr, /var, logical volumes information and configuration files. The /tmp file system is not preserved during the migration of the system. During the migration of a system from one version to another AIX version, the following steps are taken: - Saving the existing configuration files - Preparinge and removing old files - Restoring the system with new configuration files - Removing unsupported or unnecessary file sets - Migrating configuration data wherever it is applicable and possible - Updating additional file sets when required by other file sets The migration planning process involves various steps. The administrator has to prepare a checklist before migrating the system from one version to another AIX version to take care of the following steps: - Backing up the current existing environment to prevent data loss - Checking for the hardware requirements for the migrated version - Checking for security vulnerability issues with the new AIX version - Deciding on the migration strategy Hardware requirements vary from one AIX version to another AIX version. Make sure that the new operating system supports your hardware. Read the release notes for your hardware and the corresponding AIX operating system requirements. Also, along with checking the hardware, another important task to do on your system before starting a migration installation is to upgrade the microcode level of your system and of all adapters and other devices. Visit Fix central to find and download microcode upgrades, see the Resources section. There are different ways to migrate your system from one AIX version to another AIX version: - Migration by using NIM - Migration by using a CD or DVD drive - Migration by using mksysb - Migration by using a alternate disk migration. These methods help to migrate the system from one version to another version. However, there are some advantages and disadvantages with these mechanisms. If you have several systems in your environment, choosing the NIM method is the best option to migrate the system. The NIM method provides a mechanism to access the system remotely and it is the most time-saving method. 
Note that the NIM master needs to be configured so that NIM clients can use the resource on the NIM master during the migration. Refer to the Resources section for configuring the NIM master. Steps for migration using NIM - Remove the /etc/niminfo file on the NIM client system if it exists. - Run the smit nimcommand on the NIM client. - Select Configure Network Installation Management Client Fileset for allocating the resource from the NIM master as show in Figure 1. Figure 1. smit nim - Enter the system name as the host name of the NIM client that you want to install. In this example, P7he42 is shown as the NIM client. Enter the Primary Network Install Interface of the system as en0. Finally provide the NIM master details. Enter the host name of the network installation master from where you want to select lpp_source and location (for example, distnim.austin.ibm.com) as shown in Figure 2. Figure 2. Providing the NIM master details on the NIM client - Press Enter to continue. The command status is OK. - Run the smit nimcommand again on the NIM client and select Manage Network Install Resource Allocation from the menu list and then select Allocate Network Install Resources as shown in Figure 3. Figure 3. Allocating resources on the NIM master - Select the lpp_source and spot of the corresponding build which needs to be installed and press ENTER as shown in Figure 4. Resources will be allocated to the NIM client during this operation. Figure 4. Select NIM resources - Run the smit nimcommand again and select Perform a NIM Client Operation from the menu list as shown in Figure 5. Figure 5. Perform NIM operation - Select the bos_instmethod for the NIM client installation as shown in Figure 6. Figure 6. bos_inst method of NIM client installation If your environment has automation scripts for the bos_inst installation, then select the prompt installation option during the resource allocation of the bos_inst script. - Finally change the ACCEPT new license agreements value to Yes as shown in Figure 7 and press Enter to confirm the installation. Figure 7. Accepting new license - After the NIM client operation, the client partition restarts automatically. Open a new terminal session using the Hardware Mangement Console (HMC). The client partition boots into the SMS menu and the resource allocation packet count starts to perform the OS Figure 8. Select the terminal as system console After everything is successfully allocated, you are prompted to choose the system console as the opened terminal by selecting option 1 as shown in the above Figure 8. The installation options is displayed on this console. - The installation options are displayed in English by selecting the option 1 as shown in the Figure 9. Figure 9. Displaying installation options in English - Installation options are displayed to choose the method of the OS installation. For a migration and preservation installation, select option 2 as shown in Figure 10 to change the default installation Figure 10. Change the installation settings - After choosing the change/show installation settings as shown in Figure 10, you are promted to select the installation method. Choose option 3 for the migration installation as shown in Figure 11. Figure 11. Choosing the migration installation - Choose the disk drive where the operating system needs to be installed by selecting the corresponding sequence number as shown in Figure 12. Figure 12. 
Choose required the disk drive from the list - Finally change the primary language as required, by selecting option 2. After selecting the primary language, press 0 to install the operating system with the required installation settings as shown in Figure 13. Selection of language and installation settings The installation will proceed with the above settings. After the system is completely migrated to the targeted AIX version, the system automatically restarts. You can log in directly using Telnet or Secure Sheell (SSH) services. A migration installation migrates a system from one AIX version to another AIX version. A migration installation preserves old user data and configuration files. The NIM installation method helps to migrate the system from one AIX version to another AIX version. - Overview of AIX Installation and Migration, provides information about AIX installation and migration methods. - Power of Network Installation Manager, provides steps to configure NIM Server. - AIX Version 4.3 to 5L Migration Guide, provides information about migration procedures and prerequisites. - In the XML area on developerWorks, get the resources you need to advance your XML skills, including DTDs, schemas, and XSLT. - Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics. - Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools as well as IT industry trends. - Follow developerWorks on Twitter. - Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers. Get products and technologies - Find and download microcode from Fix central - Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently. - Try out IBM software for free. Download a trial version, log into an online trial, work with a product in a sandbox environment, or access it through the cloud. Choose from over 100 IBM product trials. - Participate in the discussion forum. - Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis. - Follow developerWorks on Twitter. - Participate in developerWorks blogs and get involved in the developerWorks community. - Get involved in the My developerWorks community. - Participate in the AIX and UNIX® forums:
<urn:uuid:7e7f3fdd-4eef-4d79-9b28-cc6c7f5f0adf>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/aix/library/au-aix-system-migration-installation/index.html?ca=drs-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.862771
1,968
2.53125
3
Researchers at the University of California-Berkeley are preparing a wave "carpet" demonstration project they hope to install on the sea floor along the Oregon coast. The project would employ a submerged, flexible surface that would rock with the motion of the ocean waves, pressing down on a series of piston-pumps that would send a compressed column of water to the shore, where it could be converted into electricity. Researchers say it has the potential to be durable, portable and highly efficient at converting wave motion to energy. One hundred square meters of undersea carpet has the potential to provide as much energy, said Mechanical and Ocean Engineering Professor Reza Alam, as a soccer field full of solar arrays. While the concept and project have gained attention, it is in the early stages. Researchers hope to install the project in 2016. Researcher Marcus Lehmann recently successfully completed a small fundraising round on the crowdfunding site Experiment.com. "We completed our proof-of-concept prototype and are working on increasing the efficiency further," Lehmann said by email, adding, "Oregon is a good location for wave energy developers." ©2014 The Oregonian (Portland, Ore.)
<urn:uuid:76a70809-6fcb-4c9d-a6c5-12ab54a1d441>
CC-MAIN-2017-04
http://www.govtech.com/education/Oregon-to-Host-Wave-Energy-Carpet-Project-with-Unobtrusive-Technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952223
238
3.28125
3
According to the Medical Identity Fraud Alliance, in 2013 more than 1.8 million people fell prey to medical identity theft. A hacker can steal your medical information, get treated at your expense, and leave you with both a hefty bill to pay and a potentially fatal prescription on your record, security experts from McAfee warn. By using fake IDs and false insurance cards, the thief can pose as a patient and have procedures carried out, but the problem is not just confined to monetary issues. The procedures would be recorded in the victim's name and, in the future, could lead to a misdiagnosis that can turn fatal. Imposters can also mix up medical records and, based on that, patients can be prescribed the wrong drugs.

Though it is difficult to stop identity theft, the security researchers suggest a few preventative measures. These include encrypting the soft copies of medical records and locking up the hard copies, so that no one else can access them. Security experts also suggest proper disposal of medical documents after use, as they can be misused by scammers. Among the other suggestions, the security experts also warned against carrying a medical card, Social Security number or any other identification card when not required, as they can be misused.
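One way to follow the advice to encrypt soft copies is sketched below in Python, using the third-party cryptography package; the package choice and the file names are my own illustrative assumptions, not something the researchers specified, and in practice the key must be stored separately from the encrypted records.

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safer than the records themselves.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a soft copy of a medical record before it is stored or shared.
with open("medical_record.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("medical_record.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Decrypting requires the key, so a stolen laptop or mailbox dump is not enough.
plaintext = fernet.decrypt(ciphertext)
```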
<urn:uuid:b0c737ea-69ca-41f0-bfdb-f901a1bcf118>
CC-MAIN-2017-04
http://www.cbronline.com/news/enterprise-it/medical-identity-theft-can-leave-you-with-wrong-prescription-220714-4323871
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95956
254
2.5625
3
Updated: Software created by researchers at Rensselaer Polytechnic Institute uses a pattern-recognition process called kernel learning to more quickly assess molecules' properties. Researchers at Rensselaer Polytechnic Institute this month added a software program that uses adaptive learning to the roster of programs available for assessing molecules properties. While pharmaceutical companies already have software that searches through databases to screen for drugs for a given therapy, the new software works much faster by using neural networks and adaptive-learning methods to model compounds and predict their behavior. Drug-discovery companies all employ computational tools to aid in finding leads for drug development. But scientists at Rensselaer in Troy, N.Y., say the move into predictive modeling marks a shift away from laboratories assays of mathematical, computer-run models. Laboratories with the most high-throughput techniques can test a few hundred thousand molecules a day; existing computer programs can process just fewer than a million. But the Rensselaer software can crunch more than 10 million molecules a day, according to High Performance Computing. The software looks for similarities between molecules in a given database and those with known therapeutic potential. The advantage is chiefly amount and type of chemical information that is available through this method; for a method that produces this much chemical information, the speed is quite fast. The software comes from a National Science Foundation-funded project called Drug Discovery and Semi-Supervised Learning (DDASSL, pronounced "dazzle"). Curt Breneman, a chemistry professor; Kristin Bennett, a mathematics associate professor; senior research associate N. Sukumar; and Mark Embrechts, an associate professor in decision sciences and engineering systems, worked together to develop the software. Computer testing is less expensive and faster than testing actual molecules, and allows workers to pare down the number of tests that need to be performed. Dr. Breneman says, "That approach helps to focus more attention on molecules with the highest probability of success, and also allows dead-ends to be identified before many resources are expended on them. The ultimate pay-off of this methodology may be that it can help to speed up the development of new drugs." Though several software programs already exist to assess compounds in silico, they can be slow, not particularly predictive or both. The Rensselaer software uses two shortcuts to search large molecular databases rapidly. First, the software renders a description of both a molecules shape and the electrical properties on its surface as a set of numbers. These number sets can be processed rapidly by a computer. Then, the software searches for common chemical properties associated with molecules for a particular therapy. It does not use the method of so-called docking software, which looks at the interaction of a molecule with a particular protein. Instead, it uses a pattern-recognition process called kernel learning. The software is presented with a small set of molecules with the right features, which are analyzed as described above. Then, the software churns through a molecular database, looking for promising compounds. "Conventional techniques are not truly predictive and dont work," Bennett said. "So, we borrowed pattern-recognition techniques already used in the pharmaceutical industry and added algorithms based on support vector machines. 
That gives us a technique to predict which molecules are promising." Projects are under way to further evaluate how predictive the new software is. Pattern-recognition techniques are rapidly becoming more sophisticated and more capable of using data from laboratory experiments. In unrelated work, researchers at the Harbor-UCLA Medical Center used computational methods and proteomics to find a structure that is common to otherwise diverse and distinct antimicrobial peptides. In a recent review in Science magazine, Yale University chemistry professor William Jorgensen stressed that no single computer program will be sufficient to find drug candidates and that some of the slower processes yield absolutely crucial information. "There is not going to be a voilà moment at the computer terminal," he wrote. "Instead, there is systematic use of wide-ranging computational tools to facilitate and enhance the drug-discovery process." Editor's Note: This story was updated to include additional information and comments from a discussion with Curt Breneman.
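The sketch below is only a rough illustration of the kernel-learning idea described in the article, not the Rensselaer DDASSL software: it trains a support-vector classifier on synthetic descriptor vectors (every number here is made up) and then ranks an unscreened library, assuming the third-party NumPy and scikit-learn packages are available.

```python
import numpy as np
from sklearn.svm import SVC

# Each molecule is reduced to a fixed-length descriptor vector; here the
# vectors and the "active" labels are random stand-ins for real chemistry data.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(40, 16))              # 40 known molecules
train_labels = (train_descriptors[:, 0] > 0).astype(int)   # toy activity flag

# Kernel-based classifier: the RBF kernel plays the role of "kernel learning".
model = SVC(kernel="rbf", probability=True)
model.fit(train_descriptors, train_labels)

# Screen a larger, unlabelled library and keep the most promising candidates.
library = rng.normal(size=(1000, 16))
scores = model.predict_proba(library)[:, 1]
top_hits = np.argsort(scores)[::-1][:10]
print(top_hits)
```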
<urn:uuid:3f968b49-fd40-4ada-8d31-530f959f4e51>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Enterprise-Applications/Adaptive-Learning-Speeds-New-DrugScreening-Software
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915212
919
3.296875
3
Q. What is the earth's sun expected to become at the end of its life? One day our Sun will die. At 4.6 billion years old, it is currently about halfway through its life. It is too small to go out in a giant supernova bang; instead, 5 billion years from now, as it runs out of hydrogen fuel, the Sun will expand into a red giant, engulfing the orbits of nearby Mercury, Venus and Earth. At this point the Sun's outer layers will be so unstable that they will fly off into space and form a planetary nebula. What remains of the Sun will then compact and slowly cool as a white dwarf roughly the size of Earth.
<urn:uuid:11658347-56dc-409d-b69f-d063f9eac2d8>
CC-MAIN-2017-04
http://www.lifesize.com/video-conferencing-blog/12-days-of-geek-day-11/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945667
145
3.359375
3
SQL Server uses an undocumented function, pwdencrypt(), to produce a hash of the user's password, which is stored in the sysxlogins table of the master database. This is probably a fairly commonly known fact. What has not been published yet are the details of the pwdencrypt() function. This paper will discuss the function in detail and show some weaknesses in the way SQL Server stores the password hash. In fact, as we shall see later on, I should be saying 'password hashes'. Download the paper in PDF format here.
<urn:uuid:350bba4f-6435-47fb-9f65-476c4dc9cb0c>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2002/07/10/microsoft-sql-server-passwords-cracking-the-password-hashes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94069
119
2.609375
3
Chapter 7B – The Evolution of the Intel Pentium chapter attempts to trace the evolution of the modern Intel Pentium from the chip, the Intel 4004. The real evolution begins with the Intel 8080, which is an 8–bit design having features that permeate the entire line. Our discussion focuses on three organizations. IA–16 The 16–bit architecture found in the Intel 8086 and Intel 80286. IA–32 The 32–bit architecture found in the Intel 80386, Intel 80486, and most variants of the Pentium design. IA–64 The 64–bit architecture found in some high–end later model Pentiums. The IA–32 has evolved from an early 4–bit design (the Intel 4004) that was first announced in November 1971. At that time, memory came in chips no larger than 64 kilobits (8 KB) and cost about $1,600 per megabyte. Before moving on with the timeline, it is worth recalling the early history of Intel. Here, we quote extensively from Tanenbaum [R002]. “In 1968, Robert Noyce, inventor of the silicon integrated circuit, Gordon Moore, of Moore’s law fame, and Arthur Rock, a San Francisco venture capitalist, formed the Intel Corporation to make memory chips. In the first year of operation, Intel sold only $3,000 worth of chips, but business has picked up since then.” “In September 1969, a Japanese company, Busicom, approached Intel with a request for it to manufacture twelve custom chips for a proposed electronic calculator. The Intel engineer assigned to this project, Ted Hoff, looked at the plan and realized that he could put a 4–bit general–purpose CPU on a single chip that would do the same thing and be simpler and cheaper as well. Thus in 1970, the first single–chip CPU, the 2300–transistor 4004 was born.” “It is worth note that neither Intel nor Busicom had any idea what they had just done. When Intel decided that it might be worth a try to use the 4004 in other projects, it offered to buy back all the rights to the new chip from Busicom by returning the $60,000 Busicom had paid Intel to develop it. Intel’s offer was quickly accepted, at which point it began working on an 8–bit version of the chip, the 8008, introduced in “Intel did not expect much demand for the 8008, so it set up a low–volume production line. Much to everyone’s amazement, there was an enormous amount of interest, so Intel set about designing a new CPU chip that got around the 8008’s limit of 16 kilobytes of memory (imposed by the number of pins of the chip). This design resulted in the 8080, a small, general–purpose CPU, introduced in 1974. Much like the PDP–8, this product took the industry by storm, and instantly became a mass market item. Only instead of selling thousands, as DEC had, Intel sold millions.” The 4004 was designed as a 4–bit chip in order to perform arithmetic on numbers stored in format, which required 4 bits per digit stored. It ran at a clock speed of 108 KHz and could address up to 1Kb (128 bytes) of program memory and up to 4Kb (512 bytes) of data memory. history of the CPU evolution that lead to the Pentium is one of backward with an earlier processor, in that the binary machine code written for that early model would run unchanged on all models after it. There are two claims to the identity of this early model, some say it was the Intel 8080, and some say the Intel 8086. We begin the story with the 8080. 1974 The Intel 8080 processor is released in April 1974. It has a 2 MHz clock. It had 8–bit registers, and 8–bit data bus, and a 16–bit address bus. The accumulator was called the “A register”. 
1978 The Intel 8086 and related 8088 processors are released. Each has 16–bit registers, 16–bit internal data busses, and a 20–bit address bus. Each had a 5 MHz clock; the 8088 ran at 4.77 MHz for compatibility with the scan rate of a standard TV, which could be used as an output device. The main difference between the 8086 and the 8088 is the data bus connection to other devices. The 8086 used a 16–bit data bus, while the 8088 used a cheaper and slower 8–bit data bus. The 16–bit accumulator was called the "AX register". It was divided into smaller registers: the AH register and the AL register. Neither the 8086 nor the 8088 could address more than one megabyte of memory. Remember that in 1978, one megabyte of memory cost $10,520. According to Bill Gates, "Who would need more than 1 megabyte of memory?"

1980 The Intel 8087 floating–point coprocessor is announced. Each of the 80x86 series (8088, 8086, 80286, 80386, and 80486) will use a floating–point coprocessor on a separate chip. A later variant of the 80486, called the 80486DX, was the first of the series to include floating–point math on the CPU chip itself. The 80486SX was a lower cost variant of the 80486, without the FPU.

1982 The Intel 80186 was announced. It had a clock speed of 6 MHz and a 16–bit data bus. It might have been the successor to the 8086 in personal computers, but its design was not compatible with the hardware in the original IBM PC, so the Intel 80286 was used in the next generation of personal computers.

1982 The Intel 80286 was announced. It extended the address space to 24 bits, for an astounding 16 megabytes of addressable memory. (Intel should have jumped to 32–bit addressing, but had convincing financial reasons not to do so.) The 80286 originally had a 6 MHz clock. A number of innovations, now considered to be mistakes, were introduced with the Intel 80286. The first was a set of bizarre memory mapping options, which allowed larger programs to run. These were called "extended memory" and "expanded memory". We are fortunate that these are now history. Each of these memory mapping options was based on the use of 64 KB segments. Unfortunately, it was hard to write code for data structures that crossed a segment boundary, possibly due to being larger than 64 KB. The other innovation was a memory protection system, allowing the CPU to run in one of two modes: real or protected. The only problem is that no software developer elected to make use of these modes. As a result of the requirement for backward compatibility, every IA–32 processor since the 80286 must include this mechanism, even if it is not used.

1985 The introduction of the Intel 80386, the first of the IA–32 family. This CPU had 32–bit registers, 32–bit data busses, and a 32–bit address bus. The 32–bit accumulator was called the "EAX register". The Intel 80386 was introduced with a 16 MHz clock. It had three memory modes: protected, real, and virtual. We now have three protection modes to ignore. Lesson: the hardware should evolve along with the system software (Operating Systems, Run–Time Systems, and Compilers) that uses it.

Here is the structure of the EAX register in the Intel 80386 and all of the following models in the IA–32 line. This structure shows the necessity of backward compatibility with the earlier models. The 16–bit models had a 16–bit accumulator, called AX. The 8–bit model had an accumulator, called A, that is now equivalent to the AL 8–bit register. Structure of the EAX register in the Intel 80386: there is no name for the high–order 16 bits of EAX.
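A small Python sketch, added here purely for illustration, shows how the AX, AH and AL names alias the low-order bits of a 32-bit EAX value.

```python
def sub_registers(eax):
    """Return the AX, AH and AL views of a 32-bit EAX value."""
    ax = eax & 0xFFFF          # low-order 16 bits of EAX
    ah = (ax >> 8) & 0xFF      # high byte of AX
    al = ax & 0xFF             # low byte of AX
    return ax, ah, al

# Only the low 16 bits are visible to 16-bit (8086/80286) code.
eax = 0x12345678
ax, ah, al = sub_registers(eax)
print(hex(ax), hex(ah), hex(al))   # 0x5678 0x56 0x78
```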
The AX, AH, and AL registers.

Backward Compatibility in the I/O Busses

Here is a figure that shows how the PC bus grew from a 20–bit address through a 24–bit address to a 32–bit address while retaining backward compatibility. The idea is that I/O components (printers, disk drives, etc.) purchased for the Intel 8088 should be plug–compatible with both the Intel 80286 and Intel 80386. Those purchased for the Intel 80286 should be plug–compatible with the Intel 80386. The basic idea is that one is more likely to buy a new computer if the old peripheral devices can still be used. Here is a picture of the PC/AT (Intel 80286) bus, showing how the original configuration was kept and augmented, rather than totally revised. Note that the top slots can be used by the older 8088 cards, which do not have the "extra long" edge connectors. This cannot be used with cards for the Intel 80386; that would be "forward compatibility".

The Intel 80386 was the first of the IA–32 series. In your instructor's opinion, it was the first "real computer CPU" produced by Intel. The reason for this opinion is that it was the first of the series that had enough memory and a large enough address space to remove the need for some silly patches and kludges, such as extended and expanded memory.

1989 The Intel 80486 is introduced. It was aimed at higher performance. It was the first of the Intel microprocessors to contain one million transistors. As noted above, later variants of the 80486 were the first to incorporate the floating point unit in the CPU core.

1992 Intel attempts to introduce the Intel 80586. Finding that it could not trademark a number, Intel changed the name to "Pentium". The name "80586" was used briefly as a generic name for the Pentium and its clones by manufacturers such as AMD.

1995 The Pentium Pro, a higher performance variant of the Pentium, was introduced. It added four new instructions, three of them to support multiprocessing.

1997 The MMX (Multimedia Extensions) set of 57 instructions was added to both the Pentium and the Pentium Pro. These facilitate graphical and other multimedia computations.

1999 The Pentium III was introduced, with the SSE (Streaming SIMD Extensions) instruction set. This involved the addition of eight 128–bit registers, each of which could hold four independent 32–bit floating point numbers. Thus four floating point computations could be performed in parallel.

2001 The Pentium 4 was shipped, with another 144 instructions, called SSE2.

2003 AMD, a producer of Pentium clones, announced its AMD64 architecture to expand the address space to 64 bits. All integer registers are widened to 64 bits. New execution modes were added to allow execution of 32–bit code written for earlier models.

2004 Intel adopts the AMD64 memory model, relabeling it EM64T (Extended Memory 64 Technology).

Most of the IA–32 improvements since this time have focused on providing graphical services to the game playing community. Your instructor is grateful to the gamers; they have turned high–end graphical coprocessors into commodity items. One can get a very good graphics card really cheap. Consider the NVIDIA GeForce 8600: 512 MB of 400 MHz graphics memory (DDR transferring 32 bytes per clock cycle), a 675 MHz graphics processor, and support for 2048 by 1536 resolution. It costs $210 bundled with software.
The Trace Cache

Implementations of the Pentium architecture include at least two levels of cache. While we plan to discuss this topic in some detail later in this text, we must bring it up now in order to focus on a development in the architecture that began with the Pentium III in 1999. Pentium designs called for a two–level cache, with a split L1 cache. There was a 16 KB L1 instruction cache and a 16 KB L1 data cache. Having the split L1 cache allowed the CPU to fetch an instruction and access data in the same clock pulse. (Memory can do only one thing at a time, but two independent memories can do a total of two things at a time.) Here is a figure showing a typical setup. Note that the CPU does not write to the Instruction Cache.

By the time that the Pentium III was introduced, Intel was having increasing difficulty in obtaining fast execution of its increasingly complex machine language instructions. The solution was to include a step that converted each of the complex instructions into a sequence of simpler instructions, called micro–operations in Intel terminology. These simpler operations seem to be designed following the RISC (Reduced Instruction Set Computer) approach. Because these micro–operations are simpler than the originals, the CPU control unit that interprets them can be hardwired, simpler, and faster. By the time the Pentium 4 was introduced, this new design approach had led to the replacement of the 16 KB Level–1 Instruction Cache with the ETC (Execution Trace Cache). Unlike the Instruction Cache, which holds the original Pentium machine language instructions, the ETC holds the micro–operations that implement these instructions.

The Intel 8086 and later processors use a segmented address system in order to generate addresses from 16–bit registers. Each of the main address registers is paired with a segment register. The IP (Instruction Pointer) register is paired with the CS (Code Segment) register. Each of the IP and CS is a 16–bit register in the earlier designs. NOTE: The Intel terminology is far superior to the standard name, the PC (Program Counter), which is so named because it does not count anything. The SP (Stack Pointer) register is paired with the SS (Stack Segment) register.

The Intel 8086 used the segment:offset approach to generating a 20–bit address. The steps are as follows.
1. The 16–bit value in the segment register is treated as a 20–bit number with four leading binary zeroes. This is one leading hexadecimal 0.
2. The 20–bit value is left shifted by four, shifting out the four high–order 0 bits and shifting in four low–order 0 bits. This is equivalent to appending one hexadecimal 0.
3. The 16–bit offset is expanded to a 20–bit number with four leading 0's and added to the shifted segment value. The result is a 20–bit address.

Example: CS = 0x1234 and IP = 0x2004.
CS with 4 trailing 0's: 0001 0010 0011 0100 0000 or 0x12340
IP with 4 leading 0's: 0000 0010 0000 0000 0100 or 0x02004
Effective address: 0001 0100 0011 0100 0100 or 0x14344

Thirty–Two Bit Addressing

All computers in the IA–32 series must support the segment:offset method of addressing in order to run legacy code. This is "backwards compatibility". The native addressing mode in the IA–32 series is called a "flat address space". The 16–bit IP (Instruction Pointer) is now the lower order 16 bits of the EIP (Extended Instruction Pointer), which can be used without a segment. The 16–bit SP (Stack Pointer) is now the lower order 16 bits of the ESP (Extended Stack Pointer), which also can be used without a segment.
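To make the real-mode calculation concrete, here is a tiny Python sketch (not part of the original chapter) that reproduces the worked example above.

```python
def effective_address(segment, offset):
    """Real-mode 8086 addressing: shift the segment left 4 bits, add the offset."""
    return ((segment << 4) + offset) & 0xFFFFF   # keep 20 bits, as on the 8086

# The worked example: CS = 0x1234, IP = 0x2004 gives 0x14344.
print(hex(effective_address(0x1234, 0x2004)))    # 0x14344
```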
This variety of addressing modes has given rise to a variety of "memory models" based on the addressing needed for code and data.

Memory Models: These are conventional assembly language models based on the size of the code and the size of the data.
| Code Size | Data Size | Model |
| Under 64 KB | Under 64 KB | Small or Tiny |
| Over 64 KB | Under 64 KB | Medium |
| Under 64 KB | Over 64 KB | Compact |
| Over 64 KB | Over 64 KB | Large |
The smaller memory models give rise to code that is more compact and efficient.

The IA–32 Register Set

The register set contains eight 32–bit registers that might be called "general purpose", though they retain some special functions. These registers are: EAX, EBX, ECX, EDX, ESP, EBP, ESI, and EDI. These are the 32–bit extensions of the 16–bit registers AX, BX, CX, DX, SP, BP, SI, and DI. The segment registers (CS, DS, SS, ES, FS and GS) appear to be retained only for compatibility with earlier code. In the original Intel 8086 design, the AX register was considered as a single accumulator, with the other registers assigned supporting roles. It is likely that most IA–32 code maintains this distinction, though it is not required.

The IA–64 Architecture

The IA–64 architecture is a design that evolved from the Pentium 4 implementation of the IA–32 architecture. The basic issues involve efficient handling of the complex instruction set that has grown over the 35–year evolution of the basic design. The IA–64 architecture is the outcome of a collaboration between Intel and the Hewlett–Packard Corporation. In some sense, it is an outgrowth of the Pentium 4. The architecture has many features similar to RISC, but with one major exception: it expects a sophisticated compiler to issue machine language that can be exploited by the superscalar architecture. (Again, we shall discuss this in the future.)

The current implementations of the IA–64 are called the "Itanium" and "Itanium 2". One wonders if the name is based on that of the element Titanium. In any case, the geeks soon started to call the design the "Itanic", after the ship "Titanic", which sank in 1912. The Itanium was released in June 2001; the Itanium 2 in 2002.

Here are some of the features of the IA–64 design.
1. The IA–64 has 128 64–bit integer registers and 128 82–bit floating–point registers.
2. The IA–64 translates the binary machine language into 128–bit instruction words that represent up to three assembly language instructions that can be executed during one clock pulse. A sophisticated compiler emits these 128–bit instructions and is responsible for handling data and control dependencies. More on this later. The design might be called "VLIW" (Very Long Instruction Word), except that Intel seems to prefer "EPIC" (Explicitly Parallel Instruction Computing).
3. The design allows for predicated execution, a technique that can eliminate branching by making the execution of an instruction dependent on a predicate. There are sixty–four 1–bit predicate registers, numbered 0 through 63. With one exception, each can hold a 0 (false) or a 1 (true). Predicate register pr0 is fixed at 1 (true). Any instruction predicated on pr0 will always execute.

For a full appreciation of predication, one would have to understand the design of a pipelined CPU, especially the handling of control hazards. This is a topic for a graduate course. Here we shall just give a simple code example to show how it works. Consider the statement
if (p) then S1 else S2 ;
where S1 and S2 are statements in the high level language.
Under normal compilation, this would be converted to a statement to test the predicate (a Boolean expression that can be either true or false), execute a conditional branch around statement S1, and follow S1 by an unconditional branch around S2. In predication, the compilation is simpler, and equivalent to the following two statements.

(p) S1 ;   // Do this if the predicate is true.
(~p) S2 ;  // Do this if the predicate is false.

The execution of this pair of statements is done together, in parallel with evaluation of the predicate. Depending on the value of the predicate, the effect of one of the instructions is committed to memory, and the results of the other statement are discarded.

One of the goals of advanced architectures is the execution of more than one instruction at a time. This approach, called "superscalar", must detect which operations can be executed in parallel, and which have data dependencies that force sequential execution. In the design of pipelined control units, these data dependencies are called "data hazards". Here is an example of a pair of instructions that present a data hazard.

x = y - z ;
w = u + x ;

Note that these two instructions cannot be executed in parallel, while the next pair can be so executed. In the first set, the first instruction changes the value of x. Parallel execution would use the old value of x and hence yield an incorrect result.

x = y - z ;
w = u + z ;   // This pair can be executed in parallel.

Earlier designs, dating back to the CDC-6600, used a hardware mechanism to detect which instructions could be executed in parallel. The difficulty with this approach is the increasing complexity of such a control unit, leading to slower execution. The IA-64 strategy is called "explicit parallelism", in which the compiler statically schedules instructions for parallel execution at compile time, rather than having the control unit do so dynamically at run time. This strategy calls for the compiler to emit 128-bit bundles, each containing three instructions, and a template that defines which of the parallel execution units are to be used. Each of the three instructions in a bundle is called a syllable, and each syllable fits into one instruction slot. Here is the structure of a bundle.

Instruction Slot 2 | Instruction Slot 1 | Instruction Slot 0

Another CPU design problem, called the "power wall", was discussed in chapter 1 of this text. The commonly accepted solution to this problem, in which designers try to get more performance from a CPU without overheating it, is called a multicore CPU. This is basically a misnomer, because each core in a multicore CPU is an independent CPU. Thus a quad-core Pentium chip actually contains four CPUs. Examples of this strategy include the Intel iCore3, iCore5, and iCore7 (sometimes called "Core i3", "Core i5", and "Core i7") designs. The i3 is the entry level processor, with two cores. The i5 is a mid-range processor with 2 to 4 cores. The i7 is considered to be the high-end processor, with 2 to 6 cores.

Motherboards and slots

Along with the evolution of the CPU chips, we see an evolution of the supporting hardware. Here we study the hardware used to integrate the CPU into the system as a whole. A CPU socket or CPU slot is a mechanical component that provides mechanical and electrical connections between a device (usually a microprocessor) and a printed circuit board (PCB), or motherboard. This allows the CPU to be replaced without risking the damage typically introduced when using soldering tools [R016].
Common sockets utilize retention clips that are designed to apply a constant force, which must be overcome when a device is inserted. For chips that sport a high number of pinouts, either zero-insertion force (ZIF) sockets or land grid array (LGA) sockets are used instead. These designs apply a compression force once either a handle (for the ZIF type) or a surface plate (for the LGA type) is put into place. This provides superior mechanical retention while avoiding the added risk of bending pins when inserting the chip into the socket. CPU sockets are used in desktop computers (laptops typically use surface-mount CPUs) because they allow easy swapping of components; they are also used for prototyping new circuits.

The earliest sockets were quite simple; in fact, they were DIP (Dual In-line Pin) devices. A typical CPU for such a socket might be an Intel 4004 or an Intel 8086 (with different pin counts). Here is a picture of the Intel 8086, showing one of the two rows of pins. The complexity of the IA-32 series processors grew as the series evolved, and the number of pin-outs required grew with it. By the time of the Intel 80486, a DIP arrangement was impossible. Here is a picture of the Intel 80486DX2. The Intel 80486 had 196 pins arranged as a hollow rectangle. It should be obvious that it required more than a DIP socket.

The sockets for the late Intel 80x86 series and the early Pentium series came in a number of sizes in order to accommodate the number of pins on the chip. Here is a table of some of the early sockets used for the IA-32 series; each socket supported one of the following groups of processors:
Intel 8086, Intel 8088
Intel Pentium, AMD K5
Intel Pentium, Intel Pentium MMX, AMD K6
Intel Pentium Pro

With the introduction of the Pentium II CPU, the transition from socket to slot had become necessary. With the Pentium Pro, Intel had combined processor and cache dies in the same package, connected by a full-speed bus, resulting in significant performance benefits. Unfortunately, this method required that the two components be bonded together early in the production process, before testing was possible. As a result, a single, tiny flaw in either die made it necessary to discard the entire assembly, causing low production yield and high cost. Intel subsequently designed a circuit board where the CPU and cache remained closely integrated, but were mounted on a printed circuit board, called a Single-Edged Contact Cartridge (SECC). The CPU and cache could be tested separately, before final assembly into a package, reducing cost and making the CPU more attractive to markets other than that of high-end servers. These cards could also be easily plugged into a Slot 1, thereby eliminating the chance for the pins of a typical CPU to be bent or broken when installing it in a socket.

Slot 1 refers to the physical and electrical specification for the connector used by some of Intel's microprocessors, including the Pentium Pro, Celeron, Pentium II and the Pentium III. Both single and dual processor configurations were implemented. Slot 1 (also Slot1 or SC242) is a slot-type connector with 242 contacts. This connector was designed for the Pentium II family of processors, and later used for the Celeron budget line of processors. The Pentium III was the last microprocessor family that used Slot 1. For its next generation of Pentium processors, the Pentium 4, Intel completely abandoned the Slot 1 architecture. The fastest processor that can be used in Slot 1 motherboards is the Pentium III 1133 MHz with a 133 MHz FSB [R012].
The picture on the left shows a typical Slot 1 connector mounted on a motherboard. The picture at right shows a CPU mounted in the slot, along with its rather large cooling fans. The Slot 1 connector is 5.23 inches (13.29 cm) long. Besides the actual connector, Slot 1 also includes an SEC cartridge retention mechanism, required to support a processor in an SEC cartridge and a heatsink. The maximum supported weight of the processor with the heatsink is 400 grams.

Slot 2 refers to the physical and electrical specification for the 330-lead Single Edge Contact Cartridge (or edge-connector) used by some of Intel's Pentium II Xeon processors and certain models of the Pentium III Xeon. When first introduced, Slot 1 Pentium IIs were intended to replace the Pentium and Pentium Pro processors in the home, desktop, and low-end SMP markets. The Pentium II Xeon, which was aimed at multiprocessor workstations and servers, was largely similar to the later Pentium IIs, being based on the same P6 Deschutes core, aside from a wider choice of L2 cache ranging from 512 to 2048 KB and a full-speed off-die L2 cache (the Pentium II used cheaper third-party SRAM chips, running at 50% of CPU speed, to reduce cost). Because the design of the 242-lead Slot 1 connector did not support the full-speed L2 cache of the Xeon, an extended 330-lead connector was developed. This new connector, dubbed 'Slot 2', was used for the Pentium II Xeons and the first two Pentium III Xeon cores, codenamed 'Tanner' and 'Cascades'. Slot 2 was finally replaced with Socket 370 with the Pentium III Tualatin; some of the Tualatin Pentium IIIs were packaged as 'Pentium III' and some as 'Xeon', despite the fact that they were identical [R014].

Socket 370 (also known as the PGA370 socket) is a common format of CPU socket first used by Intel for Pentium III and Celeron processors to replace the older Slot 1 CPU interface on personal computers. The "370" refers to the number of pin holes in the socket for CPU pins. Modern Socket 370 fittings are usually found on Mini-ITX motherboards and embedded systems [R015]. Here is a picture of the PGA370 socket. The socket is a ZIF (Zero Insertion Force) type, designed for easy insertion. As noted, it has 370 pin holes. Its dimensions are 1.95 inches by 1.95 inches, or approximately 5 centimeters on a side. It was designed to work with a Front Side Bus operating at 66, 100, or 133 MHz. The design voltage range is 1.05 to 2.10 volts. The mass of the Socket 370 CPU cooler should not exceed 180 grams (a weight of about 6.3 ounces), or damage to the die may occur.

The LGA 775, also known as Socket T, is one of the latest and largest Intel CPU sockets. LGA stands for land grid array. Unlike earlier common CPU sockets, such as its predecessor Socket 478, the LGA 775 has no socket holes; instead, it has 775 protruding pins which touch contact points on the underside of the processor (CPU). The Prescott and Cedar Mill Pentium 4 cores, as well as the Smithfield and Presler Pentium D cores, used the LGA 775 socket. In July 2006, Intel released the desktop version of the Core 2 Duo (codenamed Conroe), which also uses this socket, as does the subsequent Core 2 Quad. Intel changed from Socket 478 to LGA 775 because the new pin type offers better power distribution to the processor, allowing the front side bus to be raised to 1600 MT/s. The 'T' in Socket T was derived from the now-cancelled Tejas core, which was to replace the Prescott core.
Another advantage for Intel with this newer architecture is that it is now the motherboard which has the pins, rather than the CPU, transferring the risk of pins being bent from the CPU to the motherboard. The CPU is pressed into place by a "load plate", rather than human fingers directly. The installing technician lifts the hinged load plate, inserts the processor, closes the load plate over the top of the processor, and pushes down a locking lever. The pressure of the locking lever on the load plate clamps the processor's 775 copper contact points firmly down onto the motherboard's 775 pins, ensuring a good connection. The load plate only covers the edges of the top surface of the CPU (the processor heatspreader). The center is free to make contact with the cooling device placed on top of the CPU.

An examination of the relevant Intel data sheets shows that LGA 775, which is used for consumer-level desktops, and LGA 771, which is used for (Xeon-based) workstation and server class computers, appear to differ only in the placement of the indexing notches and the swap of two address pins. Many pins devoted to functions such as interfacing multiple CPUs are not clearly defined in the LGA 775 specifications, but from the information available they appear to be consistent with those of LGA 771. Considering that LGA 775 predated LGA 771 by nearly a year and a half, it would seem that LGA 771 was adapted from LGA 775 rather than the other way around. The socket has been superseded by the LGA 1156 (Socket H) and LGA 1366 (Socket B) sockets. Here is a picture from [R017] of the LGA 775 mounted on some sort of motherboard.
<urn:uuid:e35c68f1-444f-4b25-9bca-b1c36633a534>
CC-MAIN-2017-04
http://edwardbosworth.com/CPSC2105/MyTextbook2105_HTM/MyText2105_Ch07B_V06.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913478
7,172
3.6875
4
The Nuclear Regulatory Commission has rejected the notion that it is not ready to decide whether aging atomic power plants need to make upgrades intended to limit radiation releases during a major crisis, but its ultimate action on the matter is not yet clear. In the wake of the 2011 Fukushima Daiichi nuclear plant meltdowns in Japan, the commission has identified a host of potential ways to improve the security and safety of U.S. reactors. It has divided those potential measures into tiers, identifying some as requiring action sooner and others further down the road. Some Democrats and watchdog groups have suggested that the agency has not moved quickly enough to act on potential improvements. In a Friday letter, Senate Environment and Public Works Committee Chairwoman Barbara Boxer (D-Calif.) noted that today it two year anniversary of the onset of the Japanese crisis and that NRC staff issued a report on post-Fukushima recommendations in July 2011. Boxer asked commission Chairwoman Allison Macfarlane for provide a status update in advance of a hearing next month. Republicans lawmakers have suggested, however, that the commission should not address its post-Fukushima response in a piecemeal fashion. Nor should it issue any major new requirements before conducting a thorough comparison of the Japanese and U.S. regulatory systems and devising a comprehensive plan for how to fill any potential gaps that could exacerbate a terrorist attack or natural disaster affecting a nuclear reactor. During a recent hearing, Representative Edward Whitfield (R-Ky.) suggested that various safety and security issues that have arisen since the Fukushima incident “seem so interdependent." He questioned why the commission appeared to be making efforts to address them “independently and separate” from one another. Specifically, Whitfield and other House Republicans have suggested that the commission should not require aging nuclear plants to install filtered vents until it completes the regulatory comparison and post-Fukushima reactor safety plan. Agency staff, along with many Democrats and watchdog groups, say filtered vents would limit radiation releases in the event of a terrorist attack or natural disaster. Should a facility lose power, vents relieve pressure building inside a heating reactor core while filters would reduce the amount of radiation that passes through the vents. In response to Whitfield’s concerns, all five presidentially appointed commissioners said they were taking a comprehensive look at how requiring filtered vents and other possible post-Fukushima actions could impact one another, but suggested it would not be prudent to delay certain actions deemed to be the most pressing. Republican Commissioner Kristine Svinicki suggested that if the agency did not dispose of some issues, it would create a state of perpetual uncertainty for the industry. “We’re trying to strike a balance,” Svinicki said during the Feb. 28 hearing. “We’re attempting to integrate as well as we can.” Commissioner William Ostendorff, also a Republican, said at the meeting that there “has been significant consideration of interlapping” between all issues the commission has addressed recently, suggesting that it was not making individual regulatory decisions in a vacuum. 
In a recent letter to House Republicans, the commission also noted that it has already “conducted a regulatory comparison of the station blackout regulations that existed in Japan at the time of the” Fukushima incident and that it “continues to evaluate the various technical and regulatory factors in Japan that contributed to the accident.” The commission in the letter also defended its staff’s estimates on the cost to install filtered vents. While NRC staff projected about $16 million per plant, industry officials put the figure closer to $45 million. They argue that NRC evaluators are only accounting for the cost of filter components purchased from outside vendors and are not including the expense of additional modifications operators might have to make on-site to make the filters viable. The commission’s letter, though, says the expense estimates “were intended to cover not only the equipment costs, but also the site specific engineering and plant modification costs.” It adds that the “estimate used in the NRC’s staff’s assessment was based upon discussions with vendors, regulators, and plant operators who have had experience with the installation of filtering systems at foreign nuclear power plants.” Republicans have asserted that the agency should only pursue post-Fukushima regulatory actions if the anticipated safety and security gains outweigh the costs of compliance. In their letter, the commissioners responded by noting that they considered but took no action on several post-Fukushima requirements. “Examples of items considered but not acted upon or implemented include the immediate shutdown of operating plants, the installation of various systems, structures, and components (beyond ongoing actions), the staging of robots to provide access to contaminated areas, adding multiple and diverse instruments to measure parameters such as spent fuel pool level and requiring all plants to install dedicated bunkers with independent power supplies and coolant systems,” the commissioners said in the Feb. 15 letter. It remains unclear, however, whether the commissioners will decide to go forward with a filtered vent requirement. Spokesman Scott Burnell told Global Security Newswire that the agency continues to deliberate on its course of action. In their letter to House Republicans, the commissioners suggested that some already-established NRC requirements could help mitigate radioactive releases from a terrorist attack or Fukushima-style event. “The addition of backup equipment to supplement current safety systems and development of mitigating strategies, such as those implemented in the U.S. following Sept. 11, 2001, to address such external hazards and plant conditions might have supported the efforts of plant operators to mitigate the event at Fukushima Daiichi,” the commissioners said. “These measures would provide additional protection for the existing barriers; including the reactor fuel, coolant systems, and containments.” At least one NRC panel member, Svinicki, has previously expressed opposition to a filtered vent requirement. She argued last year that existing protective measures should prevent them from being necessary. In January, the commission rejected a watchdog group’s legal bid to have it require filtered vents without deciding whether to issue a similar mandate on its own terms.
<urn:uuid:824497e2-c55d-414c-9380-79912daf8dff>
CC-MAIN-2017-04
http://www.nextgov.com/defense/2013/03/nrc-new-nuclear-plant-safety-measures-not-premature-final-decision-pending/61803/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953791
1,275
2.546875
3
Astronomy is the archetypal pure science. It can seem frankly pointless, especially when projects like the Hubble Space Telescope cost a billion dollars or more. The point of astronomy is a vexed question, and one that politicians often have to engage with. Some pundits justify astronomy on the grounds of the spinoffs, like better radio antenna technology. Some say our destiny is to emigrate from the planet, so we'd better start somewhere. Others simply assert unapologetically that peering into the heavens is what homo sapiens does, at any cost.

Here's a different rationale. To my mind (as an ex-astronomer) the deepest practical value of astronomy is that it enables us to perform experiments that are just too grand to ever be done on earth. The limits of earth-bound experiments of course change over time, but during each era, astronomy has furnished answers to questions that elude terrestrial investigation. For example, astronomers were the first to: And there must be other examples.

I mention General Relativity, which was another one of those apparently academic pursuits, until it came to the rescue a few years ago in the most practical way. Soon after the Global Positioning System (GPS) came online, its results started drifting. Engineers realised quickly that the high-precision clocks in orbit were getting out of sync with those on the ground. And the explanation turned out to be gravity. According to General Relativity, a clock will run more slowly in a gravitational field, and because the force of gravity is slightly lower above the Earth than on the surface, the GPS clocks were running faster than expected. The effect was just a few parts in a billion, but enough to cause the positioning results to drift by a few metres and, what's more, to get worse over time. By reprogramming the GPS controllers to account for gravitational time dilation, the problems were solved and the system has been stable ever since. So if it wasn't for Einstein, your sat nav wouldn't work, and you'd be lost. Or rather, more lost than you are now. So astronomy is supremely practical.

And here's another thing. Astronomy occasionally provides the most profound and compelling truths about reality. My favorite example is a classic goose-bump moment in the history of science: Galileo's discovery of sunspots and his appreciation of what they meant. Until then, everyone thought the Sun was a perfect unchanging disc of light. After projecting the Sun through his newly invented telescope onto a white sheet, Galileo soon noticed blemishes which rather put paid to the perfection. But much more importantly, Galileo saw that the spots were moving. They drifted across the face of the disc over a few hours, disappeared, and then returned, on the other side where they began. In today's idiom, he might have said "O.M.F.G!" The sun turns out to be a turning ball! It was a critical moment of unification in human culture, contributing more evidence to the realisation that all of the things and all of the stuff in the universe are fundamentally the same. The Copernican revolution was much more than star gazing: it reset humankind's understanding of the mystical, reinforcing that everything is ordered, and ordinary. Corporeal, and explicable to human minds. It's not the only time a radical, instant, disruptive re-framing of our place in the world has been delivered by astronomy.
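Before moving on, to put a rough number on the GPS effect described above, here is a back-of-the-envelope sketch in Python. The constants and the simple circular-orbit model are my own assumptions rather than figures from the article; it combines the gravitational speed-up of the orbiting clock with the smaller, opposing slow-down caused by its orbital speed.

import math

GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2 (assumed)
C = 299792458.0              # speed of light, m/s
R_EARTH = 6.371e6            # mean Earth radius, m (assumed)
R_ORBIT = R_EARTH + 20.2e6   # GPS orbital radius, roughly 26,600 km (assumed)

# Gravitational term: the orbiting clock sits higher in the potential, so it runs fast.
grav = (GM / C**2) * (1.0 / R_EARTH - 1.0 / R_ORBIT)

# Velocity term: the orbital speed makes the clock run slow (special relativity).
v = math.sqrt(GM / R_ORBIT)
vel = v**2 / (2.0 * C**2)

net = grav - vel                                         # fractional rate difference, about 4.5e-10
print(net, net * 86400 * 1e6, "microseconds per day")    # roughly +38 microseconds per day

A fractional offset of a few parts in ten billion, accumulating to tens of microseconds per day, is in the same ballpark as the "few parts in a billion" quoted above, and at the speed of light even microseconds translate into very real ranging errors.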
It wasn't really so long ago -- in the early 20th century -- that Edwin Hubble and his peers established that the 'nebulae' were actually galaxies just like the Milky Way, and that the universe therefore had to be tens of millions of times bigger than previously thought possible. This kind of revelation about our place in the scheme of things only comes from astronomy. Hubble may have been expensive but the images it continues to generate have done two things. By peering so deeply into space, it has exploded our notion of the universe's diversity and brought back the beauty of so many structures for all to see. Second, while pointing outward, it has, at least for me, made me appreciate, not unlike those iconic Apollo images of the earth from the moon, at the same time how unique our earth is and, because of that farsighted telescope and others here on earth, how unlikely it is, given that scope, that we are alone in this universe.
<urn:uuid:2af2eaec-ba32-40a3-adf8-614eb05069d5>
CC-MAIN-2017-04
http://lockstep.com.au/blog/2011/01/27/why-astronomy
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00087-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965436
908
2.6875
3
The macho nature of football makes it difficult for fogged and staggering players to take themselves off the field after concussive blows to the head. And even the most vigilant coaches and parents find it difficult to judge the severity of an impact to the helmet. But the Cambridge body-monitor company MC10 and Reebok have invented a skullcap with sensors and LED lights that can be worn under helmets. Called CheckLight, the device flashes yellow for a moderate blow and red for a severe blow. It also keeps a running count of less-severe blows, flashing a warning when the number crosses 100. CheckLight can’t replace the judgment of doctors and trainers, but it could be a crucial alert system — especially in school sports. Read the full article at the Boston Globe: http://www.bostonglobe.com/opinion/editorials/2013/08/11/helmet-lights-f....Back to all News
<urn:uuid:a4d6085f-8f33-4443-b2fb-42f0abfc17f2>
CC-MAIN-2017-04
http://www.northbridge.com/football-lighting-path-safer-game
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00141-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911831
195
2.609375
3
Converts dates between internal and external format.

p is the special processing operator. Can be any one of the following:

n is a number from 0 to 4 that specifies how many digits to use for the year field. If omitted, the year will have four digits. If n is 0, the year will be suppressed.

s is a non-numeric character to be used as a separator between month, date, and year. It must not be one of the special processing operators.

Dates are stored internally as integers which represent the number of days (plus or minus) from the base date of December 31, 1967. For example:

If you do not specify a special processing operator (see later) or an output separator, the default output format is a two-digit day, a space, a three-character month, a space, and a four-digit year. If you specify just an output separator, the date format defaults either to the US numeric format "mm/dd/yyyy" or to the international numeric format "dd/mm/yyyy" (where / is the separator). You can change the numeric format for the duration of a logon session with the DATE-FORMAT command.

Field 8 codes are valid but, generally, it is easier to specify the D code in field 7 for input conversion. Dates in output format are difficult to use in selection processing. If you are going to use selection processing and you want to use a code which reduces the date to one of its parts, such as DD (day of month), the D code must be specified in field 8. Field 7 input and output conversions are both valid. Generally, for selection processing, you should specify D codes in field 7. An exception is when you use a formatting code, such as DM, that reduces the date to one of its parts.

If no year is specified in the sentence, the system assumes the current year on input conversion. If only the last two digits of the year are specified, the system assumes the following years:
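As a rough illustration of the internal representation described above (a signed day count from the base date of December 31, 1967), here is a Python sketch. It models only the idea of the internal/external conversion; it does not reproduce the actual D conversion codes or the DATE-FORMAT behaviour.

from datetime import date, timedelta

BASE_DATE = date(1967, 12, 31)    # internal day 0

def to_internal(d):
    # External date -> internal integer; days before the base date come out negative.
    return (d - BASE_DATE).days

def to_external(n, sep=None):
    # Internal integer -> default external format "dd MMM yyyy",
    # or a numeric format when a separator character is supplied.
    d = BASE_DATE + timedelta(days=n)
    if sep is None:
        return d.strftime("%d %b %Y").upper()      # e.g. "01 JAN 1968"
    return d.strftime(f"%m{sep}%d{sep}%Y")         # e.g. "01/01/1968" (US order)

print(to_internal(date(1968, 1, 1)))   # 1
print(to_external(1))                  # 01 JAN 1968
print(to_external(1, "/"))             # 01/01/1968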
<urn:uuid:9d5ac27c-34a1-4850-a8ff-714fbc0e3154>
CC-MAIN-2017-04
http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/jql2_CONVERSION.D.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00169-ip-10-171-10-70.ec2.internal.warc.gz
en
0.80716
428
2.8125
3
Calvo P.,Affinity Foundation Animals and Health | Calvo P.,Autonomous University of Barcelona | Calvo P.,IMIM Institute Hospital del Mar dInvestigacions Mediques | Duarte C.,Amigos y Amigos | And 9 more authors. Animal Welfare | Year: 2014 Animal hoarding is considered to be an under-reported problem, which affects the welfare of both people and animals. Few published studies on animal hoarding are available in the scientific literature, particularly outside North America. The present study was designed to obtain data on animal hoarding in Spain, with a particular focus on animal welfare issues. Data were obtained retrospectively from 24 case reports of animal hoarding involving a total of 1,218 dogs and cats and 27 hoarders. All cases were the result of legal intervention by a Spanish humane society during the period from 2002 to 2011. Hoarders could be characterised as elderly, socially isolated men and women who tended to hoard only one species (dog or cat). Most cases presented a chronic course of more than five years of animal hoarding. The average number of animals per case was 50, with most animals being dogs. In 75% of cases the animals showed indications of poor welfare, including poor body condition, and the presence of wounds, parasitic and infectious illnesses. Amongst the hoarded animals aggression and social fear were the most commonly reported behaviours. To the authors' knowledge, this is the first report on animal hoarding in Spain and one of the first in Europe. Further studies are needed to fully elucidate the epidemiology, cross-cultural differences and aetiology of this under-recognised public health and welfare problem. More research might help to find efficient protocols to assist in the resolution and prevention of this kind of problem. © 2014 Universities Federation for Animal Welfare. Source
<urn:uuid:d212f333-b893-4f9e-bfdb-f2494d2e4fc5>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/affinity-foundation-animals-and-health-2456352/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00013-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943507
376
2.625
3
For some reason the power companies in this country are seen as an entity with some nefarious agenda by more people than they are willing to admit. This was the case long before the implementation of smart grids and when that technology was added to the mix, the flame got a little hotter. Privacy is a very important component and any violation of this privacy is going to face the wrath of any technology that violates it. One of the biggest benefits of smart meters is you will only be billed for what you use and not an approximation between meter readings. In the past meter readings were made very rarely, maybe once a year. This generally resulted bills being higher than the actual usage by the customer. The new smart grid technology provides options to connect your smart meter to home display units or your smartphone so you can see what you are using at all times. With this information you can control your consumption and find out your exact bill each and every month. This information can be used to implement plans and obtain services from energy suppliers to make your energy use more efficient and cost effective. The issue with smart grids is who controls the data that is being generated by your home. Generally the operators get access to the personal and non-personal data, and this is where issues of privacy start creeping in. The non-personal data is the voltage control, power quality and any other maintenance issues related to the power. The personal data not only includes your personal information for billing services, but also how you use power in your home. The information about your power consumption can be used to generate consumer profiles about you and sold to businesses. The biggest concern with this system is once it is in place your options are very limited. The meters are extremely secure equal to that of Internet banking profiles. So creating a protection profile from the meter will require some expertise and tampering with the system will probably bring the men from the power company to your home. The issue of the security level for these devices is being addressed in some European countries like Germany and the Netherlands. The question is why is so much security needed for a meter? That question is of course going to inspire more nefarious thought, whether it is justified or not. The fact that smart meters are being subsidized by the government doesn’t help matters either. George Orwell probably had a lot to do with instilling this subconscious paranoia about big brother in our psyche, but smart grids are only designed to improve power consumption and nothing else. We give more information on our Facebook (News - Alert) page than a smart meter will ever be able to collect. The fact is privacy is a precious commodity that is becoming scarcer as newer technology becomes available and one of the only ways to avoid it is if you live off the grid.
<urn:uuid:98503fd9-cc39-4379-9648-cc102d2acbeb>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/12/31/321118-power-companies-smart-grids.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00499-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960327
552
2.515625
3
As Google says, "Cross-site scripting (XSS) bugs are one of the most common and dangerous types of vulnerabilities in Web applications. These nasty buggers can allow your enemies to steal or modify user data in your apps..." So they have decided to help us learn how to exploit these kinds of vulnerabilities by creating a vulnerable web site at: There are 6 exercises to resolve.

Before starting to resolve these issues... Why should I know how to exploit an XSS vulnerability?
- To be more qualified in the security field.
- To make money. Currently, Google is paying up to $7,500 for dangerous XSS bugs discovered in their most sensitive products. But Google is not the only one paying a bounty for disclosed vulnerabilities. Others like Yahoo, Facebook or PayPal have the same policy of rewards for discovering bugs.

In this post, we are going to resolve 3 of the issues proposed by Google. In the next post, we will resolve the last ones.

That is the easiest exercise. Our input will be directly included in the page without proper escaping. By inserting the code below, we will be successful.

This exercise is an example of how to perform a persistent or stored cross-site scripting attack in a simple way.
<img src=x onerror=alert('BehindTheFirewalls')>

This exercise is a little complex because the user doesn't have an input to try to exploit the XSS. But what happens if we rewrite the URI? If we change "#1" to "#11111"... So, if we add #11111'onerror=alert('BehindTheFirewalls')> at the end of the URL, the code will be:
<img src='/static/level3/cloud#11111'onerror=alert('BehindTheFirewalls')>'.jpg' />
And the alert will appear. These are the three possible options to exploit this vulnerability.
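The phrase "without proper escaping" is the heart of the first exercise. As a hedged illustration, in Python rather than whatever the game's server actually runs, the difference between the vulnerable and the safe behaviour looks roughly like this:

import html

user_input = "<img src=x onerror=alert('BehindTheFirewalls')>"

# Vulnerable: the input is inserted into the page verbatim, so the browser
# parses it as markup and the onerror handler fires.
vulnerable_page = "<div>" + user_input + "</div>"

# Safer: HTML-escaping turns the payload into inert text.
escaped_page = "<div>" + html.escape(user_input, quote=True) + "</div>"

print(vulnerable_page)
print(escaped_page)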
<urn:uuid:6bfa3c16-1c50-4df9-98e1-8b488a728d4f>
CC-MAIN-2017-04
http://www.behindthefirewalls.com/2014/06/xss-game-by-google-exercises-1-2-and-3.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00556-ip-10-171-10-70.ec2.internal.warc.gz
en
0.836955
409
2.53125
3
January 2011 saw a rush of activity around the concept, defence and rules for cyber warfare. There was a report by the OECD, various conferences across the globe (most noticeably the Annual Security Summit in Munich), and the publication of a report by the EastWest Institute on the rules of engagement concerning cyber warfare. The key message in William Hague’s speech at the Munich Security Conference in February was the creation of cyber warfare rules. This concept – that cyber warfare can have some sort of Geneva Convention – is laudable, but is it practical? Let’s look at some of the issues. To start with, what is cyber warfare? It seems a popular buzzword at the moment, and its use and misuse feeds the confusion. In reality, we can take this over-simplified term and categorise it further. We can talk of cyber espionage, cyber terrorism and cyber war. Even then we end up over-simplifying events and activities that blur significantly at the margins. Ever since a single secret existed, people have tried to uncover it. Espionage is as old as mankind and unlikely to disappear soon. The only real difference between “normal” espionage and cyber espionage are the techniques. A modern Mata Hari is more likely to have a USB thumb drive down her garter than a spy camera. In fact, a modern Mata Hari is more likely to be a geek in a bunker than some sort of glamorous femme fatale. Espionage has never really had rules; it’s always enjoyed the deniability and third-party agent of fiction and fact. So with cyber espionage, what changes? It’s easier to do, there is less risk, it’s more scatter gun than rifle shot, and ultimately impossible to codify. It’s also unlikely to replace traditional espionage, but rather just enhance the capability and make protection from it more complex. By its very name, terrorism is not subject to rules. Terrorism intends to bring fear to a population though unacceptable acts, and the thought of terrorists obeying a voluntary code of conduct seems bizarre. We also enter one of the blurred areas in any discussion of terrorism – is it state-sponsored, is it rebellion, is it freedom fighting? Events in Libya and Egypt point to the difficulties here. No matter where we go on these vectors, we are left with the unquestioning belief in the impossibility of defining rules that everyone would stick to. Yes, you can have rules about who can take who to court, but that is not a deterrent or defence, especially if the ‘enemy’ wins its ‘battles’. Cyber war is possible. One nation-state can attack another nation-state's infrastructure, communication and wealth with a cyber attack. The Russia/Georgia conflict demonstrated this effectively. However, in the same way that you cannot subjugate a nation with just air power, cyber war is only effective as part of a multifaceted kinetic battle (where real things go bang) as well. Russia did significant cyber damage to Georgia’s infrastructure and morale, but in the end it was the troops on the ground that prevailed, albeit against a weakened enemy. So will we see nation states exclusively waging cyber wars on other nation states in the future? I believe it is unlikely in any formalised manner. I can foresee state-on-state espionage and third-party terrorism – deniable, very difficult to attribute, and difficult to defend against. Regardless, in the event of cyber war as part of a kinetic battle, there is great value in defining which rules of engagement apply. 
Is it acceptable to take down an air traffic control system when civilian transport relies on the same infrastructure? Is it acceptable to take out national water or electricity systems? In these areas, rules of engagement make sense. Many nation-states abide by the existing conventions, again perhaps with some blurred edges. Categorising what is legitimate and what is not would be a valuable step forward. Is the concept of cyber war hype? Some suggestions have been made that cyber war is a creation of the military contractors as a way to generate revenues by creating panic in worried administrations. The OECD’s report poured cold water on the concept of cyber Armageddon, but the challenge is that the genie is out of the bottle here. Stuxnet proves it’s doable, and with all things scientific and military, once you prove it can be done, everyone’s doing it – so it’s real. Perhaps its not worthy of the hysteria displayed in some quarters – stories of infecting satnavs and causing cars to explode do veer on the ridiculous – but any intelligent state with secrets and enemies must develop a sense of paranoia and do its utmost to protect itself and its citizens. What’s fair game in a cyber attack – not just the State itself? In cyber war we can see that rules might exist, but in terrorism and espionage where are the boundaries? Attacking the banking systems, water or electricity might seem fair game to terrorists, but where does state or state-sponsored espionage stop? Stealing commercial intellectual property, be it designs, financials or strategic plans, was one of the objectives of the Aurora attacks in 2010. Again, many would believe that these were state sponsored. But so what? Against the utopian ideal of proportionality and a set of conventions in cyber space, what would you do differently if they existed? Would you say ‘great, I can switch off my firewalls’, ‘superb, I can remove that pesky encryption’, or would you say ‘actually, I need to focus as much on defence as I do now, maybe more because the game has changed ‘. Rules are great, but reality means that not everyone plays to them and as a responsible security professional would you last long if you said to your minister or CEO: ‘it’s not fair, I trusted they’d play by the rules, it’s not my fault we got hacked’. Well, would you? As the OECD stated in their report, your defence is not the rules of engagement but practical things you do in protection. Putting enough emphasis on security when you design systems, putting effective procedures in place and reviewing them regularly, perhaps accepting that 100% protection is now a myth and you need to have the resources and skills in place to perform effective incident response when (not if) they get through. The concept of cyber warfare, real or not, rules or not, just made your job harder. Frank Coggrave is general manager EMEA at Guidance Software. Coggrave has more than 20 years of experience in the security industry, working with a number of high-tech companies, including Telelogic, Continuus Software, Jacada, Texas Instruments and Websense. He holds a BSc in computer science from Brunel University.
<urn:uuid:d1683a3a-1a60-48dc-9eea-d1809a6ec6cf>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/opinions/comment-cyber-war-is-it-defensible/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959669
1,426
2.59375
3
Extracts a portion of the enclosed tokens.

Specify the starting character index: Index 0 is the first character. Positive indexes are an offset from the start of the string. Index -1 is the last character. Negative indexes are an offset from the last character toward the start of the string. For example, if the start is specified as -2, then it starts reading one character from the end. If -3 is specified, then it starts two characters from the end.

Length is the number of characters from the start to include in the substring. Negative numbers are interpreted as (total # of characters + length) + 1. For example, -1 represents the entire length of the original string. If -2 is specified, the length is the entire length minus 1. For a string with 5 characters, a length of -1 = (5 + (-1)) + 1 = 5, -2 = (5 + (-2)) + 1 = 4, etc.

This example sets the e-mail address to be firstname.lastname@example.org, where the name equals the first character of the Given Name plus the Surname. The policy name is Policy: Create E-mail from Given Name and Surname, and it is available for download at the Novell Support Web site. For more information, see Understanding Policies for Identity Manager 4.0.2. To view the policy in XML, see 001-Command-SetEmailByGivenNameAndSurname.xml. The Substring token is used twice in the action Set Destination Attribute Value. It takes the first character of the First Name attribute and adds eight characters of the Last Name attribute together to form one substring.
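As a rough model of the indexing rules described above (an illustration only, not the actual Identity Manager implementation), the start and length arguments can be read like this:

def substring(s, start, length=None):
    # start is 0-based; a negative start counts back from the last character (-1 = last).
    begin = start if start >= 0 else len(s) + start
    if length is None:
        return s[begin:]
    # A negative length is read as (total number of characters + length) + 1,
    # matching the examples above: for a 5-character string, -1 -> 5 and -2 -> 4.
    count = length if length >= 0 else len(s) + length + 1
    return s[begin:begin + count]

# Hypothetical attribute values for the e-mail example in the policy:
given_name, surname = "Given", "Surname"
local_part = substring(given_name, 0, 1) + substring(surname, 0, 8)
print(local_part.lower() + "@example.org")    # gsurname@example.org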
<urn:uuid:6dceb4a2-3685-494f-9d70-bba1fafb3ce9>
CC-MAIN-2017-04
https://www.netiq.com/documentation/idm402/policy_imanager/data/bxj6uwd.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00216-ip-10-171-10-70.ec2.internal.warc.gz
en
0.831422
356
2.640625
3
Astronauts aboard the International Space Station took a page from MacGyver's book recently. After a crucial component failed and astronauts failed to secure a bolt during an eight-hour spacewalk, astronauts rigged a makeshift device for a second attempt at repair, Tecca.com reported. On the first attempt, two astronauts tried to replace a power switching component needed for routing electricity from two of the station's eight solar arrays to other systems. The astronauts were unable to secure the component to the station because they suspected one of the two bolt holes was filled with metal shavings, so they tied it down with straps as a temporary fix. Then, using a toothbrush attached to a makeshift pole and a can of compressed nitrogen, astronauts Sunita Williams and Akihiko Hoshide managed to remove the metal shavings from the hole and properly bolt the component to the station. If the attempt had failed, astronauts would have been required to remove the component and bring it inside the International Space Station for inspection. Photo: Japan Aerospace Exploration Agency astronaut Akihiko Hoshide participates in the mission's third session of extravehicular activity. Courtesy of NASA.
<urn:uuid:b11eaecf-c79f-4246-9af4-980b18de06fc>
CC-MAIN-2017-04
http://www.govtech.com/technology/ISS-Astronauts-Save-Day-Toothbrush.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930888
237
2.96875
3
For many years those who concerned themselves with an organization’s cyber security warned of threats, both internal and external, urging organizations to adequately protect themselves. Unfortunately the message either went unheeded, or the security measures implemented were insufficient, as we’ve seen headline after headline of victims who’ve suffered a breach. While the severity may vary, each is an example of the concerned organization’s security measures being found wanting. And the battle has only just begun as, in recent months, security professionals have amplified the cry following attacks from DigiNotar, Anonymous and WikiLeaks – and the list goes on. Organizations need to get a grip on their borders, now, or risk losing the cyber fight and becoming another statistic. The threats an organization faces are multifaceted. While it’s possible, and perhaps tempting, to spend millions plugging every hole the reality is it’s impractical. Instead a more common sense approach to security is required. At the heart of the problem is that, with every breach incident, the likelihood of a user’s identity having been compromised increases. Therefore, it doesn’t matter how intrinsic the overarching security approach, it is fundamentally flawed if you fail to establish that the person gaining access is who they claim to be. For example, while a VPN provides a secure communication tunnel, if a hacker can mimic a legitimate user and use their key to open the pipe then everything traveling down it is insecure. Physical security is common sense in a virtual world too In the real world we regularly use Chip and Pin to make a purchase in the high street, or withdraw cash from an ATM. Quite simply this is two factor authentication in action in the real world. Before clarifying what constitutes two factor authentication, it is probably worth stating what it is not. A password on a PC (often referred to as single factor authentication) is the equivalent of a basic lock – only slightly better than nothing. It stands to reason that two passwords – even if one is a data pattern or random characters from a memorable phrase, it is actually just two locks. Although it may slow an intruder by having another password, common sense should tell you it still isn’t adequate. Two factor authentication, in its very basic sense, is the combination of two different elements from a choice of three: - Something you know – such as a pin or password - Something you own – such as a key, token or the chip embedded in a credit card - Something specific to the person – such as a fingerprint, or retina. The downside of something specific to the person, or biometrics as it’s widely referred, is hardware. A physical reader needs to be installed at each point of entry from where a user may authenticate. In today’s modern society users want to use a myriad of devices, and places, to access the corporate network – be it laptop, iphone, cyber café PC, etc. It makes the “specific’ element either very expensive when accommodating every possible access point or simply impractical. A further consideration, given the changing pace of technology, is whether a biometric system will be adaptable to tomorrow’s devices? This leaves the combination of something you know and something you own as the only practical two factor authentication solution. The keys to the vault Something you own invariably means a security token. There are two types – hardware-based i.e. 
a physical token and software based, such as an SMS-based token received on a mobile phone – often described as a tokenless two-factor-authentication system. I assume you wouldn’t walk away, leaving your car with the key in the ignition, or even glued to the outside, and expect it to still be there when you get back? Well, if security tokens are carried in the same bag as the lap top – and let’s face it a lot of people will do that, it’s effectively the same thing. If the laptop goes missing, who ever finds it also has the key to make it work – another example of how common sense can influence security practices. From a user perspective, especially with multiple accounts spanning both work and personal aspects, if each requires a physical token then people will end up carrying around a necklace of tokens – inconvenient and cumbersome. However, by harnessing SMS technology, organisations can utilise existing mobile technology – whether corporate or personally owned, to replicate the physical token. And there’s no reason why dozens of soft tokens can’t be carried on a single device. Today, practically everyone has a mobile phone with many reliant on its varied functionality. The result is it is rarely forgotten and, should the user misplace the device, the loss is quickly realised and reported. SMS also offers cost effective deployment over the costs of sending and managing physical tokens. Futhermore, with pre-load funcationality, a new code is automatically generated when a log in attempt is made. This eliminates any concerns over SMS delays or blackspots. Moreover, the receipt of this new code acts as a further security layer as a user is notified that their username has been used to gain access to the system (whether successful or not). They can then raise an alert with the administrator to the potential threat and mitigation steps quickly instigated – simply not possible with a physical token. Activate the alarm An alarm is a great deterrent but is only useful when switched on – common sense really. The same principle applies to your computer. If it’s turned on, and logged on to the network, the “alarm’ is effectively deactivated. Anyone who happens upon your device will have carte blanche to any applications and systems you’ve authenticated to. The damage a malicious person can cause, in a matter of moments, could have repercussions far greater than anything which may occur offline. Infact, an unscrupulous individual could potentially do more long term harm with just a few minutes access to your unmanned PC than a few hours unrestricted access to your home! If there’s one thing I firmly believe it’s that there’s no perfect security solution and what works for one organisation will not necessarily be suitable for another. However, if you get the foundations right and apply a balanced approach that makes it difficult for malicious individuals to cause them harm. A security system can do only so much. Despite the new gadgets and available technology, common sense will always remain your best defense.
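To make the "something you own" factor a little more concrete, here is a rough sketch of how a server might generate and verify a short-lived one-time code delivered by SMS. It is a simplified illustration of the idea only, not any vendor's actual product; a real deployment adds rate limiting, replay protection, secure key storage and delivery handling.

import hmac, hashlib, secrets, time

def issue_code(user, secret_key, window=120):
    # Derive a 6-digit code tied to the user and the current time window.
    counter = int(time.time() // window)
    message = f"{user}:{counter}".encode()
    digest = hmac.new(secret_key, message, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"   # sent to the phone by SMS

def verify_code(user, secret_key, submitted, window=120):
    # Accept the code only within the current window; compare in constant time.
    return hmac.compare_digest(issue_code(user, secret_key, window), submitted)

server_secret = secrets.token_bytes(32)          # known only to the server
sms_code = issue_code("alice", server_secret)    # delivered to the user's registered phone
print(verify_code("alice", server_secret, sms_code))   # True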
<urn:uuid:c9df5f1d-1381-468c-bc81-6cab58cb9881>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2011/09/23/strong-security-is-just-common-sense/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936623
1,345
2.78125
3
The good news about public hotspots is that they're everywhere. The bad news is that they're not secure. John Gates explains how to get past their problems and still be able to conduct secure computing over an insecure hotspot. Wireless broadband Internet access via hotspots is convenient for both the casual surfer and the Internet-dependent teleworker. Unfortunately, current security technologies integrated into wireless LAN products offer insufficient protection here, and mobile users must be wary when accessing the central company network via a hotspot. What is necessary is a security solution that protects the teleworkers' place in all phases of connection construction on hotspots-without risky, foreboding configurations and without the help of users or administrators. This article will illuminate the effectiveness of VPN security mechanisms, data encryption, strong authentication and personal firewalls. Plus, it will show how optimal protection can be achieved by dynamically integrating each of these Risks in the WLAN Each user can access public WLANs with correspondingly equipped terminals. The terminals automatically obtain an IP address, in the sense that they recognize the SSID (service set identifier) of the WLAN. Thus, they put themselves within range of the access points and are able to gain access permission onto the WLAN. Data security, or protection of participating devices from attacks, is not guaranteed by the WLAN operator. Security is limited to monitoring authorized network access in order to eliminate misuse of the server administration. User identification serves solely for the acquisition and the accounting of time online. However, how does it look regarding the protection of sensitive information during data transmission? How can the PC optimally seal itself off from attacks from the WLAN and the Internet? Because the actual security risk on the hotspot originates from having to register with the operator outside the protected area of a VPN, as a rule it has to take place by means of the browser. During this time frame, the terminal device is unprotected. This stands in opposition to a company security policy that prohibits direct surfing on the Internet and that only permits certain protocols. Basically, VPN mechanisms and data encryption serve to protect confidentiality. The corresponding security standards are IP Security tunneling and AES (Advanced Encryption Standard) encryption for data, and X.509 v3 for access protection. Additional security mechanisms such as certificates in a PKI (public-key infrastructure) or onetime password tokens complement or replace the usual user ID and password. A personal firewall offers the required protective mechanisms against attacks from the Internet and from the public WLAN. Here, stateful packet inspection is critical. If this is not provided, it is not advised to use a hotspot for mobile computing. VPN client and external personal firewall For a VPN solution with a separately installed firewall, the ports for HTTP/HTTPS data traffic to the personal firewall must be activated during hotspot registration. This can take place in three different ways: 1. The firewall rules for HTTP/HTTPS are firmly preconfigured in order to guarantee the functionality with the desired hotspots. 2. The configuration allows that the ports are opened for HTTP/HTTPS as needed for a certain time window (such as 2 minutes). 3. 
The user has administration rights and independently changes the firewall rules.

In all three cases there exists the risk that the user may surf outside of the secure VPN tunnel on the Internet and encounter destructive software such as viruses, worms or Trojans. Temporarily opening the firewall creates the danger of deliberate misuse by the user on the basis of multiple actuations of the time window. If the personal firewall fundamentally permits no communication outside of the configuration, then the user has to activate the corresponding firewall rules for the duration of registration on the hotspot. This requirements-based opening of the personal firewall involves the greatest risk of misconfigurations. The user must have a firm grasp of the exact changes being made and the exact environment in which they are made. Employee security awareness and technical know-how determine the quality of the security level.

A large security risk also exists when user data (user ID and password) is spied out externally on the hotspot during the registration process. With the aid of a notebook computer, a hacker can simulate both the hotspot and the WLAN SSIDs. If a user then registers on a hotspot, he does not land at the access point of the provider but rather on the notebook of the hacker. Because of the previously mirrored access point Web pages, the user assumes that he is authenticated on the hotspot. However, in reality, he is on the notebook of the hacker and his personal registration data is now exposed. Providers always attempt to protect the hotspot registration pages through SSL (Secure Sockets Layer) processing (HTTPS), but that does not always succeed. For example, a user who arrives at a manipulated hotspot obtains the following report from the browser: "A problem exists with the security certificate on the Web site." In the background of this report, the attacker has only recreated the hotspot registration page and does not use the original certificate. For the lay person, this may not be recognizable at first glance, and it is incumbent on him to decide whether or not he should trust the certificate. To avoid placing a user in the position of having to make this decision, the hotspot registration should flow transparently before construction of the VPN. A solution that has proven itself in practice is the so-called registration script that takes over the transmission of the registration and the inspection of the certificate at the hotspot.

The requirements for the functionality of a personal firewall with mobile computing on WLANs are multilayered. They also apply to the critical phases during the registration and sign-off process on the hotspot. Requirements must be known at the earliest possible time and should be in place from system start. They also must remain when no VPN connection exists or when it has been deactivated. Furthermore, the user should be safeguarded against arbitrarily reconfiguring or completely shutting off the personal firewall.
<urn:uuid:2fd58074-c2fa-4d99-ada7-11971ec1c258>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/How-to-Avoid-Security-Risks-for-Mobile-Computing-on-Public-WLANs
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894952
1,322
2.71875
3
By Girish Solanki

Surface active agents, or surfactants as they are commonly known, are multi-functional chemical entities that have a wide range of applications in household detergents and personal care products and serve as vital components in a multitude of industrial and institutional sectors. The food industry also comprises one of the largest end users of surfactants, but the kind and quantity of surfactants that can be used are limited by considerations of contamination and potential toxicity. Besides being used as part of cleaning formulations in the food industry, surfactants also find more direct use as emulsifiers in food formulations and, to a lesser extent, as fat substitutes.

There are several kinds of surfactants in use; however, ionics (anionics, cationics, amphoterics) and non-ionics dominate the market. In recent years the traditional stronghold of anionic surfactants in several sectors has considerably weakened and non-ionics have seen a resurgence. However, non-ionics have traditionally comprised the most common type of surfactants used in the food industry (food emulsifiers). They commonly include mono- and di-glycerides; derivatives such as acetylated, succinylated and diacetylated tartaric esters of distilled monoglycerides; lactylated esters; sorbitan esters; polysorbates; propylene glycol esters; sucrose esters; and polyglycerol esters, among others.

Emulsifiers are additives that allow normally immiscible liquids such as oil and water to form a stable mixture and prevent phase separation. They are widely used in the food industry to perform several functions, as listed in the table below. Within the food industry, bread and bakeries are the two segments utilizing the largest volume of surfactants.
<urn:uuid:4ad2ba04-6181-42d1-a8e4-e658aee0f413>
CC-MAIN-2017-04
http://www.frost.com/sublib/display-market-insight-top.do?id=LEAW-4ZCE3G
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00142-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952084
368
3.109375
3
Andre M.R., Sao Paulo State University | Baccarim Denardi N.C., Sao Paulo State University | Marques de Sousa K.C., Sao Paulo State University | Goncalves L.R., Sao Paulo State University | And 8 more authors. Ticks and Tick-borne Diseases | Year: 2014

Recently, tick and flea-borne pathogens have been detected in wild carnivores maintained in captivity in Brazilian zoos. Since free-roaming cats are frequently found in Brazilian zoos, they could act as reservoirs for arthropod-borne pathogens, which could be transmitted to endangered wild carnivores maintained in captivity in these institutions. On the other hand, stray cats in zoos may play a role as sentinels for pathogens that circulate among wild animals in captivity. The present work aimed to detect the presence of Anaplasmataceae agents, hemoplasmas, Bartonella species, piroplasmas, and Hepatozoon sp. DNA in blood samples of 37 free-roaming cats in a Brazilian zoo. Three (8%) cats were positive for Anaplasma spp. closely related to Anaplasma phagocytophilum; 12 (32%) cats were positive for hemoplasmas [two (5%) for Mycoplasma haemofelis, five (13.5%) for Candidatus Mycoplasma haemominutum, and five (13.5%) for Candidatus Mycoplasma turicensis]; 11 (30%) were positive for Bartonella spp.; six (16%) were positive for Babesia vogeli and one (3%) for Theileria sp. Coinfection with multiple arthropod-borne agents was observed in the sampled cats. None of the sampled cats were positive for Ehrlichia spp., Cytauxzoon spp., or Hepatozoon spp. by PCR. This is the first molecular detection of Babesia vogeli and Theileria sp. in domestic cats in Brazil. Control of the population of free-roaming cats in these conservation institutions is much needed to prevent potential transmission to endangered wild animals maintained in captivity, such as neotropical wild felids, as well as to human beings visiting zoos. © 2014 Elsevier GmbH.
<urn:uuid:42f76c29-dd92-4394-82b8-28d6586636f8>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/fundacao-parque-zoologico-de-sao-paulo-fpzsp-1586443/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00262-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907827
497
2.96875
3
The IEEE has published its standard for the use of white spaces for wireless broadband. White spaces have yet to be commercialized in the U.S., despite the advocacy of the technology by a variety of companies, including Microsoft and Google. Tests of some of the white space transmission systems developed in the past failed to assure that the systems would not interfere with current users of the spectrum. The IEEE attests that systems relying on the new standard will not interfere with the signals of adjacent broadcast TV stations.

The standard for wireless regional area networks (WRANs) is designated IEEE 802.22. The standard takes advantage of the otherwise unused spectrum set aside as buffers between TV channels, aka white spaces. Written to be applicable to all markets internationally, the standard takes advantage of the favorable transmission characteristics of the VHF and UHF TV bands to provide broadband wireless access over a large area extending up to 60 miles (about 100 km) from the transmitter. Each WRAN can deliver up to 22 Mbps per channel without interfering with reception of existing TV broadcast stations. This technology is expected to be useful for serving less densely populated areas, such as rural areas, and developing countries, where most vacant TV channels can be found.

IEEE 802.22 incorporates advanced cognitive radio capabilities, including dynamic spectrum access, incumbent database access, accurate geolocation techniques, spectrum sensing, regulatory-domain-dependent policies, spectrum etiquette and coexistence for optimal use of the available spectrum, the IEEE said.
<urn:uuid:e31bfd97-24d7-4979-b937-2e69c37282b5>
CC-MAIN-2017-04
https://www.cedmagazine.com/news/2011/07/ieee-intros-standard-white-spaces
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902054
298
2.9375
3
Los Angeles County is the largest county in the nation. Its population of approximately 9.9 million is exceeded by only eight states. There are 88 cities in L.A. County, covering a geographic span of 4,081 square miles. Yet 65 percent of the county -- home to a million people -- remains unincorporated. Those people and the citizens of some 40 of the 88 cities look to the men and women of the Los Angeles County Sheriff's Department -- the largest sheriff's department in the world -- for police protection.

To handle the enormous number of calls generated by such a Herculean endeavor, the sheriff's dispatch system was overhauled in the early '90s. The Mobile Digital Communications System (MDCS) that resulted allows patrol officers to receive calls and acknowledge them through a mobile digital terminal (MDT). The MDT can also query online against justice agency databases such as the Department of Motor Vehicles and the Wanted Person System.

Unfortunately, the deployment of one tool sometimes renders another obsolete or, worse, unusable. So it was with the Mobile Digital Communications System and the original Regional Allocation of Police Services System (RAPS). RAPS is a data-management system that tracks the activities of deputies in the field for the purposes of billing contract cities and determining the appropriate allocation of personnel. The old RAPS system was housed on a mainframe and obtained its data from paper logs prepared by deputies. This data was used to justify additional sales of service to contract cities. The reports also helped law enforcement determine if a particular area needed more law enforcement attention. However, according to Sergeant John Aerts, who was responsible for billing some of the contract cities, the reports were often late. "I would get the reports a month late. In May I would know that I had been 600 minutes short in a particular city [in April]." But once MDCS eliminated the need for the paper log, those reports went from late to non-existent. Suddenly, L.A. County had no way to track deputies, gather data or compile statistics.

The solution was a program that could receive data directly from the dispatch system, store it and present it in a useful manner. The system would bear the name RAPS, the same name as the system it replaced. David Ramirez, currently the Data Center manager for the L.A. County Sheriff's Department, was working for the county at that time as a consultant to the sheriff's department and became the main developer of the system. "We developed an application using Oracle RDBMS and tools that captured the data directly from the dispatch system," said Ramirez. "It still carries the same name, but is radically different technology."

Because it was to be an enterprise-wide system, a steering committee was appointed. Twelve RAPS coordinators were selected to serve on the steering committee, one of whom was Sgt. Aerts. Ramirez considered this an asset. "We would not have been able to do it without John's expertise in the department's business practices ... He has a better understanding of community needs than anyone and wanted to make sure the system could provide statistics to justify the allocation of additional manpower in the communities." Sgt. Aerts looked at RAPS as a way to make life easier. "It gives you a daily or monthly look at exactly where you are. You know if you are short and have to add cars in a particular area." RAPS captures data from the MDCS in a download every 24 hours at 4 a.m.
The data is processed and stored and is available online via Oracle's Forms graphical user interface. "It was designed," Ramirez said, "to be intuitive and totally user friendly." There are currently about 10 years' worth of data on the system. The data is up-to-date within the 24-hour timeframe that it takes to be downloaded from MDCS. While MDCS allows for real-time inquiries, the data is only available for seven days and there is no historical record of changes to it. RAPS captures and processes all changes to the records during the seven days they are retained in MDCS.

RAPS captures data such as how many times a patrol vehicle has been dispatched to a particular location and how much time deputies spend in various activities. From the moment he or she signs on to the system, the deputy's time is tracked. Each time the deputy acknowledges a call or begins or ends an activity, a time stamp is created. Wendy Harn, assistant director of Management Information Services for the sheriff's office, is a user of the system. "RAPS basically automated the Deputy Daily Log, which was a manual system of logging all deputy activities on a shift," she said. "It contains call history by location, observation activity and detail activity. Information is available by location, unit, station, call types, etc."

Once data is entered, it cannot be changed. "RAPS is a read-only system," said Ramirez. "The user has the ability to download the data onto a spreadsheet and massage it if desired. But the data within RAPS is legally binding and must reflect the original MDCS data." Harn, whose department is responsible for reporting crime statistics, the crime analysis program and GIS, said RAPS helps the communities within the county because it "allows for more efficient monitoring of the types of service being provided and where." She also believes it aids officer safety by providing online access to address history, so an officer knows in advance whether there have been previous problems in a particular area and whether or not back-up will be needed.

Overall, RAPS provides the Los Angeles County Sheriff's Office with data showing how deputies spend their time. It also ensures the contract cities get their money's worth, helps protect officers by providing historical information about people and places, and improves public safety by putting police protection where it is needed most. Car 54, we found you.
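To make the idea of time-stamped activity records concrete, the following minimal Python sketch totals deputy minutes per contract city from a handful of invented records. The field layout and the example records are hypothetical; the article does not describe RAPS's actual schema.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical activity records: (city, activity, start, end) as ISO timestamps.
    records = [
        ("Lakewood", "patrol",   "2001-04-02T08:00:00", "2001-04-02T09:30:00"),
        ("Lakewood", "dispatch", "2001-04-02T09:45:00", "2001-04-02T10:15:00"),
        ("Cerritos", "dispatch", "2001-04-02T08:10:00", "2001-04-02T08:50:00"),
    ]

    def minutes_by_city(rows):
        """Sum elapsed minutes per city from time-stamped activity records."""
        totals = defaultdict(float)
        for city, _activity, start, end in rows:
            t0 = datetime.fromisoformat(start)
            t1 = datetime.fromisoformat(end)
            totals[city] += (t1 - t0).total_seconds() / 60
        return dict(totals)

    print(minutes_by_city(records))   # e.g. {'Lakewood': 120.0, 'Cerritos': 40.0}

A report like the one Sgt. Aerts describes would simply compare totals of this kind against the minutes of service each contract city has purchased.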
<urn:uuid:3753a6c5-b05b-4c71-8686-91528f415a76>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Car-54-Where-Are-You.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00382-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962411
1,214
2.625
3
NIST team sees in stereo
By Patricia Daukantas - Aug 26, 2001

Visualization setup lets researchers walk through concrete

[Photo caption: Steven G. Satterfield, a NIST computer specialist, studies a 3-D visualization of idealized particles inside wet concrete using a RAVE device. The aim of the visualizations is to improve research, NIST's Judith E. Devaney says.]

[Photo caption: NIST's Peter M. Ketcham views a 3-D simulation of a Bose-Einstein condensate, a state of matter in which a cluster of supercold atoms starts to behave as a single entity.]

A team of computer scientists at the National Institute of Standards and Technology is inventing new ways to study the most common -- and the rarest -- substances on Earth. The scientists specialize in visualization techniques to show their colleagues in physics and materials science the results of complex simulations running on powerful parallel computers. They can image, for example, random movements of particles inside wet concrete. They can picture small knots of atoms in a state of matter so unusual that it wasn't created in the laboratory until 70 years after theorists predicted it.

On the surface, the scientific collaborations appear very different, said Judith E. Devaney, leader of the Scientific Applications and Visualization Group at NIST's Information Technology Laboratory in Gaithersburg, Md. But the specialties -- parallel computing, data mining and visualization -- are alike because all involve pattern recognition.

Math foundation

Scientists at other NIST labs in Gaithersburg and Boulder, Colo., devise the basic algorithms for the virtual experiments. Programmers in Devaney's group convert the models to detailed programs for NIST's parallel computers. Then the visualization specialists transform the data sets into pictures or movies for easier understanding. Devaney and her colleagues regard the process as a feedback loop that proceeds from theory through development of a basic model, sometimes altering the theory. 'It's a fairly tight loop because the goal is always to have more realistic models,' Devaney said. 'You want to make sure the scientists get what they need so they can move on to the next step.'

Devaney's group collaborates with NIST's Building and Fire Research Laboratory to study concrete, the most widely used manmade product. The building research lab recently organized a consortium called the Virtual Cement and Concrete Testing Laboratory, with a Web site at ciks.cbt.nist.gov/vcctl. Companies join the consortium to get access to NIST's expertise. 'The computer codes are at the point now where it is considered realistic' to move concrete testing into the virtual world, Devaney said. Simulating new types of concrete, instead of mixing up physical samples, not only saves money but also could lead to new forms of concrete with greater crack resistance or special colors.

Devaney's group is developing the visible cement data set, a virtual experiment depicting hydration, or formation of chemical bonds between water molecules and other substances in the mix. When the data set is finished later this year, it will go on the Web for all to use, Devaney said. NIST concrete researchers also participate in another area of the visualization group's research: so-called immersive computing, in which scientists get up close and personal with large, colorful representations of their data.
In the photograph on Page 1, NIST computer scientist Steven G. Satterfield demonstrates a 3-D cement flow based on a numerical simulation by materials scientist Nicos S. Martys. The model computed how ellipsoidal cement particles, oriented randomly within a cube-shaped volume, would line up based on shear forces. In three dimensions, the multicolored, blimplike objects followed jagged lines like beads on a wire. The colored lines showed the paths. To get that immersive feeling, the NIST visualization group uses a Reconfigurable Automatic Virtual Environment (RAVE) device from Fakespace Systems Inc. of Kitchener, Ontario.

Getting the groove on

The setup consists of an 8-foot-square screen with a 1,280- by 1,024-pixel display. A 12-processor SGI Onyx 3000 visual supercomputer in the RAVE system generates two images, one for each eye, with polarized glasses to help the observer's brain assemble the two images into a single 3-D vision. Casual viewers could use plain polarized glasses, but concrete researchers might turn to the wired Crystal Eyes headset from StereoGraphics Corp. of San Rafael, Calif., which tracks the observer's head movements. 'The image actually changes as you move your head around,' Satterfield said, and observers feel as if they are squeezing through the wet concrete. A flashlight effect in the RAVE system's control wand casts a white spot where the wand points.

Wiggle control

The software behind the RAVE hardware is a graphics file loader called DIVERSE, for device-independent virtual environments: reconfigurable, scalable, extensible. The open-source software came from Virginia Polytechnic Institute, at www.diverse.vt.edu. DIVERSE reads the positions of the head-tracking goggles and the control wand. 'Scientists work on the experiments, they work on the math, they have this sort of in-their-head notion about what's going on,' Satterfield said of the immersive environment. 'When you bring it to life, they say 'Oh yeah, that's exactly it,' or 'No, that's not it,' or 'That's it, but that thing's wiggling funny like it's not right.' And then they go back and look at the math.' The next step in improving the simulations will be to replace the ellipsoids with more realistic particles of uneven size, ragged shapes and differing chemical properties, Satterfield said.

Devaney's group has posted some Apple QuickTime versions of the concrete flow simulations at math.nist.gov/mcsd/savg/vis/concrete/concrete.html. The movies lack the 3-D effect and interactivity.

Peter M. Ketcham, another computer scientist in the visualization group, said he works with NIST physicists to picture their models of the microscopic phenomenon known as Bose-Einstein condensation. A Bose-Einstein condensate is a state of matter in which a cluster of supercold atoms starts to behave as a single entity at a few billionths of a degree above absolute zero. Scientists predicted the quantum-physics effect in the mid-1920s, but not until 1995 was the phenomenon demonstrated. Ketcham worked with six NIST colleagues and a collaborator from the University of Washington to turn the physicists' computer models into movies. Their 3-D simulation of a cluster of rubidium atoms contains 8 million data points. Going against custom, NIST designated low-density regions as bright and high-density regions as dark to see if the low-density areas would develop tiny whirlpools, or vortices. In the simulation, the vortices showed up as bright vertical, tornadolike bands. At the time of the visualization research, physicists were hotly debating the existence of such vortices, Ketcham said.
An equatorial ring in early versions of the simulation had no physical reality. 'Essentially it was the result of a mistake in the simulation,' he said. The feedback loop between the visualization group and the researchers brought improvements to the simulation, and the ring disappeared.
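For readers curious about the mechanics of the stereo display described above, the sketch below shows one common way a renderer derives two eye positions from a tracked head position and an interpupillary distance. The vector math and numbers are a generic illustration and are not taken from the NIST or DIVERSE software.

    import numpy as np

    def eye_positions(head_pos, view_dir, up, ipd=0.065):
        """Return (left_eye, right_eye) camera positions for a stereo pair.

        head_pos : tracked head position in metres
        view_dir : unit vector the viewer is looking along
        up       : unit vector pointing 'up' for the viewer
        ipd      : interpupillary distance in metres (about 65 mm is a common default)
        """
        view_dir = view_dir / np.linalg.norm(view_dir)
        right = np.cross(view_dir, up)              # viewer's rightward direction
        right = right / np.linalg.norm(right)
        offset = right * (ipd / 2.0)
        return head_pos - offset, head_pos + offset

    # Example: a viewer 2 m in front of the screen, looking straight ahead.
    head = np.array([0.0, 1.7, 2.0])
    left, right = eye_positions(head, view_dir=np.array([0.0, 0.0, -1.0]), up=np.array([0.0, 1.0, 0.0]))
    # Each frame is rendered twice, once from each position, and the polarized
    # glasses route the left image to the left eye and the right image to the right.
    print(left, right)

With head tracking, head_pos and view_dir are updated every frame, which is why the image appears to change as the observer moves around.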
<urn:uuid:ec3bf428-8aae-4ced-a51d-667f64840e37>
CC-MAIN-2017-04
https://gcn.com/Articles/2001/08/26/NIST-team-sees-in-stereo.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00106-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908099
1,567
2.625
3
A $3.6 million EU project investigating how data centers can be designed and operated to make more efficient use of renewable energy has been launched. The three-year RenewIT project plans to develop a web-based planning tool that will help data centre operators understand the costs related to building a facility that uses renewable energy, such as wind, solar and biomass, for power, as well as for cooling, with air and sea water.

Project spokesperson Andrew Donoghue of 451 Research said that "only a minority" of European data centres are currently powered by renewable energy. "Of those that do, the motivation is usually to gain positive publicity or curry favour with regulators rather than for purely commercial reasons," he said. According to the project co-ordinator, Dr Jaume Salom of IREC, the main roadblocks to using renewable energy are the perceived costs and the lack of tools to help operators make decisions about using it. "This project aims to overcome some of these obstacles by designing tools to evaluate the environmental performance and the share of renewable energy sources in the emerging concept of Net Zero Energy datacentres," said Salom.

The fluctuating nature of renewable energy is one of the main challenges of using it to power data centres, which are today built to receive continuous power flow. The RenewIT project hopes to address this by developing tools that help match the intermittent flow of renewable energy with the applications and workloads being executed by the data centre. Barcelona Supercomputing Centre, RenewIT's partner, will develop algorithms for scheduling workloads, to add to current research on relationships between performance and energy consumption. RenewIT will also look at ways to better integrate data centres with smart cities infrastructure by plugging into smart grids and micro grids. The project will use its links with eight data centres across Europe to test the robustness and end-user applicability of the project's technical energy concepts and simulation software tools in a live environment.

RenewIT began on 1 October 2013 and is led by the not-for-profit energy research centre Catalonia Institute for Energy Research (IREC). The other members of the project are 451 Research, Barcelona Supercomputing Center (BSC), Loccioni Group of Italy, AIGUASOL, Amsterdam-based datacentre design specialist DEERNS, and Technische Universität Chemnitz, Professorship Technical Thermodynamics.

This story, "EU Research on How Data Centers Can Use More Renewable Energy Launches" was originally published by Techworld.com.
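As a rough illustration of the workload-matching idea mentioned above, the sketch below greedily places deferrable batch jobs into the hours with the largest forecast renewable surplus. The forecast values and job list are invented; the project's actual scheduling algorithms are not described in the article.

    # Greedy placement of deferrable jobs into hours with the most forecast renewable power.
    # All numbers are invented for illustration.

    renewable_forecast = {  # hour of day -> forecast renewable power available (kW)
        0: 10, 3: 40, 6: 120, 9: 200, 12: 260, 15: 180, 18: 60, 21: 20,
    }

    jobs = [  # (job name, power draw in kW); assumed deferrable within the day
        ("nightly-backup", 80),
        ("video-transcode", 150),
        ("log-analytics", 50),
    ]

    def schedule(jobs, forecast):
        """Assign each job to the hour with the largest remaining renewable surplus."""
        remaining = dict(forecast)
        plan = {}
        for name, draw in sorted(jobs, key=lambda j: j[1], reverse=True):
            hour = max(remaining, key=remaining.get)
            plan[name] = hour
            remaining[hour] -= draw  # may go negative: grid power covers the shortfall
        return plan

    print(schedule(jobs, renewable_forecast))
    # e.g. {'video-transcode': 12, 'nightly-backup': 9, 'log-analytics': 15}

A real scheduler would also weigh deadlines, job duration and the price of grid power, but even this greedy placement shows how shifting flexible work can raise the share of renewable energy a facility actually consumes.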
<urn:uuid:f8d1012e-5071-4cd8-9804-02f73d990584>
CC-MAIN-2017-04
http://www.cio.com/article/2380364/data-center/eu-research-on-how-data-centers-can-use-more-renewable-energy-launches.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00014-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941875
520
2.890625
3
Improving the quality of IT service begins with understanding and utilizing one of ITIL's simplest concepts - the Expanded Incident Lifecycle. If you have attended an ITIL Foundation course, you undoubtedly remember the slide depicting the Expanded Incident Lifecycle (Figure 1, below). That is the graphical timeline that starts with an Incident on the left, progresses through the various stages of diagnosis, repair, restoration and closure, and then continues to the next Incident. The labels dispersed along the Incident timeline are not just handy monikers that the Service Desk uses to report the changing state of an Incident. They represent critical intersections of ITIL processes and activities and provide a roadmap to shorten the time to recover from an Incident and lengthen the time of error-free operation.

Mean Times to . . .

Before we start, let's review a few key ITIL measurements, the "Mean Times to . . ."

MTTR (Mean Time to Repair) - This is the average elapsed time between detecting an Incident and repairing the failed component; e.g., diagnosing and replacing a failed disk. Upon the completion of this activity, there is a functioning disk, but data has not been restored, and the users are still unable to access or use the service. Essentially this measures the technical response to diagnose and repair the failed component. The shorter this time the better, because shortened times mean less downtime for the user.

MTRS (Mean Time to Restore Service) - This is the average elapsed time between detecting an Incident and fully restoring the service to the user; e.g., restoring data to the disk, recovering and restarting interfaces to other applications, informing the users that the service is available, and initiating user access (you may not want all of your users to log in simultaneously upon repair of the service!). This is a measure of the quality of your operational processes, as well as of system design that facilitates recovery after failure. Again, shortening these times should be your goal.

MTBF (Mean Time Between Failures) - This is the average elapsed time between restoration of service following an Incident and detection of the next Incident. In this case, a big number representing a long time between failures is good because it indicates a reliable service.

MTBSI (Mean Time Between System Incidents) - This is the average elapsed time between Incidents, including the downtime represented by the MTTR and MTRS measurements.

By understanding the proportion of repair and restoration time versus failure-free time for a particular service, you can begin to prioritize service and system improvements. For example, you may decide to commit resources to improving a critical business service that experiences few, but lengthy, failures, and give a lower priority to repairing a less business-critical service that experiences frequent failures but requires few resources and little time to restore.

Expanding the Expanded Incident Lifecycle

Now that we've looked at what the Expanded Incident Lifecycle diagram tells us, let's take a look at which ITIL processes support it, and how you can use it to pinpoint areas to automate or improve.

Occurrence – By definition, an Incident is an unplanned disruption to an agreed service. ITIL offers a number of proactive ways to protect a service:
- Capacity Management – Capacity Management seeks to ensure that proactive measures to improve performance of services are implemented when it is cost-justifiable to do so.
- Availability Management – Availability Management identifies Vital Business Functions (VBF) that are critical to the business and implements designs that reduce the likelihood of unavailability.
- IT Service Continuity Management – IT Service Continuity Management (ITSCM) evaluates risks and threats to IT services and seeks to avoid them or to moderate them if they do occur.
- Information Security Management – Information Security Management proactively improves security controls, security risk management and reduction of security risks.
- Access Management – Access Management executes policies set by Information Security Management.
- Service Level Management – Service Level Management does not really prevent Incidents, but it works with the business to define the levels of agreed services.

Detection – Incident resolution starts when a user or an automated system detects an error with a Configuration Item. Detection generally occurs sometime after the occurrence of the event. The goal is to shorten the time between Occurrence and Detection as much as possible. This activity ties directly to:
- Capacity Management – Capacity Management ensures capacity can be monitored and measured.
- Event Management – Event Management establishes threshold monitoring activities to detect Incidents early.
- Incident Management – Incident Management should interface with the Service Desk and Event Management (as well as the Operations Management and Technical Management functions) to bring all Incidents under the control of Incident Management.
- Service Desk – The Service Desk should have many channels for users to report Incidents when they occur.

Diagnosis – During this stage, staff members try to identify the characteristics of the Incident and match it to previous Incidents, Problems and Known Errors. If Incident Management cannot match the Incident, the Problem Management process should start.
- Incident Management – Establish a good interface with Problem Management to match the Incident to existing Problems and Known Errors or to report a new Problem.
- Problem Management – Establish strong Problem Management procedures to rapidly and accurately diagnose problems.
- Supplier Management – Establish Supplier Management procedures that document how third-party suppliers will be involved in diagnosis activities.
- Service Level Management – Agree and establish Operational Level Agreements (OLA) with the Operations and Technical Management functions so everyone knows how to prioritize an Incident or Problem.
- Technical Management Function – Document working procedures so that all staff know what their roles and responsibilities for diagnosing Problems are.

Repair – Sometimes a repair might raise a Request for Change (RFC) to change one or more Configuration Items (CI). After the CI is repaired, it may still be unavailable to the user and require recovery.
- Change Management – Establish strong Change Management procedures to control changes made as a result of a problem diagnosis.
- Supplier Management – Establish Supplier Management procedures for repairs that are made by third-party service providers.
- Technical Management Function – Ensure Technical Management staff have the proper levels of skills and training.

Recovery – This is the process of restoring the failed CI to the last recoverable state. This includes any required testing, final adjustment, configuration, etc.
- Change Management – Ensure Change Management includes recovery steps in its planning.
- Operations/Technical Management Function – Ensure that the Operations/Technical Management functions of Service Operation document and understand the steps to recovery.

Recovery also has a proactive side, which results in designing services and systems that are faster and easier to recover.
- Problem Management – Problem Management should review and document problems and potential problems to develop proactive Problem solutions that can be shared by all IT Service Lifecycle phases and processes.
- Service Design – Establish strong Service Design processes to design services to expedite their recovery from failure.

Restoration – Service restoration makes the recovered service available to the user, so that the user can resume work.
- Service Desk – Establish a strong interface with the Service Desk to manage user communications during implementation of the change and restoration of the service.
- Service Transition – Establish a strong interface with Service Transition to implement changes and restore service to the users.
- Operations Management Function – Establish good documented procedures with the Operations Management Function to implement changes and restore service to the users.
- Technical Management Function – Establish good documented procedures with the Technical Management Function to implement changes and restore service to the users.

On the proactive side, restoration capabilities can be "designed into" the service:
- Service Design – Service Design should include restoration considerations in its analysis and design of new services.

Closure – Closure occurs some time after restoration. It should give the user ample time to "shake out" the repaired service to ensure that it is really working, but it should not be so far into the future that users and staff have difficulty reconstructing what the parameters of the actual failure were.
- Service Desk/Incident Management – The Service Desk and Incident Management process should formally close each Incident after verifying its closure with the user.
- Change Management – Change Management includes an immediate technical review to ensure that the Change has been implemented properly and does not create other problems. Later it does a long-term review to determine whether the change has created the benefits (beyond resolving a Problem) that the user had anticipated.
- Service Level Management – The Service Level Management process should agree with the business what constitutes "closure" of an Incident. Also, it includes Incidents in its periodic performance reviews with the business.

The Final Step – Closing the Loop

You do not see the "final" step in the Expanded Incident Lifecycle because it is not really a step, but an action "implied" by the four "Mean Time" measurements. The last step that ties together the steps along the timeline is to use MTTR, MTRS, MTBF and MTBSI to measure and analyze the effectiveness and efficiency of all of the activities and processes that contribute to Incident restoration. Were appropriate resources available to assist with the Incident resolution? Were appropriate interfaces in place so that resources could be applied in a timely manner? And, finally, did you learn something that can help the process work better the next time – or prevent the Incident from occurring?
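To make the four measurements concrete, here is a minimal Python sketch that derives MTTR, MTRS, MTBF and MTBSI from a short list of incident records. The timestamps are invented, and the sketch assumes each record carries detection, repair and restoration times, which is a simplification of what a real service-management tool would store.

    from datetime import datetime
    from statistics import mean

    # Hypothetical incident records for one service (ISO timestamps).
    incidents = [
        {"detected": "2009-03-01T08:00", "repaired": "2009-03-01T09:00", "restored": "2009-03-01T09:30"},
        {"detected": "2009-03-10T14:00", "repaired": "2009-03-10T14:40", "restored": "2009-03-10T15:00"},
        {"detected": "2009-03-22T07:30", "repaired": "2009-03-22T09:30", "restored": "2009-03-22T10:30"},
    ]

    def hours(a, b):
        return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

    mttr = mean(hours(i["detected"], i["repaired"]) for i in incidents)   # Mean Time to Repair
    mtrs = mean(hours(i["detected"], i["restored"]) for i in incidents)   # Mean Time to Restore Service
    # MTBF: average uptime between restoring one Incident and detecting the next.
    mtbf = mean(hours(a["restored"], b["detected"]) for a, b in zip(incidents, incidents[1:]))
    # MTBSI: average time between Incidents, downtime included (detection to next detection).
    mtbsi = mean(hours(a["detected"], b["detected"]) for a, b in zip(incidents, incidents[1:]))

    print(f"MTTR={mttr:.1f}h  MTRS={mtrs:.1f}h  MTBF={mtbf:.1f}h  MTBSI={mtbsi:.1f}h")

Tracking these figures per service, rather than only in aggregate, is what lets you decide where repair, recovery or restoration improvements will pay off most.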
<urn:uuid:7729dd68-4f64-472a-ad71-24777d7b8721>
CC-MAIN-2017-04
http://www.itsmsolutions.com/newsletters/DITYvol5iss7.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907183
1,966
2.71875
3
Earlier this week comes news of another privacy breach. It appears that the British Columbia Health Ministry has suffered a privacy breach involving its PharmaNet system. After what appeared to be some suspicious activity, the ministry conducted a forensic examination and discovered that an unknown party had accessed the system from March 9 until June 19, 2014, when the access was discovered and terminated.

Medical histories of 34 people were also accessed in the breach, which took place between March 9 and June 19, but no fraudulent prescriptions were obtained. No banking information was taken, but the government warned the perpetrator did access enough personal details to make identity theft a concern. Affected patients are being contacted by letter starting Friday, and the ministry is urging them to keep a close eye on their bank accounts, credit cards and online services. In all, 1,600 patients were affected by the breach.

So, what controls were in place? From the PharmaNet site:

PharmaNet complies with the B.C. Freedom of Information and Protection of Privacy Act. It is subject to strict privacy and security measures designed to prevent unauthorized access and protect the information in its databases. For instance, PharmaNet operates behind a "firewall." All users must sign a confidentiality agreement before being granted access and must provide a unique identification code when logging on to PharmaNet. Furthermore, PharmaNet consists of separate components—each component is accessible only to the specific users who require access for their work.

Hmm, the fact that the word "firewall" is in quotations gives me pause. What other controls were in place? I would hazard, based on this access, that the data in question was not encrypted. In a press release the Ministry had this to say: "The privacy breach involved the names, dates of birth, addresses, telephone numbers, and personal health numbers (BC Services Card or Care Card numbers) of all the affected people. For 34 people, the unauthorized access also involved looking at medication histories."

So, how did this person access the system exactly? Was an administrative password guessed or compromised? Was there a network breach? This account is decidedly lacking some salient details.
<urn:uuid:81d569ff-af61-4277-8db0-2595b53b9e28>
CC-MAIN-2017-04
http://www.csoonline.com/article/2455121/privacy/bc-health-ministry-hacked.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974169
435
2.59375
3
Cooling remains a major target for energy-efficiency improvement in data centers, and ASHRAE's new operating temperature guidelines for IT equipment create an even greater opportunity to exploit free cooling. In particular, one of the professional organization's allowable temperature ranges permits free cooling all year round in almost every location in the world.

Air-Side Economization Strategies

Even considering only the power used to operate servers and other IT equipment—and ignoring for the moment peripheral infrastructure like cooling, lighting and so on—data centers are energy hogs. But the need to remove waste heat from the facility just compounds the problem, particularly when air conditioners (or other cooling apparatus) are commissioned with the task of keeping things cool. A flagging economy, political tensions with key energy-producing nations and rising energy costs are all leading companies to look for ways to reduce the power consumption of their data centers. And cooling is a major target by way of air-side economization, which involves bypassing traditional cooling infrastructure to enable the use of outside air to keep operating temperatures down. (Water-side economization follows a similar strategy for its own particular infrastructure.)

An APC whitepaper ("Economizer Modes of Data Center Cooling Systems") offers a good overview of the basic methods of air-side economization in data centers. Each approach uses outside air as a heat sink, but they differ mainly in the isolation of outside air and the type of heat exchanger used. One strategy, for instance, is to simply draw in cool outside air, mix it with some warmer exhaust to maintain specific temperatures and humidity, and supply that air to the cold aisles in the data center. The intake air may also be filtered to limit contamination. To avoid problems of contamination and humidity variation, another broad strategy is to use a heat exchanger, whether a fixed-plate form or a heat wheel.

Obviously, free cooling is not entirely free, since even air-side economization requires energy to move air, rotate a heat wheel, and so on. Nevertheless, this certainly beats the energy consumption of traditional, mechanical cooling methods like CRACs. But using outside air is more efficient when the temperature difference relative to the warm inside air is greater, and few locations would seem to offer sufficiently cool temperatures to enable year-round air-side economization. The new guidelines from the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE), however, actually make free cooling a year-round possibility almost everywhere.

ASHRAE's Updated Temperature Guidelines

ASHRAE updated its recommended and allowable temperature and humidity guidelines for IT equipment in May 2011. In addition to the recommended range, ASHRAE specifies four "allowable" ranges for data centers—A1, A2, A3 and A4—which permit server inlet temperatures of up to 45°C, or 113°F, for short periods of time (in the case of A4; the other ranges allow progressively lower temperatures). Outside of the recommended range, companies must evaluate manufacturer and other requirements to determine which allowable ranges—if any—are permissible for their data centers.
These recommended and allowable ranges are reviewed briefly in The Green Grid's whitepaper entitled "Updated Air-Side Free Cooling Maps: The Impact of ASHRAE 2011 Allowable Ranges." The Green Grid whitepaper updates that organization's free-cooling maps, which identify the availability of free cooling (in hours per year) throughout the world. For the A3 allowable range (less than 40°C, or 104°F), the map of North America shows that nearly the entire continent can use free cooling all year round, with the exception of a few isolated areas in which small portions of the year are too warm or humid. The maps of Europe and even Japan show similar free cooling availability. For lower maximum temperatures (such as 35°C, or 95°F), significant areas of North America and Japan are much more limited in their free-cooling potential, as indicated by The Green Grid's cooling maps.

Of course, this year-round free cooling applies only to data centers for which IT equipment can withstand the maximum allowable temperature of the range for short periods of time—neither all facilities nor all manufacturers will permit such a range. Again, a company must carefully evaluate its own particular situation before selecting one of the ASHRAE allowable operating temperature/humidity ranges. The recommended range remains a default, and companies running data centers in this range will have to rely more heavily on traditional cooling methods, depending of course on their location.

What's the Next Target After Cooling?

As companies maximize their savings by relying more heavily on free cooling, the question will arise as to the next major area of focus. Naturally, companies are pursuing every available method to reduce energy consumption and, thus, operating costs. But certain areas seem to have more focus than others: for instance, virtualization has been a hot trend for years as it seeks to maximize server usage and efficiency, thereby avoiding energy waste owing to idle time. Air-side economization and other methods of free cooling will help push data center PUEs (power usage effectiveness ratings) closer to 1.0, but even if that goal is achieved, data centers will still be power hogs. Every system will have a certain level of inefficiency, but even in a nearly ideal data center, the IT equipment will still consume vast amounts of power. But although semiconductor process technologies continue to progress, enabling more computational power on smaller and less power-hungry chips, the end may well be in sight. Eventually, Moore's Law will end, and traditional semiconductor technology will offer no more power improvements (at least by way of simply shrinking the manufacturing process). At this point, the pursuit of greater efficiency will be reduced to tiny gains here or there, leaving few options for significant improvements that offer good return on investment. At such a point, companies will have little choice but to scale energy consumption directly with increasing demand.

ASHRAE's updated temperature and humidity guidelines enable companies that are capable of using the A3 or A4 allowable ranges to use year-round air-side economization virtually regardless of their location in North America, Europe and Japan. Reduced reliance on traditional cooling means lower energy consumption, a better (lower) PUE, reduced infrastructure and capital costs, and less environmental impact. Not all facilities will be able to use the A3 or A4 ranges, meaning that they will still have to rely on mechanical cooling.
In particular, high-density implementations and those with particularly sensitive equipment (whether in reality or just because the manufacturer says so) won't be able to operate in these ranges. But the greater availability for free cooling creates an opportunity for IT equipment manufacturers as well: by designing equipment to handle higher operating temperatures, they can gain marketing points by boasting lower peripheral operating expenses (i.e., lower cooling costs).
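As a rough back-of-the-envelope illustration of how free-cooling availability can be estimated, the sketch below counts the hours in a year during which outside air stays at or below an allowable inlet limit. The hourly temperatures are randomly generated stand-ins for real weather data, and The Green Grid's actual methodology also accounts for humidity and other factors not modeled here.

    import random

    random.seed(0)

    # Invented hourly dry-bulb temperatures (deg C) for one year; a real analysis
    # would use measured weather data for the site, plus humidity constraints.
    hourly_temps = [random.gauss(mu=12, sigma=9) for _ in range(8760)]

    ALLOWABLE_LIMITS = {"A2": 35.0, "A3": 40.0, "A4": 45.0}  # max inlet temperature, deg C

    def free_cooling_hours(temps, limit_c):
        """Count hours per year in which outside air alone satisfies the inlet limit."""
        return sum(1 for t in temps if t <= limit_c)

    for name, limit in ALLOWABLE_LIMITS.items():
        hours = free_cooling_hours(hourly_temps, limit)
        print(f"{name}: {hours} of {len(hourly_temps)} hours ({hours / 87.6:.1f}%)")

Even this toy calculation makes the article's point visible: the higher the allowable inlet temperature, the larger the fraction of the year that can be covered by outside air alone.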
<urn:uuid:3a9cb51f-812b-42b1-99af-36a5f0d59246>
CC-MAIN-2017-04
http://www.datacenterjournal.com/ashrae-guidelines-enable-year-round-free-cooling/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00291-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916671
1,451
2.796875
3
CURRENCY, WEALTH OF RESOURCES CITED AS GREATEST CLASSROOM BENEFITS -- OBSTACLES, DIFFICULTIES HINDER WIDER INTERNET USE -- Alexandria, VA----More than four out of five educators (82%) report using the Internet in some portion of their teaching. Of those who do, 38% say it provides current information, and 27% say it offers a wealth of resources they would not otherwise have. The voluntary survey was conducted by Cable in the Classroom (CIC) at the Florida Educational Technology Conference. Respondents included classroom teachers (58%), technology/computer specialists (19%), media specialists (12%), school administrators (5%), and other school positions (6%). The survey found the percentage of high school teachers using the Internet slightly higher than the percentage of elementary and middle school teachers. Educators who do not use the Internet cite the lack of multimedia-capable computers as the number one reason. Other barriers to wider educational usage include: budget constraints, difficulty managing Internet use in the classroom, difficulty integrating Internet resources within curriculum, lack of control over the materials accessed, lack of time for Internet training, lack of access to a telephone or high-speed line, and the lack of material relevant to the curriculum. "Educators realize the power of the Internet as a teaching tool and are excited about the vast educational resources it provides them," said Peggy O’Brien, Ph.D., executive director of CIC. "Where obstacles stand between teachers and the huge learning boost the Internet offers, we must remove them. Where difficulties surrounding access to technology and training exist, we must eliminate them." Other key findings from teachers who use the Internet include: When choosing the Internet in teaching, the following factors most influence teachers’ decisions: relevance to the topic being taught (69%), likelihood of motivating students (49%), appropriateness to the needs of individual students (43%), alignment to standards (40%), and compatibility with classroom technology (30%). Discovery (19%), followed closely by Scholastic (18%), were the most popular Web sites among educators who use the Internet in their classrooms. Other Web sites mentioned included CNN (14%), Marco Polo (9%), Yahooligans and Yahoo and Google (7% each), Weather Channel (6%), and Ask Jeeves and AOL (including [email protected]) and National Geographic (5% each). The features that make these sites valuable and useful to teachers include: ease of access and use (31%); the quantity and quality of information they contain (20%); lesson plans (16%); the quantity and quality of teaching materials they contain (15%); the fact that they have current information (15%); interactivity (11%); their graphics, videos, and pictures (9%); good search engines (especially for children) (9%); and their user-friendliness (8%). CIC represents the cable telecommunications industry’s commitment to education – to improve teaching and learning for children in schools, at home, and in their communities. This is the only industry-wide philanthropic initiative of its kind; since 1989, 8,500 cable companies and 39 cable networks have provided free access to commercial-free, educational cable content and new technologies to 81,000 public and private schools, reaching 78 percent of K-12 students. 
CIC focuses on five essential elements to ensure quality education in the 21st century: visionary and sensible use of technologies, engagement with rich content, community with other learners, excellent teaching, and the support of parents and other adults.

1800 North Beauregard Street, Suite 100
Alexandria, VA 22311
703.845.1400; 703.845.1409 f
<urn:uuid:fae9111d-0e31-4e9e-bcfb-e539e058b568>
CC-MAIN-2017-04
https://www.ncta.com/news-and-events/media-room/article/1641
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00015-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923101
766
2.75
3
British Airways passengers may soon be offered a ‘digital pill’ to help cabin crew monitor their happiness during flights. The airline is said to be investigating the use of ingestible sensors or ‘digital pills’ to wirelessly monitor health information inside a passenger’s body. The idea is to create a technology that can assess a passenger’s wellness during flight, and to help combat jetlag by aiding their sleep, eating and exercise patterns. Supposedly the sensors would work alongside in-cabin sleep monitors and data from wearables and smartphones, thus enabling a personalized “travel environment” for each passenger. British Airways and IoT British Airways outlined its vision for an Internet of Things (IoT) based “system and method for controlling the travel environment of a passenger” in a patent application published by the Intellectual Property Office. The patent indicates that British Airways is interested in monitoring things like when a passenger is awake, asleep, hungry, nervous, hot, cold or uncomfortable. The technology it is exploring may include but is not limited to: cameras tracking body movement, sensors monitoring climate, lighting, humidity, sleep, eye movement, heart rate and body temperature. The theory is that the technology can track and monitor these conditions and adjust the cabin settings accordingly. For example, it may recline the seat when a passenger is asleep or suggest an exercise routine to prevent tiredness. However, British Airways’ idea of an ‘ingestible pill’ is somewhat more radical. The description of the pill outlined in the patent application suggests it would include a sensor to detect internal temperature, stomach acidity and other bodily conditions, which would be relayed to the cabin crew. Again, this data would be used by the crew to improve the travel experience of passengers. How comfortable the idea of such a pill makes passengers remains to be seen. A British Airways spokeswoman told the London Evening Standard: “We are always looking to deliver new innovations for our customers, whether it be in design or digital transformation. as such, we develop many ideas and many patents.” More analysis to follow
<urn:uuid:97ebf06f-b0f9-4f50-ad7b-1611cebb0554>
CC-MAIN-2017-04
https://internetofbusiness.com/british-airways-digital-pill/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932197
438
2.75
3
The Computer History Museum in Mountain View, Calif., reopens its doors this week after undergoing a $19 million, 25,000-square-foot building renovation. The gem at the heart of this giant undertaking is a major new exhibit that traces the history of computing from the ancient abacus to the personal digital assistant (PDA) of the 90s. The details of the exhibition, titled “Revolution: The First 2000 Years of Computing,” were the subject of a recent article at Computerworld. John Hollar, museum CEO, shared with Computerworld the impetus for the project: “Many times, people coming to the museum have very basic questions: ‘How did that computer on my desk get there? How did that phone I’ve used for so long get so smart?’ It’s an exhibition that’s primarily aimed at a nontechnical audience, though there’s a ton of great history and information for the technical audience as well.” The show’s 19 galleries house documents, video presentations, and more than 5,000 images and 1,100 artifacts. Some of the presentations on display are designed for hands-on use. For example, visitors will be able to pick up a 24-lb. Osborne computer or play a game of Pong, Pacman or Spacewar. Among other noteworthy artifacts are a 1956 IBM 305 computer and its 350 hard drive, the first commercially-available machine of its type. The machine holds 5MB of data and occupies almost an entire room. Also on display are “the console of a 1950 Univac 1, the first computer to become a household name; a complete installation of an original IBM System/360, which dominated mainframe computing for 20 years; and a Cray-1 supercomputer, which reigned as the world’s fastest from 1976 to 1982.” During the next year, the museum will host a special lecture series, called “Revolutionaries,” which will spotlight prominent technology innovators speaking about the developments and discoveries that have influenced our world. A permanent installation, “Revolution: The First 2000 Years of Computing” opens to the public tomorrow, Jan. 13.
<urn:uuid:a9848ea5-a6a7-4709-bc8d-26331079909e>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/01/12/2000_years_of_computing_on_display_at_computer_history_museum/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924944
460
2.515625
3