A computer system should provide confidentiality, integrity and assurance against intrusion attempts. However, due to increased connectivity on the Internet, more and more systems are subject to attack by intruders. Organizations use Intrusion Detection Systems (IDS) to extend their security infrastructure by detecting and responding to unauthorized access of resources in real time. This paper discusses what an intrusion detection system is, the main models, and the main techniques.

What is an IDS?

ID stands for Intrusion Detection, the art of detecting inappropriate, incorrect, or anomalous activity. An Intrusion Detection System (IDS) analyzes a system for filesystem changes or traffic on the network; it learns what normal activity looks like and then flags deviations from that norm that suggest an intrusion or otherwise suspicious traffic. An IDS therefore helps protect a system from attack, misuse, and compromise. It can also monitor network activity, audit network and system configurations for vulnerabilities, analyze data integrity, and more, depending on the detection methods you choose to deploy. There are basically three main types of IDS in use today: network based (a packet monitor), host based (looking, for instance, at system logs for evidence of malicious or suspicious application activity in real time), and application based (monitoring only specific applications).

Host-Based IDS (HIDS)

Host-based systems were the first type of IDS to be developed and implemented. These systems collect and analyze data that originates on a computer that hosts a service, such as a Web server. Once this data is aggregated for a given computer, it can either be analyzed locally or sent to a separate/central analysis machine. One example of a host-based system is a program that runs on a system and receives application or operating system audit logs. These programs are highly effective for detecting insider abuse. On the down side, host-based systems can get unwieldy: with several thousand possible endpoints on a large network, collecting and aggregating separate information for each individual machine may prove inefficient and ineffective. Possible host-based IDS data sources include Windows NT/2000 Security Event Logs, RDBMS audit sources, enterprise management systems audit data (such as Tivoli), and UNIX Syslog in raw form or in secure forms such as Solaris' BSM; host-based commercial products include RealSecure, ITA, Squire, and Entercept.

Network-Based IDS (NIDS)

NIDS monitor the activity that takes place on a particular network. Network-based intrusion detection analyzes the data packets that travel over the actual network; these packets are examined and sometimes compared with empirical data to verify their nature, malicious or benign. NIDS sensors run their network interfaces in promiscuous mode. Because they are responsible for monitoring a network rather than a single host, network-based intrusion detection systems tend to be more distributed than host-based IDS. Instead of analyzing information that originates and resides on a computer, network-based IDS uses techniques such as packet sniffing to pull data from TCP/IP or other protocol packets traveling along the network. This surveillance of the connections between computers makes network-based IDS good at detecting access attempts from outside the trusted network.
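To make the packet-sniffing idea concrete, here is a minimal, hypothetical sketch of the data-collection side of a NIDS sensor. It is Linux-only (AF_PACKET raw sockets), must run as root, and assumes the capture interface has already been put into promiscuous mode; a real sensor such as Snort does far more than print frame headers.

```python
# Minimal sketch of the data-collection side of a network-based IDS sensor.
# Linux-only: an AF_PACKET raw socket sees every frame the interface receives.
# Run as root; enabling promiscuous mode on the NIC (e.g. with
# `ip link set eth0 promisc on`) is assumed to have been done separately.
import socket
import struct

ETH_P_ALL = 0x0003  # capture frames of every protocol

def capture(count=10):
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
    for _ in range(count):
        frame, _addr = s.recvfrom(65535)
        # Ethernet header: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType
        dst, src, proto = struct.unpack("!6s6sH", frame[:14])
        print(f"{src.hex(':')} -> {dst.hex(':')} ethertype=0x{proto:04x} len={len(frame)}")

if __name__ == "__main__":
    capture()
```

A real deployment would hand each captured frame to a decoder and detection engine rather than printing it.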
In general, network-based systems are best at detecting the following activities:
- Unauthorized outsider access: When an unauthorized user logs in successfully, or attempts to log in, they are best tracked with host-based IDS. However, detecting the unauthorized user before their logon attempt is best accomplished with network-based IDS.
- Bandwidth theft/denial of service: These attacks from outside the network single out network resources for abuse or overload. The packets that initiate or carry these attacks are best noticed with network-based IDS.
Some possible downsides to network-based IDS include encrypted packet payloads and high-speed networks, both of which inhibit the effectiveness of packet interception and deter packet interpretation. Examples of network-based IDS include Shadow, Snort, Dragon, NFR, RealSecure, and NetProwler. One important topic for NIDS is where to deploy the sensor: inside or outside the firewall. An interesting quote from SANS' GIAC Director Stephen Northcutt's book, Network Intrusion Detection: An Analyst's Handbook: "An IDS before the firewall is attack detection and after the firewall is intrusion detection... In a switched network, since we don't have broadcasting, we have two better options on deploying the NIDS: using a hub to force a broadcast or using a mirroring port in the switch."

Application-Based IDS

Application-based IDS monitor only specific applications such as database management systems, content management systems, accounting systems, and so on. They often detect attacks through analysis of application log files and can usually identify many types of attack or suspicious activity. Sometimes application-based IDS can even track unauthorized activity by individual users. They can also work with encrypted data, using application-based encryption/decryption services. Some IDSes are standalone services that work in the background and passively listen for activity, logging any suspicious packets from the outside. Others combine standard system tools, modified configurations, and verbose logging.

Knowledge-Based and Behavior-Based Detection

Knowledge-based systems use signatures of known attacks to detect instances of those attacks, and this is the most widely used IDS model. Signatures are patterns that identify attacks by checking various fields in the packet, such as source address, destination address, source and destination ports, flags, payload and other options. The collection of these signatures composes a knowledge base that the IDS uses to compare every packet that passes by and check whether it matches a known pattern. Signatures have the same limitation as a patch: it is not possible to write the signature until the attack has materialized.

Behavior-based systems use a reference model of normal behavior and flag deviations from this model as anomalous and potentially intrusive. A behavioral rule aims to define a profile of legitimate activity; any activity that does not match the profile, including new types of attack, is considered anomalous. As rules are not specific to a particular type of attack, forensic information is not normally very detailed. However, rules can identify malicious behavior without having to recognize the specific attack used, so this approach offers protection against new attacks before any knowledge is available in the security community. The disadvantage of this model is that it may cause a high number of false-positive alerts.
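To contrast the two models just described, here is a toy sketch of the knowledge-based (signature) side. It is not any particular product's rule format: the packet dictionaries and the example signatures are made up, and a real signature engine such as Snort uses a far richer rule language and inspects payloads as well.

```python
# Toy illustration of knowledge-based (signature) detection: a signature is a
# set of packet attributes that must all match. Signatures and packets here
# are invented for illustration only.
SIGNATURES = [
    {"name": "inbound telnet attempt", "proto": "tcp", "dst_port": 23},
    {"name": "known backdoor port",    "proto": "tcp", "dst_port": 31337},
]

def match_signatures(packet, signatures=SIGNATURES):
    """Return the names of all signatures whose fields all match the packet."""
    hits = []
    for sig in signatures:
        fields = {k: v for k, v in sig.items() if k != "name"}
        if all(packet.get(k) == v for k, v in fields.items()):
            hits.append(sig["name"])
    return hits

# Example: a packet summarised as a dict of decoded header fields.
pkt = {"src": "203.0.113.7", "dst": "192.0.2.10", "proto": "tcp", "dst_port": 23}
print(match_signatures(pkt))   # ['inbound telnet attempt']
```

A behavior-based engine would instead compare traffic statistics against a learned baseline rather than against a fixed list like this.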
- False positive: A report of an attack or attempted attack when no vulnerability existed or no compromise occurred.
- False negative: The failure of an IDS to report an instance in which an attacker successfully compromises a host or network.
- Sensor: The computer that monitors the network for intrusion attempts. Sensors usually run in promiscuous mode, often without an IP address.

Useful Links & References
- http://www-rnks.informatik.tu-cottbus.de – Intrusion Detection Systems List
- http://www.securityfocus.com – Introduction to Intrusion Detection Systems
- http://www.lids.org – Linux Intrusion Detection System
- http://www.snort.org – The Open Source Network Intrusion Detection System
- http://www.sans.org/resources/idfaq/ – Intrusion Detection FAQ
For better or worse, this seems to be the common practice when it comes to e-mail. Individuals tend to have:
- Primary work e-mail address
- Primary personal e-mail address
- Secondary e-mail address
The work e-mail is necessary for business-related conversations. To keep it from filling up a possible quota, or from collecting distractions or embarrassing messages, we set up a personal e-mail address. Then for all those random sites we have to sign up for (and we're not sure whether they'll spam us or sell our information), we use another e-mail address. These personal e-mail addresses probably come from the three popular sources of free e-mail accounts: Gmail, Yahoo! Mail, or Hotmail. You can't just make up a fake e-mail address for many of these sites, because they'll send you an e-mail containing a link to verify that the address exists. There are a number of issues that need to be addressed:
- Knowing which site is selling your information
- Sorting incoming e-mail
- Having to check multiple e-mail accounts
- Blocking spam
If you have a Gmail account you can make use of a cool feature called plus addressing. For example, to send an e-mail to your Gmail account I would send to YourEmail@gmail.com. With plus addressing, you can tell me your e-mail address is YourEmail+spam@gmail.com. Of course, this works better on electronic registration pages than when telling people. Now, you can set up a filter to label all incoming messages sent to YourEmail+spam@gmail.com. If spammers were to get this e-mail address, YourEmail+spam@gmail.com, you would know that I was the one that had published this address or sold your information. This is a pretty cool feature, but it requires a lot of manual work in setting up filters. There are also plenty of sites out there that won't let you sign up with an e-mail account that has a '+' in it (Microsoft websites, just to name one example). And spammers, if they get smart (which they always eventually do), would just start parsing the '+spam' out of e-mail addresses. To address these problems, there is a better way. You can create your secondary e-mail account over at OtherInbox. Right now the site is in a private beta; if you sign up over at their site, they'll periodically send out invitations in which the first 50 responders can create an account. The good news with it being in beta is that awesome new features could always be right around the corner. Let me highlight some of the cool features that are currently available. Basically OtherInbox is exactly like any other free e-mail account, in that you can receive and send e-mail, but how OtherInbox differs is also what makes it the perfect secondary e-mail account. When you sign up for OtherInbox, you'll create an account. This lets you receive all e-mail addressed to anything@YourAccount.otherinbox.com. The trick to OtherInbox is that the first part of the e-mail address, which would usually be your account name, is yours to change whenever you enter your e-mail address somewhere. Let's say I'm creating a new account on StumbleUpon. The e-mail address I'd sign up with would be stumbleupon@YourAccount.otherinbox.com. You'd do the same thing with other sites you register with: digg@YourAccount.otherinbox.com, delicious@YourAccount.otherinbox.com, and so on. You could even give out per-person addresses in the same way: friends@YourAccount.otherinbox.com, family@YourAccount.otherinbox.com, etc.
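The sketch below illustrates the tagging idea behind both approaches. The account names and tags are made-up examples, and real mail providers apply their own rules about which characters and local parts are allowed.

```python
# Illustration of the tagging idea behind Gmail plus addressing and
# OtherInbox-style per-site addresses: the tag identifies which site the mail
# was sent to, so it can be filed (or blocked) automatically.
def gmail_alias(user, tag):
    return f"{user}+{tag}@gmail.com"

def otherinbox_alias(account, tag):
    return f"{tag}@{account}.otherinbox.com"

def folder_for(address):
    """Derive a folder/label from the address an incoming message was sent to."""
    local, domain = address.split("@", 1)
    if "+" in local:                         # plus addressing: user+tag@gmail.com
        return local.split("+", 1)[1]
    if domain.endswith(".otherinbox.com"):   # per-site local part: tag@account.otherinbox.com
        return local
    return "inbox"

print(gmail_alias("yourname", "stumbleupon"))          # yourname+stumbleupon@gmail.com
print(otherinbox_alias("youraccount", "digg"))         # digg@youraccount.otherinbox.com
print(folder_for("digg@youraccount.otherinbox.com"))   # digg
```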
When you compose messages, you can choose which e-mail address they come from, so it all stays together. Now, when you receive an e-mail sent to stumbleupon@YourAccount.otherinbox.com, it will automatically be put in a folder called stumbleupon along with any other e-mails sent to that same address. Most likely, any e-mail sent from the StumbleUpon site will go to that address and thus should end up in the appropriate stumbleupon sub-folder in your inbox. You can see how this looks in the screenshot below. This feature really takes care of three of the problems: sorting mail, knowing which site(s) are selling your information, and blocking spam. Wait, how does this block spam? Well, anytime you find out that one of your addresses has been compromised, you just click the 'Block All' button. This will prevent any incoming e-mail sent to that address from reaching your inbox; it will get filtered to the Blocked folder on the left. If you still wish to do business with the site that sold or published your e-mail address, you can just go to their site, change your listed e-mail address to something else, and see if they release your information again. I, however, would recommend finding an alternative website. The final problem was having to check multiple e-mail accounts. OtherInbox provides a few conveniences so that you don't have to log into its interface very often. You can have messages forwarded on to another e-mail account of yours if you would like. This is convenient, but it pretty much defeats the purpose in the first place, in my opinion. Another offering of OtherInbox fits much better, in my opinion: an RSS feed. You can subscribe to your inbox in your favorite RSS feed reader, like Google Reader, and receive any new messages there. This helps bring your messages to you without cluttering up your e-mail. One final cool feature of OtherInbox that I'll highlight is the ability to add domains. It's a lot to type each time you enter your e-mail address: example@YourAccount.otherinbox.com. If you own a domain, you can point your mail servers (MX records) to OtherInbox, and they have easy instructions on how to do this. So now you can take advantage of the OtherInbox features but have an e-mail address at your own domain: example@YourAccount.otherinbox.com becomes example@MyDomain.com. This does a lot for branding and lets you take advantage of all the other features OtherInbox offers. OtherInbox has all the information you need to update the settings on your server. One really cool side-effect of using my OtherInbox account for signing up at a lot of random sites is that I don't have to guess the username. If a site uses the e-mail address as the user name, I know that I sign in using sitename@MyDomain.com. Sign up for the beta over at OtherInbox and get yourself an account.
0.7.1 Dijkstra's Banker's Algorithm for Deadlock Prevention

The Banker's Algorithm is a strategy for deadlock prevention. In an operating system, deadlock is a state in which two or more processes are "stuck" in a circular wait: every deadlocked process is waiting for resources held by another. Because most systems are non-preemptive (that is, they will not take resources held by a process away from it) and employ a hold-and-wait method for dealing with system resources (that is, once a process gets a certain resource it will not give it up voluntarily), deadlock is a dangerous state that can cause poor system performance.

One reason this algorithm is not widely used in the real world is that, to use it, the operating system must know the maximum amount of resources that every process is going to need at all times. Therefore, for example, a just-started program must declare up-front that it will need no more than, say, 400K of memory. The operating system would then store the limit of 400K and use it in the deadlock avoidance calculations.

The Banker's Algorithm seeks to prevent deadlock by becoming involved in the granting or denying of system resources. Each time a process needs a particular non-sharable resource, the request must be approved by the banker. The banker is a conservative loaner: every time a process asks for a resource to be "loaned", the banker takes a careful look at the bank books and attempts to determine whether or not a deadlock state could possibly arise in the future if the loan request is granted.

This determination is made by "pretending" to grant the request and then looking at the resulting system state. After granting a resource request there will be an amount of that resource left free in the system, f. Further, there may be other processes in the system. We demanded that each of these processes state up-front the maximum amount of every system resource it needs to terminate, so we know how much of each resource every process is holding and how much it still has claim to. If the banker has enough free resource to guarantee that even one process can terminate, it can then take the resources held by that process and add them to the free pool. At this point the banker can look at the (hopefully) now larger free pool and attempt to guarantee that another process will terminate by checking whether its claim can be met. If the banker can guarantee that all jobs in the system will terminate, it approves the loan in question. If, on the other hand, at any point in this reduction the banker cannot guarantee that any process will terminate because there is not enough free resource to meet the smallest claim, a state of deadlock can ensue. This is called an unsafe state. In this case the loan request in question is denied and the requesting process is usually blocked.

The efficiency of the Banker's Algorithm depends greatly on how it is implemented. For example, if the bank books are kept sorted by process claim size, adding new process information to the table is O(n) but reducing the table is simplified. If the table is kept in no particular order, adding a new entry is O(1) but reducing the table is less efficient.
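Here is a compact sketch of the "pretend to grant, then try to reduce" check described above, for a single resource type and with made-up numbers; real implementations track a vector of resource types per process.

```python
# Minimal sketch of the banker's safety check: grant a request only if,
# afterwards, every process can still be guaranteed to finish. Single resource
# type for simplicity; the example numbers are invented.
def is_safe(free, allocation, maximum):
    """True if every process can terminate given `free` units available."""
    need = [m - a for m, a in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= free:
                free += allocation[i]      # process i can finish and return its resources
                finished[i] = True
                progressed = True
        if all(finished):
            return True
        if not progressed:
            return False                   # no remaining claim can be met: unsafe state

def request(pid, amount, free, allocation, maximum):
    """Pretend to grant `amount` units to process `pid`; approve only if still safe."""
    if amount > free or allocation[pid] + amount > maximum[pid]:
        return False
    trial_alloc = list(allocation)
    trial_alloc[pid] += amount
    return is_safe(free - amount, trial_alloc, maximum)

# Example: 3 units free, three processes with stated maximum claims.
alloc, maxima = [1, 4, 5], [4, 6, 8]
print(request(0, 2, 3, alloc, maxima))   # True: granting 2 units to P0 leaves a safe state
```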
A new report, conducted jointly by Ericsson (NASDAQ:ERIC), Arthur D. Little and Chalmers University of Technology in 33 OECD countries, quantifies the isolated impact of broadband speed, showing that doubling the broadband speed for an economy increases GDP by 0.3%.* A 0.3 percent GDP growth in the OECD region is equivalent to USD 126 billion. This corresponds to more than one seventh of the average annual OECD growth rate in the last decade. The study also shows that additional doublings of speed can yield growth in excess of 0.3 percent (e.g., quadrupling of speed equals a 0.6 percent GDP growth stimulus). Both broadband availability and speed are strong drivers in an economy. Last year Ericsson and Arthur D. Little concluded that for every 10 percentage point increase in broadband penetration, GDP increases by 1 percent. This growth stems from a combination of direct, indirect and induced effects. Direct and indirect effects provide a short to medium term stimulus to the economy. The induced effect, which includes the creation of new services and businesses, is the most sustainable dimension and could represent as much as one third of the mentioned GDP growth. "Broadband has the power to spur economic growth by creating efficiency for society, businesses and consumers," says Johan Wibergh, Head of Business Unit Networks, Ericsson. "It opens up possibilities for more advanced online services, smarter utility services, telecommuting and telepresence. In health care, for instance, we expect that mobile applications will be used by 500 million people." During a keynote speech at Broadband World Forum 2011 in Paris, Wibergh said: "We expect a huge increase from the current estimate of around 1 billion people with broadband access to about 5 billion in 2016, most of whom will have mobile broadband. Connectivity and broadband are just a starting point for new ways of innovating, collaborating and socializing." Erik Almqvist, Director at Arthur D. Little, says: "Until now there has been an absence of hard facts investigating the effects of broadband speed on the economy. This unique empirical study may help governments and other decision makers in society make more correct tradeoffs and policy choices." "These results have been derived using rigorous scientific methods where the direction of causality, data quality and significance levels have been appropriately tested," says Erik Bohlin, Professor at Chalmers University of Technology. "The results of this study support governmental policies that recognize and promote the importance of broadband." This study is the first of its kind in that it quantifies the economic impact of increases in broadband speed in a comprehensive scientific method using publicly available data.

Notes to editors:
* The economic impact of average attained broadband speed, both fixed and mobile, has been analyzed using panel data regression analysis with quarterly data points from 2008-2010 for 33 OECD countries.
* One-directional, isolated effect.
* Countries considered in the study are Australia, Austria, Belgium, Canada, Chile, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, UK and US.
* Average achieved broadband speed data provided by Ookla.

Our multimedia content is available at the broadcast room: www.ericsson.com/broadcast_room

Ericsson is the world's leading provider of technology and services to telecom operators. Ericsson is the leader in 2G, 3G and 4G mobile technologies, provides support for networks with over 2 billion subscribers, and has the leading position in managed services. The company's portfolio comprises mobile and fixed network infrastructure, telecom services, software, broadband and multimedia solutions for operators, enterprises and the media industry. The Sony Ericsson and ST-Ericsson joint ventures provide consumers with feature-rich personal mobile devices. Ericsson is advancing its vision of being the "prime driver in an all-communicating world" through innovation, technology, and sustainable business solutions. Working in 180 countries, more than 90,000 employees generated revenue of SEK 203.3 billion (USD 28.2 billion) in 2010. Founded in 1876 with headquarters in Stockholm, Sweden, Ericsson is listed on NASDAQ OMX Stockholm and NASDAQ New York.

FOR FURTHER INFORMATION, PLEASE CONTACT
Ericsson Corporate Public & Media Relations, phone: +46 10 719 69 92
Ericsson Investor Relations, phone: +46 10 719 00 00
Proposed guidelines offer help in managing passwords in the enterprise
By William Jackson - Apr 24, 2009
Passwords probably are the most commonly used method of authentication for access to information technology resources, but despite their apparent simplicity, they can be difficult to manage. Long, complex passwords should be more secure than simpler ones, but they also are more difficult for the user to remember, leading to the increased possibility that they will be improperly stored. Password resets also are notorious consumers of help-desk resources. To help agencies select and implement proper controls, the National Institute of Standards and Technology (NIST) has released a draft version of Special Publication 800-118, titled "Guide to Enterprise Password Management," for public comment. Comments should be e-mailed by May 29 to email@example.com, with "Comments SP 800-118" typed in the subject line. Password management, as defined by NIST, is "the process of defining, implementing and maintaining password policies throughout an enterprise." Because passwords are used to control access to and protect sensitive resources, organizations need to protect the confidentiality, integrity and availability of the passwords themselves. The goal is to ensure that all authorized users get the access they need, while no unauthorized users get access. "Integrity and availability should be ensured by typical data security controls, such as using access-control lists to prevent attackers from overwriting passwords and having secured backups of password files," NIST states. "Ensuring the confidentiality of passwords is considerably more challenging and involves a number of security controls along with decisions involving the characteristics of the passwords themselves." Threats to the confidentiality of passwords include capturing, guessing or cracking them through analysis. Password guessing and cracking become more difficult as the complexity of the password increases. The number of possibilities for a given password increases with the length of the password and the number of possible choices for each character. The possible choices for each character of a numerical password are 10 (0 through 9). Possible choices for passwords using letters are 26 for each character. By combining upper- and lower-case letters, numerals and special characters, there can be as many as 95 possibilities for each character. A four-digit numerical personal identification number has a keyspace of 10,000; that is, there are 10,000 possible combinations. An eight-character password using 95 possibilities for each character has a keyspace of about 7 quadrillion. Increasing the length of the password increases the keyspace more quickly than increasing the number of possibilities for each character, NIST states. One method of password management is to use a single sign-on (SSO) tool, which automates password authentication for the user by controlling access to a set of passwords through a single password. This can make it more feasible for a user to use and remember a single, complex password. However, "in nearly every environment, it is not feasible to have an SSO solution that handles authentication for every system and resource — most SSO solutions can only handle authentication for some systems and resources, which is called reduced sign-on," NIST states.
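The keyspace figures above are easy to verify; the short sketch below recomputes them, using keyspace = (choices per character) raised to the power of the password length.

```python
# Worked check of the keyspace figures in the article.
def keyspace(choices_per_char, length):
    return choices_per_char ** length

print(f"{keyspace(10, 4):,}")    # 4-digit PIN: 10,000
print(f"{keyspace(26, 8):,}")    # 8 lowercase letters: about 209 billion
print(f"{keyspace(95, 8):,}")    # 8 chars drawn from 95 printable symbols: about 6.6 quadrillion
print(f"{keyspace(95, 12):,}")   # adding length grows the keyspace far faster than adding symbols
```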
NIST recommends the following for protecting the confidentiality of passwords:
- Create a password policy that specifies all of the organization's password management-related requirements, including Federal Information Security Management Act and other regulatory requirements. "An organization's password policy should be flexible enough to accommodate the differing password capabilities provided by various operating systems and applications."
- Protect passwords from attacks that capture passwords. "Users should be made aware of threats against their knowledge and behavior, such as phishing attacks, keystroke loggers and shoulder surfing, and how they should respond when they suspect an attack may be occurring. Organizations also need to ensure that they verify the identity of users who are attempting to recover a forgotten password or reset a password, so that a password is not inadvertently provided to an attacker."
- Configure password mechanisms to reduce the likelihood of successful password guessing and cracking. "Password guessing attacks can be mitigated rather easily by ensuring that passwords are sufficiently complex and by limiting the frequency of authentication attempts, such as having a brief delay after each failed authentication attempt or locking out an account after many consecutive failed attempts. Password-cracking attacks can be mitigated by using strong passwords, choosing strong cryptographic algorithms and implementations for password hashing, and protecting the confidentiality of password hashes. Changing passwords periodically also slightly reduces the risk posed by cracking."
- Determine requirements for password expiration based on balancing security needs and usability. Regularly changing passwords "is beneficial in some cases but ineffective in others, such as when the attacker can compromise the new password through the same keylogger that was used to capture the old password. Password expiration is also a source of frustration to users, who are often required to create and remember new passwords every few months for dozens of accounts, and thus tend to choose weak passwords and use the same few passwords for many accounts."
William Jackson is a Maryland-based freelance writer.
Protocol breaks and content checking are technologies used under the hood. These technologies relate to the principal information security objectives, and ultimately how confidential information is protected using data diodes. When protecting an isolated network against outsider attacks, there are a number of objectives and technologies that are commonly used. Objectives typically boil down to C.I.A.: confidentiality, integrity and availability. The best possible technology for confidentiality is the unidirectional network connection by means of a data diode. However, there is a lot of technology relating to data diodes that impacts integrity and availability. In particular, protocol breaks and content checking have a subtle relation to these objectives. This briefing paper explains how data diodes are used to protect confidential information.
An important security consideration for any desktop administrator is how to keep data secure. With the dramatic increase in the use of removable storage devices, gigabytes of data can be copied from secure storage locations onto USB flash drives, CD-Rs, DVD-Rs, etc. Most PCs are built with these devices as standard equipment, and it is difficult to find models without them. Some CDs and DVDs contain harmful executables that, if executed, can compromise system security. Microsoft Windows 7 can control the use of such devices with Local and Group Policies. The Computer Configuration/Administrative Templates/System/Removable Storage Access node contains settings that specify what users can do with removable storage. The CD and DVD: Deny read access, Deny write access, and Deny execute access settings prevent optical drives from copying data. Even floppy drives can be managed with similar settings. Tape drives and Removable Disks (USB hard drives) are also controlled with the same options. WPD devices such as media players, cellular phones, and CE devices can be limited as well. To restrict a specific device from a specific manufacturer, its device class GUID can be specified in the Custom Classes: Deny read access and Deny write access settings. The device class GUID for any device can be determined by opening the properties of the device in Device Manager, clicking the Details tab, and selecting the "Device class guid" property from the drop-down list; the GUID will then be displayed. If you simply want to ban the use of any type of removable storage, you can enable the "All Removable Storage classes: Deny all access" setting and breathe easier knowing that you have done a lot to improve data security on Windows desktops. In an Active Directory environment, Group Policy can be used to apply these settings to all computers in a domain, site or organizational unit.
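On a single machine, these Administrative Template settings are ultimately stored as registry values, so they can also be inspected or set programmatically. The sketch below is a hedged illustration only: the key path and value names used here (RemovableStorageDevices, Deny_All, Deny_Read, Deny_Write) are assumptions to verify in your own environment, the script needs administrative rights, and editing policy keys directly is no substitute for deploying the settings through Local or Group Policy.

```python
# Hedged sketch: set registry-backed Removable Storage Access policies on the
# local machine. The key path and value names are assumptions to verify; use
# Local/Group Policy as the supported mechanism in production.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices"

def deny_all_removable_storage(enable=True):
    """Set (or clear) the assumed 'All Removable Storage classes: Deny all access' value."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Deny_All", 0, winreg.REG_DWORD, 1 if enable else 0)

def deny_class(device_class_guid, read=True, write=True):
    """Deny read/write for one device class GUID (as shown in Device Manager)."""
    subkey = f"{POLICY_KEY}\\{device_class_guid}"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, subkey, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Deny_Read", 0, winreg.REG_DWORD, int(read))
        winreg.SetValueEx(key, "Deny_Write", 0, winreg.REG_DWORD, int(write))
```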
One irony of the digital revolution is that it has made the once relatively simple concept of government transparency significantly more opaque. In the old days of paper, transparency generally meant two things: meetings and documents. The game was about what public officials would show you and what they wouldn't and how long it would take to find out. The rise of the Internet brought an increased focus on proactive disclosure of agency reports, audits and visitors' logs. Soon the government was releasing massive data sets online, even if no one had specifically requested them, and the concept of transparency became a matter of how clearly information was organized and how easily it could be searched, as much as it was about the content itself. As the Web enabled new sorts of communication between the government and its governed, the blanket term "transparency" expanded to include a host of programs that make democracy more participatory through blogs, wikis and social media conversations. The scope, possibilities and trapdoors of transparency writ large are on display through the Open Government Partnership, a new association of more than 40 nations committed to open government practices. Eight steering committee nations already have published lists of commitments under the partnership. Those range from the old-fashioned, such as the Philippines' promise to establish an effective and efficient public information portal, to the Internet Age, such as an Indonesian plan to create a digitized map of all its national timber resources. The U.S. commitments similarly run the gamut from using better search technology to speed agency responses to Freedom of Information Act requests to a plan to publish all federal spending data in a single, searchable database modeled after the Recovery.gov website, which tracks funds disbursed under the 2009 stimulus law. Among the most important elements of the U.S. plan, experts say, is a commitment to join the international Extractive Industries Transparency Initiative, which tracks governments' revenue from oil and gas drilling and mineral mining. The expanding tent of open government can be a good thing, bringing together traditional freedom of information advocates and their latter day techie counterparts, says Nathaniel Heller, co-founder and executive director of Global Integrity, a Washington-based transparency think tank. "The communities of practice are still so bifurcated and siloed. With rare exceptions, the civic hackers and the born-digital types rarely know much about the FOIA champions that came before them 15, 20 or 30 years ago," Heller says. "That could be a positive outcome of OGP, reframing all these groups under something bigger that invites these communities to share their knowledge. On the flip side, it invites a lot of confusion. It invites everybody to pile into the bus with their pet project and just call it open government." The Norwegian document, for example, includes several vague commitments to increase female representation in government and in top private sector posts. "That's a wonderful goal, but is it open government?" Heller says. "That's going to be a challenge with OGP. Do we leave fuzzy and unlabeled what we mean by open government?" The Open Government Partnership was specifically designed as a loose community, says Heller, whose organization did much of the communications work for the initiative in the run-up to its September launch on the sidelines of the United Nations General Assembly in New York. 
The steering committee nations wanted to make the partnership a "race to the top," he says, not a stick to beat China, Iran and other less transparent nations with. They also wanted a system that allowed nations to mold transparency to their own needs, rather than imposing a set of standards from on high in the nature of a U.N. declaration. That question of what transparency means in different countries is a tricky one. U.S. transparency groups have applauded the Obama administration's many high-tech transparency initiatives, but also have criticized the White House for valuing quantity over quality in disclosure and for not putting enough muscle behind low-tech initiatives such as declassifying old national security documents, improving agency responsiveness to FOIA requests and better protecting government whistleblowers. John Wonderlich, policy director for the Sunlight Foundation, for instance, acknowledges that Data.gov, the administration's data set trove, is "probably still the world's best data portal." But then he launches into a critique of data agencies post to the site, which veers more toward which metro areas are consuming the most lima beans and less toward which contractors are receiving the most federal money. Brazil, which currently co-chairs the partnership's steering committee with the United States, has an online contractor database that puts detailed information about individual procurements online within just a few days. That system puts its U.S. counterpart to shame, Wonderlich says. The South African commitments, by contrast, don't specifically address government contractors, but focus heavily on the effective delivery of government services, including a Know Your Service Rights and Responsibilities public outreach campaign. "In developing nations there tends to be a relatively greater emphasis on service delivery," Heller says, "because those are the challenges those countries face - keeping the electricity on for 24 hours and making sure schools actually work." That's not to say tech-enabled transparency has no place outside industrialized nations. Several developing nations with comparatively low Internet penetration are publishing government information online, including India, which uses a similar site to Data.gov. Those nations' wired journalists and civil society organizations then filter and transmit that information to the wider citizenry in newspapers, TV and radio broadcasts. Projects like the We the People public petition website in the United States are less likely to translate, though, in nations where the Internet is not as prevalent and citizens are more concerned with having good roads to market than petitioning their government for redress. Yet one of the most important parts of the partnership, Heller and Wonderlich both say, is its acknowledgement that large pieces of transparency do translate from government to government. We the People, for instance, was based largely on a British model. And the United States and India are teaming up to publish open source code for their public data set sites - a project they call "Data.gov in a box." "What's important, more than the individual commitments, is this idea that countries around the world should view their governance issues in light of other countries' governance issues," Wonderlich says. "It's good for countries to say to each other, 'How do you deal with this problem? And, if you're doing better than us, how does that work?' "
Apple didn't invent the personal computer. Instead, Steve Jobs and Steve Wozniak borrowed and adapted innovative concepts from others and mashed them up with ideas from within. Everything Apple did, from its auspicious beginnings in the late 1970s pairing a ho-hum Apple II computer with a third-party spreadsheet application called VisiCalc, has been successful because of cross-boundary collaboration and powerful doses of innovation. That is true not just of its computers but also of the mouse, the graphical user interface, the portable MP3 player that became the iPod, the purchase of an online music distribution service that became iTunes, the iPhone, and the iPad. Using the single letter i, Apple branded an entire lifestyle and created continuous, leveraged, sustainable innovation that has turned it into the most valuable technology company in the world. Apple is just one example of a company using cross-boundary collaboration to inspire and sustain innovation. Research studies conducted by the BTM Institute between 2005 and 2009 repeatedly found that enterprises with an emphasis on innovation often performed better financially and were better able to weather economic change than stagnant, insular organizations. Let's define terms:
- Cross-boundary collaboration exists in an environment where ideas are celebrated and anyone is welcome to contribute, regardless of their position or group, either within or outside the enterprise. It thrives in a culture where business and technology have successfully converged to accomplish the organizational goals and missions.
- Sustained innovation is a high-productivity state in which an organization is capable of innovating in all aspects of its business: management, divisions, operations, customers, and suppliers. It requires a seamless, structured management approach that begins with board- and CEO-level leadership and connects all the way through technology investment and implementation.
Collaborating for innovation is not a new concept; in 1943, for example, Lockheed's skunkworks team created a new WWII fighter jet in just 143 days. Yet it is an idea that apparently begs to be rediscovered over and over. In the 1980s, Texas Instruments, a leader in semiconductors, found itself lagging in its innovation. It formed an official collaborative development group, The Lunatic Fringe, tasked with bringing the company back from the brink. Today, the group's mission is to continuously find new uses, opportunities, and ventures for TI technology. Above all, sustained innovation is a journey, not a destination. Leaders and enterprises often believe they're successful when they launch an innovative service or product, then rest on their laurels. They fail to recognize how quickly the competition can overtake them. The enterprise cannot stop innovating after attaining one goal; rather, it is in a continual, profound process of creativity, reinvention, and discovery. Sustained innovation also depends upon business-technology convergence for success. Its essential contributions are resiliency, agility, and the ability to be adaptive in the face of constantly changing business conditions. Innovation is a holistic human endeavor that requires both left-brained (analytical) and right-brained (creative) talents. No single leader or group of decision-makers can manage sustained innovation. Innovative enterprises build a culture that embraces a left-brain/right-brain approach to creative thinking, executing, and communicating.
Successful innovation depends upon input from a wide range of people in collaboration, sharing ideas, comparing observations, offering wide-ranging perspectives from their diverse viewpoints, and brainstorming solutions to complex problems. We refer to these divergent perspectives as personas. Here are a few examples:
- Learning personas keep an enterprise from being too internally focused and caught in its comfort zone.
- Organizing personas move the innovation lifecycle forward; they are skilled at navigating processes, politics, and red tape to bring an innovation to market.
- Building personas are closest to the innovative action, establishing connections between the learning and organizing personas; they apply insights from the learning personas and channel empowerment from the organizing personas to facilitate innovation.
Personas, real and virtual, help challenge assumptions as the innovation lifecycle unfolds. Some are analytical, some are creative; others are a combination. Not all innovation teams require all personas, and teammates can adopt or change personas during the process. Cross-boundary collaborative groups require a firm foundation and a powerful set of tools with which to perform their work. They must have the assurance that technology has been carefully chosen and successfully merged with the enterprise goals and objectives. We call this an enabling technology. For example, a well-stocked information repository is an asset, but it is enabled by query tools that are easy to learn and deliver prompt, productive searches. A sustained culture of innovation requires building mature cross-boundary teams and mastering the art and science of business and technology convergence. How to begin? The following five-step approach has worked well for many enterprises:
- Step 1: Improve strategic planning, business leadership and management capabilities to mandate and support relentless innovation.
- Step 2: Encourage creative thinking and creative problem-solving to enable rapid idea generation and diffusion across the enterprise.
- Step 3: Drive rapid development of new and improved products, processes, or services that cultivate customer intimacy and build service dependency.
- Step 4: Enable higher productivity, performance, and growth through collaboration; capture and adopt the resulting new learning practices.
- Step 5: Develop new business models that aid in differentiating the organization's core offerings from those of its competitors.
Innovation is not a luxury; it is essential for any enterprise moving forward. And cross-boundary collaborative teams working with enabling technologies are what drive it there. Faisal Hoque is the founder and CEO of BTM Corporation. He is an internationally known entrepreneur and thought leader, and was named one of the Top 100 Most Influential People in Technology. A former senior executive at GE and other multinationals, Hoque has written five management books, established a non-profit research think tank, The BTM Institute, and become a leading authority on convergence, innovation, and sustainable growth. His latest book, The Power of Convergence, is now available.
The ABCs of Network Security

With the explosive growth in online applications such as e-commerce, e-government and remote access, companies are able to achieve great efficiency with streamlined processes and lower operating costs. Today's data networking involves many different types of hardware, software and protocols that are interrelated and integrated. A network security professional must be able to look in depth to fully understand where network vulnerabilities can arise in order to prevent exploits from happening. As mission-critical networks enable more applications and become available to more users, they become ever more vulnerable to a wider range of security threats. Networks are vulnerable to unauthorized, destructive intrusions, from viruses to denial-of-service attacks. Network professionals are under increasing pressure to design and manage complex networks that are secure from unauthorized access and information corruption or theft. The threats can come from many different venues. Security, once wrongly regarded as a luxury, is now a well-recognized necessity, and there is a vast number of online resources documenting security vulnerabilities.

The Goal of Network Security

Generally, the purpose of information security is to provide authorized users access to the right information and to ensure that the information is correct and that the system is available. These aspects are referred to as confidentiality, integrity and availability (CIA). As a critical part of information security, network security is the protection of the data network from unauthorized access. Restricting access to network services, managing network traffic and bandwidth, and encrypting data are common security methods. First, let's look at the architecture of a typical network. Then we will discuss its potential vulnerabilities and the technologies and practices in use to overcome these potential threats.

An Architectural View of Today's Network

It is critical to have a solid understanding of today's network architecture. One must look at the data flow in and out of a network and how the network devices, software and appliances interact with one another to achieve specific business or operational goals. The standard model for networking protocols and distributed applications is the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model. Understanding the OSI model is instrumental in understanding how the many different protocols fit into the networking jigsaw puzzle. As we will see later, the majority of network attacks can be attributed to one or more of the seven layers in the OSI model.
- Layer 1 Physical: The physical layer defines the electrical, mechanical, procedural and functional specifications for activating and maintaining the physical link between communicating network systems. Physical layer specifications define characteristics such as voltage levels, physical data rates and physical connectors.
- Layer 2 Data Link: The data-link layer provides synchronization, error control and flow control for data across the physical link, including physical and logical connections to the packet's destination, typically using a network interface card (NIC). This layer contains two sub-layers: Media Access Control (MAC) and Logical Link Control (LLC). Some of the protocols that work at this layer are the Point-to-Point Protocol (PPP), Layer 2 Tunneling Protocol (L2TP) and Fiber Distributed Data Interface (FDDI).
- Layer 3 Network: The network layer defines the network address and handles the routing and forwarding of the data. Some of the protocols that work at this layer are the Internet Protocol (IP), Internet Control Message Protocol (ICMP) and Routing Information Protocol (RIP).
- Layer 4 Transport: The transport layer manages the end-to-end control, including error checking and flow control. It accepts data from the session layer and segments the data for transport across the network. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) sit at the transport layer. TCP keeps track of connection state, such as packet delivery order and the packets that must be resent. UDP, on the other hand, is connectionless and stateless.
- Layer 5 Session: The session layer establishes, manages and terminates communication sessions. Communication sessions consist of service requests and service responses that occur between applications located in different network devices. These requests and responses are coordinated by protocols implemented at the session layer. Some of the protocols that work at this layer are the Secure Socket Layer (SSL), Remote Procedure Call (RPC) and the AppleTalk Protocol.
- Layer 6 Presentation: The presentation layer formats the data to be presented to the application layer. It can be viewed as the translator for the network, handling issues such as bit ordering. This layer may translate data from a format used by the application layer into a common format at the sending and receiving stations. Some well-known graphic image formats working at this layer are Graphics Interchange Format (GIF) and Joint Photographic Experts Group (JPEG). This layer also handles data compression and encryption.
- Layer 7 Application: The application layer's functions typically include identifying communication partners, determining resource availability and synchronizing communication. The layer does not include the actual application, but includes the protocols that support applications. Some examples of protocols that work at this layer are Simple Mail Transfer Protocol (SMTP), Hypertext Transfer Protocol (HTTP), Telnet and FTP.
Information being transferred from a software application in one computer system to a software application in another must pass through the different OSI layers. The application program in the sending host passes its information to the application layer and downward until it reaches the physical layer. At the physical layer, the information is placed on the physical medium and is sent across the medium to the receiving host. The physical layer of the receiving host retrieves the information from the physical medium and passes it upward until it reaches the application layer for processing.

Vulnerabilities by Layers

With a basic understanding of how networks are structured and how data communication is done, let's look at some concrete network vulnerabilities and possible attacks. There are a variety of ways to classify security vulnerabilities and attacks; it is worthwhile to briefly examine them by OSI layer. We will look at vulnerabilities from different angles in the next section. The vast majority of vulnerabilities exhibit themselves as application-layer vulnerabilities, which are the closest to the user application. Telnet and FTP are such examples: these applications send user passwords in such a way that anyone who can sniff the network traffic can capture a user's login and password and gain unauthorized access.
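To make the layering concrete, the sketch below builds a simplified Ethernet/IP/TCP frame around a cleartext FTP command and then peels it apart the way a sniffer would. The addresses, ports and field values are arbitrary and checksums are omitted, so it illustrates encapsulation rather than being a working packet generator.

```python
# Layers on the wire: an application payload (layer 7) wrapped in a TCP header
# (layer 4), an IP header (layer 3) and an Ethernet header (layer 2).
import struct
import socket

payload = b"USER alice\r\n"                            # layer 7: an FTP command, in the clear

tcp = struct.pack("!HHIIBBHHH", 54321, 21, 1, 0,        # src port, dst port, seq, ack
                  5 << 4, 0x18, 8192, 0, 0) + payload   # data offset, flags, window, csum, urgent
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(tcp), 1, 0, 64, 6, 0,
                 socket.inet_aton("192.0.2.10"), socket.inet_aton("203.0.113.7")) + tcp
frame = b"\xaa" * 6 + b"\xbb" * 6 + struct.pack("!H", 0x0800) + ip  # dst MAC, src MAC, EtherType

# Now parse it back, layer by layer, as a sniffer or NIDS decoder would.
ethertype = struct.unpack("!H", frame[12:14])[0]        # layer 2
ip_hdr = frame[14:34]                                   # layer 3
src_ip = socket.inet_ntoa(ip_hdr[12:16])
dst_ip = socket.inet_ntoa(ip_hdr[16:20])
tcp_hdr = frame[34:54]                                  # layer 4
src_port, dst_port = struct.unpack("!HH", tcp_hdr[:4])
print(f"ethertype=0x{ethertype:04x} {src_ip}:{src_port} -> {dst_ip}:{dst_port}")
print("payload:", frame[54:].decode())                  # the cleartext FTP credential
```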
At the presentation layer, there are various attacks against data encryption. At the session layer, Remote Procedure Call (RPC) is one of the top computer system vulnerabilities according to SANS. At the transport layer, there are exploits using SYN flooding and TCP hijacking, and port scanning is a common technique used by hackers to identify vulnerable systems. IP spoofing is a very common network-layer attack. Traffic sniffing and wiretapping are common Layer 1 and Layer 2 attacks. Wireless networking has opened new possibilities to hackers.

Network Vulnerabilities and Threats

With virtually all network layers exposing vulnerabilities, malicious hackers have plenty of means at their disposal to launch various attacks. Without proper protection, any network is at risk.
The new chip is designed to prevent so-called "side channel attacks" that extract the cryptographic key by analyzing patterns of memory access or fluctuations in power usage. "The idea in a side-channel attack is that a given execution of the cryptographic algorithm only leaks a slight amount of information," said research paper co-author Chiraag Juvekar. "So you need to execute the cryptographic algorithm with the same secret many, many times to get enough leakage to extract a complete secret." Changing the cryptographic key after each transaction via a random-number generator can prevent a side-channel attack, but by cutting the RFID chip's power repeatedly just before it changes the secret key, hackers can render this strategy ineffective and run the same side-channel attack thousands of times with the same key. Crucially, the new chips developed by MIT and TI prevent these so-called "power glitch" attacks by having an on-board power supply that is virtually impossible to cut, and "nonvolatile" memory cells that store the data the chip is working on when it begins to lose power. To achieve this, the researchers developed chips featuring ferroelectric crystals, which provide computer memory that retains data even when powered off. Texas Instruments CTO Ahmad Bahai described the discovery as an "important step toward the goal of a robust, low-cost, low-power authentication protocol for the industrial internet." The research team claims that the innovative new chips could help prevent contactless card details from being stolen, as well as securing key cards and warehouse goods loaded onto pallets fitted with RFID tags. The chip giant has built several prototypes based on the new design, which have apparently performed well in tests. The research was shown off at the International Solid-State Circuits Conference in San Francisco this week.
<urn:uuid:5f26492c-2d20-4e4c-87f3-cf96bdd23810>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/researchers-build-hack-proof-rfid/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00526-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935684
389
3.0625
3
Help fund a solar-powered Raspberry Pi school The Raspberry Pi Foundation’s Eben Upton was inspired to create his bare-bones credit-card sized computer after noticing a decline in the number of children learning to code. He wanted to create a cheap computer designed to be programmed, much like the BBC Micro, which was hugely popular in UK schools back in the 1980s. Although the Raspberry Pi has since found a massive audience outside of schools, it’s still an educational tool at heart, and its low cost and energy efficiency make it ideal for introducing computers into rural schools in developing nations. A new Indiegogo fundraiser has just launched which is aiming to fund a Raspberry Pi computer lab for a school in Cosmo City, a thriving suburb north of Johannesburg in South Africa. The people behind the project, United Twenty-13, a South African non-profit organization, are seeking $10,500 in funding (they are currently a tenth of the way there) to equip the building they already have with Raspberry Pi computers, monitors, keyboards, mice and additional hardware. If they can raise another $12,000 on top of that, they will be able to power the lab using solar energy. It’s a great idea, and a worthy cause. 77 percent of schools in South Africa don’t have any computers and 40 percent don’t even have access to electricity, so projects like this can really make a difference. If the venture is successful, United Twenty-13 is hoping to reproduce labs like this all over South Africa.
<urn:uuid:73263f4f-b969-48ad-ac07-19cee0d76986>
CC-MAIN-2017-04
http://betanews.com/2014/07/22/help-fund-a-solar-powered-raspberry-pi-school/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957153
322
2.953125
3
The Department of Environmental Quality, in cooperation with the Department of Natural Resources and Department of Information Technology, Tuesday announced the availability of the Michigan Surface Water Information Management (MiSWIM) system. The MiSWIM system is a new, state-of-the-art Internet mapping application designed to provide the public with easy access to water quality (biological, chemical, and physical) data and other information that has been obtained for Michigan's rivers, lakes and streams. Types of water quality information available to MiSWIM system users include: water and sediment chemistry, fish contaminants, E. coli bacteria, fish and aquatic macroinvertebrate communities, river flow, fish stocking, lake bathymetry, river valley segments, industrial and municipal wastewater discharge sites, septage land disposal sites, coldwater and natural river classifications, nonpoint source program grants, land use classifications, soil types, and aerial photographs. "The MiSWIM system will allow the public and water resource managers to obtain water quality data and information for Michigan's rivers, streams, and lakes more easily and more efficiently," said DEQ Director Steven E. Chester. "Better access to this information through the MiSWIM system will improve water quality decision making at all levels of government." "MiSWIM will provide a great tool for natural resource managers and citizens interested in natural resource issues to see how a water resource has been managed," DNR Director Rebecca Humphries said. "It will also aid recreational enthusiasts and anglers interested in different bodies of water by showing them a wide array of information regarding a lake, stream, or river."
<urn:uuid:cb3866a3-8920-4958-a7ed-f6a31f95ec14>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/102478739.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907695
325
2.90625
3
What good is computing if it’s not reliable? As with standard computers, reliability is also the key to establishing practical quantum computing. The challenge for fielding such systems is immense, but the potential payoff is even more compelling. Quantum computers would be several magnitudes more powerful than today’s best technology, putting formerly intractable problems, for example strong decryption, within reach. |This schematic of a bismuth selenide/BSCCO cuprate (Bi2212) heterostructure shows a proximity-induced high-temperature superconducting gap on the surface states of the bismuth selenide topological insulator.| For this reality to be achieved, scientists must develop “fault-tolerant” quantum computers. An international team of researchers just got a little closer to this goal. Working at the DOE’s Advanced Light Source (ALS) facility, scientists from China’s Tsinghua University and the Lawrence Berkeley National Laboratory (Berkeley Lab) have reported the first demonstration of high-temperature superconductivity on the surface of a topological insulator, a first step toward stable quantum computing. The experiment used premier beams of ultraviolet light at the ALS, a DOE facility for synchrotron radiation, to induce high-temperature superconductivity in a topological insulator, a material class that is electrically insulating on the inside but conducting on the surface. This process paves the way for a theoretical quasiparticle to appear. The mysterious particle is known as the “Majorana zero mode” and it’s being pursued for fault-tolerant quantum computing. “We have shown that by interfacing a topological insulator, bismuth selenide, with a high temperature superconductor, BSCCO (bismuth strontium calcium copper oxide), it is possible to induce superconductivity in the topological surface state,” stated Alexei Fedorov, a staff scientist for ALS beamline 12.0.1, where the event was confirmed. While quantum computing has enormous potential, the essential computing unit – the quantum bit or “qubit” – is notoriously unstable. According to a Berkeley Lab piece on the subject, “the qubit is easily perturbed by electrons and other elements in its surrounding environment.” These perturbations can cause a quantum particle to decohere, that is to lose information, comprising the accuracy of computations. Scientists are looking to topological insulators to solve this “decoherence” problem. The qubits in a topological quantum computer would be made from Majorana zero modes, which are immune to decoherence. Thus states stored in the form of topologically protected qubits would be preserved. The experimenters believe they have identified a promising substrate in the form of bismuth selenide/BSCCO heterostructures. “Our studies reveal a large superconducting pairing gap on the topological surface states of thin films of the bismuth selenide topological insulator when grown on BSCCO,” Fedorov says. “This suggests that Majorana zero modes are likely to exist, bound to magnetic vortices in this material, but we will have to do other types of measurements to find it.” The research was primarily funded by the National Natural Science Foundation of China. Findings were published in the journal Nature Physics in a paper titled “Fully gapped topological surface states in Bi2Se3 induced by a d-wave high temperature superconductor.”
<urn:uuid:e2384541-2062-41fb-b1d1-4bc656923699>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/09/18/toward_stable_quantum_computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904686
756
3.640625
4
The Internet of Things increasingly includes “smart toys,” but no parent knowingly purchases a toy for their child that potentially risks the safety and privacy of their family. Those risks are caused by security flaws found in the Internet-connected toys. Unlike “dumb” toys, hackers could exploit “smart” toy vulnerabilities and potentially harvest a child’s name, birthdate, location and more. Bugs in a smart bear Mark Stanislav, manager of security advisory services at Rapid7, discovered bugs in the $100 Fisher-Price Smart Toy, bugs that a hacker could exploit to find kids’ profiles with their name, birthdate and more. The WiFi-connected stuffed toy “can talk, listen, and learn,” and comes with an app that parents can use to schedule playtime activities, daily helpers and more. The security issues had to do with how the app communicated with servers as the Web API improperly handled authentication. Rapid7 reported a list of APIs that mishandled authorization with associated risks ranging from finding “all children's profiles, which provides their name, birthdate, gender, language, and which toys they have played with” to hijacking the “device’s built-in functionality.” As for the impact, Rapid7 wrote: Most clearly, the ability for an unauthorized person to gain even basic details about a child (e.g. their name, date of birth, gender, spoken language) is something most parents would be concerned about. While in the particular, names and birthdays are nominally non-secret pieces of data, these could be combined later with a more complete profile of the child in order to facilitate any number of social engineering or other malicious campaigns against either the child or the child's caregivers. Additionally, because a remote user could hijack the device's functionality and manipulate account data, they could effectively force the toy to perform actions that the child user didn't intend, interfering with normal operation of the device. Tod Beardsley, Rapid7’s security research manager, said, “This is an easy mistake. You wouldn’t find these bugs today from places like Google, Microsoft.” The flaws were discovered by Stanislav in November with the vendor fixing the issues on January 19. Flaws in kid-tracking watch open unauthorized access to child’s location The hereO GPS watch, which started as an Indiegogo campaign, is a real-time tracking device for small children. The watch comes with an app which allows parents to see the location of their child, set up geofencing alerts – such as for safe and un-safe places – and more. Stanislav found flaws in the hereO GPS platform that could allow for authorization bypass. Regarding the impact, he wrote: By abusing this vulnerability, an attacker could add their account to any family's group, with minimal notification that anything has gone wrong. These notifications were also found to be able to get manipulated through clever social-engineering by creating the attacker's "real name" with messages such as, 'This is only a test, please ignore.' Once this exploit has been carried out, the attacker would have access to every family member's location, location history, and be allowed to abuse other platform features as desired. Because the security issue applies to controlling who is allowed to be a family member, the rest of this functionality performs as intended and not its self any form of vulnerability. The security issues were discovered in October and reported in November; the vendor patched the flaw on December 15. 
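Both findings boil down to API endpoints that authenticate the caller but never check that the caller owns the requested record. The sketch below is a hypothetical Python illustration of that bug class and its fix; the function names and data model are invented, not the vendors' code.

# Hypothetical sketch of the bug class Rapid7 describes: an API that fetches a
# child profile by ID without checking that the caller's account owns it.

PROFILES = {42: {"name": "Sam", "birthdate": "2012-05-01", "owner_account": "parent-7"}}

def get_profile_insecure(profile_id, authenticated_account):
    # Authenticated, but never checks ownership, so any logged-in user can read any child.
    return PROFILES.get(profile_id)

def get_profile_secure(profile_id, authenticated_account):
    profile = PROFILES.get(profile_id)
    if profile is None or profile["owner_account"] != authenticated_account:
        return None   # a real web framework would return HTTP 403/404 here
    return profile

print(get_profile_insecure(42, "attacker-1"))   # leaks the record
print(get_profile_secure(42, "attacker-1"))     # None
print(get_profile_secure(42, "parent-7"))       # returned only to the owner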
Rapid7 has no indication that attackers were exploiting the vulnerabilities. As more companies connect their products to the Internet, we are likely to continue seeing the unpleasant trend of tacking on security as an afterthought instead of baking it in.
<urn:uuid:ab08fae9-e9e3-41db-ae76-92e1b4499876>
CC-MAIN-2017-04
http://www.networkworld.com/article/3028827/security/security-flaws-found-in-fisher-price-smart-teddy-bear-and-kids-gps-tracker-watch.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968589
793
2.515625
3
The System Bus The System Bus is one of the four major components of a computer. This logical representation is taken from the textbook. The system bus is used by the other major components to communicate data, addresses, instructions, and control signals. The CPU: A High-Level Description Just how does the CPU interact with the rest of the system? 1. It sends out and takes in data (in units of 8, 16, 32, or 64 bits). 2. It sends out addresses indicating the source or destination of such data. An address can indicate either memory or an input/output device. 3. It sends out signals to control the system bus and arbitrate its use. 4. It accepts interrupts from I/O devices and acknowledges those interrupts. 5. It sends out a clock signal and various other signals. 6. It must have power and ground as input. The CPU Interacts Via the System Bus The System Bus allows the CPU to interact with the rest of the system. Each of the logical pinouts on the previous figure is connected to a line in the system bus. Ground lines on the bus have two purposes: 1. To complete the electrical circuits, and 2. To minimize cross-talk between the signal lines. Here is a small bus with three data lines (D2, D1, D0), two address lines (A1, A0), a system clock (F) and a voltage line (+V). In our considerations, we generally ignore the multiple grounds and the power lines. Notations Used for a Bus Here is the way that we would commonly represent the small bus shown above. The big "double arrow" notation indicates a bus made up of a number of lines; our author calls this a "fat arrow". Lines with similar function are grouped together, and their count is denoted with the "diagonal slash" notation. From top to bottom, we have: 1. Three data lines D2, D1, and D0; 2. Two address lines A1 and A0; 3. The clock signal for the bus, F. Power and ground lines usually are not shown in this diagram. Computer Systems Have Multiple Busses Early computers had only a single bus, but this could not handle the data rates. Modern computers have at least four types of busses: 1. A video bus to the display unit; 2. A memory bus to connect the CPU to memory, which is often SDRAM; 3. An I/O bus to connect the CPU to input/output devices; 4. Busses internal to the CPU, which generally has at least three busses. Often the proliferation of busses is for backward compatibility with older devices. Backward Compatibility in PC Busses Here is a figure that shows how the PC bus grew from a 20-bit address through a 24-bit address to a 32-bit address while retaining backward compatibility. Backward Compatibility in PC Busses (Part 2) Here is a picture of the PC/AT bus, showing how the original configuration was kept and augmented, rather than totally revised. Note that the top slots can be used by the older 8088 cards, which do not have the "extra long" edge connectors. Notation for Bus Signal Levels The system clock is represented as a trapezoidal wave to emphasize the fact that it does not change instantaneously. Here is a typical depiction; others may be seen, but this is what our author uses. Single control signals are depicted in a similar fashion, except (of course) that they may not vary in "lock step" with the bus clock. Notation for Multiple Signals A single control signal is either low or high (0 volts or 5 volts). A collection, such as 32 address lines or 16 data lines, cannot be represented with such a simple diagram. For each of address and data, we have two states: the address or data is valid, or the address or data is not valid. For example, consider the address lines on the bus.
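A toy Python model of the small bus above may help; the widths and values are invented for illustration, with None standing in for a line that is not being driven:

# A toy model of the small bus described above (3 data lines, 2 address lines).
# "Valid" means every line in the group is driven to 0 or 1; None stands in for
# a line that is not being driven. Purely illustrative.

class BusGroup:
    def __init__(self, name, width):
        self.name, self.width = name, width
        self.lines = [None] * width          # not driven -> group not valid

    def assert_value(self, value: int):
        self.lines = [(value >> i) & 1 for i in reversed(range(self.width))]

    def release(self):
        self.lines = [None] * self.width

    def is_valid(self):
        return all(bit is not None for bit in self.lines)

addr = BusGroup("A", 2)
data = BusGroup("D", 3)
addr.assert_value(0b10)                      # CPU asserts address 2
print(addr.lines, addr.is_valid())           # [1, 0] True
print(data.lines, data.is_valid())           # [None, None, None] False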
Imagine a 32-bit address. At some time after T1, the CPU asserts an address on the address lines. This means that each of the 32 address lines is given a value. When the CPU has asserted the address, it is valid until the CPU ceases assertion. Reading Bus Timing Diagrams We need to depict signals on a typical bus. Here we are looking at a synchronous bus, of the type used for connecting memory. This figure, taken from the textbook, shows the timings on a typical bus. Note the form used for the address signals: between t0 and t1 they change value. According to the figure, the address signals remain valid from t1 through the end of t7. Read Timing on a Synchronous Bus The bus protocol calls for certain timings to be met: a maximum allowed delay for asserting the address after the clock pulse, and TML, the minimum time that the address must be stable before MREQ is asserted. Read Sequences on an Asynchronous Bus Here the focus is on the protocol by which the two devices interact. This is also called the "handshake". The bus master asserts MSYN and the bus slave responds with SSYN when done. Attaching an I/O Device to a Bus This figure shows a DMA controller for a disk attached to a bus. It is only slightly more complex than a standard controller. Each I/O controller has a range of addresses to which it will respond. Specifically, the device has a number of registers, each at a unique address. When the device recognizes its address, it will respond to I/O commands sent on the command bus. A number of I/O devices are usually connected to a bus. Each I/O device can generate an interrupt, called "INT", when it needs service. The CPU will reply with an acknowledgement, called "ACK". The handling by the CPU is simple. There are only two signals: INT, indicating that some device has raised an interrupt, and ACK, indicating that the CPU is ready to handle that interrupt. We need an arbitrator to take the ACK and pass it to the correct device. The common architecture is to use a "daisy chain", in which the ACK is passed from device to device until it reaches the device that raised the interrupt. Details of the Device Interface Each device has an Interrupt Flip-Flop that is set when the device raises the interrupt. Note that the interrupt line is pulled to ground as a signal to the CPU. The ACK comes from the left of the figure and is trapped by the AND gate. The device identifies itself by a "vector", a pointer to the address of the handler that will service the I/O.
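The daisy-chain acknowledgement can be sketched in a few lines of Python; the device names, flip-flop states and vectors below are invented, and real hardware does this with gates rather than code:

# Simplified sketch of the daisy-chain interrupt acknowledge described above:
# the ACK enters at the first device and is passed along until it reaches the
# device whose interrupt flip-flop is set; that device answers with its vector.

def daisy_chain_ack(devices):
    """devices: list of dicts like {"name": ..., "int_ff": bool, "vector": int}."""
    for dev in devices:                       # ACK travels from device to device
        if dev["int_ff"]:
            dev["int_ff"] = False             # this interrupt is now being serviced
            return dev["vector"]              # the device identifies itself by its vector
    return None                               # spurious interrupt: no requester found

chain = [
    {"name": "disk",    "int_ff": False, "vector": 0x20},
    {"name": "serial",  "int_ff": True,  "vector": 0x24},
    {"name": "printer", "int_ff": True,  "vector": 0x28},
]
print(hex(daisy_chain_ack(chain)))            # 0x24: the requester nearest the CPU wins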
<urn:uuid:cd4ab080-1e6a-4e71-ad4b-2ea83e245390>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155_Slides/Chapter12/SystemBusFundamentals.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925029
1,472
3.828125
4
Threats to information security come in all shapes and sizes, and from all directions: blended threats, mass-mailer worms, Trojans, phishing attacks, spyware, keystroke loggers, etc. Every day, one or more of these threats put critical information at risk in Internet-connected corporations and businesses around the globe. One of the biggest differences between the threats of today and those of yesterday is motive. It used to be that a hacker demonstrated his or her skills to gain notoriety or bragging rights within the hacking community. Now, however, it’s all about profit. Show Me the Money Malicious code for profit is the name of the game today. And how do online con artists and criminals get between victims and their money? More often than not, it is by duping unsuspecting users into doing something they should not—like opening an infected attachment, clicking on a fraudulent link, providing sensitive information to an untrustworthy source, downloading unsafe programs, and more. Unfortunately, users often make it even easier for hackers to exploit their systems by neglecting to keep operating systems and other software up-to-date. And keeping up with such critical but cumbersome tasks is no easier for IT administrators who typically have so much to patch and so little time. Yet, just one vulnerable system or a single gullible or careless user is all it might take for a hacker to gain entry into a virtual goldmine of confidential corporate data. Then what? Trouble, that’s what. Security experts have observed what they refer to as “a worrisome trend” in the use of malicious code for profit. According to the our most recent, bi-annual Internet threat report, targeted Trojan attacks are being used for financial gain. In one overseas case, several executives at large companies were arrested for allegedly using Trojans to monitor their competitors, costing those competitors lost bids and customers as a result. The Trojan provided complete access to the victims’ computers over the Internet. And how did the Trojan get onto a victim’s system in the first place? Through a seemingly safe e-mail attachment, which was opened by its naïve recipient. In another unrelated case, Trojans were sent to government agencies in the United States and the United Kingdom either via e-mail attachments or by exploiting a vulnerability in a popular word processing program. The Trojans were able to download other applications and open back doors on the compromised computers. Just Say No So, what does this mean to the CIO? It’s simply a reminder that when attempting to assess and manage risk, the place to start is with people. End users must understand the difference between safe and unsafe computing practices and must be held accountable for their actions. The most appropriate forum for sharing this information is the corporate information security policy, which every employee should read and understand. Among other things, this policy should detail the following: People represent one of the greatest risks to information security and availability. Yet, they can also be one of the most formidable deterrents to information theft and compromise—if they understand and follow proven best practices for secure computing. With a well-informed workforce, organizations can take better advantage of the technologies of today and tomorrow while reaping the benefits of doing business efficiently and effectively in a highly connected and very profitable Internet-driven world. 
Mark Egan is Symantec's CIO and vice president of Information Technology. He is responsible for the management of Symantec's internal business systems, computing infrastructure, and information security program. Egan is author of "Executive Guide to Information Security: Threats, Challenges, and Solutions” from Addison Wesley and was a contributing author to "CIO Wisdom.”
<urn:uuid:3bab9940-1a99-4954-86f6-770082710266>
CC-MAIN-2017-04
http://www.cioupdate.com/trends/article.php/3555031/Hacking-for-Dollars.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00573-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949024
773
2.546875
3
For years, people were accustomed to RS232 ports. Recently, however, RS485 converters have taken over, and industries are realising the advantages of using them. These converters are capable of higher speeds than many other kinds of converters, so their popularity keeps growing. Typically, industries use RS232 to RS485 converters for device control, data acquisition, and remotely managing temperature and other settings. For one thing, in industrial setups there is always a need to send data over long distances. RS232 cannot transmit and receive data over such distances, whereas RS485 can carry data up to about 4,000 feet. Perhaps one of the biggest benefits of an RS232 to RS485 converter is its ability to withstand electrical spikes. In industrial setups, the machinery being run generates static electricity that can cause electrical spikes and ground loops, and expensive machinery and equipment can be damaged as a result. The converter is able to withstand these spikes and so helps protect the attached devices and machinery. The RS485 converter does not require a software driver to operate: the moment it is plugged into the device and the data cables are connected, it begins working. This simple plug-and-play behaviour is an advantage, since the user does not have to install any software. Furthermore, the converter is fast, supporting speeds of up to 10 Mbps over short distances; as the distance increases the rate drops, but it remains comparatively fast. Within the same building, connecting a PC's serial port to a specific device, such as machinery on the production floor, is exactly the kind of problem RS232 to RS485 converters solve. These converters have several strengths and clear benefits over plain RS232 links. The first and foremost advantage is that RS485 needs only a single supply voltage, typically 5V, whereas RS232 line drivers typically require additional, higher voltage rails; an RS485 converter needs just 5V because its output swings within that supply. RS485 converters have wide temperature and power ranges and are designed to meet industry demands for reliability and functionality in environments with high levels of interference. They enable connectivity between devices that use different communications protocols. RS485 converters are used with motor controllers, sensors, temperature controllers and control valves. The RS485 standard also does not stipulate any particular connector, so any kind of connector can be fitted, such as a DB9 serial connector or an RJ11 jack, without affecting performance in any way. Using a converter and an appropriate connector, the RS232 serial port of a computer can be linked to another device either in the same room or in a remote location. RS485 converters come as 2-wire or 4-wire models. The 4-wire arrangement is generally considered more effective, because the master's driver is connected to all of the node receivers, while the node transmitters are attached to the master's receiver at the other end. This not only enhances the speed.
It also helps make the communication between the connected systems more effective. Furthermore, RS485 is a multi-point system, which is not the case for plain RS232 links: a single RS485 bus can have up to 32 nodes, which allows several devices to be attached and to run simultaneously, as long as the converter has an external power source. Driving data over an extended distance is among the key benefits of this converter: RS485 can drive a line up to about 1,200 meters, compared with roughly 15 meters maximum for RS232. Speed is another strong point. Even slower RS485 transceivers run at around 10 megabits per second, and the fastest RS485 transceivers available today run at approximately 50 megabits per second, whereas the fastest RS232 transceivers currently top out at about one megabit per second. Better noise immunity is another prominent benefit of RS485. Because the signal is differential, any noise coupled onto the cable from outside appears on both wires, and it cancels out when the receiver takes the difference between the two.
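A small Python sketch (with made-up voltages and noise) shows why the differential pair cancels common-mode noise, since the receiver only evaluates the difference between the two wires:

# Why the differential pair used by RS-485 cancels common-mode noise: the same
# interference is added to both wires, and the receiver only looks at A - B.
# All values are invented for illustration.
signal = [1, 0, 1, 1, 0]                          # logical bits to send
a = [(+0.2 if bit else -0.2) for bit in signal]   # wire A (volts, idealized)
b = [-v for v in a]                               # wire B carries the inverse
noise = [0.5, -0.3, 0.8, 0.1, -0.6]               # spikes hit both wires equally

received_a = [va + n for va, n in zip(a, noise)]
received_b = [vb + n for vb, n in zip(b, noise)]
recovered = [1 if (va - vb) > 0 else 0 for va, vb in zip(received_a, received_b)]
print(recovered == signal)                        # True: the noise term subtracts out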
<urn:uuid:4c54315b-4dbd-47f9-8a77-bae5a7e21ae1>
CC-MAIN-2017-04
http://www.fs.com/blog/benefits-from-rs485-converters.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953333
942
2.625
3
WASHINGTON, DC--(Marketwired - March 31, 2014) - The National Association for the Education of Young Children's (NAEYC) Week of the Young Child™ (April 6-12, 2014) draws attention to how a high-quality early childhood experience the first few years of life set a child's path for success in school and in life and offers tips for parents to be sure they're choosing high-quality program. "Week of the Young Child™ reinforces that the early years (birth through age 8) are critical learning years, and qualified early childhood professionals accelerate how our children learn, develop, build the skills to get along with others, and succeed in school and life," said Rhian Evans Allvin, NAEYC's Executive Director. "An NAEYC accredited program offers a safe, nurturing, and stimulating environment during the early years with specially skilled and knowledgeable staff and professionals can ensure children have the most positive learning experience possible." NAEYC offers the following tips parents can use when selecting a safe, nurturing and stimulating learning environment for their children. For infants, a high-quality program means: - Group size is limited to no more than eight babies, with at least one teacher for every three children. - Each infant is assigned to a primary caregiver, allowing for strong bonds to form and so each teacher can get to know a few babies and families very well. - Teachers show warmth and support to infants throughout the day; they make eye contact and talk to them about what is going on. - Teachers are alert to babies' cues; they hold infants or move them to a new place or position, giving babies variety in what they can look at and do. - Teachers pay close attention and talk and sing with children during routines such as diapering, feeding, and dressing. - Teachers follow standards for health and safety, including proper hand washing to limit the spread of infectious disease. - Teachers can see and hear infants at all times. - Teachers welcome parents to drop by the home or center at any time. For toddlers, a high-quality program means: - Children remain with a primary teacher over time so they can form strong relationships. - The teacher learns to respond to the toddler's individual temperament, needs, and cues, and builds a strong relationship communication with the child's family. - Teachers recognize that toddlers are not yet able to communicate all of their needs through language; they promptly respond to children's cries or other signs of distress. - Teachers set good examples for children by treating others with kindness and respect; they encourage toddlers' language skills so children can express their wants and needs with words. - The physical space and activities allow all children to participate. For example, a child with a physical disability eats at the same table as other children. - Teachers frequently read to toddlers, sing to toddlers (in English and children's home languages), do finger-plays, and act out simple stories as children actively participate. - Teachers engage toddlers in everyday routines such as eating, toileting, and dressing so children can learn new skills and better control their own behavior. - Children have many opportunities for safe, active, large-muscle play both indoors and outdoors. - Parents are always welcome in the home or center. - Teachers have training in child development or early education specific to the toddler age group. 
For preschoolers ages 3 to 5, a high-quality program means: - Children follow their own individual developmental patterns, which may vary greatly from child to child. - Children feel safe and secure in their environment. - Children have activities and materials that offer just enough challenge -- they are neither so easy that they are boring nor so difficult that they lead to frustration. - Children can connect what they learn with past experiences and current interests. - Children have opportunities to explore and play. To find a NAEYC accredited center or school and for more tips for choosing a high-quality early childhood education program go to http://families.naeyc.org NAEYC's mission is to serve and act on behalf of the needs, rights and well-being of all young children with primary focus on the provision of educational and developmental services and resources. Founded in 1926, the National Association for the Education of Young Children is the largest and most influential advocate for high-quality early care and education in the United States. Learn more at www.naeyc.org.
<urn:uuid:3167c2ed-6f67-48e2-b093-8507901a6939>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/essential-ingredients-high-quality-early-childhood-education-highlighted-week-young-1894257.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952863
919
2.9375
3
The Santa Clara, Calif., chip maker's big leap into the nanotechnology era extends on the "strained silicon" technique first adopted by competitor IBM Corp but Intel would be the first to use it in large scale production. By stretching the atoms, Intel said the new technology would allow electrical current to flow faster, boosting computing performance and, more importantly, reduce chip-making costs in a tough market for the semiconductor group. Intel said the new process, which is part of plans to spend $12.5 billion over two years on chip-making technologies, could actually create transistors whose key features are just 50 nanometers. The latest advances are aimed at the nanotechnology era, where chip-making science is geared towards controlling individual atoms and molecules that are thousands of times smaller than current technologies permit. Intel, one of a handful of companies in the semiconductor group with the financial might to go it alone on new chip-making technologies, also announced plans to move to 12-inch silicon wafers, up from the current standard of eight inches, at two factories in New Mexico and Oregon. The move would cut production costs by at least one-third per chip. Intel said the Oregon foundry would manufacture the chips of the 90nm process in the interim while the company's facilities in New Mexico and Ireland would handle the mass production of the new chips.
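Rough, back-of-the-envelope arithmetic suggests where the per-chip savings come from: a 12-inch wafer has 2.25 times the area of an 8-inch wafer, so it yields many more candidate dies per pass through the fab. The Python sketch below ignores edge losses, yield and per-wafer processing cost, so it is only indicative:

# Area scaling behind the move from 8-inch to 12-inch wafers. Die cost falls
# roughly as the number of candidate dies per wafer rises with wafer area.
import math

def wafer_area(diameter_inches):
    r_mm = diameter_inches * 25.4 / 2
    return math.pi * r_mm ** 2                # mm^2

ratio = wafer_area(12) / wafer_area(8)
print(round(ratio, 2))                        # 2.25x the area of an 8-inch wafer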
<urn:uuid:05121ae6-0b28-4372-91a6-92846463bd18>
CC-MAIN-2017-04
http://www.cioupdate.com/news/article.php/1445891/Intel-Makes-Nano-Leap.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947057
279
3.109375
3
A greenhouse is a structure developed for growing plants in a controlled environment. These structures capture the incoming visible solar radiation and retain heat to provide a favorable environment for plant growth. Traditionally, majority of the greenhouses used soil as the base for growing plants. However, hydroponic or soil-less horticulture has recently started gaining popularity in the greenhouse industry. The greenhouse market is analyzed by different types and on the basis of technologies used. The type segments considered for the market estimation of greenhouses include hydroponic and non-hydroponic techniques. The greenhouse market can be segmented by ingredients and sub-markets. Ingredients of this market are polylactic acid, polyhydroxyalkanoate (PHA), dispersants, polyethylene (PE), superabsorbents, solvents, plastic films and sheets, and amphoteric surfactants. Sub-markets of this market are permanent greenhouse, macro tunnels, and low tunnels. Key Questions Answered - What are market estimates and forecasts; which of the greenhouse markets are doing well and which are not? What makes our report unique? - This report provides most granular segmentation on permanent greenhouse, macro tunnels, and low tunnels. - This report provides market sizing and forecast for the greenhouse market, along with the drivers, restraints, and opportunity analysis for each of the micro markets. Audience for this report - Global greenhouse companies - Manufacturing companies - Traders, distributors, and suppliers - Governmental and research organizations - Associations and industry bodies - Technology providers Along with the market data, you can also customize MMM assessments that meet your company’s specific needs. Customize to get comprehensive industry standard and deep dive analysis of the following parameters: - Demand estimation for greenhouse equipment for each specific country/geographic region - Prioritize the equipment after classifying them based on their value, application, and frequency - Intricate research to study the dominant market forces that steer the product’s regional growth by analyzing its supply and demand - To recognize the highly sought and most suited equipment specifications for each crop type and regional market Supply Chain Analysis - To identify the least cost green supplier sourcing across the world based on a set of qualifying criteria - To identify the potential distribution pockets based on selective benchmarks - To study the efficiency of the distributional patterns of the competitor’s product range with greenhouse equipment - To decipher and alert supply chain disruptions from procurement through distribution, and reduce the risk with possible alternative solutions - A very critical study on the competitor’s strengths and capabilities, upcoming strategic moves, and their business innovations for this sector to single out effective solutions - Analysis of the effectiveness of competitor’s product and service portfolio in any desired location - Analysis of the market share of the products in your preferred state/region in order to converge on their success factors - To provide a smooth legal passage through the regulatory framework of local authorities by analyzing the barriers in conducting trade - To analyze the duty and tax regulations in the procurement and assembling of components in a preferred region/country in case of setting up a manufacturing plant - In-depth trend analysis of the functional patterns of greenhouse equipment in your preferred choice of 
region/state/country - To provide information on crop suitability and industrial innovations for greenhouse applications and its future prospects Social Connect Forum - To discern the sustainable and customer-friendly novelties - To understand the product’s stage and rate of adoption through customer opinions - Using quality function deployment, expert views, and ideas that are considered to redesign and validate a promising product for the future Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
<urn:uuid:0f84ca98-790c-457f-bdf9-f5669f1a903d>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/greenhouse-reports-2983600293.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894242
849
2.671875
3
Corresponding with World AIDS Day 2006, the U.S. Health and Human Services Department has launched the AIDS.gov Web site to help increase awareness of this devastating disease. According to the HHS "since 1981, over 20 million people worldwide have died of AIDS, and an estimated 40 million people are living with HIV." The Web site's goal is to ease access to resources, testing and provide vital information. "We at HHS encourage users to learn about prevention, testing, treatment, and research programs, and to find federal HIV/AIDS policies and resources," said Secretary Mike Leavitt in a statement released today. The site includes sections on basic HIV/AIDS information, news and events, treatment research and advocacy agencies and programs, as well as a section dispelling myths about the disease. "World AIDS Day is a time for reflection and renewal," the Leavitt said. "Let us mark it by acknowledging the work of all the worldwide partners in this battle and how, combined, we can strive to defeat HIV/AIDS."
<urn:uuid:c7362af3-457c-4716-97f1-8ff3d084fb6b>
CC-MAIN-2017-04
http://www.govtech.com/e-government/HHS-Launches-Awareness-Web-Site-on.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00013-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964083
212
3.109375
3
Clearly, the key to successful project execution in a complex organization is some kind of high level coordination. The difficulty is how to impose this kind of cooperation on functional units that have been doing things their own way for a long time. The natural result of functional departments fighting over scarce resources is to reach a compromise. Compromise boils down to agreements over how scarce resources will be shared between departments and projects. Rather than agreeing to have one project being completed before another, the functional units agree to have a number of projects executed simultaneously. Compromise “feeds” everyone a little bit. Resources (skilled staff) working on projects are therefore forced to divide their time up between many projects in order to please everyone. Unfortunately the typical result is that no one is satisfied because of the effects of multitasking. Is Multitasking Bad? Multitasking means jumping between tasks. It implies that a particular resource (person) will apply themselves to a number of different tasks (projects or assignments) over a given period of time (within a given hour, day or week). In itself, multitasking is not necessarily bad. Managers love it because it keeps staff busy — there is always something for them to do. Through multitasking resource utilization can be kept very high (high efficiency levels). However, multitasking becomes bad when the time cost of switching between tasks adds to the overall completion time of every task. Bad multitasking occurs when the most important projects are delayed so as to ensure that there is some progress on every project. Bad multitasking occurs when resources don’t know what’s the most important use of their time (generally because they were not told) and therefore split their efforts among a number of activities. This keeps them very busy but does not actually emphasize the completion of any one project as a priority.
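A tiny simulation makes the effect visible. Even with zero switching overhead, slicing three equal tasks round-robin pushes every completion date toward the worst case; the durations below are invented for illustration:

# A small model of why bad multitasking delays every project: three 10-day tasks
# done one after another finish on days 10, 20 and 30; round-robin in 1-day slices
# (ignoring switching overhead entirely) pushes every finish toward day 30.
def finish_times_sequential(tasks):
    done, t = [], 0
    for d in tasks:
        t += d
        done.append(t)
    return done

def finish_times_round_robin(tasks, slice_len=1):
    remaining, t, done = list(tasks), 0, [None] * len(tasks)
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                work = min(slice_len, r)
                remaining[i] -= work
                t += work
                if remaining[i] == 0:
                    done[i] = t
    return done

print(finish_times_sequential([10, 10, 10]))   # [10, 20, 30]
print(finish_times_round_robin([10, 10, 10]))  # [28, 29, 30]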
<urn:uuid:81885811-6d77-43b6-b2b6-8f364fc874f1>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/03/10/portfolio-management-versus-multitasking-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00435-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959089
376
2.609375
3
Amr Ibrahim Enan is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt. With the introduction of I/O consolidation that I discussed in a previous post, network engineers found themselves responsible for one physical switch that provides both LAN and SAN access for servers in data centers, since SAN traffic is now carried over Ethernet using FCoE technology. Even if you do not have to configure the SAN features of your I/O consolidation switch, you should at least have a basic understanding of FC technologies. In this post, I will provide you with all you need to know about FC technologies as a network engineer. While doing so, I will try to relate — whenever possible — each FC feature to a network feature you already understand, which will make it much easier and more fun to understand and memorize the relatively new FC feature. Before we discuss anything, let me begin with a quick comparison between the TCP/IP stack and the FC stack. Before we dig deeply into this comparison, I just want to bring to your attention that the TCP/IP stack has four layers, each with a defined programming API or physical interface, while the corresponding divisions in FC are called levels. Theoretically speaking there is no difference, but from the implementation perspective, layers mandate a programming API or physical interface between them, whereas the implementation of levels is vendor specific. So how does this make FC different from Ethernet? In Ethernet, if you connect two Ethernet switches, one from Cisco and one from Juniper, it will most probably work just fine, as each switch's functionality is strictly defined and implemented according to its layer definition. For FC switches that is not the case: if you try to connect a Cisco switch to a Brocade switch, for example, you have to configure both switches in what is called compatibility mode, since each vendor has its own FC level implementation. Now let us dig deeper into the layers. As you can see in Figure 1, we have the TCP/IP stack with its four layers, and right next to it are the FC levels. Application layer / FC level 4 and FC level 3 The application layer is where we run all of our application layer protocols, like DNS, SMTP, HTTP, etc. As you may already know, whenever an application layer protocol needs to put some data on the network, it relies on the services of the next layer which, in our case, is the transport layer. This is exactly the role of FC level 4: at this level, we map the upper layer protocols (ULPs) to the FC environment. You can see from the figure that inside FC level 4 each ULP has its own mapping. FC level 3 is a placeholder for future features that can be applied to all ULP protocols before the packet is passed to FC level 2. Transport layer / FC level 2 In the transport layer we have two main protocols: the TCP protocol and the UDP protocol. Both are available for the use of application layer protocols, but each provides a different set of services. TCP is a connection-oriented protocol, which means it needs to negotiate a specific set of parameters before exchanging any data between the two peers on the network. Thanks to this negotiation, TCP can recover lost segments while sending data and, in times of congestion, provide the connection with flow control. UDP is just a connectionless protocol that does not provide you with any of those services.
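Here is the correspondence this comparison builds up, summarized as data; it is a rough mapping for orientation only, following the article's framing rather than an exact one-to-one equivalence:

# Rough side-by-side summary of the TCP/IP-to-FC comparison drawn in this post.
correspondence = [
    ("Application layer (HTTP, SMTP, ...)", "FC-4: ULP mapping, plus FC-3 common services"),
    ("Transport layer (TCP/UDP)",           "FC-2: framing, flow control, delivery order"),
    ("Internetwork layer (IP addressing)",  "no direct FC equivalent in this comparison"),
    ("Data link layer (Ethernet encoding)", "FC-1: 8b/10b transmission encoding"),
    ("Physical layer",                      "FC-0: media and serialization on the wire"),
]
for tcpip, fc in correspondence:
    print(f"{tcpip:38} | {fc}")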
FC level 2 provides FC level 4 with the same set of services that TCP provides to the application layer. TCP, however, is implemented inside the kernel of the OS, which places a lot of processing load on the system CPU to drive a TCP connection; for example, to drive a 100 Mb/s link, TCP could consume 80% of your CPU cycles. FC is implemented in hardware, using smart ASICs on the storage adapter, meaning that FC level 2 has its own processor instead of relying on the system CPU. This is why storage adapters, called HBAs (host bus adapters), are more expensive than normal network adapters and require virtually no system CPU cycles to drive the FC link. Again, the transport layer relies on the Internetwork layer to put this data on the network. In the Internetwork layer we have only one protocol, called the IP protocol. The main function of the IP protocol is to provide a unique identity for your machine on the network so it can send and receive data from other hosts connected to the Internet. This address is used by routers to identify the path to the host, through routes exchanged by the routers using whatever routing protocol is configured. This layer has no equivalent level in FC. Data link / FC level 1 Finally, the Internetwork layer relies on the layers below it to put this data on the network, and these are actually two layers. The data link layer is where the Ethernet protocol encodes and decodes the data as it moves to and from the host. This function is carried out by FC level 1. For FC we use the 8b/10b encoding scheme, and because of this encoding you end up losing 20% of your bandwidth. This means that if you use a 1 Gb/s link, your effective bandwidth is 800 Mb/s. How do we calculate it? It is very easy: just multiply 1 Gb/s by 8/10, and the result is 0.8 Gb/s, which is the same as 800 Mb/s. Physical layer / FC level 0 Both are responsible for serializing the data onto the cable and deserializing the data off the cable. Note that FC can provide you with an effectively error-free link as long as the bit error rate is no higher than about 10^-13. In the next post we will discuss the difference between Ethernet switching and FC switching concepts.
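Returning to the 8b/10b arithmetic above, a couple of lines of Python generalize it to common nominal FC line rates; the rates listed are assumptions for the example:

# The 8b/10b arithmetic: every 8 data bits travel as a 10-bit code group,
# so usable throughput is the line rate multiplied by 8/10.
def effective_throughput(line_rate_gbps):
    return line_rate_gbps * 8 / 10

for rate in (1, 2, 4, 8):                     # nominal line rates in Gb/s, for illustration
    print(f"{rate} Gb/s line rate -> {effective_throughput(rate):.1f} Gb/s of data")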
<urn:uuid:79cc6005-4ba5-4f80-9a14-fa190ee6f6c1>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/05/30/introduction-to-fc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00435-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925969
1,166
2.53125
3
When it comes to blasting satellites into Low Earth Orbit, cost can be a major detriment. A company based in New Zealand called Rocket Labs is looking to fix that problem – at least for smaller satellite launches—with a carbon composite, 11-ton , 18 meter (about 60ft) tall rocket known as Electron that it says can blast payloads of about 100kg (about 220lbs) into LEO for about $5 million. The company says comparable flights would cost around $100 million. +More on Network World: Quick look: The hot Asian space industry+ “Along with benefits for commercial enterprises, cheaper and faster space access has the potential to lead to more accurate weather prediction, global high speed Internet access, as well as real-time monitoring of the impacts of human development. The innovation behind Electron will release the limitations on launching small satellites. Our vision at Rocket Lab is to make space commercially viable and more accessible than ever, doing what the Ford Model T did for consumer automobiles,” said company CEO Peter Beck. Beck founded Rocket Labs in 2007 and the outfit has developed rocket propellant technology for the Defense Advanced Research Projects Agency (DARPA) and the US Office of Naval Research. Electron will use liquid oxygen and kerosene that will fuel up nine of the company’s Rutherford engines --named after the famous New Zealand scientist Ernest Rutherford – strapped together on Electron. With nine Rutherford engines on the first stage, Electron can sustain a complete engine loss before launch and still complete its mission, making it one of few launch vehicles with such capability, the company stated. +More on Network World: NASA forming $3M satellite communication, propulsion competition+ Rocket Labs says launches can be slotted within weeks, rather than years of planning most conventional launches require. It claims to have 30 launches from its private launch facility in New Zealand already set to go next year. Khosla Ventures of Silicone Valley is Rocket Lab’s principal funder. Check out these other hot stories:
<urn:uuid:43618100-0c7e-4eb2-a3b1-63a27e6bafe1>
CC-MAIN-2017-04
http://www.networkworld.com/article/2458887/security0/rocket-lab-wants-to-make-model-t-of-space-satellite-launchers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00069-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924591
418
2.78125
3
As part of the administration’s new 3.8 trillion budget (that is with a T, not a B guys. Really big number.), a radical change in NASA’s mission is proposed. If President Obama has his way, NASA will stop being the operator and builder of our space vehicles and program. The US space program will be placed in the hands of private industry. The government will try to steer the private space industry with grants and technology development projects. This could be a great environment and opportunity for an open source space program. Of course this is a huge departure for the NASA we have known. For the past 50 years NASA has been the designer, builder and operator of our space program. The space race and the race to the Moon were national goals to strive for. In later years, NASA’s mission became more cooperative with other world powers in projects such as the International Space Station and missions to the planets. Then President George W. Bush announced a return to the Moon by 2020. Many experts said it was pie in the sky and would never happen in that time frame, but nevertheless we spent almost 9 billion dollars to date on this mission. Now if the Obama administration has their way we are scrapping it. As a child of the Apollo program, I am dismayed that we can’t even come close to accomplishing something that we were able to do 40 years ago. Have we fallen that far? Alan Boyle over on MSNBC’s Cosmic Log has a great synopsis of how and why the space program is being turned over to the “commercial boys” and who some of those players are. But there is another player. A dark horse if you will, that could be coming up fast on the outside. The Open Luna Foundation has a plan to use open source software and hardware and most importantly an open source community methodology to put a permanent outpost on the Moon. They estimate that they will have to raise 500 to 700 million dollars to pull this off. By selling tourism and moon souvenirs they hope to raise the bulk of this money. They are also looking for donations and volunteers. If you think you have the right stuff head on over. I have included a slide presentation to give you some details on Open Luna. But here are some highlights of their approach: - All aspects of the mission plan and hardware will be open source. This information will be publicly available and community support and involvement will be actively pursued and welcomed. - Special efforts will be made to involve students, educational facilities, and amateur space enthusiasts. - A strong media presence will be a priority. The entertainment and educational potential of the mission will be exploited to allow the mission to reach the maximum number of people possible. This furthers the educational potential of the mission, provides publicity for sponsors (which will encourage support for future missions), and demonstrates to people that this is possible in the present and inspires the next generation to continue and exceed these mission goals. - Mission hardware will be light and geared toward continuity from one mission to future missions. This will save costs and simplify the mission and hardware development. Superfluous hardware will be removed from missions and each component will be made in the lightest fashion possible. This may create initial complications, but it will balance out over the span of the program. Risk levels will be assessed and considered to balance risk with the cost of safety to the ability of the mission to continue forward. 
- Much like an Alpine expedition, moderate risks will be acceptable in favor of exploration. - Access to all scientific data and acceptance of outside research proposals will be encouraged. They currently have a 5 mission plan. I am sure this and other aspects of the plan will change. But that is what they are proposing right now. So why do I think open source could be a winning strategy for a successful return to the moon? Mostly for the same reasons why it took a government to get us there in the first place. I think pure science like going to the moon without a certain profit will not be sustainable in a traditional commercial program. Now some may say that if it is not commercially viable it shouldn’t be undertaken. But sometimes you have to do things for adventure and discovery, without knowing what the exact pay off will be. More often than not though, where there is new discovery, there is new opportunity. It spawns new technology and innovation. I envision using software and systems based on open source software that will save significant dollars in license costs, but more importantly allow for the rapid development of new applications and features that will be required for the mission. Of course some of the hardware (rockets) will be commercially available models. But I am counting on Open Luna to spur development of new designs for crew compartments, living quarters and open source design for the permanent moon station. I think a vibrant community will give Open Luna the edge for government grants and incentives. There are an inordinate amount of space enthusiasts in the IT industry. I think a well organized open source space project could attract a super community of volunteers and developers to accelerate the technology needed in the shortest, cheapest and most efficient manner. Maybe software will not be the zenith of open source usefulness. Maybe the true future of open source lies in the stars, “going where no commercial software has gone before.” Here is Open Luna's slide show:
<urn:uuid:069beb1a-9db2-496b-82ed-f872d36bf7b5>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229522/opensource-subnet/bang--zoom--is-open-source-the-right-way-to-the-moon-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945981
1,094
2.546875
3
IPv6 is the new version of the public IP addressing system for the internet. IPv4 addresses are running out as new devices are added to the net every day. While IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses, which increases the number of possible addresses enormously. For example, IPv4 allows 4,294,967,296 addresses (2^32), while IPv6 allows about 340 undecillion addresses (2^128, roughly 3.4 x 10^38). Yep, that should be enough for some time. Do you want to monitor your IPv6? Do you need help with your IPv6 implementation? Do you want to load test your IPv6? GET IN TOUCH!
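The address-space comparison above can be checked directly with a few lines of Python:

# Address-space arithmetic for IPv4 versus IPv6.
ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(f"IPv4: {ipv4:,} addresses")            # 4,294,967,296
print(f"IPv6: {ipv6:.3e} addresses")          # about 3.403e+38
print(f"IPv6 addresses per IPv4 address: {ipv6 // ipv4:.3e}")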
Hong Kong has the world's fastest Internet. Internet on the moon is 10 times faster. How do our lunar-exploring spaceships get buffer-free video? Lasers. NASA and MIT are shooting "lasers full of Internet" to a ship named LADEE that's exploring the moon's atmosphere. According to NASA, speeds have reached 622 megabits per second (Hong Kong tops out at 63.6). Right now, the agency is using a pulsed laser beam to transmit a pair of HD video signals to and from the moon. The 239,000 miles between the New Mexico ground station and the moon marks the "longest two-way laser communication ever demonstrated," according to NASA. In one test, NASA sent an HD video of Bill Nye (the science guy) from a Massachusetts station to the New Mexico transmitters to the moon—and back through the same route—with just a seven-second delay. It takes 1.3 seconds for a signal to make the one-way trip to the moon. NASA says the information it's receiving now is so precise it can determine LADEE's distance from Earth to within half an inch. "Suppose you wanted to make a Google Maps image of Mars, and not even as crisp as Google Maps," NASA's Don Boroson told the Institute of Electrical and Electronics Engineers. "It would take decades to send that much data back with radio systems we have now. If you had a laser communication system with a 50-times-higher data rate, it would take tens of weeks. Then you could send all the data for a Google Map in one year." NASA says testing will soon expand to include signals from a European Space Agency station in Spain. Later tests will expand to include daylight operations and different cycles of the moon.
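The quoted delay is consistent with simple distance-over-speed arithmetic. The small check below is an illustration added here, not part of NASA's report; it assumes the 239,000-mile figure from the article and the speed of light in miles per second:

```python
# One-way signal delay between the New Mexico ground station and the moon.
distance_miles = 239_000               # distance quoted in the article
speed_of_light_miles_per_sec = 186_282

one_way_delay = distance_miles / speed_of_light_miles_per_sec
print(f"{one_way_delay:.2f} seconds")  # about 1.28 s, i.e. roughly 1.3 seconds
```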
Rock samples analyzed by NASA's Curiosity rover have shown conditions that could have supported ancient life on Mars. The samples, drilled at a depth of a few centimeters, contained sulfur, nitrogen, hydrogen, phosphorous and carbon -- some of the basic ingredients that are needed to support life, scientists at the U.S. space agency said Tuesday. "We've discovered a completely different planet," said Chris McKay, senior research scientist at NASA Ames. The work done by Curiosity represents the first time scientists have been able to drill into any planet other than Earth, said McKay. Previously, analysis of Mars was conducted with samples scooped from the surface. The rover has been drilling in an area that has been named Yellowknife Bay, which is believed to be the end of an ancient river system or intermittently wet lake bed. The samples from the area show evidence of multiple periods of wet conditions, said NASA. The drilling also revealed something else about the planet. "It's the first time we found out the planet isn't red but grey," said McKay. Just a few centimeters below the surface, the sample showed no evidence of oxidization and so didn't have the red color that is so identified with Mars. Curiosity touched down on Mars in August last year. NASA selected the touchdown point -- an area called Gale Crater -- because it believed the area showed evidence of an old network of stream channels and water. The sample was drilled not far from where the rover landed. "It shows the future of Mars exploration is down," said McKay. "That's where we have to look." NASA already has plans to send subsequent missions that will penetrate deeper into the planet. Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service. Follow Martyn on Twitter at @martyn_williams.
Within the next few years, the Internet of Things (IoT) could have implications across several areas of our personal and professional lives. To keep you in the know on this topic, we've compiled some relevant articles that offer different perspectives on the impact of this rising technology.
What is the Internet of Things?
Need an "Internet of Things 101"? What is it, and why does it matter? This article explores how the IoT can potentially benefit your life on a day-to-day basis, as well as some of the challenges that will be encountered as the technology evolves. The realities of IoT will affect not only our homes and appliances, but businesses as well, including industries such as manufacturing and healthcare. Read entire article here.
A New Frontier for CRM: The Internet of Things
According to Gartner, Customer Relationship Management (CRM) technology will be an integral aspect of supporting the emerging IoT technologies. The IoT marks a paradigm shift that will affect how customers interact with brands. The advent of connected technology will require more sophisticated methods of serving customers. CRM solutions will help brands manage customers through multiple sales and service channels. Read entire article here.
First Click: you can't spell 'idiot' without IoT (Internet of Things)
Are all these new connected devices worth the cost? Offering an alternative perspective on the IoT hype, this article argues that the new technology tries to fix something that isn't really broken with an expensive, complex solution. Is the race to put a chip in nearly every household device really making our lives better? Read entire article here.
Economist: The Internet of Things will deliver surge of productivity
In this article, Harvard economist Michael Porter discusses how the IoT can reignite our economy and encourage innovation. IoT data and predictive analytics will fuel efficient product development based on real market drivers and needs, ultimately reducing waste within the economy. This will result in opportunities for growth and productivity. Read entire article here.
When your power goes out due to an outage or power surge, you hope that all of your devices will work when the lights come back on. Usually, they do. But sometimes they don’t. A summer thunderstorm that leaves you in the dark for even just a few minutes can damage your computer or external hard drive. Faulty hardware can also cause power surges that can short out a data storage device. There are many ways a data storage device can fail as a result of a blackout or power surge. A sudden loss or spike of power can short out your hard drive’s control board. It could cause your hard drive’s read/write heads to crash, its motor to seize, or its platters to become damaged. If this happens to you, our electrical surge data recovery technicians can help you. In power loss/electrical surge data recovery scenarios, the hard drive’s printed control board is a common point of failure. Shorted Control Board Every data storage device has a control board. The control board acts as an intermediary between the data on your device and your computer. Power and interface commands go into the device. Data comes out. When the PCB fails, this exchange stops. In a traditional spinning-platter hard drive, the PCB allows electricity to power the spindle motor that sets the drive’s platters and heads in motion. In USB flash drives, SD cards, and solid state drives, the PCB pulls data from the NAND flash memory chips. These two components are usually kept discrete. However, in some microSD cards and flash drives, the control board and chip have been integrated into what appears to be a single package. For traditional spinning-platter hard drives, the PCB contains drive-unique calibrations. For solid-state devices, the PCB contains a controller that takes the raw data from the NAND chip and parses it into something recognizable by your computer. PCBs are especially vulnerable to power surges. A power surge of even just 3 nanoseconds is enough to short a control board. When this happens, a flash memory device will be completely unresponsive. A traditional hard drive can exhibit a variety of symptoms. Often, the hard drive won’t spin up. Or the hard drive spins up, clicks, and spins down. In some rare cases, a shorted PCB can cause other devices hooked up to the drive to short out as well. Electrical Surge Data Recovery – Shorted Control Boards When you plug in an external hard drive, solid state drive, or USB flash drive, they all look the same to you. But the underlying technological differences between hard disk drives and flash memory devices are huge. The electrical surge data recovery method depends heavily on the nature of the device. Traditional Spinning-Platter Hard Drive Replacing a failed control board is not the simple affair it once was. On a spinning-platter hard drive, the PCB contains unique hard drive calibrations. A long time ago, hard drives didn’t need all these calibrations, and any two PCBs from the same model of hard drive were identical. But hard drives have become much more sophisticated since then. Nowadays, while two PCBs may look identical, they are quite different. Each hard drive has its own unique calibration parameters stored on its ROM chip. Replacing a hard drive’s shorted control board involves removing the ROM chip from one board and soldering it onto a compatible board. This is a delicate electrical procedure. Gillware Data Recovery staffs skilled electrical engineers to handle these electrical surge data recovery situations. 
Solid State Devices Solid state drives, USB thumb drives and SD cards have the control board and NAND flash memory chips divided into discrete components. Thumb drives typically have one NAND chip to store data. Solid state drives will often have several NAND chips. The data within these chips looks nothing like the data stored on a hard drive's platters. Data pulled from the chips is assembled into something recognizable by the control chip on the PCB. If the PCB fails, this control chip fails with it. Electrical surge data recovery for flash memory devices involves carefully removing the NAND chips and extracting their contents. The raw data from a NAND chip can be read with any device programmer. But these contents are absolutely useless to anyone at this point. Our data recovery computer scientists must use custom-designed software to emulate the controller chip and piece the data together correctly. There is another kind of flash memory device, known as a monolithic USB thumb drive. Many flash drives today bundle their internal components into what looks like a single inscrutable item. It bears some resemblance to the monolith from Kubrick's 2001: A Space Odyssey. Data recovery from monoliths generally follows the same procedure. However, accessing the NAND chip inside is much trickier. Tiny wires need to be carefully soldered to specific contact points on the device to access the chip. This is referred to as "spiderwebbing". It is an extremely delicate procedure for our electrical engineers. Electrical Surge Data Recovery – Failed Hard Drive A sudden loss of power can cause many different kinds of failure in a hard disk drive. Hard drives have several moving parts. All of these moving parts are potential points of failure. When you power down your computer, it sends signals to your hard drive. These signals tell the hard drive to prepare itself for shutdown. The read/write heads unpark from their position over the platters and the platters slowly spin down. When your computer or external device abruptly loses power, the hard drive doesn't receive these signals. The flow of power through the PCB to the hard drive spindle motor stops without warning. The heads might not have time to unpark before the cushion of air keeping them afloat above the platters dissipates. If the hard drive is in the middle of a write operation, data can become corrupted. If a firmware sector becomes corrupted, the hard drive can be prevented from booting. The read/write heads can also make physical contact with the platters, damaging both. If the heads make contact with the platters, they can stop them from spinning. This harms not only the heads and platters, but the motor as well. The hard drive spindle motor can become seized if it encounters sudden resistance. If the platters keep spinning, the read/write heads will start to gouge out the magnetic coating on the surfaces of the platters. This is called rotational scoring. Severe rotational scoring can render the data on a hard drive unsalvageable. The Electrical Surge Data Recovery Process At Gillware Data Recovery, our cleanroom is staffed with highly skilled engineers. The majority of our engineers have been with us for years and have salvaged data from thousands of failed storage devices. In electrical surge data recovery situations, we always begin with a free evaluation. We even offer to cover the cost of inbound shipping. Prepaid UPS shipping labels are available for all our clients in the continental US. 
Once we’ve assessed your storage device’s point of failure, we can determine the cost and likelihood of a successful recovery. This is when you get your exact price quote from us. We don’t do any additional work unless you approve the price quote, and we only send you a bill once we’ve successfully recovered your critical data. Once your case has been paid for, we send your data back to you on a healthy external drive. If we do not manage to recover your important data, you owe us nothing for our attempts. Ready to Have Gillware Assist You with Your Power Loss/Electrical Surge Data Recovery Needs? Best-in-class engineering and software development staff Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions Strategic partnerships with leading technology companies Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices. RAID Array / NAS / SAN data recovery Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices. Virtual machine data recovery Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success. SOC 2 Type II audited Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure. Facility and staff Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company. We are a GSA contract holder. We meet the criteria to be approved for use by government agencies GSA Contract No.: GS-35F-0547W Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI. No obligation, no up-front fees, free inbound shipping and no-cost evaluations. Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered. Our pricing is 40-50% less than our competition. By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low. Instant online estimates. By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery. We only charge for successful data recovery efforts. We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you. Gillware is trusted, reviewed and certified Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
Zero-day attacks can strike anywhere, anytime. Here are five examples of recent zero-day exploits:
- Windows: In May, Google security engineer Tavis Ormandy announced a zero-day flaw in all currently supported releases of the Windows OS. According to his claim, the troubled code is more than 20 years old, which means "pre-NT".
- Java: In March, Oracle released emergency patches for Java to address two critical vulnerabilities, one of which was actively used by hackers in targeted attacks. They received the highest possible impact score from Oracle and can be remotely exploited without the need for authentication such as a username and password. The risk applies to both Windows and Mac devices.
- Acrobat Reader: In February, a zero-day exploit was found that bypasses the sandbox anti-exploitation protection in Adobe Reader 10 and 11. According to Costin Raiu, director of Kaspersky Lab's malware research and analysis team, the exploit is highly sophisticated; it is likely either a cyber-espionage tool created by a nation state or one of the so-called lawful interception tools sold by private contractors to law enforcement and intelligence agencies for large sums of money.
- The Elderwood Project: Symantec reported that in 2012 the Elderwood Project used a seemingly "unlimited number of zero-day exploits, attacks on supply chain manufacturers who service the target organization, and shift to 'watering hole' attacks" on websites likely visited by the target organization. The report went on to say that the resources needed could only be provided by a large criminal organization supported by a nation state.
- Various Game Engines: In May, Computerworld blogger Darlene Storm reported that thousands of potential attack vectors in game engines put millions of gamers at risk. The article talked about zero-day vulnerabilities in CryEngine 3, Unreal Engine 3, id Tech 4 and Hydrogen Engine.
In the first part of this XenClient CTO blog series (/blogs/?p=174194383), I discussed how the evolution of the traditional computing model led to the sharing of hardware and software resources. Without a doubt, sharing of hardware and software resources is the root cause of various forms of malicious computer security threats and malware attacks, to list a few:
- The browser spyware mess that was created once every computer programmer, regardless of their intentions, was able to intercept, read, and modify all Internet Explorer ActiveX browsing component activity.
- The operating system kernel protection dilemma in existence today, as rootkits can directly manipulate core system objects.
- The system boot persistent-threat problem, with malware able to infect the Master Boot Record (MBR), requiring a cold boot of the hardware system from different, clean media for possible repair.
- The firmware security dilemma, as malicious, persistent threats infect the main system BIOS or I/O device firmware, forcing the malware code to load and gain control prior to the OS.
Enforcing certain levels of access control became mandatory to control access to shared hardware and software system resources. But in reality this was not sufficient by itself to address the known challenges, for various reasons:
- Lack of standardization across the industry for the definition, deployment and exchange of those access control rules. This results in a large manageability challenge across the industry.
- The number and complexity of the access rules had a big impact on operating system and application performance and stability in a very unpredictable way.
- The absence of a trusted authority or organization which would establish the rules for commonly known access cases and problems across the industry.
- It is not possible to trap on the access of key hardware resources including memory, keyboard and display. This results in various types of malware attacks that can't be prevented, such as key loggers, screen capturers and memory-persistent hidden malware.
Interestingly, but not surprisingly, the original PC architecture assumed a single, active computing experience per device. This means that you cannot load and run more than one operating system at a time on any device. The simple solution for this was supporting multi-OS booting by giving the user the option to install multiple operating systems and have them choose the one they preferred at system boot time. Unfortunately, those capabilities were permitted without any level of security measurement, verification, or checking of the authenticity of the installed operating systems. This architectural facility allowed malware authors to install tiny hidden operating systems that could take control of a user's environment prior to booting of the user's regular operating system. As the number of vendors publishing software increased greatly and the number of malware generated for the PC increased exponentially, it became almost impossible to distinguish between the installation of a legitimate application and a malicious one. Moreover, the lack of moderation of software components installed on personal computers created numerous system security and availability challenges, as explained above. In the next part of this blog series, I will discuss how system virtualization with XenClient helps to address these security challenges for IT. Join the conversation by connecting with the Citrix XenClient team online! 
- Visit the XenClient product page
- Follow us on Twitter
- Like us on Facebook
- Visit our XenClient Technical Forum
About the author: Ahmed Sallam drives technology and product strategy working with ecosystem partners for Citrix XenClient and the emerging client devices virtualization market. Prior to Citrix, he was CTO and chief architect of advanced technology at McAfee, now part of Intel Corp. He was co-inventor and architect of DeepSAFE, co-developed with Intel Labs, and co-designer of VMware's VMM CPU security technology known as VMsafe. Prior to McAfee, Ahmed was a senior architect with Nokia's security division and a principal engineer at Symantec. He holds 17 issued patents and has more than 40 pending patent applications. He earned a bachelor's degree in computer science and automatic control from the University of Alexandria. Follow Ahmed on Twitter: https://twitter.com/ahmedsallam Check Ahmed's public profile: www.linkedin.com/in/ahmedsallam
Humidity monitoring helps library protect books
Thursday, Apr 25th 2013
In order to make sure that its books are not further damaged by mold, Union College in Schenectady, New York, installed temperature and humidity monitoring equipment at its library. When the school renovated Schaffer Library in 1998, Union College officials thought that the region's cooler climate naturally protected its books against mold damage and thus humidity monitoring was not necessary. However, library officials discovered mold growth on approximately 12,000 of its books two years ago. To avoid damaging the texts, Union College shipped all of the books to an off-site facility for cleaning, the school reported. "These molds can be found on just about any book and for the most part, are pretty dormant," said facilities director Loren Rucinski. "But all it takes is a slight elevation in temperature to make it turn really destructive and that's what we faced. Although harmless to humans, it began to attack books in our collection that had a specific type of binding and was beginning to contaminate other books. Something had to be done."
How humidity monitoring deters mold growth
At research facilities such as libraries, the need to maintain ideal environmental conditions is paramount. Unlike research labs, where all external variables need to be meticulously monitored in order to prevent calamity, the existing systems in most libraries are typically able to sufficiently deter mold growth. At Union College, the region's cold winters and temperature monitoring equipment used in the summertime ensure that ideal conditions for mold growth are not present throughout most of the year. However, the school reported that mold was able to propagate during the spring and fall. During these seasons, temperature monitoring is more erratic as the library frequently switches on and off its air conditioning unit. As a result, the temperature was warm enough in Schaffer Library for fungi to grow on the texts. Today, the library uses humidity monitoring equipment to make sure mold is not present indoors during these seasons. W. J. Kowalski of Penn State University's Department of Architectural Engineering recommends that building managers hoping to deter mold growth should keep internal moisture levels at 60 percent or below, and Schaffer Library maintains 50 percent relative humidity. To make sure internal conditions remain ideal and that facilities managers have total oversight, the environmental monitoring system can be monitored online. "This has been a truly collaborative project," said college librarian Frances Maloy. "Everyone immediately understood what this situation could mean to our library and everyone involved in finding a solution took ownership of the problem. Even the students were good sports about it, helping to clean books and putting up with temperature swings as we tested out various solutions."
The rising percentage of parents opting out of at least one mandatory vaccination could be a major factor in the recent increase in whooping cough cases. That's according to a study, published in Pediatrics today, based on data from New York state, where religious exemptions to vaccination requirements are enforced loosely enough to allow parents to opt out based on personal or philosophical beliefs about the drugs, Reuters explains. Researchers tracked data from the state's Department of Health. They noticed that the proportion of religiously exempt kids, while still very small, had nearly doubled in the state: 23 in 10,000 to 45 in 10,000. And in counties with more than 1 percent of children under a religious exemption, whooping cough cases were higher: 33 out of every 100,000 kids, compared to 20 per 100,000 kids in counties with an exemption rate under 1 percent. But there's more: because the current whooping cough vaccine is less effective than the original, even vaccinated kids in counties with more exemptions are more susceptible to the illness. Most vaccines rely on "herd immunity" to boost their effectiveness — if a certain percentage of the population is vaccinated, the disease can't spread. Different diseases have different thresholds, and while the New York whooping cough numbers seem pretty tiny, increasing vaccination exemptions aren't just a localized issue.
At one point or another, anyone who captures packets will see a TCP Retransmission. Even in the best of network environments, packet loss will happen from time to time – hey, TCP is built to handle it so don't worry that the sky is falling! Of course, if you see a bunch of them, that's a problem. In Wireshark, there are several ways that a retransmission can be categorized, depending on the behavior. In order to make the best next-step decision, it is important to understand each type of retransmission and what it indicates.
TCP Retransmission – This is a plain-Jane retransmission. Wireshark observed a packet in a TCP conversation with a sequence number and data, and later observed another packet with the same sequence number and data. These are typically sent after a retransmission timer expires in the sender. There are some gotchas, but this is the general definition.
TCP Fast Retransmission – Wireshark marks a retransmission as a fast retransmission when it arrives shortly after the receiver has sent duplicate ACKs asking for that segment. In this case the sender resends the data without waiting for its retransmission timer to expire.
TCP Spurious Retransmission – Since version 1.12 of Wireshark, TCP Spurious Retransmission events have been identified. These indicate that the sender sent a retransmission for data that was already acknowledged by the receiver. For some reason, the sender interpreted that a packet was lost, so it sends it again.
These are the big three that you will likely see in any given environment. Remember that TCP is built to handle some packet loss, so these are normal events, but hopefully not common! If these start to impact application or service performance, comb the path between client and server and look for discards, Ethernet errors, high utilization or cabling issues that may be the real underlying culprit. Chris can be contacted at chris (at) packetpioneer (dot) com.
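To tie the flavors above back to packets on the wire, here is a rough sketch of the simplest check Wireshark performs for the plain retransmission case. It uses the scapy library and a hypothetical capture file name, and it deliberately ignores the timing and duplicate-ACK analysis that separates fast and spurious retransmissions, so treat it as an illustration rather than a substitute for Wireshark's expert analysis.

```python
# Minimal sketch: flag straightforward TCP retransmissions in a capture file.
# Assumes scapy is installed and "capture.pcap" (hypothetical name) exists.
from scapy.all import rdpcap, IP, TCP, Raw

seen = {}  # (src, sport, dst, dport, seq) -> payload already observed

for pkt in rdpcap("capture.pcap"):
    if not (IP in pkt and TCP in pkt and Raw in pkt):
        continue  # only examine TCP segments that actually carry data
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport, pkt[TCP].seq)
    payload = bytes(pkt[Raw].load)
    if seen.get(key) == payload:
        print(f"possible retransmission: {key[0]}:{key[1]} -> {key[2]}:{key[3]} seq={key[4]}")
    else:
        seen[key] = payload
```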
Roadway panels from Solar Roadways consist of a high-strength surface layer with embedded solar cells, a middle layer containing a microprocessor board and related circuitry, and a base plate that distributes power. Shown above is what an actual roadway created with this technology could look like. “There’s 25,000 square miles of road surfaces, parking lots and driveways in the lower 48 states. If we covered that with solar panels with just 15 percent efficiency, we’d produce three times more electricity than this country uses on an annual basis, and it’s almost enough to power the entire world,” said Scott Brusaw, co-founder of Solar Roadways, in a segment of Your Environmental Road Trip, a new film that explores cutting-edge energy solutions. For more information about roadways that transform sunlight into electricity and send it directly to homes or businesses lining the street, read Energy Ecosystem of the Future Hinges on Many Sources. Photo courtesy of Solar Roadways.
It is that time of year when kids of all ages are heading back to school, with fresh, unwrapped school supplies in their backpacks along with their smartphones, tablets, and laptops. Teachers and school administrators are busy preparing for their new incoming class of students, entering in student data, setting up distribution lists, updating their syllabuses, and setting up their grading systems, among the many other things our teachers do for our children. Hard at work behind the scenes are the computers, networks, servers, applications, and cloud infrastructure that support all of the applications that our teachers, administrators, and students depend upon throughout the school year. While all of these various technologies have enabled a modern teaching and learning experience and provided efficiencies to our school systems, their availability is underappreciated. As we have come to rely on these technologies more, their availability has grown in importance as well. Traditionally we think of the availability of our school in terms of the building(s) being open or closed, such as closed for a snow day, or worse for a natural disaster. But what happens when one piece of IT infrastructure that is used every day in our schools does not work? What happens if there is no internet access? Cloud-based solutions are not helpful, unless classes are moved to the local coffee shop, but only so many students can fit through the doors. What happens if the server hosting exams or grades goes down during finals? Do students get sent home indefinitely until the problem is fixed? One of the widest-ranging threats to an educational institution's information infrastructure today is the Distributed Denial of Service (DDoS) attack. These attacks are very common on the networks of our colleges and universities and are increasingly being seen at high schools across America. While students instigate these attacks for all imaginable reasons, they are also frequently targets themselves. The two most commonly seen DDoS scenarios in our educational institutions involve students:
- Attacking their own school to delay their final exams that they have not properly prepared for;
- Attacking either gaming servers or other gamers to gain an advantage within the game they are playing in competition with other gamers.
While there is no DDoS 101 class, DDoS attacks are unfortunately as cheap as $5 (USD) and simple to execute by even the most novice user. Sadly, this is a global phenomenon, and not isolated to any single geography. At Arbor we have worked with educational institutions to implement comprehensive solutions to protect against DDoS attacks, including a group of state and regional educational organizations with a combined network that supports more than 1.4 million students and school internet access. The network provides access to high-stakes online testing, such as PARCC, AIR, and MAP1, and supports integrated Education Management Information Systems with student data reporting, student information systems, and state fiscal software applications. The shared network was experiencing an increasing number of DDoS attacks – 28 attacks in 28 days were reported at one time. Adding to the issue was that not all the attacks throughout the network were detected or reported. Administrators were aware of "low and slow" DDoS tactics targeting applications with lesser volumes of traffic that were very difficult to identify. 
Now with Arbor's DDoS Protection Solution, every participating organization using the statewide network enjoys multilayer DDoS defense, with always-on, in-line protection from inbound DDoS attacks through an on-premise Availability Protection System (APS) that can also stop outbound activity from compromised hosts, and up to 2 Tbps of on-demand mitigation capacity from Arbor Cloud's global, cloud-based scrubbing centers. In fact, one of the strengths of the comprehensive Arbor DDoS solution is the seamless integration between the scalable Arbor Cloud DDoS protection service and Arbor's on-premise APS. If an APS detects a volumetric DDoS attack that may overwhelm the organization, the APS can automatically redirect traffic to the fully managed Arbor Cloud DDoS protection service. This Cloud Signaling feature is unique to Arbor's DDoS Protection Solution. Since deploying Arbor's DDoS Protection Solution, state and regional educational organizations have experienced a reduction in DDoS attacks—and faster mitigation. They have effectively removed the threat of botnets, and set connection limits on application servers to prevent "unintentional" DDoS. They were also pleasantly surprised to recover 5-6 percent of inbound bandwidth and to reduce their average firewall utilization. School is back in session and DDoS attacks are sure to follow. Our schools are bastions of learning for our younger generations and their technology needs to be protected in order to ensure their missions of education and research are achieved.
Fiber optics is a revolutionary technology that has transformed communication, making data transfer considerably faster. It uses glass (or plastic) threads (fibers) to send data, and it carries numerous benefits over metal wires for data transfer. A fiber optic cable is made up of a bundle of glass threads, each capable of transmitting messages modulated onto light waves. The thin size of optical cables makes them easy to install, their greater bandwidth allows more data to be carried, and they are less vulnerable to interference in the signals. Although the benefits are plenty, one factor that holds particular importance is the cleaning of fiber optic cables. Cleaning maintains the normal running of a fiber optic system. In use, optical fiber joints can become contaminated to varying degrees by dust and dirt, increasing the optical link loss; this shows up as reduced optical power at the receiver and a noticeably lower output level. In cases like this, the fiber connectors have to be properly cleaned and maintained. Light always travels in a straight path, and even a slight hindrance in its path can bring about data loss. So, to ensure that there is no data loss in fiber optic communication, it is necessary to clean the fibers. Not only is the cleaning of the fibers crucial, but the connectors used to join them also need to be maintained and cleaned regularly. Cleaning fiber optic connectors requires a little technical knowledge. The first task in connector cleaning is cleaning the ferrule, a cylindrical element of the connector that is generally made of stainless steel. The body of a ferrule contains small holes in which the fibers are positioned. The fibers are set flush with the end face of the ferrule. With the aid of a mating sleeve, the two ferrules are brought into contact at their end faces, and the light signals pass from one fiber to the other. Any blockage or damage in the form of dust or a stain can distort the connection and result in signal loss. So, to make certain that your communication channels operate smoothly with no interruption in the data flow, you will want to clean your fiber optic connections and cables on a timely basis. There are various fiber optic tool kits on the market that can be used for cleaning fiber optic cables installed at your home or office. The fact that cleaning has to be done on a regular basis makes these cleaning kits all the more important for domestic use. By using these kits, you can save yourself the need to call professional cleaning providers when you want to clean your optical connections. FiberStore provides other fiber optic tools as well, for example crimping tools, wire cutters, etc. You'll find your optical tools in our store.
Joined: 08 Jun 2007 Posts: 71 Location: Zoetermeer, the Netherlands
I guess COMP-5 is same as comp-2
COMP-5 stands for "binary native". On the mainframe this is not really an issue. On other platforms, however, there are processor architectures representing integers in a different way (http://en.wikipedia.org/wiki/Endianness). All those computers can run COBOL, so sometimes the COBOL compiler forces those binary integers to behave in a standard way (comp-4). This might be sub-optimal for calculations. Specifying COMP-5 allows native representation and optimal performance for calculations (comp-3 does not perform at all outside the mainframe!). So, when you want to produce portable code, always use comp-5 for your calculations. I've also used this in the past to test at execution time to determine on which machine the program ran:
03 B-I-G-endian value +0004 pic S9(4) comp-5.
03 litle-endian value +1024 pic S9(4) comp-5.
display "this is the AIX (powerPC)"
display "this is a PC (intel inside!)"
display "how the *beep* do you run this program?"
Those days I was coding & testing on Windows (Micro Focus NetExpress) and the target machine was an R6000 (Micro Focus ServerExpress).
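For readers outside the COBOL world, the same run-time check can be sketched in Python (an added illustration, not part of the original post). The idea mirrors the trick above: store the 16-bit value 4 in native byte order and look at which byte comes first.

```python
import struct
import sys

# Pack the 16-bit value 4 (0x0004) using native byte order ("=").
native = struct.pack("=H", 4)

if native == b"\x00\x04":
    print("big-endian host (e.g. a mainframe or big-endian POWER box)")
else:  # b"\x04\x00"
    print("little-endian host (a PC -- 'intel inside!')")

# Python will also just tell you directly:
print(sys.byteorder)  # "little" or "big"
```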
The Next Version of the Internet Protocol - IPv6
Part 2: The Trouble with IPv4
Figure 1 shows how the IPv4 address space is allocated: as you can see, the original architecture allocated fully half of all IPv4 addresses to 126 Class A networks. Originally intended for very, very large networks maintained at the national level (or multinational, for corporations), quite a few Class A addresses were snatched up by net-savvy organizations such as MIT and Carnegie Mellon University early on. Each Class A network is capable of handling as many as 16 million nodes; since few organizations with Class A network addresses have that many nodes, much of that address space is wasted.
Figure 1: (from RFC 791)
IPv4 began slowly strangling on this structure by the mid-1990s, even as corporations began embracing TCP/IP and the Internet in earnest. Each new IP network address assigned meant some more addresses taken out of circulation. Even though there are still plenty of addresses left, that is only due to the implementation of a series of stopgap measures, strict rationing, and better utilization of existing addresses. The IETF and the IANA (the Internet Assigned Numbers Authority, in the process of being superseded by the Internet Corporation for Assigned Names and Numbers, ICANN) used several approaches to extending IPv4's lifetime while IPv6 was being readied. These steps can be characterized as rationing, repackaging, recycling, and replacing. First, rationing. This one is easy: the process of getting a Class B or Class A network address was tightened up. And Class C addresses were distributed by ISPs, who get a limited number of addresses and need to take care that they are not wasted unnecessarily. Class B addresses were very hard to come by as early as 1990 or so, and Class A addresses virtually impossible. By holding onto the Class A and B network addresses, it is now possible to break them up and redistribute them in smaller chunks. Next, repackaging. Classless InterDomain Routing (CIDR) does away with the class system, allowing ISPs to allocate groups of contiguous Class C addresses as a single route. The alternative would be to have routers treat each individual Class C address as a separate route, resulting in a nightmarishly large routing table. Instead of Class A, B, or C, routed addresses are expressed along with a number indicating how many bits of the network address are to be treated as the route. For example, 256 Class C addresses could be aggregated into a single route by indicating that 16 bits of the address are to be treated as the route (the same as for a Class B address). In this way, an ISP or other entity that administers CIDR networks can handle the routing from the Internet. Address space can be recycled, sort of, in two ways: first, Class A and B addresses that have not yet been assigned can be divided up and allocated to smaller organizations. Where the CIDR approach is sometimes referred to as "supernetting", this approach simply breaks the larger networks into subnets which can be routed by some entity handling routing for the entire (undivided) network address. Another approach is to use the reserved network addresses, sometimes called Network 10, to do network address translation, or NAT. RFC 1918 sets aside the network address ranges:
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
to be used for private intranets. 
These addresses provide one Class A, 16 Class B, and 256 Class C network addresses to be used by anyone who wants to, as long as they don't attempt to forward packets to or from those networks on the global Internet. The last option is to replace IPv4 addresses entirely. This is the IPv6 option. Each of these other approaches pushes back the day when IPv4 will no longer work, but does not relieve the stress.
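To make the repackaging and recycling ideas concrete, the short sketch below (an added illustration using Python's standard ipaddress module, with an arbitrary example prefix) aggregates a run of Class C networks into a single CIDR route and checks an address against the RFC 1918 private ranges:

```python
import ipaddress

# "Supernetting": 256 contiguous Class C (/24) networks collapse into one /16 route.
class_c_blocks = [ipaddress.ip_network(f"203.0.{i}.0/24") for i in range(256)]
print(list(ipaddress.collapse_addresses(class_c_blocks)))  # [IPv4Network('203.0.0.0/16')]

# RFC 1918 private space: usable by anyone, but not routable on the global Internet.
addr = ipaddress.ip_address("192.168.1.25")
print(addr.is_private)                                     # True
print(addr in ipaddress.ip_network("192.168.0.0/16"))      # True
```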
Researchers at the University of Washington develop IoT sensors powered by electromagnetic radiation
Researchers at the University of Washington have developed a new technology that could enable IoT sensors to work wirelessly without batteries. The technology allows sensors and small electronics to be powered completely wirelessly from TV, radio, cell phone, and wireless signals. This research relies on a principle called backscattering. So far researchers have developed tiny devices including cameras, motion sensors, temperature sensors, and other devices that require no battery or any wiring. In a research paper, the scientists said that Computational RFID (CRFID) devices are emerging platforms that can enable perennial computation and sensing by eliminating the need for batteries. "Although much research has been devoted to improving upstream (CRFID to RFID reader) communication rates, the opposite direction has so far been neglected, presumably due to the difficulty of guaranteeing fast and error-free transfer amidst frequent power interruptions of CRFID. With growing interest in the market where CRFIDs are forever-embedded in many structures, it is necessary for this void to be filled," the researchers said. The researchers proposed a technology called Wisent, which they said was a downstream communication protocol for CRFIDs that operates on top of the legacy UHF RFID communication protocol: EPC C1G2. The devices could be embedded in anything, including living things, once for their entire lifetime. They added that as the complexity of use cases for CRFIDs grows, there is another emerging maintenance requirement: the need to patch or replace the firmware of the device, or to alter application parameters including the RFID radio layer controls. "In current CRFIDs, maintenance of firmware (due to e.g. errors) requires a physical connection to CRFID, nulling the main benefit of battery-free operation."
You might like to read: Internet of Things start-up secures £11 million investment
IoT devices get wireless updates
This means that devices could be updated without the need for a cable. The researchers added that Wisent devices can be manufactured very cheaply – perhaps less than a dollar per component. This would allow wireless power for applications such as smoke detectors or surveillance cameras. One area that the wireless technology could benefit is smart cities. Dr. Neil Garner, founder of WhiteSpace Norwich, told Internet of Business that wireless connectivity is a key component for this. "In the next 12-18 months, consumers will be seeing a lot more innovation, early deployments and piecemeal services," he said.
You might like to read: Four UK universities win £4m grant to research connected sensor systems
What's cooler than a herd of nerd girls working on a solar-powered car? Not much, in my opinion. In this video, students at Tufts University talk about the car they're crafting, along with why they love being engineers-in-training and their dreams for the future. This is worth sharing with any young woman you know who might be considering STEM as a career path. Or, in fact, with any youngster who enjoys building things and learning how they work. In this 8-1/2-minute video, students talk about the benefits of working with other female techies, with lines like, "It's fun to work with all girls, for once. It's definitely a male-dominated profession." They also get into some of the details about their work, including how they split up the work -- frame, shell, motor -- and their various tasks within the whole. Kudos to Karen Panetta, associate professor at Tufts, who started the Nerd Girls group in 2000 to celebrate "smart-girl individuality that's revolutionizing our future." And yes, boys and men are welcome, too, as "Dr. Karen" makes clear in this blog post about why it's especially important to nurture girls in this way. Nerd Girls isn't the only group doing this work, of course. Women in Technology International has an outreach program for young women, though it remains mostly focused on supporting women who already work in the field with networking events, online and in-person seminars and classes, and more. Know of any others that work to keep young women connected to the STEM world? Provide the link in the comments section, below, and pass it on.
Chapter 1 – Historical and Economic Development of Computers Computers are important to our society. Computers are everywhere. There – I have said the obvious. Many books on computer organization and architecture spend quite a few pages stating the obvious. This author does not want to waste paper on old news, so we proceed to our section on the history of computer development. One might wonder why this author desires to spend time discussing the history of computers, since so much of what has preceded the current generation of computers is obsolete and largely irrelevant. In 1905, George Santayana (1863 – 1952) wrote that “Those who cannot remember the past are condemned to repeat it. ... This is the condition of children and barbarians, in whom instinct has learned nothing from experience.” [Santayana, The Life of Reason]. Those of us who do not know the past of computers, however, are scarcely likely to revert to construction with vacuum tubes and mercury delay lines. We do not study our history to avoid repeating it, but to understand the forces that drove the changes that are now seen as a part of that history. Put another way, the issue is to understand the evolution of modern digital computers and the forces driving that evolution. Here is another way of viewing that question: A present-day computer (circa 2008) comprises a RISC CPU (to be defined later) with approximately 50 internal CPU registers, a pipelined execution unit, and one to four gigabytes of memory. None of this was unforeseeable in the 1940’s when the ENIAC was first designed and constructed, so why did they not build it that way. Indeed the advantages of such a unit would have been obvious to any of the ENIAC’s designers. During the course of reading this chapter we shall see why such a design has only recently become feasible. The history shows how engineers made decisions based on the resources available, and how they created new digital resources that drove the design of computers in new, but entirely expected, directions. This chapter will present the history of computers in several ways. There will be the standard presentation of the computer generations, culminating in the question of whether we are in the fourth or fifth generation of computers. This and other presentations are linked closely with a history of the underlying technology – from mechanical relays and mercury delay lines, through vacuum tubes and core memory, to transistors and discrete components (which this author remembers without any nostalgia), finally to integrated circuits (LSI and VLSI) that have made the modern computer possible. We shall then discuss the evolution of the computer by use, from single-user “bare iron” through single user computers with operating systems to multiple-user computer systems with batch and time-sharing facilities culminating in the variety of uses we see today. Throughout this entire historical journey, the student is encouraged to remember this author’s favorite historical slogan for computers: “The good old days of computing are today” The Underlying Technologies A standard presentation of the history of computing machines (now called computers) begins with a list of the generations of computers based on the technologies used to fabricate the CPU (Central Processing Unit) of the computer. These base technologies can be listed in four categories, corresponding somewhat to the generations that are to be defined later. 
These technologies are: 1) Mechanical components, including electromechanical relays, 2) Vacuum tubes, 3) Discrete transistors, and 4) Integrated circuits, divided into several important classes. To a lesser extent, we need to look at technologies used to fabricate computer memories, including magnetic cores and semiconductor memories. All early computational devices, from the abacus to adding machines of the 1950's and early 1960's, were hand-operated, with components that were entirely mechanical. These were made entirely of wood and/or metal and were quite slow in comparison to any modern electronic devices. The limitation of speed of such devices was due to the requirement to move a mass of material; thus incurring the necessity to overcome inertia. Those students who want to investigate inertia are requested either to study physics or to attempt simple experiments involving quick turns when running quickly (but not with scissors).
Analog and Digital Computers
With one very notable exception, most mechanical computers would be classified as analog devices. In addition to being much slower than modern electronic devices, these computers suffered from poor accuracy due to slippages in gears and pulleys. We support this observation with a brief definition and discussion of the terms "analog" and "digital". Within the context of this discussion, we use the term "digital" to mean that the values are taken from a discrete set that is not continuous. Consider a clinical thermometer used to determine whether or not a small child has a fever. If it is an old-fashioned mercury thermometer, it is an analog device in that the position of the mercury column varies continuously. Another example would be the speedometer on most modern automobiles: it is a dial arrangement where the indicator points to a number and moves continuously. The author of these notes knows of one mechanical computer that can be considered to be digital. This is the abacus. As the student probably knows, the device records numbers by the position of beads on thin rods. While the beads are moved continuously, in general each bead must be in one of two positions in order for the abacus to be in a correct state.
Figure: Abacus Bead Moves From One Position to The Other
Modern binary digital computers follow a similar strategy of having only two correct "positions", thereby gaining accuracy. Such computers are based on electronic voltages, usually in the range 0.0 to 5.0 volts or 0.0 to 2.5 volts. Each circuit in the design has a rule for correcting slight imperfections in the voltage levels. In the first scheme, called TTL after the name of its technology (defined later), a voltage in the range 0.0 to 0.8 volts is corrected to 0.0 volts and a voltage in the range 2.8 to 5.0 volts is corrected to 5.0 volts. The TTL design reasonably assumes that voltages in the range 0.8 to 2.8 volts do not occur; should such a voltage actually occur, the behavior of the circuit will be unpredictable. We have departed from our topic of mechanical computers, but we have a good discussion going so we shall continue it. In an analog electronic computer, a voltage of 3.0 would have a meaning different from a voltage of 3.1 or 2.9. Suppose that our intended voltage were 3.0 and that a noise signal of 0.1 volts has been added to the circuit. The level is now 3.1 volts and the meaning of the stored signal has changed; this is the analog accuracy problem. In the digital TTL world, such a signal would be corrected to 5.0 volts and hence to its correct value. 
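The TTL correction rule just described is easy to state in code. The following few lines are an illustrative sketch added to this text (not part of the original), treating the thresholds above as a simple quantizer:

```python
def ttl_logic_level(measured_volts: float) -> float:
    """Snap a measured voltage to the nearest clean TTL logic level."""
    if 0.0 <= measured_volts <= 0.8:
        return 0.0   # corrected to logic low
    if 2.8 <= measured_volts <= 5.0:
        return 5.0   # corrected to logic high
    raise ValueError(f"{measured_volts} V lies in the forbidden band; behavior is unpredictable")

print(ttl_logic_level(0.3))  # 0.0
print(ttl_logic_level(3.1))  # 5.0 -- the noisy 3.0 + 0.1 V example above
```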
The speedometer in an average car presents an excellent example of a mechanical analog computer, although it undoubtedly uses some electromagnetic components. The device has two displays of importance – the car speed and the total distance driven. Suppose we start the automobile at time T = 0 and make the following definitions: S(T), the speed of the car at time T, and X(T), the distance the car has been driven up to time T. As noted later, mechanical computers were often used to produce numerical solutions to differential and integral equations. Our automobile odometer provides a solution to either of the two equivalent equations dX/dT = S(T) and X(T) = ∫ S(t) dt, the integral being taken from 0 to T. Figure: The Integral is the Area Under the Curve The problem with such a mechanical computer is that slippage in the mechanism can cause inaccuracies. The speedometer on this author’s car is usually off by about 1.5%. This is OK for monitoring speeds in the range 55 to 70 mph, but not sufficient for scientific calculations. Electromechanical relays are devices that use (often small) voltages to switch larger voltages. One example of such a power relay is the horn relay found in all modern automobiles. A small voltage line connects the horn switch on the steering wheel to the relay under the hood. It is that relay that activates the horn. The following figure illustrates the operation of an electromechanical relay. The iron core acts as an electromagnet. When the core is activated, the pivoted iron armature is drawn towards the magnet, raising the lower contact in the relay until it touches the upper contact, thereby completing a circuit. Thus electromechanical relays are switches. Figure: An Electromechanical Relay In general, an electromechanical relay is a device that uses an electronic voltage to activate an electromagnet that will pull an electronic switch from one position to another, thus affecting the flow of another voltage and thereby turning the device “off” or “on”. Relays display the essential characteristic of a binary device – two distinct states. Below we see pictures of two recent-vintage electromechanical relays. Figure: Two Relays (Source http://en.wikipedia.org/wiki/Relay) The primary difference between the two types of relays shown above is the amount of power being switched. The relays for use in general electronics tend to be smaller and encased in plastic housing for protection from the environment, as they do not have to dissipate a large amount of heat. Again, think of an electromechanical relay as an electronically operated switch, with two possible states: ON or OFF. Power relays, such as the horn relay, function mainly to isolate the high currents associated with the switched apparatus from the device or human initiating the action. One common use is seen in electronic process control, in which the relays isolate the electronics that compute the action from the voltage swings found in the large machines being controlled. In use for computers, relays are just switches that can be operated electronically. To understand their operation, the student should consider the following simple circuits. Figure: Relay Is Closed: Light Is Illuminated Figure: Relay Is Opened: Light Is Dark. We may use these simple components to generate the basic Boolean functions, and from these the more complex functions used in a digital computer. The following relay circuit implements the Boolean AND function, which is TRUE if and only if both inputs are TRUE. Here, the light is illuminated if and only if both relays are closed.
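A relay, for this purpose, is nothing more than a switch whose state is set by an input signal, so the series circuit just described can be mimicked in a few lines. This is only an illustrative model; the function names are ours, and the parallel wiring for OR is assumed by analogy with the series wiring for AND described above:

    def relay_and(a, b):
        # Two relays wired in series: current reaches the light
        # only if both contacts are closed.
        return a and b

    def relay_or(a, b):
        # Two relays wired in parallel: current reaches the light
        # if either contact is closed.
        return a or b

    for a in (False, True):
        for b in (False, True):
            print(a, b, "AND ->", relay_and(a, b), "OR ->", relay_or(a, b))

From AND, OR, and NOT (a relay whose contact is held closed until the coil is energized), any of the more complex functions used in a digital computer can be built up.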
Figure: One Closed and One Open Computers based on electromagnetic relays played an important part in the early development of computers, but quickly became obsolete when designs using vacuum tubes (considered as purely electronic relays) were introduced in the late 1940’s. These electronic tubes also had two states, but could be switched more quickly as there were no mechanical components to be moved. The next figure shows a picture of an early vacuum tube, along with its schematic and a circuit showing a typical use. The picture and schematics are taken from the web page of Dr. Elizabeth R. Tuttle. This particular vacuum tube seems to have been manufactured in the 1920’s. Later vacuum tubes (such as those from the 1940’s and 1950’s) tended to be much smaller. The vacuum tube shown above is a triode, which is a device with three major components: a cathode, a grid, and an anode. The cathode is the element that is the source of electrons in the tube. When it is heated, either directly (as the filament in a light bulb is) or indirectly by a separate filament, it will emit electrons, which either are reabsorbed by the cathode or travel to the anode. When the anode is held at a voltage more positive than the cathode, the electrons tend to flow from cathode to anode (and the current is said to flow from anode to cathode – a definition made in the early 19th century, before electrons were understood). The third element in the device is a grid, which serves as an adjustable barrier to the electrons. When the grid is more positive, more electrons tend to leave the cathode and fly to the anode. When the grid is more negative, the electrons tend to stay at the cathode. By this mechanism, a small voltage change applied to the grid can cause a larger voltage change in the output of the triode; hence it is an amplifier. As a digital device, the grid in the tube either allows a large current flow or almost completely blocks it – “on” or “off”. Here we should add a note related to analog audio devices, such as high-fidelity and stereo radios and record players. Note the reference to a small voltage change in the input of a tube causing a larger voltage change in the output; this is amplification, and tubes were once used in amplifiers for audio devices. The use of tubes as digital devices is just a special case, making use of the extremes of the range and avoiding the “linear range” of amplification. We close this section with a picture taken from the IBM 701 computer, a computer from 1953 developed by the IBM Poughkeepsie Laboratory. This was the first IBM computer that relied on vacuum tubes as the basic technology. Figure: A Block of Tubes from the IBM 701 The reader will note that, though there is no scale on this drawing, the tubes appear smaller than the Pliotron, and are probably about one inch in height. The components that resemble small cylinders are resistors; those that resemble pancakes are capacitors. Another feature of the figure above that is worth noting is the fact that the vacuum tubes are grouped together in a single component. The use of such components probably simplified maintenance of the computer; just pull the component, replace it with a functioning copy, and repair the original at leisure. There are two major difficulties with computers fabricated from vacuum tubes, each of which arises from the difficulty of working with so many vacuum tubes. Suppose a computer uses 20,000 vacuum tubes, a number only slightly larger than that found in the actual ENIAC.
A vacuum tube requires a hot filament in order to function. In this it is similar to a modern light bulb, which emits light due to a heated filament. Suppose that each of our tubes requires only five watts of electricity to keep its filament hot and the tube functioning. The total power requirement for the computer is then 100,000 watts, or 100 kilowatts. We also realize the problem of reliability of such a large collection of tubes. Suppose that each of the tubes has a probability of 99.999% of operating for one hour. This can be written as a decimal number as 0.99999 = 1.0 – 10^-5. The probability that all 20,000 tubes will be operational for more than an hour can be computed as (1.0 – 10^-5)^20000, which can be approximated as 1.0 – 20000·10^-5 = 1.0 – 0.2 = 0.8. There is an 80% chance the computer will function for one hour and, since 0.8 × 0.8 = 0.64, a 64% chance for two hours of operation. This is not good. Discrete transistors can be thought of as small triodes with the additional advantages of consuming less power and being less likely to wear out. The reader should note the transistor second from the left in the figure below. It, at about one centimeter in size, would have been an early replacement for the vacuum tube of about 5 inches (13 centimeters) in size shown on the previous page. One reason for the reduced power consumption is that there is no need for a heated filament to cause the emission of electrons. When first introduced, transistors immediately presented an enormous (or should we say small – think size issues) advantage over the vacuum tube. However, there were cost disadvantages, as is noted in this history of the TX-0, a test computer designed around 1951. The TX-0 had 18-bit words and 3,600 transistors. “Like the MTC [Memory Test Computer], the TX-0 was designed as a test device. It was designed to test transistor circuitry, to verify that a 256 X 256 (64-K word) core memory could be built … and to serve as a prelude to the construction of a large-scale 36-bit computer. The transistor circuitry being tested featured the new Philco SBT100 surface barrier transistor, costing $80, which greatly simplified transistor circuit design.” [R01, page 127] That is a cost of $288,000 just for the transistors. Imagine the cost of a 256 MB memory, since each byte of memory would require at least eight, and probably 16, transistors. The figure below shows some of the symbols used to denote transistors in circuit diagrams. For this course, it is not necessary to interpret these diagrams. Integrated circuits are nothing more or less than a very efficient packaging of transistors and other circuit elements. The term “discrete transistors” implied that the circuit was built from transistors connected by wires, all of which could be easily seen and handled by humans. Integrated circuits were introduced in response to a problem that occurred in designs that used discrete transistors. With the availability of small and effective transistors, engineers of the 1950’s looked to build circuits of greater complexity and functionality. They discovered a number of problems inherent in the use of discrete transistors. 1) The complexity of the wiring schemes. Interconnecting the transistors became the major stumbling block to the development of large useful circuits. 2) The time delays associated with the longer wires between the discrete transistors. A faster circuit must have shorter time delays. 3) The actual cost of wiring the circuits, either by hand or by computer-aided design.
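The power and reliability figures quoted above for a hypothetical 20,000-tube machine are easy to check; the short sketch below simply redoes the arithmetic (the 5-watt and 99.999% figures are the assumptions stated in the text, not measured values):

    n_tubes = 20_000
    watts_per_tube = 5.0
    p_tube_survives_hour = 1.0 - 1e-5   # 99.999% per tube, per hour

    total_power_kw = n_tubes * watts_per_tube / 1000.0
    p_all_survive_1h = p_tube_survives_hour ** n_tubes
    p_all_survive_2h = p_all_survive_1h ** 2

    print(total_power_kw)      # 100.0 kilowatts
    print(p_all_survive_1h)    # about 0.819 -- close to the 0.8 linear approximation
    print(p_all_survive_2h)    # about 0.670 -- the text's 64% uses the rounded 0.8

The exact value of (1.0 – 10^-5)^20000 is about 0.82 rather than 0.80; the text’s figure comes from the first-order approximation, which is perfectly adequate for the point being made: a machine built from tens of thousands of tubes spends an uncomfortable fraction of its time broken.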
It was soon realized that assembly of a large computer from discrete components would be almost impossible, as fabricating such a large collection of components without a single faulty connection or component would be extremely costly. This problem became known as the “tyranny of numbers”. Two teams independently sought solutions to this problem. One team at Texas Instruments was headed by Jack Kilby, an engineer with a background in ceramic-based silk screen circuit boards and transistor-based hearing aids. The other team was headed by research engineer Robert Noyce, a co-founder of Fairchild Semiconductor Corporation. In 1959, each team applied for a patent on essentially the same circuitry. The first integrated circuits were classified as SSI (Small-Scale Integration) in that they contained only a few tens of transistors. The first commercial uses of these circuits were in the Minuteman missile project and the Apollo program. It is generally conceded that the Apollo program motivated the technology, while the Minuteman program forced it into mass production, reducing the costs from about $1,000 per circuit in 1960 to $25 per circuit in 1963. Were such components fabricated today, they would cost less than a penny. It is worth noting that Jack Kilby was awarded the Nobel Prize in Physics for the year 2000 as a result of his contributions to the development of the integrated circuit. While his work was certainly key in the development of the modern desk-top computer, it was only one of a chain of events that led to this development. Another key part was the development by a number of companies of photographic-based methods for production of large integrated circuits. Today, integrated circuits are produced by photo-lithography from master designs developed and printed using computer-assisted design tools of a complexity that could scarcely have been imagined in 1960. Integrated circuits are classified into either four or five standard groups, according to the number of electronic components per chip. Here is one standard definition. SSI (small-scale integration): up to 100 electronic components per chip; introduced in 1960. MSI (medium-scale integration): from 100 to 3,000 electronic components per chip; introduced in the late 1960’s. LSI (large-scale integration): from 3,000 to 100,000 electronic components per chip; introduced about 1970. VLSI (very-large-scale integration): from 100,000 to 1,000,000 components per chip; introduced in the 1980’s. ULSI (ultra-large-scale integration): more than 1,000,000 components per chip; this term is not standard and seems recently to have fallen out of use. The figure below shows some typical integrated circuits. Actually, at least two of these figures show a Pentium microprocessor, which is a complete CPU of a computer that has been realized as an integrated circuit on a single chip. The reader will note that most of the chip area is devoted to the pins that are used to connect the chip to other circuitry. Figure: Some Late-Generation Integrated Circuits Note that the chip on the left has 24 pins used to connect it to other circuitry, and that the pins are arranged around the perimeter of the circuit. This chip is a DIP (Dual Inline Pin) design. The Pentium (™ of Intel Corporation) chip has far too many pins to be arranged around its periphery, so they are arranged along the surface of one side.
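The four- or five-way classification just given can be captured in a few lines; the component ranges below are those from the text, while the lookup function and its name are only an illustration and the treatment of boundary values follows the text’s “up to”/“from” wording loosely:

    SCALES = [
        ("SSI",  100),           # up to 100 components per chip
        ("MSI",  3_000),         # up to 3,000
        ("LSI",  100_000),       # up to 100,000
        ("VLSI", 1_000_000),     # up to 1,000,000
        ("ULSI", float("inf")),  # more than 1,000,000
    ]

    def integration_scale(components_per_chip):
        for name, upper_bound in SCALES:
            if components_per_chip <= upper_bound:
                return name
        return "ULSI"

    print(integration_scale(50))         # SSI
    print(integration_scale(25_000))     # LSI
    print(integration_scale(3_000_000))  # ULSI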
Having now described the technologies that form the basis of our descriptions of the generations of computer evolution, we now proceed to a somewhat historic account of these generations, beginning with the “generation 0” that until recently was not recognized. The Human Computer The history of the computer can be said to reach back to the first human who used his or her ability with numbers to calculate a number. While the history of computing machines must be almost as ancient as that of computers, it is not the same. The reason is that the definition of the word “computer” has changed over time. Before entering into a discussion of computing machines, we mention the fact that the word “computer” used to refer to a human being. To support this assertion, we offer a few examples. Here are two definitions from the first edition of the Oxford English Dictionary [R10, Volume II]. The reader should note that the definitions shown for each word are the only definitions shown in the 1933 dictionary. “Calculator – One who calculates; a reckoner”. “Computer – One who computes; a calculator, reckoner; specifically a person employed to make calculations in an observatory, in surveying, etc.” This last usage of the word “computer” is seen in a history of the Royal Greenwich Observatory. We begin our discussion of human computers with a trio from the 18th century, three French aristocrats (Alexis-Claude Clairaut, Joseph-Jerome de Lalande, and Nicole-Reine Étable de la Brière Lepaute) who produced numerical solutions to a set of differential equations and computed the date of return of the comet that the astronomer Halley had predicted would return in 1758. Figure: Alexis-Claude Clairaut, Joseph-Jerome de Lalande, and Nicole-Reine Étable de la Brière Lepaute By the end of the 19th century, the work of being a computer became recognized as an accepted professional line of work. This was undertaken usually by young men and boys with not too much mathematical training, lest they not follow instructions. Most of the men doing the computational work quickly became bored and sloppy in their work. For this reason, and because they would work more cheaply, women soon formed the majority of computers. The next figure is a picture of the “Harvard Computers” in 1913. The man in the picture is the director of the Harvard College Observatory. Figure: The Computers at the Harvard College Observatory We continue our discussion of human computers by noting three women, two of whom worked as computers during the Second World War. Many of the humans functioning as computers and calculators during this time were women for the simple reason that many of the able-bodied men were in the military. The first computer we shall note is Dr. Gertrude Blanch (1897–1996), whose biography is published on the web [R18]. She received a B.S. in mathematics with a minor in physics. We note here that mathematical tables were quite commonly used to determine the values of most functions until the introduction of hand-held multi-function calculators in the 1990’s. At this point, the reader is invited to note that the name of the group was not “Association for Computers” (which might have been an association for the humans) but “Association for Computing Machinery”, reflecting the still-current usage of the word “computer”.
The next person we shall note is a woman who became involved with the ENIAC project at the Moore School of Engineering at the University of Pennsylvania. She recalled the work as follows: “We did have desk calculators at that time, mechanical and driven with electric motors that could do simple arithmetic. You’d do a multiplication and when the answer appeared, you had to write it down to reenter it into the machine to do the next calculation. We were preparing a firing table for each gun, with maybe 1,800 simple trajectories. To hand-compute just one of these trajectories took 30 or 40 hours of sitting at a desk with paper and a calculator. As you can imagine, they were soon running out of young women to do the calculations.” [R19] Dr. David Alan Grier of the George Washington University has studied the transition from human to electronic computers, publishing his work in a paper titled “Human Computers and their Electronic Counterparts” [R20]. In this paper, Dr. Grier notes that very few historians of computing bother with studying the work done before 1940, or, as he put it: “In studying either the practice or the history of computing, few individuals venture into the landscape that existed before the pioneering machines of the 1940s. … Arguably, there is good reason for making a clean break with the prior era, as these computing machines seem to have little in common with scientific computation as it was practiced prior to 1940. At that time, the term ‘computer’ referred to a job title, not to a machine. Large-scale computations were handled by offices of human computers.” In this paper, Dr. Grier further notes that very few of the women hired as human computers elected to make the transition to becoming computer programmers, as we currently use the term. In fact, the only women who did make the transition were the dozen or so who had worked on the ENIAC and were thus used to large-scale computing machines. According to Dr. Grier [R20]: “Most human computers worked only with an adding machine or a logarithm table or a slide rule. During the 1940s and 1950s, when electronic computers were becoming more common in scientific establishments, human computers tended to view themselves as more closely [allied] to the scientists for whom they worked, rather than the engineers who were building the new machines and hence [they] did not learn programming [as the engineers did].” Classification of Computing Machines by Generations Before giving this classification, the author believes that he should cite directly a few web references that seem to present excellent coverage of the history of computers. We take the standard classification by generations from the book Computer Structures: Principles and Examples [R04], an early book on computer architecture. 1. The first generation (1945 – 1958) is that of vacuum tubes. 2. The second generation (1958 – 1966) is that of discrete transistors. 3. The third generation (1966 – 1972) is that of small-scale and medium-scale integrated circuits. 4. The fourth generation (1972 – 1978) is that of large-scale integrated circuits. 5. The fifth generation (1978 onward) is that of very-large-scale integrated circuits. This classification scheme is well established and quite useful, but it does have its drawbacks. Two that are worth mentioning are the following. 1. It ignores the introduction of the magnetic core memory. The first large-scale computer to use such memory was the MIT Whirlwind, completed in 1952.
With this in mind, one might divide the first generation into two sub-generations: before and after 1952, with those before 1952 using very unreliable memory technologies. 2. The term “fifth generation” has yet to be defined in a uniformly accepted way. For many, we continue to live with fourth-generation computers and are even now looking forward to the next great development that will usher in the fifth generation. The major problem with the above scheme is that it ignores all work before 1945. To quote an early architecture book, much is ignored by this ‘first generation’ label. “It is a measure of American industry’s generally ahistorical view of things that the title of ‘first generation’ has been allowed to be attached to a collection of machines that were some generations removed from the beginning by any reasonable accounting. Mechanical and electromechanical computers existed prior to electronic ones. Furthermore, they were the functional equivalents of electronic computers and were realized to be such.” [R04, page 35] Having noted the criticism, we follow other authors in surveying “generation 0”, that large collection of computing devices that appeared before the official first generation. Mechanical Ancestors (Early generation 0) The earliest computing device was the abacus; it is interesting to note that it is still in use. The earliest form of abacus dates to 450 BC in the western world and somewhat earlier in For those who like visual examples, we include a picture of a modern abacus. Figure: A Modern Abacus In 1617, John Napier of Merchiston (Scotland) took the next step in the development of mechanical calculators when he published a description of his numbering rods, since known as ‘Napier’s bones’ for facilitating the multiplication of numbers. The set that belonged to Charles Babbage is preserved in the The first real calculating machine, as the term is generally defined, was invented by the French philosopher Blaise Pascal in 1642. It was improved in 1671 by the German scientist Gottfried Wilhelm Leibniz, who developed the idea of a calculating machine which would perform multiplication by rapidly repeated addition. It was not until 1694 that his first complete machine was actually constructed. This machine is still preserved in the Royal A Diversion to Discuss Weaving – the Jacquard Loom It might seem strange to divert from a discussion of computing machinery to a discussion of machinery for weaving cloth, but we shall soon see a connection. Again, the invention of the Jacquard loom was in response to a specific problem. In the 18th century, manual weaving from patterns was quite common, but time-consuming and plagued with errors. By 1750 it had become common to use punched cards to specify the pattern. At first, these instructions punched on cards were simply interpreted by the person (most likely a boy, as child labor was common in those days). Later modifications included provisions for mechanical reading of the pattern cards by the loom. The last step in this process was taken by Joseph Jacquard in 1801 when he produced a successful loom in which all power was supplied mechanically and control supplied by mechanical reading of the punched cards; thus becoming one of the first programmable machines controlled by punched cards. We divert from this diversion to comment on the first recorded incident of industrial sabotage. 
One legend concerning Jacquard states that during a public exhibition of his loom, a group of silk workers cast shoes, called “sabots”, into the mechanism in an attempt to destroy it. What is known is that the word “sabotage” acquired a new meaning, with this meaning having become common by 1831, when the silk workers revolted. Prior to about 1830, the most common meaning of the word “sabotage” was “to use the wooden shoes [sabots] to make loud noises”. Charles Babbage and His Mechanical Computers Charles Babbage (1792 – 1871) designed two computing machines, the difference engine and the analytical engine. The analytical engine is considered by many to be the first general-purpose computer. Unfortunately, it was not constructed in Babbage’s lifetime, possibly because the device made unreasonable demands on the milling machines of the time and possibly because Babbage irritated his backers and lost financial support. Babbage’s first project, called the difference engine, was begun in 1823 as a solution to the problem of computing navigational tables. At the time, most computation was done by a large group of clerks (preferably not mathematically trained, as the more ignorant clerks were more likely to follow directions) who worked under the direction of a mathematician. Errors were common, both in the generation of data for these tables and in the transcription of those data onto the steel plates used to engrave the final product. The difference engine was designed to calculate the entries of a table automatically, using the mathematical technique of finite differences (hence the name), and to transfer them via steel punches directly to an engraver’s plate, thus avoiding the copying errors. The project was begun in 1823, but abandoned before completion almost 20 years later. The British government abandoned the project after having spent £17,000 (about $1,800,000 in current money – see the historical currency converter [R24]) on it and concluding that its only use was to calculate the money spent on it. The above historical comment should serve as a warning to those of us who program computers. We are “cost centers”, spending the money that other people, the “profit centers”, generate. We must keep these people happy or risk losing a job. Figure: Modern Recreation of Babbage’s Analytical Engine Figure: Scheutz’s Differential Engine (1853) – A Copy of Babbage’s Machine The reader will note that both Babbage’s machine and Scheutz’s machine are hand cranked. In this they are more primitive than Jacquard’s loom, which was steam powered. Babbage’s analytical engine was designed as a general-purpose computer. The design, as described in an 1842 report (http://www.fourmilab.ch/babbage/sketch.html) by L. F. Menabrea, calls for a five-component machine comprising the following. 1) The Store – a memory fabricated from a set of counter wheels; this was to hold 1,000 50-digit numbers. 2) The Mill – today, we would call this the Arithmetic-Logic Unit. 3) Operation Cards – selected one of four operations: addition, subtraction, multiplication, or division. 4) Variable Cards – selected the memory location to be used by the operation. 5) Output – either a printer or a punch.
This design represented a significant advance in the state of the art in automatic computing, for a number of reasons: 1) the device was general-purpose and programmable, 2) the provision for automatic sequence control, 3) the provision for testing the sign of a result and using that information in the sequencing decisions, and 4) the provision for continuous execution of any desired instruction. To complete our discussion of Babbage’s analytical engine, we include a program written to solve two simultaneous linear equations. m·x + n·y = d and m’·x + n’·y = d’. As noted in the report by Menabrea, we have x = (d·n’ – d’·n) / (m·n’ – m’·n). The program to solve this problem, shown below, seems to be in a form of assembly language. Source: See R25 No discussion of Babbage’s analytical engine would be complete without a discussion of Ada Byron, Lady Lovelace. We have a great description from a web article written by Dr. Betty Toole [R26]. Augusta Ada Byron was born on December 10, 1815, the daughter of the famous poet Lord Byron. When she was 17, Babbage was continuing to work on plans for his analytical engine. In 1841, Babbage reported on progress at a seminar in Turin. Bush’s Differential Analyzer Some of the problems associated with early mechanical computers are illustrated by this picture of the Bush Differential Analyzer, built by Vannevar Bush of MIT in 1931. It was quite large and all mechanical, very heavy, and had poor accuracy (about 1%). [R21] Figure: The Bush Differential Analyzer For an idea of scale in this picture, the reader should note that the mechanism is placed on a number of tables (or lab benches), each of which would be waist high. For the most part, computers with mechanical components were analog devices, set up to solve integral and differential equations. These were based on an elementary device, called a planimeter, that could be used to measure the area under a curve. The big drawback to such a device was that it relied purely on friction to produce its results, the inevitable slippage being responsible for the limited accuracy of the machines. A machine such as Bush’s differential analyzer had actually been proposed by Lord Kelvin in the nineteenth century, but it was not until the 1930’s that machining techniques were up to producing parts with the tolerances required to produce the necessary accuracy. The differential analyzer was originally designed in order to solve differential equations associated with power networks but, as was the case with other computers, it was pressed into service for calculation of trajectories of artillery shells. There were five different copies of the differential analyzer produced; one was used at the Moore School of Engineering, where it must have been contemporaneous with the ENIAC. The advent of electronic digital computers quickly made the mechanical analog computers obsolete, and all were broken up and mostly sold for scrap in the 1950’s. Only a few parts have been saved from the scrap heap and are now preserved in museums. There is no longer a fully working mechanical differential analyzer from that period. To finish this section on mechanical calculators, we show a picture of a four-function mechanical calculator produced by the Monroe Corporation in about 1953. Note the hand cranks and a key used to reposition the upper part, called the “carriage”. The purely mechanical calculators survived the introduction of the electronic computer by a number of years, mostly due to economic issues – the mechanicals were much cheaper, and they did the job.
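Before leaving the mechanical era, it is worth illustrating the finite-difference technique that gave Babbage's difference engine its name. The sketch below tabulates a polynomial using nothing but repeated additions, which is exactly the kind of operation counter wheels can perform; the polynomial chosen is an arbitrary example of ours, not one of Babbage's actual tables:

    # Tabulate f(x) = x**2 + x + 41 for x = 0, 1, 2, ... using finite differences.
    # For a degree-2 polynomial the second difference is constant, so once the
    # first few values are set, the whole table is produced by additions alone.
    def difference_engine_table(n_rows):
        f = lambda x: x * x + x + 41
        value  = f(0)                    # starting value
        delta1 = f(1) - f(0)             # first difference
        delta2 = f(2) - 2 * f(1) + f(0)  # second difference (constant: 2)
        table = []
        for _ in range(n_rows):
            table.append(value)
            value  += delta1             # addition only
            delta1 += delta2             # addition only
        return table

    print(difference_engine_table(6))    # [41, 43, 47, 53, 61, 71]

Once the initial value and the first two differences are set, every further entry falls out of two additions, which is why a machine of gears and wheels could print mathematical tables without human transcription errors.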
Electromechanical Ancestors (Late generation 0 – mid 1930’s to 1952) The next step in the development of automatic computers occurred in 1937, when George Stibitz, a mathematician working for Bell Telephone Laboratories, designed a binary full adder based on electromechanical relays. Dr. Stibitz’s first design, developed in November 1937, was called the “Model K”, as it was developed in his kitchen at home. In late 1938, Bell Labs launched a research project with Dr. Stibitz as its director. The first computer was the Complex Number Calculator, completed in January 1940. This computer was designed to produce numerical solutions to the equations (involving complex numbers) that arose in the analysis of transmission circuits. Stibitz worked in conjunction with a Bell Labs team, headed by Samuel Williams, to develop first a machine that could calculate all four basic arithmetic functions, and later to develop the Bell Labs Model Relay Computer, now recognized as the world’s first electronic digital computer. [R27] Table: The Model II – Model V Relay Computers [R09] The Model V was a powerful machine, in many ways more reliable than and just as powerful as the faster electronic computers of its time. The relay computers became obsolete because the Model V represented the basic limit of the relay technology, whereas the electronic computers could be modified considerably. The IBM Series 400 Accounting Machines We now examine another branch of electromechanical computing devices that existed before 1940 and influenced the development of the early digital computers. This is a family of accounting machines produced by the International Business Machines Corporation (IBM), beginning with the IBM 405 Alphabetical Accounting Machine, introduced in 1934. Figure: IBM Type 405 [R55] According to the reference [R55], the IBM 405 was IBM's high-end tabulator offering (and the first one to be called an Accounting Machine). The 405 was programmed by a removable plugboard with over 1600 functionally significant "hubs". With access to up to 16 accumulators, the machine could tabulate at a rate of 150 cards per minute, or tabulate and print at 80 cards per minute. The print unit contained 88 type bars, the leftmost 43 for alphanumeric characters and the other 45 for digits only. The 405 was IBM's flagship product until after World War II (in which the 405 was used not only as a tabulator but also as the I/O device for classified relay calculators built by IBM for the US Army). In 1948, the Type 405 was replaced by an upgraded model, called the Type 402. Figure: The type bars of an IBM 402 Accounting Machine [R55] As noted above, the IBM 405 and its successor, the IBM 402, were programmed by wiring plugboards. In order to facilitate the use of “standard programs”, the plugboard included a removable wiring unit upon which a standard wiring could be set up and saved. Figure: The Plugboard Receptacle Figure: A Wired Plugboard The Harvard Computers: Mark I, Mark II, Mark III, and Mark IV The Harvard Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC), was the largest electromechanical calculator ever built. It had a length of 51 feet, a height of eight feet, and weighed nearly five tons. Here is a picture of the computer, from the IBM archives (http://www-1.ibm.com/ibm/history/exhibits/markI/markI_intro.html). Figure: The Harvard Mark I The Mark I computer was conceived in the 1930’s by Howard H.
Aiken, then a graduate student in theoretical physics at The Mark I operated at Harvard for 15 years, after which the machine was broken up and parts sent to the Smithsonian museum, a museum at Harvard, and IBM’s collection. It was the first of a sequence of four electromechanical computers that lead up to the ENIAC. The Mark II was begun in 1942 and completed in 1947. The Mark III, completed in 1949, was the first of the series to use an internally stored program and indirect addressing. The Mark IV, last of this series, was completed in 1952. It had a magnetic core memory used to store 200 registers and seems to have used some vacuum tubes as well. IBM describes the Mark I, using its name “ASCC” (Automatic Sequence Controlled Calculator) in its archive web site, as “consisting of 78 adding machines and calculators linked together, the ASCC had 765,000 parts, 3,300 relays, over 500 miles of wire and more than 175,000 connections. The Mark I was a parallel synchronous calculator that could perform table lookup and the four fundamental arithmetic operations, in any specified sequence, on numbers up to 23 decimal digits in length. It had 60 switch registers for constants, 72 storage counters for intermediate results, a central multiplying-dividing unit, functional counters for computing transcendental functions, and three interpolators for reading function punched into perforated tape.” The Harvard Mark II, the second in the sequence of electromechanical computers, was also built by Howard Aiken with support from IBM. In 1945 Grace Hopper was testing the Harvard Mark II when she made a discovery that has become historic – the “first computer bug”, an occurrence reported in her research notebook, a copy of which is just below. The caption associated with the moth picture on the web site is as follows. “Moth found trapped between points at Relay # 70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at “In 1988, the log, with the moth still taped by the entry, was in the We should note that this event is not the first occurrence of the use of the word “bug” to reference a problem or error. The term was current in the days of Thomas A. Edison, the great American inventor. One meaning of the word “bug” as defined in the second edition of the Oxford English Dictionary is “A defect or fault in a machine, plan, or the like, origin “Mr. Edison, I was informed, had been up the two previous nights discovering ‘a bug’ in his phonograph – an expression for solving a difficulty and implying that some imaginary insect has secreted itself inside and is causing all the trouble.” The Colossus was a computer built in Figure: The Colossus The reader should note the paper tape reels at the right of the picture. Konrad Zuse (1910 – 1995) and the Z Series of Computers Although he himself claimed not to have been familiar with the work of Charles Babbage, Konrad Zuse can be considered to have picked up where Babbage left off. Zuse used electromechanical relays instead of the mechanical gears that Babbage had chosen. Zuse built his first computer, the Z1, in his parent’s After the war, Zuse was employed in the Z1 (1938), a mechanical programmable digital computer. Although mechanical problems made its operation erratic, it was rebuilt by Zuse himself after the war. Z2 (1940), an electro-mechanical computer. Z3 (1941), this machine uses program control. Plankalkül (1945/46), the first programming language, implemented on the Z3. Z22 (1958), the last machine developed by Zuse. 
It was one of the first to be designed with transistors The next figure shows the rebuilt Z1 in the Figure: Konrad Zuse and the Reconstructed Z1 We use some of Konrad Zuse’s own memoirs to make the transition to our discussion of the “first generation” of computers – those based on vacuum tube technology. Zuse begins with a discussion of Helmut Schreyer, a friend of his who first had the idea. “Helmut was a high-frequency engineer, and on completing his studies (around 1936) … suddenly had the bright idea of using vacuum tubes. At first I thought it was one of his student pranks - he was always full of fun and given to fooling around. But after thinking about it we decided that his idea was definitely worth a try. Thanks to switching algebra, we had already married together mechanics and electro-magnetics – two basically different types of technology. Why, then, not with tubes? They could switch a million times faster than elements burdened with mechanical and inductive inertia.” “The possibilities were staggering. But first basic circuits for the major logical operations such as conjunction, disjunction and negation had to be discovered. Tubes could not simply be connected in line like relay contacts. We agreed that Helmut should develop the circuits for these elementary operations first, while I dealt with the logical part of the circuitry. Our aim was to set up elementary circuits so that relay technology could be transferred to the tube system on a one-to-one basis. This meant the tube machine would not have to be redesigned from scratch. Schreyer solved this problem fairly quickly.” “This left the way open for further development. We cautiously told some friends about the possibilities. The reaction was anything from extremely skeptical to spontaneously enthusiastic. Interestingly enough, most criticism came from Schreyer's colleagues, who worked with tubes virtually all the time. They were doubtful that an apparatus with 2,000 tubes would work reliably. This critical attitude was the result of their own experience with large transmitters which contained several hundred tubes.” “After the War was finally over, news of the Vacuum Tube Computers (Generation 1 – from 1945 to 1958) It is now generally conceded that the first purely electronic binary digital computer was designed by John Vincent Atanasoff (October 4, 1903 – June 15, 1995) at The relation of the ABC to the ENIAC (once claimed to be the first electronic digital computer) has been the source of much discussion. In fact, the claim of precedence of the ABC over the ENIAC was not settled until taken to court quite a bit later. Figure: Clifford Berry standing in front of the ABC Atanasoff [R 32] decided in building the ABC that: “1) He would use electricity and electronics as the medium for the 2) In spite of custom, he would use base-two numbers (the binary system) for his computer. 3) He would use condensers (now called “capacitors” – see Review of Basic Electronics in these notes) for memory and would use a regenerative or "jogging" process to avoid lapses that might be caused by leakage of power. 4) He would compute by direct logical action and not by enumeration as used in analog calculating devices.” According to present-day supporters of Dr. Atanasoff, two problems prevented him from claiming the credit for creation of the first general-purpose electronic digital computer: the slow process of getting a patent and attendance at a lecture by Dr. John Mauchly, of ENIAC fame. In 1940, Atanasoff attended a lecture given by Dr. 
Mauchly and, after the lecture, spent some time discussing electronic digital computers with him. This led to an offer to show Mauchly the ABC; it was later claimed that Mauchly used many of Atanasoff’s ideas in the design of the ENIAC without giving proper credit. Atanasoff sued Mauchly in U.S. Federal Court, charging him with piracy of intellectual property. This trial was not concluded until 1972, at which time U.S. District Judge Earl R. Larson ruled that the ENIAC was "derived" from the ideas of Dr. Atanasoff. While Mauchly was not deemed to have stolen Atanasoff’s ideas, the judge did give Atanasoff credit as the inventor of the electronic digital computer. [R32] As we shall see below, the claim of the ENIAC as the first electronic digital computer is not completely unfair to Dr. Atanasoff; it is just not entirely true.
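Stibitz’s “Model K” relay adder mentioned earlier and Atanasoff’s decision to compute in base two both rest on the same building block, the binary full adder. The sketch below shows its logic; the function and variable names are ours, and this is an illustration of the general technique rather than a description of either man’s actual circuit:

    def full_adder(a, b, carry_in):
        """One-bit binary full adder: returns (sum_bit, carry_out)."""
        sum_bit   = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return sum_bit, carry_out

    def add_binary(x, y, width=8):
        """Ripple-carry addition of two non-negative integers, bit by bit."""
        carry, result = 0, 0
        for i in range(width):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(add_binary(0b1011, 0b0110))   # 17, i.e. 0b10001

Whether the two-state elements are relays, tubes, or transistors, chaining this one-bit circuit gives addition of arbitrarily long binary numbers, which is precisely why the simple two-state devices described in this chapter are sufficient for arithmetic.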
<urn:uuid:8d83d91a-abb6-48c4-9b64-74841f753a82>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155Text_V07_HTM/MyText5155_Ch01A_V07.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00528-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957193
10,789
3.25
3
Optical and Mechanical Resolution Optical resolution tells you how many pixels the scan element can see at once across the width of a page. Mechanical resolution tells you how many steps the scan element takes going down the page (or how many the page takes going past the scan element). A spec like "600-by-1,200-dpi optical" really means "600-ppi optical by 1,200-ppi mechanical." N by M Optical Resolution. Not. Even ignoring the distinction between optical and mechanical resolution, a spec such as "600-by-1,200 optical resolution" is misleading. The scanner can't pass a 600-by-1,200-ppi image to your computer, so the top resolution you'll get without interpolation is 600-by-600 ppi. High Optical Resolutions Even high optical resolutions usually don't matter. For typical office tasks such as copying, faxing, and scanning to PDF files, and even for scanning photos to print at the same size, a 600-ppi scanner is almost always all you need. Higher resolutions are rarely useful unless you're scanning, say, slides or otherwise need to resize the image.
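A quick calculation shows why the optical figure is the one that matters; the sketch below works out the raw pixel dimensions of a letter-size page at the resolutions discussed. The page size, the function name, and the choice of 600 ppi are simply example values, not part of any scanner's specification:

    page_width_in, page_height_in = 8.5, 11.0

    def scan_dimensions(optical_ppi, mechanical_ppi):
        # Pixels across come from the optical resolution; rows down the page
        # come from the mechanical (stepping) resolution.
        return round(page_width_in * optical_ppi), round(page_height_in * mechanical_ppi)

    print(scan_dimensions(600, 600))    # (5100, 6600)  -- what you actually use
    print(scan_dimensions(600, 1200))   # (5100, 13200) -- extra rows, no extra detail across

The second case produces twice as many rows, but each row still contains only 600 samples per inch, so there is no additional real detail across the page – which is why a "600-by-1,200" spec is best read simply as a 600-ppi scanner.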
<urn:uuid:b7ced4e4-265a-41e3-9046-274023986884>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Printers/Playing-Fast-and-Loose-with-Scanner-Specs/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00436-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917945
240
2.546875
3
A Duke University student has created a wireless power-saving technology called SleepWell which, its creators claim, can double the battery life of mobile devices. The technology addresses the need for WiFi devices to 'stay awake' while waiting their turn to download a packet, a need that results in higher battery consumption in areas of dense WiFi availability. Using an analogy, Duke graduate student Manweiler explained the concept of the technology: "Big cities face heavy rush hours as workers come and leave their jobs at similar times. If work schedules were more flexible, different companies could stagger their office hours to reduce the rush." "With less of a rush, there would be more free time for all, and yet, the total number of working hours would remain the same." "The same is true of mobile devices trying to access the Internet at the same time," Manweiler said. "The SleepWell-enabled WiFi access points can stagger their activity cycles to minimally overlap with others, ultimately resulting in promising energy gains with negligible loss of performance." The researchers called SleepWell a potentially 'important upgrade' to WiFi technology, although it does seem unlikely that WiFi power savings will result in a doubling of most smartphone users' battery life. Still, power is power.
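The staggering idea can be illustrated with a toy model: when neighboring access points all transmit at once they share the channel, so every client's download stretches out and its radio stays awake longer; when their schedules are staggered, each access point briefly has the channel to itself. The model below is entirely our own simplification, with invented numbers, and has nothing to do with the actual SleepWell implementation or measurements:

    def client_awake_time(n_aps, burst_airtime, staggered):
        # burst_airtime: time to deliver one client's traffic with the channel to itself.
        if staggered:
            # Each AP serves its clients in its own slot; a client can sleep until
            # its AP's slot begins, then receives at the full channel rate.
            return burst_airtime
        # All APs transmit at once and share the channel, so every download
        # stretches by roughly a factor of n_aps, and clients stay awake that long.
        return n_aps * burst_airtime

    print(client_awake_time(3, 10, staggered=False))  # 30 time units awake
    print(client_awake_time(3, 10, staggered=True))   # 10 time units awake

In both cases the same total traffic is carried – the analogy's "total number of working hours" – but with staggered schedules each client's radio is powered up for roughly a third as long in this example, which is where the claimed energy saving comes from.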
<urn:uuid:56d8452e-1c04-446e-8db8-2859f8deffa2>
CC-MAIN-2017-04
http://www.pcr-online.biz/news/read/researchers-claim-wifi-upgrade-doubles-mobile-battery-life/019676
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00344-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962107
252
2.765625
3
We’ve been hearing about so-called 3D, stacked-chip technology for a while. By layering circuits and components on top of each other, data does not need to travel as far and consequently the chip runs much faster. The design has not yet reached the commercialization stage, but that may soon change as progress on these chips appears to be reaching critical mass. This is the subject of recent Processors Whispers column, by Andreas Stiller. The article begins by citing a humorous exchange that took place at CeBIT. When IBM Chairman Sam Palmisano presented German Chancellor Angela Merkel with a model 3D chip stack, the Chancellor asked him, “Do you take that from Intel?” Palmisano’s reply, “No, ours are better”, was nearly drowned out by the crowd’s hearty laughter. However blunt the Chancellor’s question, Intel was among the first to grow the processor into the vertical dimension. When designing a stacked chip, one of the primary goals is to integrate the memory either above or beneath the CPU, which Intel’s Teraflops Research Chips project accomplished. Stiller notes that IBM’s 3D technology will first appear in its upcoming Power8 processor, planned for 2013, using 28 or 22nm process technology. He says the processor will likely employ a linked memory and “a layer of small specialized computing cores adapted for specific intended uses.” The new design may even repurpose the Synergistic Processing Units of the abandoned Cell processors, as the 3D stacks offer enough room for modularity. In the future, 100,000 connections per square millimeter may be possible. Stiller also acknowledges the possibility that Intel’s Haswell processor, the successor to Sandy Bridge that is scheduled for 2013 release, could make use of the 3D technique in the form of a large stacked cache. Three-dimensional integration does have its drawbacks, however. The chips’ high power densities pose a significant cooling challenge. To address this issue, IBM is working with the École Polytechnique Federale de Lausanne and the ETH Zurich on an innovative cooling system that pipes water between the chip layers through tiny tubes no more than 50 microns in diameter, or about the width of a human hair. There are still many more technical challenges to overcome, however, and fully functional prototypes might not appear for another decade.
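The "data does not need to travel as far" argument is easy to quantify in rough terms. The distances and the nominal signal speed below are invented, order-of-magnitude figures chosen only to illustrate the point, not measurements of any real product:

    # Order-of-magnitude comparison of signal path lengths (illustrative figures only).
    speed_m_per_s = 1.5e8            # assume signals propagate at roughly half the speed of light

    paths_m = {
        "CPU to off-package DRAM module": 0.05,     # a few centimeters of board trace
        "CPU to memory die stacked on top": 50e-6,  # tens of microns through a vertical via
    }

    for name, length in paths_m.items():
        print(f"{name}: {length * 1e3:.3f} mm, one-way flight time {length / speed_m_per_s * 1e12:.1f} ps")

Real memory latency is dominated by the DRAM array and interface logic rather than raw flight time, so stacking does not shrink latency a thousandfold; the practical gain comes from shorter, denser, lower-energy connections that permit far wider interfaces between the layers.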
<urn:uuid:cc95a7d1-520a-4240-9727-f4e22db3e3ad>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/03/14/3d_chip_technology_readies_for_take_off/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00372-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943595
501
2.9375
3
Children and the Internet: Safety Rules 21 Jun 2010 Today’s Internet provides huge opportunities for children to learn more about the world in which they live, to communicate more widely and to study and have fun, but the Internet can also have a dark side. Children are often far too trusting of their virtual friends, and this may lead to them unwittingly revealing personal information, or meeting someone in real life whose motives are less than pure. Some criminals will even take advantage of children’s naivety to extort money or download malware to the family computer. However, children are not just victims on the Internet – some take part in illegal activities such as hacking. Parents have to share some of the blame when this happens. “When their children start to get the hang of what they are doing, many parents consider that their mission is fulfilled,” Maria Namestnikova, Senior Spam Analyst at Kaspersky Lab, explains in her article ‘Children and the Internet’. “But that is when the parents’ work actually begins.” The author suggests that one of the most effective methods for parents to keep their children safe online is to surf the Internet together, explaining what is safe and what is potentially dangerous as they go along. This approach, in combination with software solutions that include parental control functionality, offers the most all-round protection for any child. Surfing together may resolve a number of other problems of a family nature, as well as addressing issues of IT security. Anyone who cares about their child’s online safety ought to help them get to grips with this exciting new environment. The full version of the article ‘Children and the Internet’ can be found at: www.securelist.com/en.
<urn:uuid:55932993-0f6a-4b45-a787-2dd7eb130da4>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2010/Children_and_the_Internet_Safety_Rules
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00280-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959848
363
3.1875
3
Sir Tim Berners-Lee's huge online bill. Today marks the 25th birthday of the World Wide Web. Intended as a way to make sharing information over computers easier, the Web has transformed lives in ways its creator Sir Tim Berners-Lee never thought possible. Critical of Internet surveillance by governments world- wide, Sir Tim Berners-Lee now believes an online "Bill of Rights" is needed for a healthy, open Internet. In IT Blogwatch, bloggers wish the World Wide Web a happy birthday. Filling in for our humble blogwatcher Richi Jennings, is a humbler Stephen Glasskeys. Not a troglodyte, Sharon Gaudin uses the Internet: [On the] 25th anniversary of the World Wide Web...87% of U.S. adults use the Internet. ... That's a significant change compared to the 42% of U.S. adults who had never heard of [it] in 1995...six years after Tim Berners-Lee...introduced the idea of the World Wide Web. MORE So Rich McCormick proposes to a computer: On March 12th, 1989, Sir Tim Berners-Lee put forth a proposal to make information sharing possible over computers, using nodes and links to create a "web" that would eventually...become the modern Internet. ... [Now] Berners-Lee has called for the Internet he invented to stay free and open. MORE Straight talk from Sir Tim Berners-Lee: Twenty-five years ago today, I filed the proposal for what was to become the World Wide Web. My boss dubbed it 'vague but exciting'. Luckily, he thought enough of the idea to allow me to quietly work on it on the side. ...Today, and throughout this year, we should celebrate the Web's first 25 years. But though the mood is upbeat, we also know we are not done. We have much to do for the Web to reach its full potential. We must continue to defend its core principles and tackle some key challenges. MORE Then Jemima Kiss fights for our rights: The inventor of the world wide web believes an online "Magna Carta" is needed to protect and enshrine the independence of the medium he created and the rights of its users worldwide. MORE Meanwhile, the BBC serves breakfast: [Sir Tim Berners-Lee] has been an outspoken critic of government surveillance following a series of leaks from ex-US intelligence contractor Edward Snowden. ...He told BBC Breakfast the online community has now reached a crossroads. ..."It's time for us to make a big communal decision," he said. "In front of us are two roads - which way are we going to go? ... Are we going to continue on the road and just allow the governments to do more and more and more control - more and more surveillance?" MORE Subscribe now to the Blogs Newsletter for a daily summary of the most recent and relevant blog posts at Computerworld.
<urn:uuid:88b74329-c2e2-41ad-8edc-8ab646b0be0e>
CC-MAIN-2017-04
http://www.computerworld.com/article/2476022/internet/happy-birthday-world-wide-web-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931797
617
2.53125
3
Recordings of 911 calls gone awry have been played repeatedly by broadcast media and published verbatim by print media. Sometimes blamed on outdated technology, other times on the call-taker, these phone calls highlight two of the common problems associated with 911. Technology and call-taker standards and training vary by state and locality, where counties and cities, even those next to one another, sometimes have varying requirements. To make matters worse, the current fiscal environment, where governments at all levels are feeling pain, is forcing some states to raid surcharges collected to pay for new 911 technologies in order to fund other initiatives. Other states are stifled by companies that provide emergency call center equipment that doesn't connect with other vendors, therefore impeding the move toward next-generation 911. The proliferation of cell phones has created the need for new technologies in 911 centers because people assume that call-takers automatically know their phone number and location, which isn't always true. As of December 2009, 285.6 million U.S. residents used cell phones and 22.7 percent of U.S. households were wireless only, meaning they lack a landline telephone, which for decades was the main way people called 911, according to CTIA-The Wireless Association. Public safety answering points (PSAPs), the local centers that handle calls to 911, and wireless network carriers have been implementing E911 technology that will provide call-takers with the wireless caller's phone number and estimated location. The Wireless Communications and Public Safety Act of 1999 required the implementation of E911, to be executed in two phases. Phase I required wireless carriers to provide the PSAP with the telephone number of the 911 caller and the location of the cell site or base station receiving the call. Phase II required the carriers to provide Automatic Location Identification, which identifies the address or geographic location of the calling device within 300 meters; this was to be completed by the end of 2005. Local call centers have upgraded or are in the process of upgrading their technology to use the data provided by E911. However, in February 2010 NENA found that about 10 percent of the nation's PSAPs hadn't installed the equipment to use that information. The issue is funding: According to a U.S. General Accountability Office report, "Not all states have implemented a funding mechanism for wireless E911, and of those that have, some have redirected E911 funds to unrelated uses." Consumer technology is pushing the evolution of 911 technology even further. Popular technologies like text messaging, photos and videos and the need to transfer calls and data between PSAPs has led to the need for next-generation 911, which will run off statewide public safety IP networks. According to the U.S. Department of Transportation's Research and Innovative Technology Administration, "The next-generation 911 initiative will establish the foundation for public emergency services in this wireless environment and enable an enhanced 911 system compatible with any communications device." Many consider Indiana a leader in the next-generation 911 initiative. It has a statewide IP network that's based on a redundant high-speed fiber network. "We have an IP network that is dedicated solely to 911," said Ken Lowden, executive director of the Indiana Wireless Enhanced 911 Advisory Board. 
"We have all the counties that we can connected to it and the ability to transfer both voice and data." Indiana's PSAPs have been connected to the IP network for about three years, except those that are served by AT&T. He said the AT&T counties aren't connected because the company does a straight lease of PSAP equipment to the localities, which means it retains full control of the equipment and it refused the Indiana Wireless Enhanced 911 Advisory Board connectivity to its equipment. Counties that work with other vendors buy the equipment or do a lease purchase on it, so they control any changes made to it. Lowden said calls can be transferred to the AT&T counties, but they must pass through the company's router, which causes them to lose the digital advantage. "If there's a Verizon county next door, they can't transfer the call out of the AT&T network into a Verizon territory," he said. "Once the call is inside the AT&T network it has to stay there." This can impede public safety because additional information, like the caller's location and phone number, won't be transferred to a PSAP operating on another vendor's equipment - just the person's voice - therefore eliminating the benefits of E911. Indiana isn't the only state to run into provider-related hurdles. "In North Carolina, we have three major telephone companies that are 911 service providers: CenturyLink, AT&T and Verizon," said Richard Taylor, executive director of the North Carolina 911 Board. "And just in the county that I'm sitting in right now, Wake County, all three of those companies operate. We cannot transfer voice and data from the centers in Wake County to another center because one operates under AT&T, one under CenturyLink and one under Verizon." He said the companies lack interconnection agreements to exchange information, which is fundamental in next-generation 911. However, similar to the situation in Indiana, North Carolina has found AT&T to be the most challenging to work with, Taylor said. "Companies like AT&T will absolutely refuse to allow us to have those interconnections agreements," he said. "In fact, they have gone through all kinds of lawsuits and not just in our state, but in other states, trying to keep other companies from being able to connect into their system." "AT&T is committed to doing our part to make next-generation 911 available across the country," said an AT&T spokesperson. "We work closely with public safety answering points to ensure that customers are provided with the most advanced and reliable emergency communications services. In addition, we continue to engage in the timely resolution of interconnection negotiations for the provision of competitive 911 service." Regulations from the FCC would help alleviate this vendor-driven problem for sharing calls and data across PSAPs that operate on different systems. Lowden said national requirements about technology are nice to think about but he doesn't think it would work from a practical standpoint. "I think 911 should be a local, interstate issue," he said. Go to Emergency Management to learn about how 911 call-taker training standards and technology vary nationwide, and how some states are raiding their 911 coffers to fund other initiatives.
<urn:uuid:aa020dee-add7-44a7-be48-74ba80160aac>
CC-MAIN-2017-04
http://www.govtech.com/public-safety/911-Technology-Problems-Plague-First-Line.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965399
1,315
2.6875
3
Simplicity is: logging in without a username or password
“I really like what I can do in the web interface, but having to enter my username and password to login each time is extra work.”
We’ve seen the above comment many times. Identity verification, as everyone who has not been lost on a desert island for 10 years knows, is really, really important these days. But like many aspects of security, it can be rather annoying. On the bright side, there are a number of ways to get around this step and make the login process simpler without necessarily making your account less secure. Here is how we have helped many customers simplify their Internet life.
1. Your browser can save your password
Most modern web browsers can tell when you are on a web site’s “login” page and will ask you if you want to “save your password” for that site. If you choose this option, the password will be saved on your computer so that, the next time you visit the login page, your login credentials can be pre-filled for you. All you have to do then is click “login” and you are in. Super quick. This method will work with most web sites. If you are not being prompted to save passwords, it’s possible this feature simply isn’t enabled in your browser. Here is how to turn it on:
Chrome:
- Settings > click on “Show advanced settings…”
- Enable the option to offer to save your passwords
Firefox:
- Preferences > Security tab
- Enable “Remember passwords for sites”
Safari:
- Preferences > Passwords tab
- Enable “Autofill user names and passwords”
Internet Explorer (v11):
- Internet Options > Content tab
- Press the “Settings” button under “AutoComplete”
- Enable “User names and passwords on forms”
- Press “OK”
Warning: What you must know about this method is that your username and password are being saved on your local computer. As such, someone with access to your computer (either access to your login or an administrator) could possibly get at that information, and that can be a significant security risk. Additionally, if you step away from your computer without logging out, anyone sitting down can then log in as you to any sites where your login credentials are being saved. So, you should never save your passwords on public computers (e.g. library, coffee shop) or computers that are not accessed exclusively by you and/or people you trust.
If you use Mozilla Firefox, there is a useful feature that allows you to set a “master password” for all your other passwords. With this option enabled, your saved login passwords will be encrypted on disk, making them inaccessible without the master password. This protects your passwords from someone sitting down at your computer and opening a new Firefox session, but of course hinges on you remembering to close Firefox before you step away. To enable the master password option:
- Go to “Preferences” and choose the “Security” tab
- Enable “Use a master password”
If your organization has security requirements (e.g. HIPAA), please check with your compliance officer or IT staff to see if saving passwords in this way is permitted before you start doing it.
2. Quick Logins
LuxSci has a cool feature called “Quick Logins” that drastically improves on the browser-based “Saved Password” option discussed above for logins to the LuxSci.com member’s web site:
- It works with any browser, even on tablets and mobile phones.
- Your password is never saved on your computer or device.
- You can set up Quick Logins for multiple accounts so you can get a list of account choices on the login page and just press one button.
- You can see a list of all browsers that have Quick Logins enabled, and you can selectively invalidate any of them at any time even if you no longer have access to that computer or browser. - Users can enable Quick Logins for themselves, or administrators can require Quick Logins for their users on a case-by-case basis. To learn more about Quick Logins and how to set them up, see: Want to Login to LuxSci from your Mobile Phone with a Single Touch? Quick Logins work great with all web browsers, but they are especially designed for mobile devices where it is much more painstaking to type passwords manually. What about high security HIPAA accounts? For high security accounts, such as those with HIPAA compliance requirements, Quick Logins are limited: - Account and Domain Administrators are not permitted to use Quick Logins for themselves at all. - Users are never permitted to self-provision Quick Logins — an administrator must enable a Quick Login for an approved user and communicate an authorization code to that user. Even in lower security accounts, administrators are only allowed to access the “mobile site” via Quick Login, for security reasons.
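LuxSci has not published the internal design of Quick Logins, but the general pattern behind this kind of feature, a long random token bound to one browser that can be listed and revoked later, can be sketched in a few lines. The sketch below is a hypothetical illustration in Python, not LuxSci's code; the class and method names are invented.

# Hypothetical sketch of a revocable per-browser "quick login" token store.
# Not LuxSci's implementation; it only illustrates trading a retyped password
# for a long random token that can be listed and invalidated per browser.
import secrets
import hashlib
import time

class QuickLoginStore:
    def __init__(self):
        self._tokens = {}  # token_hash -> {"user": ..., "browser": ..., "created": ...}

    def issue(self, user: str, browser_label: str) -> str:
        """Issue a new token for one browser; only its hash is stored server-side."""
        token = secrets.token_urlsafe(32)
        token_hash = hashlib.sha256(token.encode()).hexdigest()
        self._tokens[token_hash] = {"user": user, "browser": browser_label, "created": time.time()}
        return token  # the browser keeps this; the password itself is never stored on the device

    def check(self, token: str):
        """Return the user for a valid token, or None if it was revoked or never issued."""
        entry = self._tokens.get(hashlib.sha256(token.encode()).hexdigest())
        return entry["user"] if entry else None

    def list_browsers(self, user: str):
        """List the browsers that currently hold a valid quick login for this user."""
        return [e["browser"] for e in self._tokens.values() if e["user"] == user]

    def revoke(self, user: str, browser_label: str):
        """Invalidate a browser's token even if you no longer have access to that browser."""
        self._tokens = {h: e for h, e in self._tokens.items()
                        if not (e["user"] == user and e["browser"] == browser_label)}

The key property matches the bullets above: the password never leaves the server, and any single browser's access can be cut off from the server side at any time.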
<urn:uuid:69f50b11-950c-404d-90a3-bc2ce030f89a>
CC-MAIN-2017-04
https://luxsci.com/blog/simplicity-is-logging-in-without-a-username-or-password.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00518-ip-10-171-10-70.ec2.internal.warc.gz
en
0.898893
1,064
2.5625
3
A group of teenagers click a mouse and begin "bombing" England. Across the room, the English prime minister -- a teenager looking grimly at another screen -- attempts to figure out a counterattack in the midst of World War II. These teenagers aren't at home playing shooter games on the Xbox. They're in a high-school classroom learning the dynamics of World War II in a game called Making History. Their video-game-savvy world history teacher, David McDivitt, incorporates video games in his history and sociology courses at Converse, Ind.'s Oak Hill High School. McDivitt says using video games modernizes the education process to the meet the digitized world of teenagers, who view technology as an essential part of their lives. "I see a future where students are engaged in technology at school because that's what they do outside of school, and that's how we live our lives; we're always wired to something like an iPod or cell phone," he said. "We need to move in that direction because that's where our kids are." Let's Bomb Canada For his world history course, McDivitt puts away the textbooks for a week and his class plays Making History -- a World War II simulation game where students assume leadership of countries during that conflict. Governing their countries individually or in teams, students negotiate treaties, build armaments and maintain their country's domestic needs. The game has a real-time scoring system that evaluates each country's performance and compares the students' decisions to actual events in the war. Thus, students controlling England during the war can actually compete against Winston Churchill and see how their economic, military, and international and domestic policy decisions measure up. Teachers can choose from six different scenarios referring to a specific period of the war. One scenario, "The End of Diplomacy," begins after agreements made at the 1938 Munich Conference collapsed. Each scenario begins at an accurate moment in history, and students must navigate the economic, political and military events of the times, compiled in an enormous historical database by Professor William Keylor of Boston University. The game includes the correct national debt, military size and pacts formed between countries, such as Germany's nonaggression pact with Russia. McDivitt supplements Making History, which meets state educational standards, with in-class discussions and lectures. As a result, children live history instead of reading about it, said McDivitt, who found his students making treaties and planning attacks in school hallways. "The excitement level that game brought to the kids was incredible. Both years we played the game, the kids would play after school, [at] lunch and in study hall." After the first year of incorporating Making History into the classroom, McDivitt knew the video game was successfully engaging his students. Yet he wondered if the students were actually learning more, or more effectively, through the video game than with traditional teaching methods, such as textbooks, lectures and in-class discussions. The following year, McDivitt surveyed 110 students from five classes: Three classes used the game, and the other two used only textbooks and class discussions. After collecting tests and essays, McDivitt found that students who played the game generally scored better on tests, especially on geography and multiple-choice questions. 
Yet McDivitt was most impressed by how well students who played the game demonstrated their knowledge of the events in essay questions. "What I found most impressive was the essay question and the depth of understanding the kids who played the game had versus kids who learned solely from the textbook," McDivitt said. "I found that the game group was more thorough, and you might say thoughtful in their follow-up writing assignments." The game brought excitement to McDivitt's classroom and helped engage students who had poor grades and were generally not eager to learn. "Good students are good students no matter what, but what the video game did was bring up the poor student, who doesn't work well in school," McDivitt said. "When we went to the computer game, those kids were just excited and along the way they engage in the learning process without knowing it. It might be a little trick, but that's OK." Kristin Thompson, one of McDivitt's students, said playing Making History in class was a refreshing break from the traditional classroom setting, and it enhanced her comprehension of World War II. "I did enjoy playing Making History -- not just because it got me out of class -- but because it was fun, and I got a better understanding about World War II by acting it out instead of just reading it in the textbook," Thompson said. "It was difficult at first but after I caught on to how to play, it was fun and a more interesting way to learn." The Gaming Classroom Muzzy Lane, the company that designed Making History, is betting video games will soon become a substantial part of a teacher's curriculum. The firm is developing a game called Making Money that will teach entrepreneurship, stock market strategies, and business policy and management. It also is creating a game called Living History that will teach what life was like in different periods of history. Making History is catching on at various schools nationwide. It's used at the University of Illinois, Salem State College, Southeast Missouri State University, the College of Charleston, and Des Moines Area Community Colleges, as well as several high schools. The National Education Association (NEA) accepts that video games can have value in an educational setting, but does not endorse or advocate any particular video game. "Computer games and simulations can be an important part of the educational experience," said Reg Weaver, president of the NEA. "Many educators are discovering and lauding the benefits of incorporating computer games into their curriculum. Technology has changed the classroom exponentially." In October 2005, an NEA representative attended a forum on educational games presented by the Federation of American Scientists (FAS). The forum had various panels that addressed what benefits video games offer in an educational setting. Topics at the conference included the difficulty in adopting new instructional models like video games, the need for new forms of assessment, resistance from educators, attitudes about games, uncertainty about the effect of games on learning, and preparing teachers for new roles with new skills. There was strong consensus among summit participants that many video game elements can be applied in education, according to the FAS summary of the event. 
According to the report, some of the major findings of the forum were that many video games require players to master skills that are in demand by today's employers, and that a program is needed for research and experimentation to enhance the development of educational games. "The strength of some games is that they support complex thinking and require thinking about how to manage resources and how you sequence events," said Kay Howell, vice president of information technologies projects for the FAS. "Those are very important skills when you get to real jobs -- skills hard to teach in a typical classroom setting." However, Howell said video games used in classrooms should have a curriculum built around them so that teachers can use them more effectively. Because of the success of Making History in his world history course, McDivitt decided to add The Sims -- a video game where players create characters with different personalities who interact with each other under one roof -- as part of his sociology course for the 2006 school year. McDivitt is hoping the game will help his students better understand social interaction. McDivitt sees himself as a practical educator updating his teaching methods in accordance with children, whose lifestyle and worldview have been shaped by the computer. In his blog, McDivitt tells other educators not to be afraid of video games and points out that the "digital teenager" has altered the definition of traditional, and now it is a question of whether educators will update their repertoires to address that change. McDivitt hopes that by the time his daughters are in high school, teachers will use technology more than they do today. But for now, he is forging into uncharted teaching territory and hopes he is helping to pave the way for a more technologically oriented classroom. "I am prepared to fight the good fight, to convince the naysayers that not all games are a mind-numbing activity," McDivitt said. "Gaming is a way to excite the unexcited, to engage the disengaged and to educate those who fight education."
<urn:uuid:384a39b2-8d42-4185-a587-bd362e98be19>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Modern-Education.html?page=3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00153-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975113
1,719
3.421875
3
The National Center for Atmospheric Research (NCAR) broke ground yesterday on a new data center in Cheyenne, Wyoming that will house one of the world’s most powerful supercomputers. The future NCAR-Wyoming Supercomputing Center (NWSC) will be a 171,000 square foot facility in North Range Business Park. Scientists will use the supercomputing center to accelerate research into climate change, examining how it might affect agriculture, water resources, energy use, sea levels and impact on extreme weather events, including hurricanes. The $70 million facility will include 24,000 square feet of raised floor data center space, with the current design plan calling for a 10-foot raised floor and 9-foot ceiling plenum to manage airflow required to cool the IT gear. The new machine is expected to rank among the world’s 25 fastest supercomputers. The speed, and manufacturer of the supercomputer will be determined through the initial procurement process, which has an estimated budget of $25 million to $35 million. Includes Archive for Climate Data The center is a partnership among NCAR, the National Science Foundation (NSF), the University of Wyoming (UW), the state of Wyoming, Cheyenne LEADS, the Wyoming Business Council, and Cheyenne Light, Fuel and Power. It will also house a data storage and archival facility that will hold, among other scientific data, unique historical climate records. “After extensive planning and preparation, it’s gratifying to see the pieces coming together for construction,” said University of Wyoming President Tom Buchanan. “I look forward to the supercomputing center coming online because it’s so important to the research we’re doing.” Buchanan says that the university’s primary use for the supercomputing facility will be to model flow in porous media, in order to better understand how water and carbon dioxide move through the spaces that exist in rocks. This research is critical to the university’s work in carbon sequestration – the development of methods to keep carbon dioxide from fossil fuels out of the atmosphere. Need for More Powerful Supercomputer NCAR, which is based in Boulder, has housed supercomputers in its Mesa Laboratory in southwest Boulder for decades, but needs a new purpose-built facility for the increasingly powerful machines. Most researchers will interact with the center remotely, via the Internet. The NWSC is pursuing gold certification under the LEED ( Leadership in Energy and Environmental Design) program, a voluntary rating system for energy efficient buildings overseen by the US Green Building Council. The facility, which is designed specifically for scientific supercomputing, is scheduled to open its doors in 2012. The supercomputer will likely consume about 3 to 4 megawatts of electricity. The NWSC will get 10 percent of its power from wind energy, with the option to increase that percentage. Economic Development Significance The project is a showcase for the state of Wyoming, which has been actively pursuing data center projects. The state’s economic development team has attended major industry conferences for several years, highlighting Wyoming as a destination for data center development. “We are delighted that construction on the supercomputing center in Cheyenne is moving forward,” said Wyoming Governor Dave Freudenthal. “The partnership with NSF, UCAR, and NCAR allows Wyoming to develop its technology portfolio. 
The research capabilities it will allow UW will be of great benefit to the state.” In this video, Wyoming Business Council CEO Bob Jensen discusses the significance of the NCAR project in Cheyenne.
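To put the quoted 3 to 4 megawatt electrical load in perspective, the annual energy use and the 10 percent wind share can be estimated with a quick calculation. This assumes the machine draws the quoted power continuously, which is a simplification.

# Back-of-the-envelope check of the power figures quoted above.
# Assumes the 3-4 MW draw is continuous; the real load would vary.
hours_per_year = 24 * 365                       # 8,760 hours
for megawatts in (3, 4):
    annual_mwh = megawatts * hours_per_year      # MWh consumed per year
    wind_mwh = annual_mwh * 0.10                 # the 10 percent supplied by wind
    print(f"{megawatts} MW -> {annual_mwh:,} MWh/year, of which {wind_mwh:,.0f} MWh from wind")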
<urn:uuid:8a23e35e-1a11-4387-a62e-0cd980d258a5>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2010/06/16/new-supercomputer-will-track-climate-change/?utm-source=feedburner&utm-medium=feed&utm-campaign=Feed%3A+DataCenterKnowledge+(Data+Center+Knowledge)
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00061-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912746
746
2.875
3
7 Criteria For Enriching Digital Evidence
Context is the essential ingredient that is missing from many digital forensic investigations.
Digital forensic investigations are, for the most part, conducted in direct response to an incident. By taking a reactive approach like this, investigators are under great pressure to gather and process digital evidence before it has been modified or is no longer available. There are practical and realistic scenarios where a more proactive approach to gathering digital evidence can ease tension during forensic/incident response activities. (See How ‘Digital Forensic Readiness’ Reduces Business Risk.) But what is often overlooked in these situations is the need to supplement the data content with relevant context. Here are seven examples of criteria that can be used to enhance the relevance of digital evidence during a forensic investigation.
When it comes to analyzing digital evidence collected from different systems and/or devices, time synchronization is a major factor in establishing a chronology. Using Network Time Protocol (NTP) set to Greenwich Mean Time (GMT), with the time zones of each system configured locally, is the best practice for establishing consistent and verifiable timestamps to ensure digital evidence can be correlated, corroborated, and chronologically ordered during a forensic investigation.
On its own, digital evidence content presents a number of challenges because it lacks situational awareness. However, when combined with a supplemental layer of information, or “data about data,” investigators can gain a better understanding of digital evidence. That supplemental layer can take forms such as structural metadata (e.g., used to describe the arrangement of information) or guide metadata (e.g., used to assist with locating information). Because that metadata is also electronically stored information (ESI), the same digital evidence management requirements apply to ensure its authenticity and integrity are maintained.
Cause and Effect
A common challenge with any digital forensic investigation is to determine the cause of an event, because the effect can vary depending on the context of the event. The "Pareto Principle," also referred to as the "80/20 Rule," states that approximately 80 percent of all effects come from roughly 20 percent of the causes. Instead of trying to understand every cause-and-effect combination, referring back to the six business risk scenarios can reduce the scope of which cause-and-effect combinations need to be considered. By narrowing the scope down to the applicable risk scenarios, supplementary information can be identified and considered for collection.
Correlation and Association
The scope of a digital forensic investigation can be made up of several interconnected and distributed technologies where an event on one system can have a relationship to an event on other systems. Creating a linkage amongst the various technologies is critical when it comes to establishing a complete trail of evidence, so a more comprehensive picture of the incident can be compiled. Achieving a holistic view requires thinking in terms of gathering digital evidence in support of the entire trail of evidence, instead of as individual data sources that may or may not be useful during the investigation.
Corroboration and Redundancy
Generally, the goal of every forensic investigation is to use digital evidence as a means of providing credible answers to substantiate an event and/or incident.
However an investigation is initiated, establishing credible facts can be challenging, because individual pieces of evidence on their own may not provide the necessary context. By aggregating different data sources, the strength of digital evidence collected will improve because it can be vetted across multiple data sources. Over time, continuing to gather data from multiple sources will provide a sufficient amount of digital evidence that can minimize the need for forensic analysis of systems. Retention of ESI, regardless of whether it is preserved as digital evidence, has unique requirements for the length of time for which it has to be preserved; such as those defined by regulators or legal entities. Not only does preserving ESI support regulator or legal requirements, but it also has evidentiary value and might need to be recalled to support one of the six business risk scenarios. Careful planning must be done to determine which type of electronic storage medium will be used to ensure that the type of backup media used will not impact the authenticity and integrity of ESI. Although advancements have been made in the processing and analysis of digital evidence, there remains an underlying issue of how to effectively manage the ever increasing volumes of data that are gathered. Solutions such as an Enterprise Data Warehouse (EDW) can be easily adapted and scaled to support the growing volumes of ESI that need to be accessed in both real-time and near-real time. When implementing any type of digital evidence storage solution, it is important that the solution adheres to the best practices for maintaining the integrity and authenticity of digital evidence and not risk making the ESI inadmissible in a court of law. Determining the meaningfulness, usefulness, and relevance of digital evidence requires additional layers of supplemental information to enhance its contextual awareness. By ensuring the factors discussed in this article are included when proactively gathering digital evidence, the significance of digital evidence can be better realized during a digital forensic investigation. This article was sourced from the forthcoming book by Jason Sachowski, titled Implementing Digital Forensic Readiness: From Reactive To Proactive Process, available for pre-order at the Elsevier Store and other online retailers. Jason is an Information Security professional with over 10 years of experience. He is currently the Director of Security Forensics & Civil Investigations within the Scotiabank group. Throughout his career at Scotiabank, he has been responsible for digital investigations, ... View Full Bio
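Returning to the first criterion above, time synchronization, here is a minimal sketch of the underlying idea: timestamps gathered from different hosts are normalized to a single reference time (UTC/GMT) before events are correlated and ordered. The hosts, timestamps, and offsets below are fabricated for illustration.

# Minimal sketch of the time-synchronization criterion: normalize per-host
# timestamps to UTC, then build a single, verifiable chronology across sources.
from datetime import datetime, timedelta, timezone

events = [
    # (host, local timestamp, host's UTC offset in hours)
    ("web-server",  "2016-03-01 14:05:09", -5),   # US Eastern
    ("db-server",   "2016-03-01 19:04:58",  0),   # already GMT/UTC
    ("file-server", "2016-03-01 20:05:20",  1),   # Central Europe
]

normalized = []
for host, local_ts, offset_hours in events:
    local = datetime.strptime(local_ts, "%Y-%m-%d %H:%M:%S")
    utc = (local - timedelta(hours=offset_hours)).replace(tzinfo=timezone.utc)
    normalized.append((utc, host))

# One chronology across all sources, ready for correlation and corroboration
for utc, host in sorted(normalized):
    print(utc.isoformat(), host)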
<urn:uuid:53f4c415-4a7d-4781-ba7e-94fe5295479f>
CC-MAIN-2017-04
http://www.darkreading.com/attacks-breaches/7-criteria-for-enriching-digital-evidence/a/d-id/1323842?_mc=RSS_DR_EDT
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00455-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930305
1,114
2.578125
3
Updated: The Memory Spot stores about 250 times more data than an RFID chip and transmits data 20 times faster.
First there was VeriChip, the company that developed a human-implantable RFID chip the size of a grain of rice. And now there's Hewlett-Packard, with new "stick almost anywhere" chip technology that's even smaller than a grain of rice and carries a whole lot more data than any RFID chip. HP announced July 17 that its Memory Spot research team has developed a tiny wireless data chip that can hold, in comparison to other micro technologies, reams of information. The chip, in an experimental phase now, has a 10M-bps data transfer rate (as fast as a high-speed Internet connection) and can store information that's in audio, video, photo or document form. The idea for the chips is to embed them in or stick them to a sheet of paper, for example (no mention has been made of human-implantable chips), to add audio-, visual- and document-based data to everything from postcards to photographs. HP said there could at some point even be a booklet of self-adhesive "dots" available to the public. But HP's Memory Spots have a broader implication. The technology has similarities to RFID tags, the anticipated successor to bar codes, which are being used (or considered) to track everything from shampoo bottles and pharmaceutical drugs to postal packages and farm animals. According to HP officials who demonstrated the tags late last week to press and analysts, the Memory Spots can store about 250 times more data than RFID, can transmit that data about 20 times faster, and have some native security capabilities built in. Where RFID and Memory Spots are similar is that data is stored on a physical chip (or a chip that's embedded in a tag, in the case of RFID) that has an antenna which transmits information. The antenna on an RFID chip is external and big by comparison (about an inch in length), whereas the Memory Spot's antenna is embedded directly on the chip. Once the antenna on either an RFID or a Memory Spot chip is tapped electronically by a reader device, the stored information can be accessed and read through a reader interface. Memory Spot data transfer is similar to RFID. The two technologies differ, however, in several key areas: HP's Memory Spots have the capacity to store a lot more data (anywhere from 256K bits that can hold up to 15 seconds of video to 4M bits that can store up to 42 seconds) in working prototypes. RFID tags transmit a few hundred kilobits of data a second, according to HP officials, who said future versions of the Memory Spots could have more storage capacity. The Memory Spot also comes with a computing brain that enables it to encrypt data, whereas RFID, for the most part, relies on so-called Gen 2-enabled software installed at the tag and reader level to provide some security measures. At the same time, the Memory Spots require a reader to be just about on top of them to extract data (about a millimeter away), whereas an RFID chip can be read from several inches to many feet away, a fact that has security and privacy advocates in an uproar. RFID technology companies, however, aren't in any imminent danger yet. The Memory Spots are at least two years from hitting the market (HP said it has no product plans right now but is in touch with its business units and potential partners), whereas RFID is, in many cases, in production along the supply chain (Wal-Mart and the U.S. Department of Defense are the most cited examples) and will soon be in U.S.
households (the State Department will begin issuing RFID-chipped passports as early as next month). Then there's the price differential to consider. HP said a Memory Spot chip could be priced at about $1, depending on the application it's being used for and the volume it's being sold at. RFID chip manufacturers are getting closer to the $0.05 price tag, though they're not there yet. Item-level RFID tags cost more than expected. HP said it is not positioning the Memory Spot as a competitor to RFID. Rather, the emerging technology is being looked at as applicable in a number of business and consumer areas, from storing medical records on a hospital patient's wristband to providing the equivalent of audio and video Post-it notes to photographs. But HP is also tapping the pharmaceutical industry's attempts at fighting drug counterfeiting, a potentially huge area for RFID, and add-ons to ID cards and passports, another huge area for RFID, with governmental initiatives under way in both the United States and Europe that utilize RFID. Howard Taub, vice president and associate director of HP Labs, said during a July 17 press conference that despite the Memory Spot's similarities to RFID, HP is not targeting the technology sector as a competitor. "I've told my team, 'If someone can do it with RFID, we're not even looking at it as an application,'" said Taub, in Palo Alto, Calif. "This is high capacity, high bandwidth, where you need to store rich definition, media information."
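Using the figures quoted in the article, a quick calculation shows why the 10M-bps transfer rate matters for a chip holding this much data. The RFID rate below is an assumption (the article only says "a few hundred kilobits" per second), and the capacities are treated as round decimal numbers for simplicity.

# Rough comparison using the article's figures: a 10 Mbps Memory Spot read
# versus an assumed 300 kbps RFID read ("a few hundred kilobits" per second).
capacities_bits = {"256K-bit prototype": 256_000, "4M-bit prototype": 4_000_000}
memory_spot_bps = 10_000_000     # 10 Mbps, per HP
assumed_rfid_bps = 300_000       # assumption, purely for illustration

for name, bits in capacities_bits.items():
    print(f"{name}: {bits / memory_spot_bps:.2f} s over Memory Spot vs "
          f"{bits / assumed_rfid_bps:.1f} s at the assumed RFID rate")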
<urn:uuid:94933b78-38cc-48f3-8b7f-e3333c734662>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Data-Storage/HP-Develops-Tiny-Wireless-Data-Chip
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00089-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94776
1,089
2.59375
3
Remember that road surface being tested in the Netherlands that acted as a giant solar panel converting solar energy into electricity? Well, guess what? It actually worked. Six months into the test, the engineers say they've generated 3,000 kWh of energy from the 70-meter bike path test track. That's enough to run a one-person household for a year, and more than expected of the project, according to SolaRoad, the company behind the experiment. Data centers are heavy users of electricity, and SolaRoad's better-than-expected electricity generation will be interesting news for those designing data centers. SolaRoad's road surface acts as a huge photovoltaic panel. Practical applications thought of thus far include street lighting, traffic systems, and electric vehicles. Designers are keen on the idea of developing a system where electricity could be passed on to vehicles as they drive down the road, for example.
Glass and concrete construction
The project uses standard, off-the-shelf solar panels that the engineers have placed between layers of glass, silicon rubber, and concrete. Those concrete modules consist of 2.5-by-3.5-meter slabs capped with 10-millimeter thick tempered glass. Crystalline silicon solar panels are located between the glass and concrete. The researchers are delighted that the project worked, in part because of the technical challenges. The top layer had to let sunlight through, unlike normal blacktop. But it also had to be long-term skid-resistant for the bicycle tires, unlike what you'd get with shiny glass. It had to repel dirt in order to keep the sun shining in, but could not break even if a service truck drove on it. Glass is obviously dangerous and could injure someone if it broke. The skid resistance was addressed with a coating for the glass. In a 2,543-comment Reddit debate over the news of the successful test, Reddit user Imposterpill sarcastically comments: "I have an idea…why don't we put solar cells on our roofs?" Good point. Why roads, one might ask? What's wrong with roofs? Well, the engineers have an answer for that comedian: Total electricity consumption in the Netherlands is around 110,000 GWh, and that keeps going up. That number, taking into account the small size of the country and the limited number of roofs available, means that even if all suitable roofs were equipped with solar panels, they would only supply a quarter of Dutch power consumption. The same limits might apply in a data center. One day data center designers may want to look at surrounding infrastructure for panel placement. In other words, the roadway. Surprisingly, the wise-crackers at Reddit haven't posed the question - what happens when there's a traffic jam? The cars on the road will surely block the sunlight and reduce yield. Well, the engineers do acknowledge that as a potential problem, and they say that they are looking into it as part of the pilot study. Another Reddit user suggests placing the solar panels over the road instead of on it. However, in true Reddit-user logic, BloodBride disagrees and says: "Solar panels OVER the road increase the amount of drunk people throwing traffic cones up there. Traffic cones ON a road invariably just get stolen, worn as hats and taken home." And that's problem solving. This article is published as part of the IDG Contributor Network. Want to Join?
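The article's figures allow a rough yield estimate. The path width is not stated, so the 2.5-meter slab dimension is assumed below, and the second half of the year is assumed to produce about as much as the first, which is only an approximation.

# Rough yield estimate from the figures in the article.
energy_kwh = 3000          # generated in roughly six months
length_m = 70
assumed_width_m = 2.5      # assumption based on the 2.5 x 3.5 m slab size
area_m2 = length_m * assumed_width_m

per_m2_half_year = energy_kwh / area_m2
per_m2_year = per_m2_half_year * 2                 # naive annualization
avg_watts_per_m2 = per_m2_year * 1000 / 8760       # average continuous output

print(f"~{per_m2_year:.0f} kWh per m2 per year, i.e. about {avg_watts_per_m2:.1f} W/m2 on average")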
<urn:uuid:6f36d779-05f3-4789-8aeb-32c7767e5246>
CC-MAIN-2017-04
http://www.networkworld.com/article/2921244/data-center/solar-power-road-surface-actually-works.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00575-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959529
716
3.234375
3
The Berkeley Lab website recently published a short “five-question” interview with David Brown, the director of the Computational Research Division at Lawrence Berkeley National Laboratory (Berkeley Lab) since August 2011. Mathematics, according to Brown, is the foundation of modern computational science. When Brown got a post-doc position at Los Alamos National Laboratory after earning his PhD in applied mathematics from California Institute of Technology in 1982, he thought he would only stay a couple of years, before accepting a teaching position elsewhere. Best laid plans, and all that because that 2-year plan evolved into a 31-year tenure with US Department of Energy (DOE) national laboratories, including 14 years at Los Alamos National Laboratory and 13 years at Lawrence Livermore National Laboratory. Brown explains that when he made the move to Lawrence Livermore National Lab, he was able to “apply his knowledge of math and science to the development and oversight of new research opportunities for scientists and mathematicians at that lab and throughout the DOE.” Brown’s passion for the field made him an ideal candidate to lead the extensive research program in applied mathematics at Berkeley Lab. Brown refers to mathematics as the language of science, and says this language is what allows science to be put on computers. From there, it’s not a huge jump to see why the DOE invests in math research. “New and better mathematical theories, models and algorithms…allow us to model and analyze physical and engineered systems that are important to DOE’s mission,” notes Brown. “Often math is used to make a very difficult problem tractable on computers.” Brown cites a notable example from 30 years ago. Mathematician James Sethian’s work with asymptotic methods set the stage for breakthroughs in combustion simulation techniques. That discovery undergirds modern supercomputing codes used in everything from combustion to astrophysics to atmospheric flow. Asked how math applies to supercomputers, Brown responds: The scientific performance of big applications on supercomputers is as much a result of better mathematical models and algorithms as it is of increases in computer performance. In fact, the increases in performance of many scientific applications resulting from these better models and algorithms has often exceeded the performance increases due to Moore’s Law . And Moore’s Law, which predicts of doubling of performance every 18 months, offers a pretty impressive increase on its own. These improvements in performance help scientists make much more efficient use of supercomputers and study problems in greater detail. An applied mathematician by training, Brown is especially interested in the development and analysis of algorithms for solving partial differential equations (PDEs). In 2001, the Overture project, which Brown led, was selected as one of the 100 “most important discoveries in the past 25 years” by the DOE Office of Science.
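As a concrete, if toy, example of the kind of algorithm Brown is describing, the sketch below uses a textbook explicit finite-difference scheme to solve the one-dimensional heat equation. It is meant only to illustrate what "solving a PDE on a computer" looks like; it is not code from Overture or any DOE project.

# Explicit finite-difference solver for the 1-D heat equation u_t = alpha * u_xx.
import numpy as np

alpha, L, nx, nt = 1.0, 1.0, 51, 2000
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # respects the stability limit dt <= dx^2 / (2 * alpha)

u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial condition: a spike of heat in the middle

for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])  # interior update
    u[0] = u[-1] = 0.0            # fixed (Dirichlet) boundaries

print("peak temperature after diffusion:", u.max())

Better algorithms, for example implicit or multigrid methods, remove the tiny time-step restriction visible in dt here; gains of that kind are exactly what can outpace hardware improvements.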
<urn:uuid:a78a09e9-1b3d-4751-9b36-a75178d41729>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/09/17/the_math-supercomputing_connection/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00447-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943794
586
2.71875
3
The Wi-Fi industry has ambitions to make Wi-Fi the dominant wireless technology for the Internet of Things (IoT). While its current incarnation lacks two key features for this emerging market, Wi-Fi would be attractive option for a number of reasons. It is more IP-friendly than any of the alternative wireless technologies, supports higher rates, and the chip industry backs it with high volumes and low prices. If new versions of Wi-Fi for IoT maintain commonality with current MAC and PHY designs and familiar interfaces allowing easy integration into devices, they should be attractive to IoT developers. We can somewhat narrowly define IoT as head-less, often battery-powered devices that need building-wide connectivity to report data and receive control from the “cloud.” The two requirements where Wi-Fi falls short in IoT are range and power consumption. When compared with the alternative IoT wireless technologies, Zigbee, Z-wave, and Bluetooth Low Energy (BLE), a battery-powered Wi-Fi sensor lasts months, not years; attempts to make very-low-power chips using 802.11b/a/g/n/ac have had marginal commercial success. Also, a single Wi-Fi access point cannot reliably cover all locations in all modern houses and commercial buildings due to multiple floors, thick walls, basements, and the other Wi-Fi-unfriendly features. When an installer cannot rely on Wi-Fi coverage throughout a building or business, we cannot claim a full solution for IoT. Four initiatives seek to remedy these shortcomings. All are progressing through the standards process, and some are years from shipping. But they are worth following because the Wi-Fi industry has shown success in reaching technical agreement on standards, then building and shipping products based on new certifications. The first Wi-Fi initiative is “extended-range” 802.11ah. This improves range by moving to a lower frequency, the 900 MHz band, and narrow 1 MHz RF channels. Narrower channels and protocol changes reduce chip power requirements. It should be possible to get a range of twice 802.11n (at 2.4 GHz): a reliable 40+ meters range at 150 kbps (single stream) or, with a more complex dual-stream chip, multi-Mbps at 80+ meters. The 900 MHz band, while allowed for license-exempt use in the U.S., is not harmonized globally, but the better propagation characteristics (over 2.4 GHz or 5 GHz) are important for increased range. This looks promising, but in addition to spectrum, the timescale is a potential risk. While 802.11ah is already an IEEE standard, the Wi-Fi Alliance will take a while to conduct interoperability tests and develop an industry certification – it’s a frustratingly deliberate process, but we have not found a better way – and by the time the technology is ready to ship, there’s a risk the market may have moved on. But by most measures, this is the Wi-Fi industry’s flagship entry for mainstream IoT applications, with the range of Zigbee (which uses multi-hop mesh to compensate for shorter hops) and battery life surpassing BLE. Meanwhile, another initiative, the “Connected Home,” proposes a stop-gap approach to bridge 802.11ah’s timescale and backwards-compatibility challenges. An intermediate proxy server in the home would communicate with the current Wi-Fi access point, answering on behalf of end-devices that in turn connect to it, while allowing those devices to sleep for much longer intervals. This approach offers a short-term path to lower-power consumption. 
Connected Home doesn’t really extend range, but should make Wi-Fi IoT devices in the home more viable while working with existing access points. A more recent idea presented to the IEEE proposes extending Wi-Fi to provide equivalent functionality to Zigbee indoors and over short distances outdoors. Designed for the 2.4 GHz band, it is closer to existing Wi-Fi than 802.11ah. Range improvements come from narrower RF channels and lower rates, rather than lower frequencies. It should be implemented with a simple low-rate addition to existing chip designs, much easier for the chip and module designers to implement and hopefully included in all new 2.4 GHz Wi-Fi chips in the same way that higher rates were added in the past. Meanwhile, we must not forget the UHF bands, where propagation is even more favorable than at 900 MHz. “White space” rests on a spectrum-sharing arrangement, where Wi-Fi access points re-designed for the 450 to 700 MHz TV bands use sensing and a geographical database of incumbents to discover and occupy locally unused spectrum. IoT is not a primary target for this technology, but it’s well-suited to wide-area sensor and control networks. We sometimes forget that Wi-Fi’s success is largely due to a single application, Internet connectivity for mobile phones, tablets, and PCs. This has provided such a rich market for Wi-Fi that there has been little need to branch out. But IoT is one of several new opportunities for short- and medium-range wireless technologies and the industry is – at last – determined to offer a viable alternative to Zigbee and BLE. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:d2aa3961-c8ef-4c84-943b-c1da779f4da9>
CC-MAIN-2017-04
http://www.networkworld.com/article/3016919/wi-fi/how-the-wi-fi-industry-is-adapting-to-keep-up-with-the-iot.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00171-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933023
1,113
2.84375
3
Data is all around us. Big data is used every day by private and government entities to measure patterns and predict trends and in day-to-day life. Similarly, you use the “small data” that you gather from your interactions and observations to make similar predictions and navigate the world. The ways big data analytics benefit retailers are many. Retailers can use collected data to analyze how and when customers make purchases and which items they buy together, which allows them to tailor their advertising methods and store layouts to better fit with these trends. But not all big data uses are this obvious. 1. Big data analytics play a role in writing movie scripts and casting films. By analyzing data collected from social media, file downloads, and streaming services like Netflix, Hollywood can determine the kinds of stories viewers like best and the right actors for them. If you've felt like recent movies and shows have felt more formulaic than older ones, you're not crazy – studios use the data they gather to make safer investments in the content they create. 2. Another use of data analytics that companies have latched onto in recent years is the trend of gamification. Gamification is the incorporation of video game elements like point scoring and goal journals in non-game applications. It's used to guide user behavior and reward desirable actions. One of the oldest examples of gamification is frequent flyer programs, which airlines use to reward their regularly-returning customers. But to build a gamified app or loyalty program, companies need to know users' goals and typical behaviors. That's where big data analytics come into play. 3. When data is printed in 3D, it becomes tangible and for many, easier to understand. Using big data to drive road and infrastructure development is nothing new, but printing out this data in three-dimensional graphs is giving city planners a new way to conceptualize what's going on in their cities and how to address these issues. 4. Big data analytics are also considered by some to be the next agricultural revolution because today, devices like those manufactured by The Climate Corporation collect data from equipped farm vehicles and tools, which farmers can later use to make decisions that will increase productivity on the farm. For example, a farmer can use this data to determine which fields need more fertilizer. These are just a few of the many innovative ways big data is being used currently and will continue to be used in the future. Big data is a unique medium that can give us insight into our lives and cultures in a way that others simply cannot. Think about the ways big data analytics can benefit your company. Think not about the information you currently have, but the information you don't have, to determine where to focus your research and development efforts. Then, find a way to fill that information gap.
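The retail example above, finding which items are bought together, reduces to counting co-occurring pairs across transactions. Here is a minimal sketch, with made-up baskets rather than real sales data.

# Count which item pairs appear together most often across transactions.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"coffee", "butter", "bread"},
    {"coffee", "milk"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs hint at items to co-promote or shelve together.
for pair, count in pair_counts.most_common(3):
    print(pair, count)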
<urn:uuid:f0c115f6-04a7-4f38-a947-5f3f85a4dc79>
CC-MAIN-2017-04
https://www.digitalrealty.com/blog/4-unexpected-uses-for-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00291-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947605
565
3.09375
3
Most of the new viruses we keep seeing nowadays are email worms, with the occasional P2P, file-sharing or network-exploit-based worms thrown in. So, it's weird finding a virus which replicates by using floppy disks and CD-ROMs. This is exactly how the Bacros virus replicates. Bacros was already found a month ago but we've started receiving more questions on it lately. This virus will copy itself to all floppies it sees. It also attempts to burn itself to CD-R discs (complete with an AUTORUN file, which will run the virus when the CD-R is inserted into another machine). In addition to spreading on physical media, the virus also works as a companion virus, attacking TXT files. For example, when the virus finds a file called README.TXT, it will make that file hidden and drop a new file called README.EXE in the same directory. The icon for this file makes it look like a normal text file, and when clicked, it will launch the original text file to hide its activities. Bacros is also unusual because it's destructive. We don't see many directly destructive viruses nowadays; most viruses just try to silently take over your machine instead. Bacros overwrites GIF image files with an image that says "KUOLE JEHOVA" (the message is in Finnish, as this virus was apparently written in Finland). And on Christmas Day, it will try to delete all files from the system.
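For defenders, the companion-virus behavior described above suggests a simple triage heuristic: look for directories where an .EXE file shares its name with a .TXT file. The sketch below implements only that name-pairing check (it does not inspect the Windows hidden attribute or the file contents) and is no substitute for a proper antivirus engine.

# Defensive sketch only: flag possible companion pairs like README.TXT + README.EXE.
import os

def suspicious_companions(root: str):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        names = {f.lower() for f in files}
        for f in files:
            base, ext = os.path.splitext(f)
            if ext.lower() == ".exe" and f"{base.lower()}.txt" in names:
                hits.append(os.path.join(dirpath, f))
    return hits

if __name__ == "__main__":
    for path in suspicious_companions("."):
        print("possible companion pair:", path)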
<urn:uuid:37e3dfeb-7333-47bf-8907-27401b9a72b4>
CC-MAIN-2017-04
https://www.f-secure.com/weblog/archives/00000314.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9633
312
2.53125
3
Information technology, especially personal computers, smartphones and handhelds, have a far greater reputation for causing mental illness than relieving it. Not the serious kind that gets you confined to locked treatment facilities, ruins your life, your career and your hope of ever being able to cope with your own life. Not even the kind that makes getting on a reality TV series seem like a good idea. Computers usually just amp up the stress, shrink the window of time available to do anything and stop working correctly at the exact moment you can't afford any malfunctions. In those characteristics they're exactly like every other inanimate object in the world – all of which resent the presence of humans enough that many will do what they can to thwart, punish or frustrate us just for petty revenge. The more tiny bits of the inanimate are in a single object, the more precocious evil that object contains. Computers make you nuts, but not really crazy Computers are made up of a LOT of tiny parts, many of which we've made more intelligent than they already were. That's why experienced IT people seem superstitious about the systems they work on, why they have little rules like 'if it's working, don't touch it,' even if 'It' is due for an upgrade. They know from bitter experience that any attempt to improve something that already works will, inevitably, break what currently works without delivering the benefit of the new generation of technology – at least, not without a lot of work, a lot of swearing and at least one swift kick at the casing to physically intimidate the new box into cooperating. So reading that researchers at the University of Bergen are developing an app for smartphones or tablets they hope will help schizophrenics tame the real-sounding voices in their heads and function more effectively with non-schizophrenics, I was skeptical. The voices schizophrenics hear, at least according to Univ. of Bergen professor Kenneth Hugdahl, aren't just impulse whispers, they're tangible voices that sound real and are difficult to ignore. It's often difficult to even know whether the voices are coming from outside the victim's head or which are not real. One app might help schizophrenics seem more sane Brain scans show a drop in brain activity while a victim is hearing voices, but more activity in the sections of the brain responsible for receiving and processing language, even if there is no one speaking or any reason beyond hallucination to hear voices. "When neurons become activated by inner voices it inhibits perception of outside speech. The neurons become ‘preoccupied’ and can’t ‘process’ voices from the outside…this may explain why schizophrenic patients close themselves off so completely and lose touch with the outside world when experiencing hallucinations." – Kenneth Hugdahl. Non-schizophrenics hear voices, too – scraps of music stuck in our heads, random noises interpreted as human voices, the mistaken conviction someone has called your name. The frontal lobes of schizophrenics don't function quite right, reducing their ability to tune out or ignore the phenomenon or realize that the "voices" are random noises, not orders from an unseen speaker. Some researchers have found biofeedback methods to be effective in teaching schizophrenics to identify real voices from unreal, allowing them to ignore the random stimulus in order to focus on voices of people who are actually present or interactions with people that are not hallucinations, Hugdahl wrote. 
Systems that rely on artificial stimulation of specific muscles, or on EEGs that show different brain patterns for hallucinations than for real sounds, are effective but ungainly. Hugdahl's team is working on a mobile application that uses biofeedback to play a different voice in each ear simultaneously so the patient can practice distinguishing between the two and paying attention to only one. The technique isn't proven but does show promise, at least in helping schizophrenics distinguish between real and imagined stimuli and pay attention to the correct one. "The voices are still there, but the test subjects feel that they have control over the voices instead of the other way around. The patient feels it is a breakthrough since it means he can actively shift his focus from the inner voices over to the sounds coming from the outside," Hugdahl wrote. The good news is that schizophrenics with experience trying to pay attention to the real world while false voices try to distract them shouldn't find any radically new level of frustration dealing with voices on a mobile computer that may speak compellingly at a million miles per hour, but can always be turned off when the user is looking for a little peace and quiet. Now they may teach other voices to do the same thing.
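The dichotic-listening idea at the heart of the app, a different sound delivered to each ear, is easy to prototype. The sketch below writes a short stereo WAV file with a different pure tone per channel; a real training tool such as the Bergen group's would use recorded voices and biofeedback, so this is only an illustration of the audio mechanics.

# Write a stereo WAV file with a different tone in each ear (dichotic stimulus).
import math
import struct
import wave

rate, seconds = 44100, 3
left_hz, right_hz = 440.0, 300.0   # a different "voice" (here, a tone) per ear

with wave.open("dichotic.wav", "w") as w:
    w.setnchannels(2)              # stereo: channel 0 = left ear, channel 1 = right ear
    w.setsampwidth(2)              # 16-bit samples
    w.setframerate(rate)
    for n in range(rate * seconds):
        t = n / rate
        left = int(12000 * math.sin(2 * math.pi * left_hz * t))
        right = int(12000 * math.sin(2 * math.pi * right_hz * t))
        w.writeframes(struct.pack("<hh", left, right))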
<urn:uuid:cc22de91-0e05-4869-8d70-578dd0a23f3f>
CC-MAIN-2017-04
http://www.itworld.com/article/2732160/mobile/mobile-app-may-help-schizophrenics-tame-the-wild-voices-in-their-heads.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00345-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954932
1,012
2.75
3
When a computer system is able to reduce tax expenditures by millions of dollars and generate as much as $27 in savings for every dollar spent on expenses, you would think state officials would be scrambling to use it. But that's not the case with the Public Assistance Reporting Information System, better known as PARIS, a four-year-old system designed to reduce improper payments in public assistance programs. So far, only 16 states are using PARIS to identify individuals or families who may be receiving benefit payments from TANF (Temporary Assistance for Needy Families), Medicaid or Food Stamps in more than one state, according to a report by the General Accounting Office (GAO). In February 2001, PARIS identified almost 33,000 instances in which improper payments were potentially made to individuals who appeared to reside in more than one state. Just under half of the potential improper payments involved Medicaid benefits; the rest involved some combination of TANF, Medicaid and Food Stamps. So far, four states and the District of Columbia have collected data on the benefits of the interstate matching system and have documented $16 million in savings. According to the GAO, their analysis suggests PARIS could help other states save program funds by identifying and preventing future improper payments. Among the 34 states not participating when the GAO released its report in September were California, Texas, Michigan and Ohio, all of which account for a significant portion of welfare expenditures. Lack of Information Each year, the United States spends approximately $230 billion on public assistance, Medicaid and Food Stamps. Millions are lost annually when individuals and families receive duplicate benefit payments from more than one state. Part of the problem has been the lack of information sharing between federal agencies that run the welfare programs and states that administer them. In 1997, the Department of Heath and Human Services started PARIS so states could share eligibility information and identify improper payment benefits. PARIS works by comparing states' benefit recipient lists with one another using individual social security numbers, as well as name and address information. Computers at the Defense Manpower Data Center search for matches and any hits are forwarded to the appropriate state, where staff can take steps to verify the information and decide whether to cut off benefits. Few states have taken the time to compare the program's costs to the benefits, but studies of its benefits clearly indicate that computer matching saves tax dollars. For example, Pennsylvania estimated that PARIS uncovered more than $2.8 million in savings in its TANF, Medicaid and Food Stamp programs. Maryland said that it saved $7.8 million in the Medicaid program during the first year PARIS was in operation. Kansas estimated that PARIS produced a savings-to-cost ratio of about 27 to 1. According to the GAO, if states used data from all three public assistance programs in their matching activities (not all do), the net savings could outweigh the costs of PARIS. On average, the savings-to-cost ratio would be 5 to 1. Based on data provided by the three states, approximately 20 percent of match hits end up valid. In addition to the savings generated by participation in PARIS, states also gain from the program's internal controls that help ensure public assistance payments are only made to or on behalf of people who are eligible for them. 
Even with the success so far, PARIS has been limited in its effectiveness. Most notably, only one-third of the states participate, leaving a large portion of the public assistance population not covered by the matching system. Second, PARIS has been hampered by coordination and communication problems among its participants. Third, some participating states give PARIS low priority, resulting in many duplicate payments left unresolved. Finally, the system suffers from the fact that it can't prevent duplicate payments from occurring, but can only identify and stop those that have already been made.
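At its core, the PARIS match described above is a join of state recipient rolls on Social Security number. Here is a simplified illustration with fabricated records; a real match also uses name and address data, and every hit still has to be verified by caseworkers before benefits are touched.

# Simplified interstate match: report SSNs that appear on two states' recipient lists.
state_a = {
    "123-45-6789": {"name": "J. Smith", "program": "Medicaid"},
    "987-65-4321": {"name": "A. Jones", "program": "TANF"},
}
state_b = {
    "123-45-6789": {"name": "J. Smith", "program": "Food Stamps"},
    "555-11-2222": {"name": "R. Lee",   "program": "Medicaid"},
}

for ssn in state_a.keys() & state_b.keys():
    print(f"potential duplicate: SSN {ssn}: "
          f"{state_a[ssn]['program']} in state A and {state_b[ssn]['program']} in state B")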
There long has been a debate about "near-death experiences," in which people whose hearts momentarily stop beating report seeing brilliant lights, long tunnels, dead relatives and other visions, report floating above their own bodies, etc. The people who have had and believe in those experiences insist they're quite real, while skeptics argue that they either are fabricated or meaningless hallucinations. Now science is weighing in with evidence that brain activity continues after cardiac arrest, and actually may increase immediately after a heart stops beating. In an experiment with nine lab rats, researchers at the University of Michigan Health System determined that "shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception," the university said in a statement: Researchers analyzed the recordings of brain activity called electroencephalograms (EEGs) from nine anesthetized rats undergoing experimentally induced cardiac arrest.Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain. About one in five heart attack survivors report near-death experiences, which they invariably describe as intense and surreal. “We reasoned that if near-death experience stems from brain activity, neural correlates of consciousness should be identifiable in humans or animals even after the cessation of cerebral blood flow,” said lead study author Dr. Jimo Borjigin, associate professor of molecular and integrative physiology and associate professor of neurology at the University of Michigan Medical School. “We were surprised by the high levels of [brain] activity,” said study senior author Dr. George Mashour, assistant professor of anesthesiology and neurosurgery at the U-M. “In fact, at near-death, many known electrical signatures of consciousness exceeded levels found in the waking state, suggesting that the brain is capable of well-organized electrical activity during the early stage of clinical death.” Results of the research were published Monday in the Proceedings of the National Academy of Sciences. Now read this:
For companies like Google, there are potential hazards to free cooling Friday, Feb 21st 2014 Data centers are always looking for ways to cut costs and have the best data room cooling services available. For massive companies like Google, the solution boils down to channeling cooling sources that already exist - such as sea water and air - to cool their data centers. But in some situations, the condition of outside air makes this practice difficult to sustain. Google: A free cooling powerhouse Perhaps no company is more well-versed in taking advantage of the natural resources around them than Google, a company whose network of data centers spans the globe. In Hamina, Finland, the company took an old paper mill and transformed it into a data powerhouse, replete with color-coordinated pipeage and even a sauna for employees. But the center's crowning feature is its use of seawater as a data room cooling solution. According to Google, that the data center sat right next to a Gulf presented a unique opportunity to turn natural elements into cooling solutions. Google has implemented a system wherein water from the Gulf is pumped through tunnels into the center, where it is then put to use minimizing the heat generated by the computers. "The takeaway is, don't look at what has been done as the only way it can be done," said Joe Kava, a director of datacenter construction for Google. That is why it uses some form of free cooling - including tapping into water and outside air - in all its data centers, according to Google engineer Chris Malone. "It yields tremendous efficiency gains," Malone said. In Dublin, Google's data center relies on air cooling from Ireland's natural climate. A perusal of all its other data centers reveals that Google's commitment to harnessing natural elements is unshakeable. The benefits are not only monetary, but environmental. Does air pollution present concerns for outside cooling solutions? Google clearly has lots of experience tapping into the natural elements - but does the presence of contaminants in the air threaten such a practice? According to industry expert Nigel Laws, it might - depending on the location of the data center. Laws wrote that whenever a data center decides to use outside air as a cooling option, it risks bringing in contaminants from that air, which could prove detrimental to the data center. Thus, companies looking to channel outside air into their center need to keep several factors in mind when considering where to build their data center: - Stay away from major roads: Cars leak exhaust fumes, and if a data center is too close to these emissions it risks having polluted air potentially find its way into the center. Diesel fumes and data centers were never meant to mix. Placing a data center at a safe distance from such chemicals ensures they never will. - Make sure there aren't treatment plants in the area: As a general rule, data centers don't want neighbors, unless those neighbors are trees, grass or large bodies of water. The presence of a treatment or sewage plant near a data center significantly heightens the risk of chemical contamination, which threatens to find its way into the IT equipment onsite and cause damages, or worse, render it unusable. - Avoid parkland: With all the talk of the hazards of industrial elements, it may seem contradictory to advise against building a data center near parkland as well, but parkland comes with its own set of risks for data centers, including insects and high levels of humidity. 
- Location, location, location: If a data center is looking to remain cool year-round, then setting up shop in Death Valley probably is not the way to go. The reason Google has been so successful with its natural cooling solutions is that it strategically places them in regions that provide consistent and dependable outside cooling. Nevertheless, outside cooling can prove beneficial both for the center and the environment around it - so long as it is used in conjunction with temperature monitoring equipment. When Google's Finnish center is done using the Gulf waters, that water gets purified, set back to its proper temperature, and returned to its source. This kind of practice isn't just environmentally friendly - it also leads to savings in Power Usage Effectiveness of up to 30 percent, according to Laws.
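Laws' 30 percent figure refers to Power Usage Effectiveness (PUE), the ratio of total facility power to the power drawn by the IT equipment itself. The short calculation below uses made-up load figures purely for illustration (they are not Google's numbers) to show how free cooling moves that ratio.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by IT power.
    1.0 is the theoretical ideal; conventionally chilled rooms have
    historically run much higher."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0                                 # kW drawn by servers and network gear
chilled = pue(it_load + 900.0, it_load)          # mechanical chillers: PUE 1.90
free_cooled = pue(it_load + 330.0, it_load)      # free-cooling overhead: PUE 1.33
print(f"PUE with chillers: {chilled:.2f}, with free cooling: {free_cooled:.2f}")
print(f"Reduction: {(chilled - free_cooled) / chilled:.0%}")   # about 30 percent
```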
Oh man, when you start to write about BGP, it is probably time to seriously ask yourself how you got here. That is probably the moment you realize there is a network geek sitting somewhere inside you. At least that is what happened to me when I finished writing this huge post. Don't be scared; it's fun to know about the things below. Every local network is managed by its own network administrator. If the network becomes big enough and there are more than a few sub-segments inside it, there will probably be some kind of routing protocol running inside: an IGP, or Interior Gateway Protocol, most likely OSPF, since it is vendor independent. When we want to connect our network to other networks across the world, we are really connecting it to the Internet. The Internet is the network that connects most of the world's networks, and in that way it became the biggest internetworking system in the world. To get that huge network to function and to get our LANs to act jointly, there must be a routing protocol that enables it: BGP, the Border Gateway Protocol. Every individual network has its own policies that make it behave the way its administrator wants. When connecting a network to the Internet, all those policies need to be tied together with BGP in order to influence both outside traffic entering the local network and traffic initiated from the local network going out somewhere on the Internet. This is done using more than a few different BGP attributes. Those attributes are advertised along with specific prefixes, and sometimes they are not only forwarded but also modified along the way; the community attribute is one example.
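The community attribute mentioned above is a good example of how simple these policy tags really are. A standard BGP community is a 32-bit value that operators read and write in the ASN:value form, with the autonomous system number in the upper 16 bits. The Python sketch below shows that encoding; the ASN and value are arbitrary examples, not a recommendation for any real policy.

```python
def encode_community(asn: int, value: int) -> int:
    """Pack the conventional ASN:value notation of a standard BGP community
    into the single 32-bit integer carried in the path attribute."""
    if not (0 <= asn <= 0xFFFF and 0 <= value <= 0xFFFF):
        raise ValueError("standard communities use two 16-bit fields")
    return (asn << 16) | value

def decode_community(raw: int) -> str:
    """Render the 32-bit attribute back in human-readable ASN:value form."""
    return f"{raw >> 16}:{raw & 0xFFFF}"

# A hypothetical tag meaning "learned from a customer" in AS 65010's policy.
raw = encode_community(65010, 100)
print(raw, decode_community(raw))        # 4260495460 65010:100
```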
Whether technology is used to manage welfare caseloads for a state or finances for a midsize city government, the result is often the same: an intricate computer system built from the ground up as a single unit. Invariably, the software is complex, custom-designed and hard to modify, even if the change involves just one business function. The result, say government officials, are systems that cost too much to build and are at a high risk of failing. It doesn't have to be that way. For decades, engineers have used the "Black Box" approach for developing everything from cars and PCs to stereos. Rather than try and build an entire project yourself, engineers suggest breaking it down into discrete pieces, which are built separately, but based on a common architecture. Each piece is both independent of and integral to the entire system. "The premise of Black Box," said David Gray, chief information officer for California's Secretary of State, "is that it's very inefficient for one organization to build everything they need, whether it's a product or a computer system." Gray, an engineer prior to working in the public sector, pointed out that when Sony builds a stereo, the company doesn't try to design and build the tuner, speakers, CD player and amplifier itself. "They create a common architecture for the entire system, specialize in one area and then purchase component parts from firms that specialize in CD players, tuners and so on," he explained. "Sony then integrates all those together to make a product." Gray, along with a small but growing number of IT executives, believes that same concept can be used with large-scale computer system development in the public sector. "The way it applies to information technology," outlined Gray, "is that the IT organization in an agency becomes the integrator, not the developer, of products necessary for a solution." The difficulty with Black Box is designing an underlying architecture that's both scalable and open, covering everything from the network and operating system to database management and memory specifications. Once the architecture is done, separate groups can be assigned to build components without necessarily knowing what is inside the other components. The advantage of this process is that, if an agency's business model changes over time, the IT department can retire and replace the system component affected by the change without redoing the entire system. "Black Box works because it reduces risk and cost by breaking large projects into little pieces," said Gray. Black Box in Practice The Secretary of State's Office used the Black Box approach for two key applications: one involving the certification and management of notary publics, the other for a document warehouse. In both cases, the agency created the architecture and then had a combination of in-house and private-sector developers build modules that interface with each other but are also stand-alone applications. With more than 125,000 notaries in the state, the job of managing this licensed position requires significant automation. Unfortunately, the Secretary of State's Office only had a 30-year-old mainframe -- which costs $25,000 per month to maintain -- with limited functionality to do the job. According to Gray, the office decided to increase the efficiency of its Notary Public Automation System by automating a number of notary business functions, including commissions and investigations. 
To design any new system using the Black Box approach, Gray begins by holding numerous meetings with his staff from the office's Information Technology Division to determine whether a system is a suitable candidate for Black Box development. Then the proposed system is evaluated by IT staff for ways it can be broken down into stand-alone components. Ultimately, design of the notary system was divided along functional areas: data, commissions, investigations and seal Actual design and development used two methodologies. Joint application development (JAD) calls for the system's users to work with the software developers in a series of white-board sessions to work out the detailed specifications, the business rules for the systems, and so on. With JAD, Gray's staff knew exactly what the system was supposed to do before any actual code was written. The software developers then used rapid application development (RAD) to build the application. In a typical RAD session, a developer and a user actually build the application together, using an application development tool, such as Powerbuilder or SQL Windows. The user gets to try things out as they would work on a computer, and the developer gets continuous feedback from the user on how the application should work. "The combination of the two development methodologies allowed the Black Box to come into play very well," said Gray. "Once we began RAD sessions, we treated each functional area of the notary system as a separate process that had to stand alone but also had to share a common architecture, such as database and network." Once all the functional pieces were built, they were tested and integrated. "Now we can retire components of the notary system at will," added Gray, "and replace them as long as we don't change the specifications in a way that keeps them from interacting with the other modules." By using the Black Box development, the notary system is very scalable, Gray said, meaning the different modules can be changed without affecting the entire system. "For example, if the state Legislature were to pass new legislation requiring us to store an actual image of a notary commission document, we would just replace the module that supports commissioning with one that adds imaging to the system," Gray pointed out. "We wouldn't have to change any of the other modules. Everything would work the same, yet we would have the ability to scale the commission module up to a higher level of functionality." The entire application was built in six months, at a cost of just over $2 million. The Secretary of State's Office is also using the Black Box approach for its document warehouse. Document filings relating to corporations, lobbyists and campaign expenditures are a huge business for the Secretary of State, involving millions of documents annually. Right now, most end up on microfilm, which are hard to access by the public and involve lots of labor to retrieve. The agency's goal is to place the documents into an electronic warehouse that could be accessed by the public via the Internet. Since storage technology continues to evolve, Gray doesn't want the agency to lock itself into one particular type of hardware and software, which may become obsolete in a few years. "There are a number of promising new technologies out there," explained Gray, "Everything from blue laser to DVD. We don't want to marry a technology that in two to three years down the road doesn't make sense any more." 
Gray plans to use Black Box to build the warehouse's storage repository, which will have a clearly defined architecture and set of standards for input and output. Once again, scalability will be the key factor. "Whatever I design has to be able to increase the system's throughput over time," he said. Crack open any engineering textbook and you'll find plenty of references to Black Box. But walk into just about any IT shop in the public or private sector, and when you mention the term, you're bound to get some blank stares. Gray believes the lack of knowledge about Black Box can be partly attributed to the fact that it's not taught in computer-science classes. Despite ignorance about the concept, Gray thinks that some Black Box practices are used in IT departments, but that staff aren't aware of it. Others are aware of its benefits, but don't know how to implement it. To get started, Gray has two recommendations: "Make sure you have a good IT architect on staff, somebody who knows how to design a scalable system, then take the time to look at how other organizations have deployed an open, scalable architecture." Once you stick to a set of architectural standards, then everything works together, Gray summed up. "The advantages of Black Box are too big to ignore. It reduces your risk while providing greater flexibility."
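The mechanics Gray describes translate naturally into code: callers depend on a published interface, so the module behind it can be retired or replaced without touching anything else. The sketch below is purely illustrative; the class and method names are invented, and the real notary system was built with tools such as PowerBuilder rather than Python.

```python
from abc import ABC, abstractmethod

class CommissionModule(ABC):
    """The 'black box' contract: other modules code against this interface
    only, never against a particular implementation."""

    @abstractmethod
    def record_commission(self, notary_id: str, details: dict) -> None: ...

    @abstractmethod
    def lookup(self, notary_id: str) -> dict: ...

class SimpleCommissionModule(CommissionModule):
    """Original implementation: keeps plain records in memory."""
    def __init__(self):
        self._store = {}
    def record_commission(self, notary_id, details):
        self._store[notary_id] = dict(details)
    def lookup(self, notary_id):
        return self._store[notary_id]

class ImagingCommissionModule(SimpleCommissionModule):
    """Hypothetical replacement that also tracks a scanned document image,
    swapped in without changing any other module."""
    def record_commission(self, notary_id, details):
        super().record_commission(notary_id, dict(details, image_ref=f"scan/{notary_id}.tif"))

def register(module: CommissionModule):
    module.record_commission("N-1001", {"expires": "2001-07-01"})
    print(module.lookup("N-1001"))

register(SimpleCommissionModule())      # original behavior
register(ImagingCommissionModule())     # upgraded module, same callers
```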
Used with permission from the Maine Office of Information Technology In the last ten years, there has been great interest in creating highly-accurate ortho-rectified aerial photos for use in geographic information systems (GIS) in Maine, not only at the federal and state levels, but also at the municipal level. Many towns have developed such data on their own, but lack both the financial and technological resources to share these data, nor are they able to take advantage of the large amounts of data collected by state and federal agencies. As a result, several terabytes of digital aerial photos exist in the state, but the means of sharing them does not. Considering the millions of dollars spent in developing such data, it makes sense to invest a small amount more to publish the data and help it reach its true potential. The Maine Office of GIS (MEGIS), in cooperation with the Maine Library of Geographic Information (GeoLibrary), has developed an open web mapping service (WMS) platform specifically to meet this need. MEGIS has developed a "production pipeline" comprised of scripts and open-source software that greatly decreases the time it takes to convert imagery to a WMS. The platform provides the service using OpenGIS Consortium (OGC) standards, and relies almost exclusively on open-source software such as the Geospatial Data Abstraction Library (GDAL), Python, and MapServer. The process consists of two parts - preparation of the data to create the WMS, and then a web serving platform to host it. The purpose of this project is to partner with any organization holding publicly-available digital aerial ortho-photos for Maine in order to allow these data to be made available to any user via WMS. Such a service can easily be consumed in most GIS software including Google Earth, and can also be easily integrated into any web-mapping application. Maine's approach to this solution has spanned all levels of government in Maine, including hosting data from the federal government U.S. Geological Survey (USGS) and U.S. Department of Agriculture (USDA), state government (MEGIS and Maine Department of Marine Resources), and local government (Southern Maine Regional Planning Commission, the Greater Portland Council of Governments, and the towns of Augusta, Manchester, York, Kittery, and Hampden). This service is still early in its development and we foresee doubling the data holdings in the next six months. Collaboration was key to this service's development, MEGIS collaborated with USGS, USDA, the University of Southern Maine, and Maine GIS stakeholders via the Maine GeoLibrary. The GeoLibrary Board provided the funds to purchase the server hardware, while MEGIS provides the expertise to convert raw imagery into WMS. The other partner organizations provide their raw data in return for having free access to it via WMS. While WMS and providing image services have been around for a few years, this project is unique in that it is the first attempt, as far as we are able to ascertain, to provide a free and openly-available platform for sharing aerial ortho-photos. Indeed, even big federal agencies such as USGS and USDA do not have platforms available to provide all of their Maine imagery via web services. The benefits of these services will be tremendous. State agencies and federal agencies will be able to tap into any imagery product available in Maine, and small municipalities will be able to provide their data outside their town offices at no expense. 
This solution also provides several opportunities to protect and conserve natural resources. For example, the Maine Department of Environmental Protection (MEDEP) uses high-resolution aerial photos to determine environmental compliance. The ability to access municipal imagery allows them to see the state of projects on a specific date, and such data is admissible as evidence in court, if a matter advances to that level. MEDEP also uses high-resolution data to assess the health of urban watersheds. Other natural resource agencies use these data for habitat conservation and land use planning, to conserve natural resources. Another example is that having these data available allows government workers to assess many areas without having to make a gasoline-consuming trip out to the site. Additionally, fostering this collaboration brings all the stakeholders together for better future planning of aerial photo projects, and will reduce the number of plane flights required to capture the same data. Finally, another innovation in this project is the wide use of open technologies. The web service uses the WMS protocol, an OGC approved open GIS standard. The images are processed using mostly the open-source GDAL package, and many of them are compressed using the open JPEG-2000 compression algorithm. Scripts to process the imagery are written in the open-source Python language and typically auto-generated. Finally, the WMS itself is served out via the open-source MapServer software from the University of Minnesota. This allows us to share the methodology and process with any other organization that would like to duplicate it. It eliminates the need for expensive commercial software. This project, which began with its first offering in October 2008, increases government transparency and citizen engagement because it puts in the hands of the public the same data that are being used to make governmental decisions in a GIS context. By utilizing an open standard for the web service, the project ensures that any citizen with access to Google Earth or any other GIS package (such as free and open-source QGIS, MapWindow, or gvSIG) can see the same data. The greatest immediate measurable results are the usage of the service so far. Although it is still in its infancy, in terms of the amount of data available and the size of the audience aware of the service (which to date has only been advertised on the GeoLibrary web site and via email), the service is already seeing 2,000 - 3,000 hits per day. Although we have posted only two municipal products so far (Augusta area and York County data), we have been inundated with requests by other municipalities and already have a long backlog of data to prepare. Another immediately measurable result is significant cost savings. Earlier attempts at providing web imagery services were costing the state $110,000 per year, prohibiting the inclusion of municipal data. Using WMS and open-source software, that cost is slashed to $6,000 per year. Users will see increased efficiencies by being able to utilize a number of different imagery products for their particular geographic area, without having to hunt down the source and then acquiring the data. Finally, by using open-source software we can easily transfer the process to any other organization that wants to use it without incurring expensive licensing costs. In order to ensure a high-quality service and performance levels, we followed best practices for developing the initial offering. 
The service is available from two servers, with a third for developing new services. The production servers are located in different buildings and utilize different network links, thus providing high availability in the event one building is lost. The solution is easily scalable by simply adding more servers. Finally, by using lightweight software we are able to maximize the performance of the service, rather than using "heavy" commercial software that requires greater hardware resources for the same results. One result of early development of this service has been to test the methodology and learn from some mistakes. These lessons have included using the lightest software needed to do the job (we initially tested MapServer against two other commercial competitors), using JPEG-2000 compression whenever possible instead of proprietary compression, and working directly with towns to get data. Go online to try out the service.
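Because the platform speaks the OGC WMS protocol, any client pulls imagery with an ordinary GetMap request, which is just a URL carrying a handful of standard parameters. The Python sketch below builds such a request; the endpoint, layer name, and bounding box are placeholders rather than the actual Maine service.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(800, 600), fmt="image/jpeg"):
    """Build an OGC WMS 1.1.1 GetMap request URL.

    bbox is (minx, miny, maxx, maxy) in the coordinate system named by SRS.
    Any WMS-aware client (QGIS, Google Earth, a web map) issues requests
    of this general shape."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",                      # WGS84 longitude/latitude
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return f"{base}?{urlencode(params)}"

# Placeholder endpoint and layer, for illustration only.
print(getmap_url("https://example.org/cgi-bin/mapserv",
                 "ortho_2008", (-70.9, 43.0, -70.6, 43.3)))
```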
Two security researchers recently discovered that USB users need to worry not only about files infected with malware residing inside the drive, but also about infections directly built into the device’s firmware. After months of reprogramming the firmware of thumbdrives, researchers Jakob Lell and Karsten Nohl managed to create a proof-of-concept (POC) malware that is invisible and able to be installed on a USB device, according to Wired. The malware, dubbed BadUSB, can execute a variety of malicious activities, including take over a PC, hijack a browser’s DNS settings, invisibly alter files installed from a memory stick and impersonate a USB keyboard and type commands. If the thumb drive is connected to a smartphone, it can read a user’s communications and send them to a remote location. Corrupted At The Core Part of what makes this new malware so dangerous is that it resides in the USB’s firmware that controls basic functions. Because of this, the attack code can avoid detection long after it would appear to an average user that the device’s contents had been deleted. BadUSB malware is not only reserved for thumb drives, but any USB device. Smartphones, mice and keyboards all have USB firmware that can be reprogrammed. Anytime a USB stick is plugged into a computer its firmware can be altered by malware on the PC, and infections can happen in the reverse as well, which creates a big problem for enterprise security because of the rate at which employees share information with thumb drives. “We’ve all known that if you give me access to your USB port, I can do bad things to your computer,” University of Pennsylvania computer science professor Matt Blaze said in an interview with Wired. “What this appears to demonstrate is that it’s also possible to go the other direction, which suggests the threat of compromised USB devices is a very serious practical problem.” USB Malware Easy to Spread The research duo say the worst part isn’t that the malware can infect any USB device — it’s that reformatting won’t fix the problem. According to ZDNet, long-term fixes can be achieved by chipset manufacturers creating stronger firmware that can’t be easily modified, as well as security companies checking USB devices for changes to firmware that weren’t authorized. In the short-term, however, Lell and Nohl suggest only using thumb drives in highly secure environments. The pair liken using thumb drives to hypodermic needles — trust only those that have been used inside one’s own personal environment and disposing of any that have in contact with unknown devices. “In this new way of thinking, you can’t trust a USB just because its storage doesn’t contain a virus. Trust must come from the fact that no one malicious has ever touched it,” said Nohl. “You have to consider a USB infected and throw it away as soon as it touches a non-trusted computer. And that’s incompatible with how we use USB devices right now.”
SSH for Penetration Testing

SSH stands for Secure Shell and works on port 22. As penetration testers we are aware of the uses and power of SSH for remote access to systems. During a penetration test, SSH can come in handy as a powerful tool. This post explains some of the techniques that can be used during a penetration test.

Local Forwarding using SSH

Sometimes we come across scenarios where we need services on the remote host to be accessible to the local host as if they were on the local network. Root is required.

    ssh -L 127.0.0.1:10521:127.0.0.1:1521 email@example.com

or, as an SSH client configuration entry:

    LocalForward 127.0.0.1:10521 127.0.0.1:1521

Remote Forwarding using SSH

This technique is the complete opposite of the previous one. Remote forwarding over SSH comes to the rescue in those penetration testing scenarios where we need services on a local machine or local network to be accessible to the remote host via a remote listener. This might sound odd (why would I want my machine accessible on a remote host?), but let's face it, we all need to expose a service that lets us download our penetration testing tools. As a practical example, the SSH server will be able to access TCP port 80 on the SSH client by connecting to 127.0.0.1:8000 on the SSH server.

    ssh -R 127.0.0.1:8000:127.0.0.1:80 192.168.1.10

or, in the configuration:

    RemoteForward 127.0.0.1:8000 127.0.0.1:80

SOCKS Proxy using SSH

Here we set up a SOCKS proxy on 127.0.0.1:8000 that lets you pivot through the remote host 192.168.1.10.

    ssh -D 127.0.0.1:8000 192.168.1.10

or, in the configuration:

    Host 192.168.1.10
    DynamicForward 127.0.0.1:8000

X11 Forwarding using SSH

If your SSH client is also an X server, then you can launch X clients (e.g. Firefox) inside your SSH session and display them on your X server. This works well from Linux X servers and from Cygwin's X server on Windows.

    ssh -X 10.0.0.1
    ssh -Y 10.0.0.1   # less secure alternative - but faster

or, in the configuration:

    ForwardX11 yes
    ForwardX11Trusted yes   # less secure alternative - but faster

SSH Authorized Keys

SSH stands for Secure Shell, and to keep it secure it is always advisable to use keys for the SSH communication. This helps keep unwanted hosts from taking advantage of the penetration test and keeps the engagement secure. That being said, it is good practice to add an authorized_keys file that will allow you to log in using an SSH key.

Authorized_keys file: this file lives in the user's home directory on the SSH server (under ~/.ssh/) and holds the public keys of the users allowed to log in to that account on the SSH server. The first step is to generate a public/private key pair:

    ssh-keygen -f mysshkey
    cat mysshkey.pub    # copy this into authorized_keys on the server

To connect to the remote host using the authorized key:

    ssh -i mysshkey firstname.lastname@example.org

Some Cool SSH Configuration Tweaks

Finally, here are some modifications you can make to your SSH client configuration that will make it easier to use other penetration testing tools over SSH:

    Host 10.0.0.1
    Port 2222
    User ptm
    ForwardX11 yes
    DynamicForward 127.0.0.1:1080
    RemoteForward 80 127.0.0.1:8000
    LocalForward 1521 10.0.0.99:1521
Parameter tampering is a simple attack targeting the application business logic. This attack takes advantage of the fact that many programmers rely on hidden or fixed fields (such as a hidden tag in a form or a parameter in a URL) as the only security measure for certain operations. Attackers can easily modify these parameters to bypass the security mechanisms that rely on them. The basic role of Web servers is to serve files. During a Web session, parameters are exchanged between the Web browser and the Web application in order to maintain information about the client's session, eliminating the need to maintain a complex database on the server side. Parameters are passed through the use of URL query strings, form fields and cookies. A classic example of parameter tampering is changing parameters in form fields. When a user makes selections on an HTML page, they are usually stored as form field values and sent to the Web application as an HTTP request. These values can be pre-selected (combo box, check box, radio button, etc.), free text or hidden. All of these values can be manipulated by an attacker. In most cases this is as simple as saving the page, editing the HTML and reloading the page in the Web browser. Hidden fields are parameters invisible to the end user, normally used to provide status information to the Web application. For example, consider a products order form that includes the following hidden field: <input type="hidden" name="price" value="59.90"> Modifying this hidden field value will cause the Web application to charge according to the new amount. Combo boxes, check boxes and radio buttons are examples of pre-selected parameters used to transfer information between different pages, while allowing the user to select one of several predefined values. In a parameter tampering attack, an attacker may manipulate these values. For example, consider a form that includes the following combo box: <FORM METHOD=POST ACTION="xferMoney.asp"> Source Account: <SELECT NAME="SrcAcc"> <BR>Amount: <INPUT NAME="Amount" SIZE=20> <BR>Destination Account: <INPUT NAME="DestAcc" SIZE=40> <BR><INPUT TYPE=SUBMIT> <INPUT TYPE=RESET> An attacker may bypass the need to choose between only two accounts by adding another account into the HTML page source code. The new combo box is displayed in the Web browser and the attacker can choose the new account. HTML forms submit their results using one of two methods: GET or POST. If the method is GET, all form parameters and their values will appear in the query string of the next URL the user sees. An attacker may tamper with this query string. For example, consider a Web page that allows an authenticated user to select one of his/her accounts from a combo box and debit the account with a fixed unit amount. When the submit button is pressed in the Web browser, the following URL is requested: An attacker may change the URL parameters (accountnumber and debitamount) in order to debit another account: There are other URL parameters that an attacker can modify, including attribute parameters and internal modules. Attribute parameters are unique parameters that characterize the behavior of the uploading page. For example, consider a content-sharing Web application that enables the content creator to modify content, while other users can only view content. The Web server checks whether the user that is accessing an entry is the author or not (usually by cookie). 
An ordinary user requests the entry through a link whose query string carries a mode parameter. An attacker can modify that mode parameter to readwrite in order to gain authoring permissions for the content.
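The common thread in all of these examples is that the server trusted values the client was free to edit. The standard defense is to treat every submitted parameter as untrusted input and to re-derive anything security- or money-relevant on the server side. The sketch below is a minimal, hypothetical illustration in Python; the catalog, field names, and limits are invented.

```python
# Authoritative data lives on the server; the form's hidden "price" field
# is treated as untrusted and is never used for the actual charge.
CATALOG = {"SKU-1001": 59.90, "SKU-1002": 24.50}      # hypothetical products

def charge_for_order(form: dict) -> float:
    sku = form.get("sku")
    if sku not in CATALOG:
        raise ValueError("unknown product")
    qty = int(form.get("quantity", "1"))
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of range")
    # The price comes from the server-side catalog, so editing the hidden
    # field in the HTML form changes nothing.
    return round(CATALOG[sku] * qty, 2)

print(charge_for_order({"sku": "SKU-1001", "quantity": "2", "price": "0.01"}))
# -> 119.8, regardless of the tampered "price" value
```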
Raghad Rabah - 2M CCTV Generally, when taking pictures or videos in the dark, you will notice dark and light spots on the image. This is prevalent in closed circuit television (CCTV) cameras, digital cameras, camcorders, basically any type of camera in low light settings. Fortunately for surveillance cameras, most of them feature digital noise reduction. It removes the specks on the photo (noise) and saves you space on your hard drive and makes the image clearer and easy on the eye. Here is an example of Digital Noise Reduction in action using the Veilux VS-70CDNRD DNR Color Box Camera What is Noise? Have you ever taken a picture or video outside in the dark (without flash)? You’ve probably noticed that it can sometimes turn out to be very grainy. In technical terms, this is called noise. Noise is any unwanted interference in the signal. It can be random, white noise, or coherent noise produced by the devices algorithms. It can be caused by low lighting situations, a nearby power interference, heat, or a defect on the CCD (Charge Coupled Device). Different Types of Digital Noise To understand noise, we must first explain how an image is captured. The image sensor, located on the CCD, processes each image and sends it to the DVR (digital video recorder). When the area the camera is located in has poor lighting, the image sensor picks up what is called “chroma”, or variations in hue, and luminance, variations in brightness. See Figure 1 Image noise is caused when there is not enough lighting to illuminate the area. It occurs when there is insufficient light reflecting off objects, so it cannot distinguish different colors or different contrast. Other types of noise include salt and pepper noise, when pixels are of different hues than the pixels around them, causing the image to have dark and white dots, hence the name. See Figure 2 Gaussian noise, or amplified noise, is caused by random interference and causes the every pixel to be changed from its original color. The name is derived from the Gaussian distribution, or the probability density function which is equal to a normal distribution. Gaussian noise amplifies every pixel in the image, particularly blue pixels, which causes image distortion. Film grain is a another type of noise. It is dependent on the signal of the video. It is the normal grain that is found on videos taken in low lighting. It is given a uniform texturing. In a nutshell, noise is caused by the sensor when it does not pick up adequate lighting. This causes blending in the image, which results in a grainy effect. This obscures the image and causes it to be blurry, or ghostly. What is Digital Noise Reduction? Some surveillance camera companies may include their brands in their innovative DNR technology, i.e. SSNRII (second generation of Samsung’s Super Noise Reduction) or XDNR (eXcellent Dynamic Noise Reduction by Sony). Sony’s XDNR delivers the finest noise reduction in low lit areas and eliminates motion blur. Samsung’s SSNRII removes much more noise than traditional DNR. When it comes to CCTV, DNR is crucial for clearer images. The image sensor on the CCD eliminates the grainy effect on the images, and consequences in a richer image. This is essential to identify any movement or objects on the screen. It helps to have DNR on surveillance cameras located in parking lots, and it comes in handy for forensic use. How Does DNR Function? Digital Noise Reduction utilizes software in the CCD to digitally remove any noise found in each image. 
It has an algorithm that analyzes two consecutive frames and removes any grain that does not match the previous frame. Once the CCD eliminates the noise, the image is transferred to the DVR/NVR. When it is stored on the hard disk drive (HDD), the picture size is decreased by roughly 70 percent, and the image is clearer and crisper than before. However, the image is typically only processed in the foreground; objects in the background tend to appear grainy. Newer developments have found a solution for this. The most advanced type of digital noise reduction is 3D-DNR (sometimes written 3DNR). It compares every pixel with the pixels surrounding it in addition to comparing every frame with the next; the pixel-neighborhood comparison is the spatial part of the noise reduction. With 3D-DNR, thorough processing is applied to the whole image, including the background. This results in a clearer image than traditional DNR, and less space is taken on the hard disk drive.
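The frame-to-frame comparison at the heart of DNR can be modeled in a few lines. The NumPy sketch below is a deliberately simplified illustration rather than what camera firmware actually runs (real 2D/3D-DNR adds spatial filtering and motion handling), but it shows the basic idea: small frame-to-frame fluctuations are treated as noise and averaged away, while large changes are kept as real motion.

```python
import numpy as np

def temporal_denoise(prev, curr, motion_threshold=12, blend=0.6):
    """Toy temporal noise reduction on two greyscale frames.

    Pixels that changed only slightly since the previous frame are assumed
    to be sensor noise and are blended toward the previous value; pixels
    that changed a lot are treated as genuine motion and left alone."""
    prev = prev.astype(np.float32)
    curr = curr.astype(np.float32)
    static = np.abs(curr - prev) < motion_threshold
    out = np.where(static, blend * prev + (1 - blend) * curr, curr)
    return out.astype(np.uint8)

# Two synthetic 4x4 "frames": the same flat scene plus random sensor noise.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100)
frame1 = (scene + rng.integers(-5, 6, scene.shape)).astype(np.uint8)
frame2 = (scene + rng.integers(-5, 6, scene.shape)).astype(np.uint8)
print(temporal_denoise(frame1, frame2))   # values pulled back toward 100
```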
The exponential growth of scientific data has put considerable strain on existing research networks. Just five years ago, the Intergovernmental Panel on Climate Change (IPCC) was fed 35 TB of data to use in developing its assessment report. In two years, when the next IPCC report is published, that dataset is estimated to reach 2 petabytes, more than a 57-fold increase. To support the transfer of these massive buckets of bytes and advance adoption of faster communications technology, the government introduced the Advanced Networking Initiative (ANI), a program created with a $62 million pot of money scooped out of the 2009 federal stimulus package, also known as the American Recovery and Reinvestment Act (ARRA). At the heart of the initiative is a prototype 100Gbps testbed network, built by the Energy Sciences Network (ESnet) in collaboration with the Internet2 consortium. The network currently connects the National Energy Research Scientific Computing Center (NERSC) in California, the Argonne Leadership Computing Facility (ALCF) in Illinois, and the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee. Brian Tierney, head of ESnet's Advanced Networking Group, mentioned the network's popularity with scientists in a recent article on the organization's website. "Our 100G testbed has been about 80 percent booked since it became available in January, which just goes to show that there are a lot of researchers hungry for a resource like this," says Tierney. While bandwidth is an integral component of moving information between facilities, applications also determine how effectively that data is transferred. The Climate 100 collaboration, also born from the ARRA, was tasked with developing new methods for moving extremely large amounts of climate data. Mehmet Balman, a member of Berkeley Lab's Scientific Data Management group and of the Climate 100 collaboration, explains that advanced middleware applications are needed to handle the mix of small and large data moved across high-throughput networks. The Climate 100 group used the 100Gbps network as a testing environment for its applications, and the Climate 100 tool was used on the ANI testbed to demonstrate a 35 terabyte transfer of data between NERSC and ALCF. The operation took roughly 30 minutes to complete; over a 10Gbps network, the same transfer would have taken roughly five hours. The ANI project is set to wind down in a few months, after which the test network will be folded into ESnet's fifth-generation production infrastructure. Long-distance networking has become a familiar bottleneck in scientific computing, and as dataset sizes continue their upward climb, these resources will be taxed even further. Projects like ANI display forward thinking from the government, at least when the federal money is flowing, and demonstrate the enabling effects of 100G bandwidth.
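The value of the extra bandwidth is easy to sanity-check with a back-of-the-envelope wire-time estimate. The sketch below is idealized: it assumes the full line rate and ignores protocol overhead, storage speed, and tuning, so it will not reproduce the demonstration's measured times exactly, but it shows the order-of-magnitude gap between 10 Gbps and 100 Gbps.

```python
def ideal_transfer_time_s(dataset_tb, link_gbps, efficiency=1.0):
    """Idealized wire time for a bulk transfer, using decimal units
    (1 TB = 8,000 gigabits). Real transfers, such as the Climate 100
    demonstration, achieve whatever the end-to-end tuning allows."""
    return dataset_tb * 8000 / (link_gbps * efficiency)

for gbps in (10, 100):
    t = ideal_transfer_time_s(35, gbps)
    print(f"35 TB over {gbps:>3} Gbps: about {t / 3600:.1f} hours ({t / 60:.0f} minutes)")
```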
Coffee, one of the most important drinks in the world, is also one of the most terrible things you consume in space. Since the Apollo days, astronauts have complained about these freeze-dried, premixed packages of coffee. But no more! A group of freshman engineering students at Rice University have developed a solution that will let our space explorers mix exactly the right amount of sugar and creamer to their cup--er, bag of Joe. The Rice students developed a system that uses pouches and a 3D-printed roller to help Astronauts make their own customized batch of coffee. By attaching a bag of creamer or sugar to the coffee pouch, our astronauts can use the 3D-printed roller to deliver extremely precise amounts--down to 10 milliliters--of each to their drinks. So if you like your coffee with two creams and three sugars, you would squeeze out 20 and 30 milliliters of each, respectively. The only problem is that these pouches are not reusable, so they will have to be produced in smaller packages--something like those fast food ketchup packets. It might all seem like a much more complicated way to make your coffee, but it's a small sacrifice for the ability to make a perfectly blended drink. The students hope their invention will be tested aboard the International Space Station, where the astronauts there still have to settle on those terrible-tasting premixed bags with a lot of sugar, a lot of creamer, or a lot of both. Now if we could only make it so the coffee didn't need to be freeze-dried...
7.12 What is key recovery?

One of the barriers to the widespread use of encryption in certain contexts is the fact that when a key is somehow "lost", any data encrypted with that key becomes unusable. Key recovery is a general term encompassing the numerous ways of permitting "emergency access" to encrypted data. One common way to perform key recovery, called key escrow, is to split a decryption key (typically a secret key or an RSA private key) into one or several parts and distribute these parts to escrow agents or "trustees". In an emergency situation (exactly what defines an "emergency situation" is context-dependent), these trustees can use their "shares" of the keys either to reconstruct the missing key or simply to decrypt encrypted communications directly. This method was used by Security Dynamics' RSA SecurPC product. Another recovery method, called key encapsulation, is to encrypt data in a communication with a "session key" (which varies from communication to communication) and to encrypt that session key with a trustee's public key. The encrypted session key is sent with the encrypted communication, and so the trustee is able to decrypt the communication when necessary. A variant of this method, in which the session key is split into several pieces, each encrypted with a different trustee's public key, is used by TIS' RecoverKey. Dorothy Denning and Dennis Branstad have written a survey of key recovery methods [DB96].

Key recovery first gained notoriety as a potential work-around to the United States Government's policies on exporting "strong" cryptography. To make a long story short, the Government agreed to permit the export of systems employing strong cryptography as long as a key recovery method that permits the Government to read encrypted communications (under appropriate circumstances) was incorporated. For the Government's purposes, then, "emergency access" can be viewed as a way of ensuring that the Government has access to the plaintext of communications it is interested in, rather than as a way of ensuring that communications can be decrypted even if the required key is lost.

Key recovery can also be performed on keys other than decryption keys. For example, a user's private signing key might be recovered. From a security point of view, however, the rationale for recovering a signing key is generally less compelling than that for recovering a decryption key; the recovery of a signing key by a third party might nullify non-repudiation.
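The splitting step behind key escrow can be illustrated with the simplest possible scheme: n-of-n splitting by XOR, in which every trustee's share is required to rebuild the key and any smaller subset reveals nothing about it. Commercial escrow products use more elaborate constructions (often threshold, k-of-n schemes), so the Python sketch below is only a toy model of the idea.

```python
import os

def split_key(key: bytes, n_trustees: int) -> list:
    """n-of-n key splitting by XOR: the shares XOR together to the key."""
    shares = [os.urandom(len(key)) for _ in range(n_trustees - 1)]
    last = key
    for s in shares:                               # fold each random share into the key
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares) -> bytes:
    """XOR all shares back together to reconstruct the original key."""
    key = bytes(len(shares[0]))                    # all-zero starting value
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

secret = os.urandom(16)                            # e.g. a 128-bit decryption key
parts = split_key(secret, 3)                       # one share per trustee
assert recover_key(parts) == secret                # reconstruction needs all three
```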
The United States Department of Energy has announced a plan to field an exascale system by 2022, but says in order to meet this objective it will require an investment of $1 billion to $1.4 billion for targeted research and development. The DOE’s June 2013 “Exascale Strategy” report to Congress was recently obtained by FierceGovernmentIT. The report makes it clear that exascale systems, one-hundred to one-thousand times faster than today’s petascale supercomputers, are needed to maintain a competitive advantage in both the science and the security domain. The DOE notes that exascale computing will be essential to the processing of future datasets in areas like combustion, climate and astrophysics and claims that there is “significant leverage in addressing the challenges of large scale simulations and large scale data analysis together.” But before practical exascale machines can become a reality, there are several pretty major obstacles that need to be addressed. Among these are the energy issue; system balance and the memory wall; resiliency and coping with run-time errors; and exploiting massive parallelism. All of these issues require focused research and development. Reducing power requirements is one of the foremost objectives of any exascale endeavor. The report points out that an exascale supercomputer built with current technology would consume almost a gigawatt of power, approximately half the output of Hoover Dam. With a standard technology progression over the next decade, experts estimate that an exascale supercomputer could be constructed with power requirements in the 200 megawatt range at an estimated cost of $200-$300 million per year. Whether funding bodies will be willing to spend this much money remains to be seen, but the DOE would like to see that power requirement cut by a factor of 10, down to 20 megawatt neighborhood where current best-in-class systems reside. As a point of comparison, the largest US supercomputer, Titan, installed at Oak Ridge National Laboratory, requires 8.2 MW to reach 17.59 petaflops. The world’s fastest system, China’s 33.86 petaflop Tianhe-2, has a peak power load of 17.8 MW, but that figure goes up to 24 MW when cooling is added. The DOE report recommends five main areas of focus which add up to a comprehensive exascale roadmap with the goal of fielding such a system by the beginning of the next decade (circa 2022). - Provide computational capabilities that are 50 to 100 times greater than today’s systems at DOE’s Leadership Computing Facilities. - Have power requirements that are a factor of 10 below the 2010 industry projections for such systems which assumed incremental efficiency improvements. - Execute simulations and data analysis applications that require advanced computing capabilities such as performing accurate full reactor core calculations, validating and improving combustion models for mixed combustion regimes with strong turbulence-chemistry interactions, designing enzymes for conversion of biomass, and incorporating more realistic decisions based on available energy sources into the energy grid. - Provide the capacity and capability needed to analyze ever-growing data streams. - Advance the state-of-art hardware and software information security capabilities. The plan described in the report covers the research, development and engineering that is needed to achieve an exascale computing system by 2022, but the acquisition of such a system would be separate from this effort. 
The suggested approach is to continue fielding systems at intermediate stages of performance, for example 100 petaflops, 250 petaflops, 500 petaflops, and so on, up to exascale. Currently, the US invests between $180M and $200M annually to acquire and operate HPC machines through the NNSA Advanced Simulation and Computing (ASC) and Office of Science Advanced Scientific Computing Research (ASCR) programs. The R&D required to prepare the way for an exascale supercomputer comes with a price tag of between one billion and 1.4 billion dollars, a figure arrived at by surveying key stakeholders in the computing industry. This is the cost to the DOE, with the expectation that vendors will make some "cost-share contribution" and that some software component development will be left to the broader software ecosystem. Responsibility for the program will be jointly shared by the DOE's Office of Science and the National Nuclear Security Administration (NNSA).
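The power figures translate directly into operating cost. The sketch below assumes an electricity price of $0.10 per kWh, which is an illustrative rate rather than a DOE figure (the report's $200-300 million per year estimate for a 200 MW machine implies a somewhat higher all-in cost), but it shows why the gap between a gigawatt-class design, a 200 MW design, and the 20 MW target matters so much.

```python
def annual_energy_cost(power_mw, dollars_per_kwh=0.10, hours_per_year=8760):
    """Rough yearly electricity bill for a machine drawing power_mw continuously."""
    return power_mw * 1000 * hours_per_year * dollars_per_kwh

for mw in (1000, 200, 20):       # naive-scaling design, projected design, DOE target
    print(f"{mw:>5} MW -> roughly ${annual_energy_cost(mw) / 1e6:,.0f}M per year")
```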
While a majority of homeland security professionals say America is safer now than it was before the September 11 attacks, 75 percent believe the country will experience a similar terrorist attack in the next five years — and Americans are not prepared, according to a recent survey commissioned by the National Homeland Defense Foundation (NHDF) and Colorado Technical University (CTU). The survey, which was conducted by Kelton Research, revealed that 94 percent of these professionals do not think Americans know the appropriate steps to take if a terrorist attack were to happen in their hometown. Among survey findings from professionals within the field: - Homeland security experts do not feel safe. More than half of those surveyed (51 percent) do not personally feel safe from a terrorist attack. - Cyberterrorism is an emerging threat. When asked which security issues the U.S. should invest more resources in over the next five years, computer networks or the Internet came out on top (58 percent), followed by homegrown or domestic terrorism inside the U.S. or infrastructure (49 percent), and US coastlines and harbors (42 percent). - Public education needs to be increased. Fewer than three in 10 (27 percent) homeland security professionals believe the U.S. is doing a good enough job to educate the public on what to do if the U.S. experiences a terrorist attack. - Need for more qualified applicants. Only 17 percent of survey respondents believe there are enough qualified job applicants to fill key roles in homeland security. “Since Sept. 11, many aspects of national security have improved, but we still have progress to make in terms of education for the professionals serving our country and in improving communications between government agencies at all levels,” said Donald Addy, NHDF President. “Much more can be done to prepare our nation for attacks, especially as acts and threats of terrorism evolve.”Homeland Security: Marked Improvement But Room for Growth Nearly eight in 10 (77 percent) of homeland security professionals surveyed believe that the response of federal, state and local governments to a terrorist attack today would be more coordinated than it was in 2001. Moreover, almost three in four (74 percent) feel that communication on homeland security matters across all government levels has improved since Sept. 11. Although the survey shows greater confidence in government coordination, the following findings suggest areas for improvement: - Almost nine in 10 (87 percent) feel that the field is fragmented, not cohesive, when it comes to communication or collaboration among agencies and departments. - As for terrorist threats, 58 percent think that homeland security in the U.S. is still generally reactive rather than proactive. “This survey clearly shows we need to do a better job when it comes to helping the public understand how to be prepared should we experience another terrorist attack similar to Sept. 11,” said Capt. W. Andy Cain, US Navy, a member of the CTU Homeland Security Advisory Board. “Professionals in the industry need preparation as well — in the form of advanced education and training to meet the needs the career will demand in the future.”More Education Needed Along with better public education, the survey demonstrated a need for better education. Seventy-two percent of homeland security professionals surveyed think better trained or educated staff would make the most dramatic improvement in US homeland security. 
In addition, a majority (71 percent) of those who do not already have graduate-level degrees in homeland security believe they could advance their own careers with this type of degree. “This survey reinforces what we have long perceived as a need for advanced degrees in the homeland security field,” said Greg Mitchell, President of the CTU Colorado Springs campus. “It is for this reason that we developed both master's and doctorate degrees in homeland security, to provide opportunities for current homeland security professionals to advance their careers, as well as for those looking to enter this growing field.”
Job Outlook for Homeland Security
Those looking to make a difference with a career in homeland security may be well positioned to pursue success: 69 percent of the homeland security professionals surveyed rated the job outlook in homeland security over the next five years as excellent or good. The following survey results also speak to their experiences within the field:
- Based on their experiences in the homeland security field, an overwhelming majority of the professionals surveyed (89 percent) would recommend a career in the industry to others.
- 47 percent of those surveyed frequently, if not always, feel they are personally making a difference with their jobs.
- 63 percent of those surveyed feel the public values the services they perform.
<urn:uuid:63343236-3770-4a46-84fe-a912bd63c72b>
CC-MAIN-2017-04
https://www.asmag.com/showpost/8802.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961699
944
2.71875
3
In my last post, we discussed the inroads that IPv6 has been making in our packet-switched internetworks. And, although IPv6 is only in its infancy in terms of general worldwide deployment, certain business sectors are combining the vast numbers of IP addresses now available with other new technologies that are also under development. As a CCNA, you need to be aware of these newly developing areas of opportunity. Keep in mind that IPv4 only provides an addressing capability of about 4 billion IP addresses, whereas IPv6 supports 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. And, in the time it took me to type in that long string of numbers, you could have plugged your refrigerator, which is Internet-ready with an IPv6 address, into a wall socket in the kitchen. Then, by default, your refrigerator will log into the manufacturer’s server over the Broadband over Power Lines (BPL) network. Science fiction? Nope, science fact. BPL is a system for carrying data on a conductor that is also being used for electric power transmission. This new process provides access to the Internet by sending information-bearing signals over the power lines coming into your home or business. Most BPL technologies are limited to one set of wires, but some can cross over between two levels, such as both the distribution network and the premises wiring. However, all power-line communications systems operate by applying a modulated carrier signal onto the wiring system. Different types of BPL systems use different frequency bands, depending on the signal transmission characteristics of the power wiring being used. One of the most significant engineering challenges facing the implementation of BPL to every home is that the installed base of existing wiring systems was originally intended for transmitting AC power. As such, they have only a limited ability to carry the higher frequencies needed to pass data. The bottom line is that with the BPL system, any computer, or any other device equipped for Internet access, would only need to plug a BPL “modem” into any outlet in an equipped building to have high-speed Internet Access. Many manufacturers have already equipped their products with these features, along with an IPv6 address, and are just waiting for this new Internet access to become available to consumers. And, if you think about it, BPL capabilities offer many benefits over the use of regular cable or DSL connectivity. In many rural areas, there is still no cable or DSL service available. However, the extensive power delivery infrastructure already in place would allow people in remote locations access to the Internet, with relatively little equipment investment by both the power utility or the customer. In many areas of the United States, BPL is already installed and available to people who have power to their homes. Manassas, VA, is the first city in the world to have BPL deployed to all its residents, and has been a demonstration center for utilities, integrators/operators, and government entities from around the globe. In July 2008, the cost for BPL services was pegged at $28.95 per month, however, it is projected to come down. In addition, in November 2008, IBM announced that it had signed a $9.6 million deal with International Broadband Electric Communications (IBEC) to install equipment and provide BPL service to almost 350,000 homes in Alabama, Indiana, Maryland, Pennsylvania, Texas, Virginia, and Wisconsin. 
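Before moving on, it is worth putting the IPv4 and IPv6 address figures quoted at the top of this post side by side. The quick calculation below is only an illustration (Python is used here purely as a calculator, not as part of any Cisco curriculum), but it shows the scale that makes an address for every refrigerator plausible.

# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(ipv4_total)                 # 4294967296 -- the "about 4 billion" cited above
print(ipv6_total)                 # 340282366920938463463374607431768211456
print(ipv6_total // ipv4_total)   # 2**96, roughly 7.9 x 10**28 IPv6 addresses per IPv4 address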
Lawmakers in Washington have made research and deployment of BPL a major goal of our communications industry. Congress has included provisions and funding in the recently enacted “American Recovery and Reinvestment Act of 2009.” Because IPv6 is only in its infancy in terms of worldwide deployment, and the BPL technology and installations are a work in progress, it is obvious there are huge opportunities for people who have the skill sets inherent in the CCNA certification process. And, with your CCNA in hand, you are at the gateway of an entirely new career challenge. Say “hello” to your washer and dryer for me. They’ll soon be on the Web. Author: David Stahl
<urn:uuid:b504a6fb-4117-4a9d-a6a3-f7f768653454>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/06/17/do-you-know-who/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958046
862
2.859375
3
What is an SNMP device?
An SNMP device is a device that is managed using the Simple Network Management Protocol (SNMP). SNMP devices that can communicate with one another include switches, bridges, routers, access servers, computer hosts, hubs, and printers.
Why use the XIAN SNMP Device Simulator?
With the XIAN SNMP device simulator, we can create "N" numbers of simulated devices such as physical/virtual machines, routers, switches and printers. We do not have to build separate physical or virtual machines, switches, hubs and computer nodes to simulate application testing for IT admins, and there is no need to acquire a very high-end machine or server to run your lab setup; you can simply install the Xian Simulator and test "n" number of scenarios. The Device Recorder feature in the Xian SNMP device simulator allows us to record the state of physical devices for later simulation. This in turn helps to test scenarios which happened earlier in your network, and to analyze and resolve commonly repeated problems in your environment.
How to Simulate Computer Devices/Nodes and Run Test Scenarios?
IT admins can simulate their work environment very easily in test labs using the XIAN SNMP device simulator. We can connect to virtual share drives using the XIAN SNMP device simulator. I've connected to the shared drives of simulated devices with the IPs 192.168.1.14, 192.168.1.15 and 192.168.1.14. The SNMP simulator is very easy to install, and IT admins can easily simulate their workload with it. Once an assigned IP is used for simulation you can connect to the share drive, but note that this share corresponds to the share on the server to which this IP (and any other IPs in the list) has been added (network card). So through this IP we are really connecting to the resource on which the agent is running. We can use multiple IPs for simulation, and all of them will connect to the same resource (the agent server). An IT admin can also run services.msc on different simulated devices, as shown in the following picture. As mentioned above, when we use simulated IPs we are in fact connecting to the Windows host where the Xian Device Simulator agent is running.
Do you want to simulate a scenario from the past within minutes? By using the Xian SNMP device simulator, we can record the state of simulated devices in your test environment; this can help to test a scenario which happened earlier in your network, and in turn help to resolve production issues quickly. The SNMP device simulator provides the ability to record any physical SNMP network device, which an IT admin can add to expand the list of device templates available for simulation.
How to Add/Remove/Stop/Save/Load Windows and Network Devices in the Simulator?
We can add, remove, stop, save and load Windows and network devices in the Xian SNMP device simulator with ease. The simulator comes with many pre-loaded devices such as Cisco switches, hubs and routers. It now takes only about five minutes to get your simulated devices created and running.
Did you know that SNMP version 3 provides more security? The Xian SNMP device simulator supports all versions of SNMP: SNMP version 1, SNMP version 2 and the most secure, SNMP version 3. Version 3 corrects the security deficiencies of SNMP versions 1 and 2. Scripting is also supported through the "command line" option of the Xian SNMP device simulator, and it scales easily in terms of handling a large number of devices. Once you have defined your devices, you can deploy an elaborate set of devices.
It is possible to simulate a Windows or UNIX computer as an SNMP device if you first enable this component (the Windows SNMP service, or the corresponding daemon on UNIX) and obtain an SNMP dump from it. Of course, you won't be able to do many things such as connecting to that ‘computer’ and stopping services; you can only emulate how it might respond to SNMP.
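As a quick sanity check that a simulated device is actually answering, you can poll it with any standard SNMP tool. The sketch below uses the third-party pysnmp library (an assumption; any SNMP client would do) and assumes one of the simulated IPs from the walkthrough above together with the default "public" community string; adjust both for your own lab.

# Minimal SNMP GET against a simulated device (assumed IP and community string).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),          # SNMPv2c
           UdpTransportTarget(('192.168.1.14', 161)),   # simulated device from the example above
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(var_bind)   # prints whatever sysDescr the simulator returns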
<urn:uuid:a4fe37ce-cf38-4ede-8312-6e48697ecbd3>
CC-MAIN-2017-04
https://www.anoopcnair.com/create-windows-linux-network-snmp-devices-lab-environment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00091-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910344
854
2.890625
3
The Domain Name System (DNS) distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain. Authoritative name servers are assigned to be responsible for their particular domains and, in turn, can assign other authoritative name servers for their sub-domains. This mechanism has made DNS distributed, fault tolerant, and helped avoid the need for a single central register to be continually consulted and updated. In general, DNS also stores other types of information, such as the list of mail servers that accept email for a given Internet domain. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet. DNS also defines the technical basics of the functionality of this database service. For this purpose it defines the DNS protocol, a detailed specification of the data structures and communication exchanges used in DNS, as part of the Internet Protocol Suite (TCP/IP). Every domain has a domain name server somewhere that handles its requests, and there is a person maintaining the records in that DNS. This is one of the most amazing parts of the DNS system. It is completely distributed throughout the world on millions of machines administered by millions of people, yet it behaves like a single, integrated database! Name servers do two things 24/7: - They accept requests from users to convert domain names into IP addresses. - They accept requests from other name servers to convert domain names into IP addresses. When a request comes in, the name server can do one of four things with it: - It can answer the request with an IP address because it already knows the IP address for the domain. - It can contact another name server and try to find the IP address for the name requested. It may have to do this multiple times. - It can say, “I don’t know the IP address for the domain you requested, but here’s the IP address for a name server that knows more than I do.” - It can return an error message because the requested domain name is invalid or does not exist. When you type a URL into your browser, the browser’s first step is to convert the domain name and host name into an IP address so that the browser can go request a Web page from the machine at that IP address. To do this conversion, the browser has a conversation with a name server. When you set up your machine on the Internet, you, or the software that you installed to connect to your ISP, had to tell your machine what name server it should use for converting domain names to IP addresses. On some systems, the DNS is dynamically fed to the machine when you connect to the ISP. On other machines it is hard-wired. Any application on your machine that needs to talk to a name server to resolve a domain name knows what name server to talk to because it can get the IP address of your machine’s name server from the operating system. Then, the browser contacts its name server and basically says, “I need for you to convert a domain name to an IP address for me.” For example, if you type www.bicycle.com into your browser, the browser needs to convert that URL into an IP address. The browser will hand www.bicycle.com to its default name server and ask it to convert it. The name server may already know the IP address for www.bicycle.com. That would be the case if another request to resolve www.bicycle.com came in recently. 
In normal operation, name servers cache IP addresses to speed things up. In that case, the name server can return the IP address immediately. Let’s assume, however, that the name server has to start from scratch. A name server would start its search for an IP address by contacting one of the root name servers. The root servers know the IP address for all of the name servers that handle the top-level domains. Your name server would ask the root for www.bicycle.com and, assuming no caching, the root would say, “I don’t know the IP address for www.bicycle.com, but here’s the IP address for the COM name server.” Obviously, these root servers are vital to this whole process, so there are many of them scattered all over the planet. Every name server has a list of all of the known root servers. It contacts the first root server in the list and, if that doesn’t work, it contacts the next one in the list, and so on. Every name in the COM top-level domain must be unique, but there can be duplication across domains. For example, bicycle.com and bicycle.org are completely different machines. The left-most word, such as www or encarta, is the host name. It specifies the name of a specific machine, with a specific IP address in a domain. A given domain can potentially contain millions of host names as long as they are all unique within that domain. Because all of the names in a given domain need to be unique, there has to be a single entity that controls the list and makes sure no duplicates arise. For example, the COM domain cannot contain any duplicate names and there are recognized organizations in charge of maintaining this list. When you register a domain name, it goes through one of several dozen registrars who work with these organizations to add names to the list. These organizations, in turn, keep a central database known as the whois database that contains information about the owner and name servers for each domain. If you go to the whois form, you can find information about any domain currently in existence. As you can surmise from many of my previous Blogs, there are literally hundreds of individual processes and protocols that blend together to form large enterprises and the Internet as an entity. And, a serious study of each of these will help make your overall understanding of the network processes much more robust and complete. Author: David Stahl
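To see the resolution chain described above from the application's point of view, a single call is enough. The snippet below is a minimal sketch in Python (the domain is simply the example used in this article); it asks the operating system's configured name server and prints whatever IP address comes back, whether that answer came from cache or from a walk through the root, TLD and authoritative servers.

import socket

# The OS forwards this query to the machine's default name server; that server
# answers from cache or performs the root / COM / authoritative lookup described above.
ip_address = socket.gethostbyname("www.bicycle.com")
print(ip_address)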
<urn:uuid:d8d3af4e-3328-400c-82e2-e68a8570b156>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2010/04/27/revisiting-the-domain-name-system-dns-part-3/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92097
1,260
3.90625
4
The Devil virus is a reasonably simple diskette and Master Boot Record infector. It is only able to infect a hard disk when you try to boot the machine from an infected diskette. At that point Devil infects the Master Boot Record, and after that it goes resident in high DOS memory during every boot-up from the hard disk. Devil can be disinfected from hard drives by booting clean and giving the command "FDISK /MBR". Floppies can be disinfected with the FIXBOOT program provided with F-Secure anti-virus products. Once Devil is resident in memory, it will infect practically all non-write-protected diskettes used in the machine. Devil is also a stealth virus - if you try to examine an infected boot record, it will show you the original clean one instead. This virus has one nasty side effect: infected floppy diskettes cannot be read by DOS unless the virus is active in memory. This is because the virus overwrites part of the floppy disk's BIOS Parameter Block (BPB) with the text "(c) Devil". As DOS needs the information stored in the BPB to determine the diskette type, DOS cannot read the diskette anymore. When the virus is active in memory, all diskette boot sector read requests are intercepted by the virus. The virus then modifies the read request parameters so that the real boot sector (or MBR) is read instead of the infected one. The original MBR is stored in a static position at cylinder 0, sector 4, head 0. The original FBR is stored in the last sector of the root directory. The virus deems an FBR or MBR to be infected if the word at offset 0C2h is equal to 7C00h. The virus has two static text strings: "(c) Devil" and "v3.0". Technical Details: Jeremy Gumbley, Command Software Systems UK
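Based on the infection marker described above (the 16-bit word at offset 0C2h equal to 7C00h), a scanner could flag suspect boot sectors with a check along the following lines. This is an illustrative Python sketch only, not F-Secure's actual detection code, and the image file name is hypothetical.

import struct

def looks_like_devil(sector: bytes) -> bool:
    # A boot sector or MBR is 512 bytes; per the description above, the virus
    # marks infection by writing the value 7C00h at offset 0C2h (little-endian on x86).
    if len(sector) < 0xC4:
        return False
    (marker,) = struct.unpack_from("<H", sector, 0xC2)
    return marker == 0x7C00

with open("mbr.img", "rb") as f:          # hypothetical 512-byte sector image
    print(looks_like_devil(f.read(512)))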
<urn:uuid:51c0b662-2381-4421-89f3-ac7ad823c330>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/devilboo.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886517
408
2.53125
3
Originally published September 12, 2006
This article is part of a series that I began in July of this year with the article entitled "An Introduction to Visual Multivariate Analysis." In that initial article, I provided an overview of several approaches to analyzing multivariate data using visualization techniques. In this article, I am featuring an approach called parallel coordinates. The first time that I saw a parallel coordinates visualization, I almost laughed out loud. My initial impression was "How absurd!" I couldn't imagine how anyone could make sense of the dense clutter caused by hundreds of overlapping lines (see Figure 1). This certainly isn't a chart that you would present to the board of directors or place on your Web site for the general public. In fact, the strength of parallel coordinates isn't in their ability to communicate some truth in the data to others, but rather in their ability to bring meaningful multivariate patterns and comparisons to light when used interactively for analysis.
Figure 1: A parallel coordinates display that measures several aspects of U.S. counties.
Reading Parallel Coordinates
To recognize the worth of a parallel coordinates display, you cannot think of it as a normal line graph. Lines are predominantly used to encode time-series data. The up and down slopes of the lines indicate change through time from one value to the next. The lines in parallel coordinate displays, however, don't indicate change. Instead, a single line in a parallel coordinates graph connects a series of values - each associated with a different variable - that measure multiple aspects of something, such as a person, product, or country. The example in Figure 1 consists of 3,138 lines: one for each county in the United States. This graph has seven vertical axes arranged from left to right along the X-axis, each for a different variable that measures some aspect of U.S. counties, including median home value, the number of farm acres, the average per capita income, and so on. Notice that the units of measure differ among these variables, including dollars, counts, acres, percentages, and years. In this particular example, the scales for these independent variables have been normalized as percentages, with the highest value at the top (100%) and the lowest at the bottom (0%). For example, when this data was collected a few years ago, median home values ranged from $20,100 in McPherson County, South Dakota at the low end to $750,000 in Pitkin County, Colorado at the high end. Figure 2 displays the same set of data, but this time the line representing a single county has been highlighted. I selected Alameda County in California, where I live, to see its multivariate profile.
Figure 2: Alameda County in the state of California has been highlighted.
In examining this Alameda County profile, we must be careful to read nothing of significance into the slope of each line segment or the overall pattern formed by the line as a whole. The slopes and overall pattern would look completely different if I rearranged the order of the variables. Instead, we should read the variables one by one to construct a composite profile of Alameda County.
In doing so, because we can see Alameda County in the context of all counties, we can quickly determine that home values are higher than average but only about 40% of that of the county with the highest value, the number of farm acres is much lower than average, income level is higher than most but only about 55% of the county with the highest value, and so on.
The Big Picture
Look again at Figure 1 to see if you can discern anything meaningful from its clutter of overlapping lines. Don't approach it as you would a normal line graph, but rather as a multivariate overview of 3,138 counties. When doing multivariate analysis, the big picture is usually where you want to start, for meaningful observations, believe it or not, can be gleaned from the clutter. A display with this much data cannot be used to explore the details, but it can be used to search for predominant patterns and exceptions. For example, we can tell that all but a few counties have populations that are 20% or less than the county with the highest population. The county with the highest population - Los Angeles - stands out as a clear exception with approximately twice the population of the next highest county - Cook County, Illinois. Several other exceptions also assert themselves, such as the fact that a few counties have life expectancies that are much lower than most (all are in South Dakota). By starting the analytical process with the big picture, we can then dig down into the predominant patterns and exceptions that catch our eyes.
Useful Ways to Complement and Interact with Parallel Coordinates
The examples that we've seen so far were created using Spotfire DXP, which enables us to complement the parallel coordinates graph with other displays and several useful means to interact with the data. In Figure 3, I've expanded the screenshot to show controls that can be used to filter the data (to the right of the parallel coordinate graph) and a table that provides precise details about the information that appears in the graph. In this case, I decided to look at the profiles for the 10 counties with the largest populations, which I accomplished by sorting the table by population from highest to lowest and selecting the top ten rows. By highlighting these rows in the table, the corresponding lines in the graph were automatically highlighted as well, which makes it easy to see that counties with the highest populations all consist of relatively low farm acreage.
Figure 3: The 10 counties with the largest populations.
Another way that I can easily highlight items is to simply draw a rectangle around values in the graph itself that interest me. In Figure 4, you can see the results of drawing a rectangle around the highest values on the College Graduate % axis. Given the resulting view, it only takes a moment to notice that counties with the highest percentages of college graduates all have very few acres of farmland, higher than average incomes, relatively small populations, and high life expectancies.
Figure 4: The counties with the highest percentages of college graduates have been highlighted.
It is often helpful to separate clusters of similar data into separate graphs to more easily focus on specific groups independent of the others and to compare their multivariate profiles.
In Figure 5, to pursue an interest in the relationship between the percentage of college graduates and the other variables, I used convenient functionality in Spotfire DXP to divide the data into five groups (or bins) based on the percentage of college graduates and to place each group into a separate graph. The top graph displays counties with the lowest percentage of college graduates and in the bottom graph we see those with the highest percentages. A quick comparison of these graphs reveals that counties with the lowest percentages of college graduates also have the lowest home values as well as widely ranging percentages of elderly residents compared to counties with the highest percentages of college graduates. Another difference between these five groups that surfaces when viewed in this fashion is that the distribution of values for each variable except home value and population tends to narrow with each graph, beginning with the top graph (lowest percentage of college graduates), which displays a broad distribution of values across most variables, and proceeding down to the bottom graph (highest percentage of college graduates), which displays a relatively narrow distribution of values for each variable. In other words, greater percentages of college graduates appear to correspond to greater homogeneity among the people in that county.
Figure 5: This display consists of five parallel coordinates graphs based on the percentage of college graduates.
Searching for Similar Profiles
Another useful task when exploring multivariate data involves searching for entities with a particular multivariate profile – either one that is exhibited by a particular entity (such as a county in the examples above) or one that you imagine might be interesting. To illustrate how this works, I've switched over to Spotfire Decisionsite to access this functionality, but am still examining the same set of data. Notice in Figure 6 that I've selected Alameda County once again (the highlighted line), which I'll use as the model profile for my pattern search.
Figure 6: Preparing to search for counties with profiles that are similar to Alameda County.
After running the search for counties with similar profiles and viewing the results, I selected the 10 counties most similar to Alameda County and removed all but them from the display to eliminate distractions. You can see the results in Figure 7, which shows the 10 counties in the parallel coordinates graph along with Alameda County, which is highlighted. These counties also appear in the table, which now includes two new columns that were produced by the search operation: "Similarity to Active," which measures their correlation to Alameda County (from 0 for no correlation to 1 for an exact correlation), and "Similarity to Active (Rank)," which ranks the counties by degree of correlation.
Figure 7: Alameda County (highlighted) and the 10 most similar counties.
Variations on the Theme
Not all parallel coordinates graphs available in commercial software go by the name parallel coordinates, and they don't all look exactly the same. Besides Spotfire, other business intelligence vendors who offer parallel coordinates graphs include SAS, ProClarity (now owned by Microsoft), Advizor Solutions, and Information Builders (by virtue of the fact that they sell Advizor Solutions' software under a different name through an OEM relationship).
To illustrate one more approach to using parallel coordinates, I'll shift over to the product named Advizor Analyst/X from Advizor Solutions. Figure 8 provides an example of a parallel coordinates graph (called a parabox by Advizor Solutions), which displays multivariate data regarding a company's customers (one line per customer) in a way that looks different than previous examples.
Figure 8: A parabox (another name for parallel coordinates) graph from Advizor Solutions.
The variable names appear across the top, including region, state, industry, and so on. This particular example includes both quantitative variables, such as revenue, and categorical variables, such as region. In addition to the gray lines that connect a value of each variable for a given customer, circles display the relative sizes of each value belonging to a particular categorical variable and a box plot displays the distribution of values for a particular quantitative variable. These circles (also known as bubbles) and box plots summarize each variable in a way that can't be seen merely by looking at the lines, which is a nice addition (although the 2-D areas of circles cannot be compared precisely). Parallel coordinates can reveal correlations between multiple variables. This is particularly useful when you want to identify which conditions correlate highly to a particular outcome. For instance, this example can be used to examine which conditions seem to have contributed to the desired outcome of customers responding to a special marketing campaign named the "Gold Bundle Campaign," which appears on the rightmost axis of the graph. As you can see, relatively few customers responded (indicated by "Yes") to the campaign. It would be useful to know the characteristics of those customers who responded. Look at what happens when I select the "Yes" circle on the "Response Gold Bundle Campaign" axis (see Figure 9).
Figure 9: All customers who responded to the Gold Bundle Campaign are highlighted.
Now we can begin to look for predominant characteristics across the other variables. Before we do so, however, I'm going to eliminate some of the clutter by turning off the lines, resulting in the graph that appears in Figure 10.
Figure 10: The same display as Figure 9 but without the lines.
Now it's easier to see the relationships. The first thing I notice is that, of the four regions (on the left-most axis), a much greater percentage of customers in the east responded than anywhere else, which appears to be largely a result of a significant response in the state of New York. The industries that responded the most are manufacturing and real estate, with about the same number of responses, but a much higher percentage of real estate customers. Shifting attention to the quantitative variables, I can easily see that responders tended to have lower than average revenues, profit margins that are typical, but a much lower than average number of employees (that is, they are relatively small companies). Another interesting characteristic is the fact that those customers that responded usually respond much less favorably to marketing campaigns, shown on the Campaign Responses axis. This is a good example of what can be discovered when exploring multivariate business data using a well-designed parallel coordinates display. I hope that you are beginning to get a sense of what can be seen and the useful questions that can be pursued and answered when using parallel coordinates.
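If you would like to experiment with the technique on your own data rather than in a commercial package, most analysis environments can draw a basic (non-interactive) parallel coordinates display. The sketch below uses pandas and matplotlib, which are assumptions on my part, as are the CSV file name, the column names, and the grouping column used to color the lines.

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical input: one row per county, one numeric column per measure,
# plus a categorical column ("grad_bin") used only to color the lines.
counties = pd.read_csv("county_measures.csv")
measures = ["median_home_value", "farm_acres", "per_capita_income",
            "population", "college_grad_pct", "life_expectancy"]

# Normalize each measure to a 0-100 percent scale so different units can share
# one axis range, mirroring the normalization used in the examples above.
normalized = counties[measures].apply(
    lambda col: 100 * (col - col.min()) / (col.max() - col.min()))
normalized["grad_bin"] = counties["grad_bin"]

parallel_coordinates(normalized, class_column="grad_bin", alpha=0.3)
plt.ylabel("Percent of maximum")
plt.show()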
Multivariate analysis requires specialized visualizations and methods of interaction with data. Parallel coordinates is only one approach. Next month we'll look at what you can do with heatmaps.
<urn:uuid:1bd281bb-25cf-4f8f-bca2-71bd1f5f3b04>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/3355
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00421-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944133
2,629
3.375
3
For 50 years, the United Nations has been custodian to the world's treaties. Now it has embarked on an ambitious project to convert each treaty into electronic media, making them available to Internet users and others without risk to the documents themselves. Ron Van Note, information systems consultant at U.N. headquarters, is overseeing the project. "After much reflection and many discussions, we decided to go with imaging and optical storage as the best way to convert each page of every document. OCR (optical character recognition) input was briefly considered as an alternative methodology to imaging, but was quickly dismissed, primarily based on the need for complete document protection," said Van Note. "OCR, at best, is considered to be only 90-95 percent accurate. It would cost millions to ensure the necessary 100 percent accuracy due to the multiple languages used throughout the treaties."
Why Choose the Optical Option?
A treaty never changes. It can be modified by adding to it, but the original is never touched. Optical's permanency is ideal for this type of archiving. Optical storage has the capacity to convert large volumes of material -- including important pictures or drawings -- into secure, accessible and permanent files. Local and state agencies also receive documents submitted by external sources. These documents are delivered as hard copy and may not lend themselves to traditional storage methods. Data entry is costly and subject to human error. Character recognition software is not perfect, and many documents cannot be converted by this technology. The document imaging and optical storage combination may hold the answer. Imagine the many storage systems in place in any one municipality, or how documents must be cross-indexed with other agencies. Using jukebox storage capacities, the sting can be taken out of organizing documents from a variety of storage systems and media. "Going in, we knew this was a big job and that there were bound to be some hurdles," said Steve Lehrer, product manager at Liberty Information Management Systems (Liberty IMS), the Costa Mesa, Calif.-based software integrator chosen to implement and oversee the treaty conversion project. "The first priority was the protection of these extremely important world documents. The treaties had to be secure and the integrity of each page assured during the document input process. Afterward, when residing in electronic storage and being made available to outside users, document protection and security had to be irrefutable." Liberty IMS did hit stumbling blocks during the conversion project. For example, when migrating to electronic storage, the United Nations attempted to scan copies of each treaty in microfiche, but the quality was too poor to guarantee 100 percent accuracy, so it resorted to using the original paper pages. In another example, Van Note said under the organization's manual library system, the documents were stored by volume in rows of floor-to-ceiling bookcases. Volumes were of varying sizes and covered any number of years and treaties. There was no existing cumulative cross-volume index to aid in searching the documents. This made locating a specific treaty cumbersome and time-consuming, frustrating employees and document requesters. "It took a lot of work to find a treaty, locate specific information within the document and then make a copy for the requesting party," said Van Note. The U.N. library's indexing problem was uncovered during the scanning process.
Liberty and their partners had to do a lot of new indexing -- each of the more than half-million pages was indexed during the scanning process. Now, each volume is indexed by treaty numbers and addenda; each page is indexed by volume number, treaty number and language used. Placing the treaties online will speed up the search process and give outside users direct access to the documents. Another important aspect of paper-to-electronic document conversion is the effect of multiple language formats. The United Nations requires treaties to be written in English, French and the languages of the participants. As a result, there are often four or more languages contained within a treaty, but there seem to be no rules about format or the placement of the multiple languages within each document. Lehrer said one page may have the left column in English and the right one in French; the following page may have this reversed. The next page could have the top of a page in one language and the bottom of the page in another. "This posed a special problem to our software developers," he said. To deal with the problem, Liberty IMS software engineers created an algorithm to automatically sort and separate each language within a treaty; the software then re-compiles each page as it is scanned in.
Overcoming the Obstacles
The use of multiple languages, the lack of a comprehensive volume-to-volume index, and the need to absolutely guarantee the security of every document were the major challenges faced by Liberty IMS and their partners. These obstacles were quickly overcome by modifying the standard Liberty software to handle the language separation task and provide new indexing during the scanning-in process. All U.N. treaties were 100 percent protected and secured using a combination of imaging technology and optical jukeboxes equipped with Write Once Read Many (WORM) media. "Our Network Information Management software is the main package involved in this very important venture," said Lehrer, "and we're responsible for coordinating the overall implementation of the document imaging and optical storage applications task. We are providing the necessary software packages, hardware components and document conversion services to place all of the 600,000 current pages of treaties -- and new pages as fresh treaties are added -- into electronic form. The pages will be kept online for ease of access and are to be fully integrated into the U.N. Treaty Department's existing LAN." Electronically storing those 600,000 documents perfectly, accurately and securely, in a minimal-size storage environment and with desktop accessibility is a model for the future. The online storage and almost immediate availability of the world's treaties are now a reality at the United Nations. The system is fully operational, making the documents accessible throughout the U.N. Treaty section. Phase two is now in progress. It will provide treaty document access throughout the entire organization and outside its walls. Liberty IMS is developing an Internet server to place the treaties on the World Wide Web.
Ron Levine is a freelance writer based in Carpinteria, Calif. specializing in networking, storage devices and emerging technology.
<urn:uuid:9ee88e6a-ea5b-4e5c-8a65-5e0e4a5abd6e>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/United-Nations-Puts-Treaties-on-the.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00145-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946053
1,302
2.8125
3
Researchers in Oregon claim to have solved the tricky problem of cloning human stem cells, but you're more likely to see a duplicate of a years-old ethics debate than you are a duplicate human. The breakthrough is one that's been long sought after by biologists: creating perfectly matched human tissues through the process of cell cloning. In the past, researchers have had success with cloning animals through a technique known as somatic-cell nuclear transfer (SCNT), where the nucleus of an unfertilized egg is replaced with the DNA of a donor cell. (That's how Dolly the sheep was born.) The egg can then be turned into an embryo, with DNA that matches the original donor exactly. Later, stem cells from that developing embryo could be harvested and, in theory, be cultured to become almost any type of human cell there is. That would open a huge array of new medical treatments, from curing diabetes to fixing spinal cord injuries to providing rejection-proof organ transplants. In the last decade or so, doctors had mostly turned away from SCNT as a means of producing "patient-specific" embryonic stem cells. They did so for a variety of reasons, but the biggest was that it didn't really work on humans. Researchers in South Korea claimed to have done just that in 2004, but their finding turned out to be a fake. Other attempts created imperfect results or were too expensive or inefficient. As a result, scientists have focused on other methods of attempting to create patient-specific stem cells. The focus now is on "reprogramming" adult cells so they become stem cells again, which has had limited success. (The new cells are called pluripotent cells, or iPS cells.) As one surprised researcher put it, "the most surprising thing [about this paper] is that somebody is still doing human [SCNT] in the era of iPS cells."
<urn:uuid:03cb35fa-f38c-498f-bcbf-5fa690e95920>
CC-MAIN-2017-04
http://www.nextgov.com/health/2013/05/stem-cell-cloning-breakthrough-going-revive-same-old-debate/63197/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz
en
0.976536
382
3.125
3
Natural gas has never been much of an option for US car drivers, and it's going to take a lot of effort by the government and auto manufacturers to make it a viable alternative to gasoline. But that's just what a $10 million program from the Department of Energy's advanced project development group, the Advanced Research Projects Agency - Energy (ARPA-E), aims to start anyway. ARPA-E's Methane Opportunities for Vehicular Energy (MOVE) program wants to develop systems "that could enable natural gas vehicles with on-board storage and at-home refueling with a five-year payback or upfront cost differential of $2,000, which excludes the balance of system and installation costs."
More energy news: 15 cool energy projects
From ARPA-E: "Specific aims include technological advancements in the area of (1) new sorbent materials for low-pressure storage of natural gas and (2) new high-strength, low-cost materials and manufacturing processes for conformable tanks capable of high-pressure (250 bar) natural gas storage. Low-pressure approaches inherently reduce the cost on home refueling; however for high-pressure approaches this program also seeks (3) innovative low-cost, high-performance compressor technology." According to the agency, there are over 13 million natural gas vehicles on the road worldwide but only 120,000 in the United States. But with what the agency termed massive increases in US natural gas reserves over the past decade, there is now an "unprecedented opportunity for advancing the economic, national, and environmental security of the nation. Spurred by technological advances in shale gas production, increased natural gas reserves have led to a decoupling of domestic natural gas with global petroleum prices, and historically low natural gas prices relative to petroleum." "Natural gas vehicles have the highest deployment in regions of the world where governments have artificially altered market conditions to favor natural gas. For example, in most of Europe, compressed natural gas is about $4.00/GGE (gasoline gallon equivalent) less expensive than gasoline due to high gasoline taxes. By contrast, natural gas vehicles in the U.S. must compete with gasoline and diesel vehicles based on commodity market prices. As a consequence, the US currently has limited deployment of natural gas vehicles and in only small, specific market sectors. These include buses and fleet vehicles, in addition to some heavy-duty trucking applications, such as refuse trucks that benefit from both high fuel use and predictable daily routes," ARPA-E says. In terms of refueling infrastructure, the US has five times fewer natural gas refueling stations per natural gas vehicle than nations with widespread adoption of natural gas vehicles. However, a change appears to be on the horizon for heavy-duty, long-haul natural gas trucks as the private sector is beginning to finance CNG (compressed natural gas) and LNG (liquefied natural gas) refueling stations along major highways without the use of public funds, ARPA-E says. By contrast, light-duty natural gas vehicles will still have to compete with a well-established gasoline refueling infrastructure that includes over 118,000 stations nationwide. Furthermore, the current cost of a natural gas refueling station is about $1.6M, compared to about $100k for gasoline. At these costs, a natural gas infrastructure that is equivalent to gasoline could cost over $100 billion and take decades to complete, according to the agency.
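The infrastructure gap ARPA-E describes is easy to quantify from the figures above. The rough arithmetic below is only a sketch using the station count and per-station costs quoted in this article, but it shows why the agency calls the build-out a $100-billion-plus, decades-long problem.

gasoline_stations = 118_000          # existing US gasoline stations cited above
cng_station_cost = 1_600_000         # approximate cost of one natural gas refueling station
gasoline_station_cost = 100_000      # approximate cost of one gasoline station

equivalent_buildout = gasoline_stations * cng_station_cost
print(equivalent_buildout)                        # 188,800,000,000 -- well over $100 billion
print(cng_station_cost / gasoline_station_cost)   # a natural gas station costs about 16x more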
With 65 million US homes using natural gas service, the natural gas light-duty vehicle infrastructure problem could be overcome with at-home natural gas refueling. That requires a new home storage system that would also be developed as part of the MOVE plan. So, would you be willing to jump through the hoops required to employ natural gas as your auto fuel?
<urn:uuid:be22fc87-8812-473f-a6ec-3b36ac0291ae>
CC-MAIN-2017-04
http://www.networkworld.com/article/2221783/data-center/us-wants-natural-gas-as-major-auto-fuel-option.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00265-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947767
794
2.984375
3
In 1965, a year before the first pocket calculator was invented, a young physicist from Silicon Valley, Gordon Moore, made a daring prediction. He claimed that the number of components squeezed onto a single silicon chip would double about every two years. And double, and double and continue to double. If he had been right, the best silicon chips today would contain an unbelievable 100 million single components. The true figure is more like 2 billion: Moore had underestimated how fast the shrinking trend would take off.
<urn:uuid:095ae3e9-4e49-4a70-a927-dafb41f28867>
CC-MAIN-2017-04
https://www.hpcwire.com/2008/12/09/what_happens_when_silicon_can_shrink_no_more/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00293-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968567
99
3.3125
3
Behind every big data breach headline there's an attacker that has social-engineered valid credentials out of someone. People often use the same password for their personal email as they do for their workplace and the various e-commerce sites they log into. If businesses expect to prevent attackers from leveraging valid credentials, they must first start with people taking more precautions with their passwords. I'm sure many of you have seen this, but no recent video illustrates the problem with cybersecurity in the United States better than the one below (it still gives me a laugh). In this video, which is destined to be a classic in user cybersecurity awareness programs, Jimmy Kimmel has a member of his staff go out on the street to "get some passwords." Having or establishing a trust relationship through a personal contact, brand or common activity is the key to getting a person to click on a website or email (or, apparently, give up their user name and password to a random person on the street with a microphone). Unfortunately, attackers know there will always be someone willing to give away their identity as if it has no value. As P.T. Barnum would say, "There's a sucker born every minute." New firewalls, better intrusion detection systems (IDS), anti-virus or next-gen security information and event management (SIEM) systems aren't going to detect an attacker that owns an identity. It's only with a user behavior intelligence solution that an attacker with valid user credentials can be detected based on anomalous activity.
<urn:uuid:41531734-4c7c-49ff-9ec6-b959d489af63>
CC-MAIN-2017-04
https://www.exabeam.com/life-at-exabeam/fixing-stolen-credentials-problem-means-fixing-us-first-video/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962388
322
2.578125
3
Blonder B.,University of Arizona | Blonder B.,Rocky Mountain Biological Laboratory | Buzzard V.,University of Arizona | Simova I.,University of Arizona | And 46 more authors. American Journal of Botany | Year: 2012
• Premise of the Study: Leaf area is a key trait that links plant form, function, and environment. Measures of leaf area can be biased because leaf area is often estimated from dried or fossilized specimens that have shrunk by an unknown amount. We tested the common assumption that this shrinkage is negligible.
• Methods: We measured shrinkage by comparing dry and fresh leaf area in 3401 leaves of 380 temperate and tropical species and used phylogenetic and trait-based approaches to determine predictors of this shrinkage. We also tested the effects of rehydration and simulated fossilization on shrinkage in four species.
• Key Results: We found that dried leaves shrink in area by an average of 22% and a maximum of 82%. Shrinkage in dried leaves can be predicted by multiple morphological traits with a standard deviation of 7.8%. We also found that mud burial, a proxy for compression fossilization, caused negligible shrinkage, and that rehydration, a potential treatment of dried herbarium specimens, eliminated shrinkage.
• Conclusions: Our findings indicate that the amount of shrinkage is driven by variation in leaf area, leaf thickness, evergreenness, and woodiness and can be reversed by rehydration. The amount of shrinkage may also be a useful trait related to ecological and physiological differences in drought tolerance and plant life history. © 2012 Botanical Society of America.
<urn:uuid:5c23aba2-4717-475b-9391-9d55bcf9bcf2>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/miles-exploratory-learning-center-525065/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00017-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918872
341
3.578125
4
Gas analyzers are process analytical instruments designed to determine the qualitative and quantitative composition of elements in a gas mixture. They provide vital data to many manufacturing, processing, and materials research industries. Gas analyzers are typically employed for safety purposes, such as preventing toxic exposure and fire. As these detectors measure a specified gas concentration, the sensor response serves as the reference point of scale. When the sensor's response surpasses a certain pre-set level, an alarm is activated to warn the user. Gas analyzers can be used for trace gas measurement in various processes, with detection limits at levels as low as parts per million (ppm), parts per billion (ppb), and even parts per trillion (ppt). While many older instruments were originally fabricated to detect one gas, modern devices are capable of detecting several gases at once. This research report covers the entire spectrum of gas analyzers used across various industry verticals such as power, chemical, pharmaceutical, semiconductor processing, water & wastewater treatment, and others. Among all the analyzers, catalytic and infrared analyzers are more commonly used to detect combustible gases, while other analyzers such as electrochemical and MOS are installed for the detection of toxic gases. The entire market is segmented by technology into electrochemical, infrared, paramagnetic, zirconia, laser, catalytic, metal oxide semiconductor, photo ionization detectors, and others. The different geographical regions covered in the report include North America, Europe, APAC, and RoW. The report also highlights all the factors currently driving the market as well as restraints and opportunities for the global market. It also profiles all major companies involved in this segment, covering their entire product offerings, financial details, strategies, and recent developments.
<urn:uuid:4fc06181-5b4f-4056-ba07-69f18303b4fa>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market-report/gas-analyzers-reports-5125475481.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00503-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924421
407
2.640625
3
Availability and Alternate Sources of Water
The availability and cost of water are quickly climbing in priority on the site selection checklist as this critical resource becomes more precious. There are certain locations where the municipal water systems are being stressed and you will pay accordingly for a high capacity water tap. One such situation occurred in Ashburn, VA when RagingWire tried to obtain a larger water service for its cooling needs. The cost would have been four million dollars, according to a presentation by James Kennedy, director of critical facility engineering for RagingWire, at the Fall 2012 7×24 Exchange conference in Phoenix. They instead turned to reclaimed water for their cooling needs. In most situations, reclaimed water is of a very high quality and can be obtained at a fraction of the cost of potable water. Researching alternate water sources may not only save you on initial and operating costs, but it may also provide the ability to operate independently from municipal water sources. There are instances such as Google's data center in Hamina, Finland where they use seawater for cooling. There are locations in Nebraska where irrigation wells are available to provide up to 1,500 gallons per minute of water at 56°F. These are just a couple of examples where the alternate source of water not only provides independence from a municipality, but the water temperature enables the reduction or elimination of certain pieces of cooling equipment most data centers would require, thereby saving a great deal of energy. As data centers come under increasing scrutiny for using potable water for cooling purposes, securing a reliable source of water will become more important moving forward and the value of alternate sources of water will only increase. The examples discussed illustrate the complex relationship between energy, water, and sustainability. In my next column, I will discuss this relationship, as well as Water Usage Effectiveness (WUE) and data center hydro-footprint.
Editor’s Note: If you are interested in water conservation projects, visit DCK Coverage on the topic: Data Center Water Use Moves to the Forefront, Google Using Sea Water to Cool Finland Project, Google Boosts Its Water Recycling Efforts, and Google Recycling Water for Atlanta Data Center.
<urn:uuid:9480723c-b3c9-4fd0-b87e-99af705df6b6>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2013/01/03/water-consciousness-hits-the-data-center/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934517
492
2.640625
3
"Convergence" is the word as big data moves further into the cloud and into the reach of small and midsize enterprises. Truth be told, big data is not a big or new The ideology behind big data has been around since the early days of mainframes and scientific computing. What is new about big data is the term itself, which has become part of the nomenclature of today's business speak. Still, for most of its existence, big data has been out of the reach of small and midsize businesses (SMBs) because the storage and processing power needed to make this technology work is too expensive. The cloud is changing that by bringing the necessary big data components to the masses in the form of hosted solutions. These new cloud-based capabilities are on a growth path and are creating more opportunities for even the smallest of businesses to leverage big data without the traditional expenses of compute farms and massive storage Big data analytics comprises three primary elements: volumes of unstructured data, processing power and algorithms. Naturally, the biggest challenge for SMBs is the data itself-finding it, storing it and accessing it. For it to be true big data, there has to be lots of it, and most SMBs don't generate that volume of data internally, which leads them to seek out alternative data sources. Here, the cloud delivers. There are several large public data sets that are readily available, containing all types of information, including data from the U.S. Census Bureau, the World Bank and general public data from Google. Additional data is available from several government agencies, such as Data.gov, while data-focused sites that span everything from Web traffic to social networking can be found in the likes of Crunchbase.com, Kasabi.com, Freebase.com, Infochimps.com and Kaggle.com. These Websites offer a variety of data types for use in analytics. Throughout 2012, those data sets and others can be expected to grow exponentially. The amount of data being generated globally increases by 40 percent a year, according to the McKinsey Global Institute, a data analytics However, data is only part of the equation. All this information needs to be organized, sorted and processed, and that takes computing power. Once again, cloud services can deliver those capabilities. A key example is Amazon's Cluster Compute, a cloud-based supercomputer that offers this service. Amazon isn't the only one in the game: Companies such as IBM and Hewlett-Packard are offering private cloud-based big data analytics platforms. However, since this technology is designed as a complete platform and not as a service, these platforms are still out of the reach of the SMB market. Other companies are looking to fill that void by offering on-demand analytic solutions that can process big data and deliver results quickly and inexpensively. A case in point is Aster Data, which offers a cloud-based, on-demand analytics platform, along with appliance-based and software analytics products. Another company looking to bring big data analytics into the cloud is 1010Data, which has developed a completely hosted big data analytics platform. Still other firms are developing the momentum to convert big data analytics into cloud services. The most notable of these ventures is Splunk, which is known for software that analyzes large volumes of machine data. The company is currently working on Splunk Storm, a data analytics platform designed for cloud developers to build multitenant solutions. 
That way, the high costs of big data analytics can be spread out among multiple customers, creating an economy of scale that will increase in affordability over time.
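The three elements named above (data, processing power and algorithms) can be illustrated with a minimal sketch: pull a public CSV over HTTP and run a simple aggregation on it. The URL and column names below are placeholders for illustration only, not references to any specific data set or service mentioned in the article.

import csv
import io
import urllib.request
from collections import defaultdict

# Hypothetical public dataset URL (placeholder, not a real endpoint).
DATASET_URL = "https://example.gov/open-data/business_counts.csv"

def fetch_rows(url):
    """Download a CSV file and return its rows as dictionaries (the data)."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def totals_by_state(rows, state_col="state", value_col="business_count"):
    """Aggregate a numeric column by state (the processing and the algorithm)."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[state_col]] += int(row[value_col])
    return dict(totals)

if __name__ == "__main__":
    rows = fetch_rows(DATASET_URL)
    summary = totals_by_state(rows)
    for state, count in sorted(summary.items()):
        print(state, count)

In practice an SMB would point this kind of script at a hosted data set or analytics service rather than running it locally, which is exactly the shift toward cloud-delivered big data the article describes.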
ip address 188.8.131.52 255.255.255.0
ip nat outside
ip address 10.1.1.1 255.255.255.0
ip nat inside
access-list 1 permit 10.1.1.0 0.0.0.255

Two PCs on the network have addresses 10.1.1.5 and 10.1.1.10. Which command, when added to the router’s configuration, causes the PCs’ addresses to be translated to the router’s s0/0/0 address?
- ip nat inside source list 1 184.108.40.206 overload
- ip nat inside source list 1 interface s0/0/0 overload
- ip nat inside source list 1 interface s0/0/0
- ip nat inside source 1 interface s0/0/0 overload

The correct answer is 2. The “ip nat inside source list 1 interface s0/0/0 overload” command specifies that source addresses permitted by access list 1 will be translated to the address of interface s0/0/0. The “overload” keyword indicates that PAT is used, so that one outside address represents many inside addresses, using the port number to distinguish between the translations. For more questions like these, try our CCNA Cert Check.
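To make the “overload” behavior concrete, here is a small Python sketch of the translation table a PAT device maintains: many inside address/port pairs map to a single outside address, with unique translated ports distinguishing the flows. It is a toy model of the concept only, not a description of IOS internals; the port numbers are illustrative assumptions.

# Minimal sketch of PAT (NAT overload): many inside hosts share one
# outside address, distinguished by translated source ports.

class PatTable:
    def __init__(self, outside_ip, first_port=1024):
        self.outside_ip = outside_ip
        self.next_port = first_port
        # (inside_ip, inside_port) -> (outside_ip, outside_port)
        self.translations = {}

    def translate(self, inside_ip, inside_port):
        """Return the translated (outside_ip, outside_port) for an inside flow."""
        key = (inside_ip, inside_port)
        if key not in self.translations:
            self.translations[key] = (self.outside_ip, self.next_port)
            self.next_port += 1
        return self.translations[key]

# The two inside PCs from the question share the router's outside address.
pat = PatTable("188.8.131.52")
print(pat.translate("10.1.1.5", 51000))   # ('188.8.131.52', 1024)
print(pat.translate("10.1.1.10", 51000))  # ('188.8.131.52', 1025)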
Technology: Can IT Help Utilities Lower Commercial Energy Costs?
By Debra D'Agostino | Posted 07-17-2006

You know there's an energy crisis when even the energy companies are turning to alternative energy. Xcel Energy Inc., a Minneapolis-based energy firm, runs a healthy business, with more than 3.3 million electricity and 1.8 million natural gas customers in 10 states, and revenues of $10 billion annually. But about a year ago, company executives found themselves facing a troika of problems: increasing energy demands, decreasing resources, and a little issue called global warming. "We realized we had an obligation to behave in a manner that balances shareholder, regulatory and community priorities," says Michael Carlson, Xcel's vice president and CIO.

So Xcel came up with a plan: use technology to identify which customers would be candidates for alternative energy. In partnership with the National Renewable Energy Laboratory (NREL), which is funded by the U.S. Department of Energy, "we started applying some modeling technologies that combine NREL's weather and environmental data with our grid generation and consumption data," Carlson says. The modeling tool, called the Renewable Planning Model, is being used to determine exactly which kinds of alternative energy are best suited for specific customers. By using NREL's satellite terrain imagery to determine solar irradiation on rooftops, for example, Xcel can determine where, and how strongly, the sun shines on Xcel customers. With that data, the company can calculate where solar panels should be placed, how large they need to be to generate the most power, and how that power generation might affect Xcel's own energy grid.

On the business side of the equation, Xcel is getting help from such technology companies as IBM Corp. and Siemens AG to incorporate alternative energy into the company's everyday business processes. "Utilities aren't known for their large R&D budgets," Carlson says, "so doing something like this requires collaboration. Thankfully, companies (our consumers as well as solution providers) are all showing an enhanced interest in this." IBM, for example, is helping Xcel create energy delivery and real-time pricing models for the company's renewable programs. Ultimately, the goal is to create what Carlson calls a "smart" grid: an electricity system that incorporates renewable energy and distributes electricity based on real-time demand, instead of using less accurate historical estimates, as most grids are run today. That will allow Xcel to charge more for, say, running your dishwasher during peak hours rather than in the late evening.

There's still much to be determined, including who will pay for the alternative energy infrastructure and when it will be available. But Carlson is bullish. "We firmly believe we can do this," he says. "And technology is the key to making it happen."
For the Internet to make use of the advantages of IPv6 over IPv4, most hosts will eventually need to deploy this protocol. While many individuals look forward to the full deployment of IPv6, the transition to IPv6 doesn't mean the networking world will somehow be totally secure. This was made clear by Arbor Networks' recent report of the first IPv6 DDoS attacks against their networks. This is a clear paradigm shift, since just a few years ago there were hardly more than a few thousand IPv6 systems connected to the Internet. That has changed, and as more and more users transition to IPv6, the threat of new network attacks will grow with it.

IPv6 offers many improvements over IPv4 and has built-in support for IPsec. However, that does not mean that attackers cannot find new and interesting ways to target the protocol. Just consider how last year a vulnerability was discovered in the IPv6 neighbor discovery protocol that allows a nearby attacker to intercept traffic or cause congested links to become overloaded. It's important to keep in mind that much of the work on IPv6 was done in the 1990s, before security had become the driving concern it is today.

While moving to IPv6 does offer many advantages, it will not ease the burden of the security professional. IPv6 faces a number of attack strategies that are very different from those used against IPv4. Proactive organizations will continue to need IT security specialists who understand the protocols and how they may be misused in new and emerging threats.
By Steve Baca, VCP, VCI, VCAP, Global Knowledge Instructor

Virtualization is an umbrella term that continues to evolve to include many different types that are used in many different ways in production environments. Originally, virtualization was done by writing software and firmware code for physical equipment so that the physical equipment could run multiple jobs at once. With the success of VMware and its virtualization of x86 hardware, the term virtualization has grown to include not just virtualizing servers, but whole new areas of IT. This article is going to look at the origins of virtualization and how some of the historical development has spurred on today's virtualization. In addition, we will discuss different types of virtualization that are being utilized in the marketplace today and list some of the leading vendors.

In general, the idea behind virtualization is to make many from one. As an example, from one physical server using virtualization software, multiple virtual machines can run as if each virtual machine were a separate physical box. In data centers, before virtualization, one or more applications and an operating system would run on their own unique physical server. Since each one of those physical servers needed floor or rack space, businesses faced a growing problem with the size and number of data centers they needed. Using virtualization to consolidate the number of physical servers reversed this trend of sprawl, and companies began to see cost savings.

From the system administrator's point of view, another reason to virtualize is the ability to quickly add more virtual machines as needed, without having to purchase new physical servers. The delay in obtaining new servers varies widely with each company and, in some environments, could be quite lengthy. With virtualization, the length of this process can be greatly reduced because the physical server is already up and running in production. The system administrator can quickly create a brand new virtual machine by adding the virtual machine to an existing physical host. Thus, you can run many virtual machines on one physical server.

A third reason to virtualize is better resource utilization. Before virtualization, it was not unusual to see a physical server using less than five percent, or even ten percent, of its CPU and/or memory. As an example, consider the case where a physical server was purchased to run an application that only runs during the evening. When the application is not processing, such as in the morning or afternoon, the physical box is sitting idle, which is a tremendous waste of resources. Virtualizing that application leaves its virtual machine free to run on the same server with other virtual machines that utilize resources during the morning or afternoon. The virtual machines will balance each other's resource usage: since one virtual machine's application will run during the day, and the other virtual machine's application will process at night, the physical server will better utilize its resources. Resources such as memory and CPU on a server can be safely driven to 75 to 80 percent utilization on a continuous basis by multiple virtual machines, with server-side virtualization from vendors such as VMware. The advantage is that resource utilization will be far more efficient with virtualization than if the applications ran on individual physical servers.
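As a rough illustration of the consolidation argument, the sketch below estimates how many lightly loaded physical servers could be folded onto one virtualization host while staying under an 80 percent utilization ceiling. The figures and the simple additive model are illustrative assumptions, not measurements from any product named in this article.

def servers_consolidated(avg_utilization_pct, target_ceiling_pct=80):
    """Estimate how many lightly loaded servers one host can absorb,
    assuming workloads are spread across the day and simply additive."""
    return target_ceiling_pct // avg_utilization_pct

# Example: servers idling at roughly 5-10% CPU, as described above.
for util in (5, 10):
    print(f"At {util}% average utilization, one host could absorb "
          f"roughly {servers_consolidated(util)} such workloads.")

Real capacity planning would also account for memory, storage and peak overlap, but even this crude arithmetic shows why consolidation ratios of ten or more physical servers per host were common early wins.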
A fourth reason to use virtualization is that it can utilize new features that create a more reliable environment. As an example, VMware offers a feature called High Availability (HA). This feature is used when a physical server fails. After HA has determined that the physical server is down, it can restart the virtual machines on surviving servers. Therefore, an application will experience less downtime using HA as an automated approach to physical server failure. Other vendors have their own features written into their code that offer different forms of reliability as well. These are a few of the reasons to virtualize, and there are certainly more. Now, let's turn to the beginning of virtualization.

Origins of Virtualization

The origins of virtualization began with a paper on time-shared computers presented by C. Strachey at the June 1959 UNESCO Information Processing Conference. Time-sharing was a new idea, and Professor Strachey was the first to publish on the topic that would lead to virtualization. After this conference, new research was done, and several more research papers on the topic of time-sharing began to appear. These papers energized a small group of programmers at the Massachusetts Institute of Technology (MIT) to begin developing the Compatible Time-Sharing System (CTSS).

From these first attempts at time-sharing systems, virtualization was pioneered in the early 1960s by IBM, General Electric, and other companies attempting to solve several problems. The main problem that IBM wanted to solve was that each new system it introduced was incompatible with previous systems. IBM's president, T.J. Watson, Jr., had given an IBM 704 for use by MIT and other New England schools in the 1950s. Then, each time IBM built a newer, bigger processor, the system had to be upgraded, and customers were continuously being retrained whenever a new system was introduced. To solve this problem, IBM designed its new S/360 mainframe system to be backwards-compatible, but it was still a single-user system running batch jobs.

At this time, MIT and Bell Labs were requesting time-sharing systems to solve their problem of many programmers and very few systems to run their programs. Thus, IBM developed the System/360-40 (CP-40 mainframe) for their lab to test time-sharing. This first system, the CP-40, eventually evolved into the development and public release of the first commercial mainframe to support virtualization, the System/360-67 (CP-67 mainframe), in 1968. The new CP-67 contained a 32-bit CPU with virtual memory hardware. The CP-67 mainframe's operating system was named Control Program/Console Monitor System (CP/CMS). This early hypervisor gave each mainframe user a Console Monitor System (CMS), essentially a single-user operating system, which did not have to be complex because it was only supporting one user. The hypervisor, the Control Program, provided the resources and handled the time-sharing capabilities, allocation, and protection, while CMS supplied each user's working environment. CP-67 enabled memory sharing across virtual machines while giving each user their own virtual memory. Thus, the CP operating system's approach provided each user with an operating system at the machine instruction level. Virtualization continues to be used on the mainframe system even today, but it took nearly two decades before virtualization would become heavily used outside of the mainframe world.
Although IBM had provided a blueprint for virtualization, the client-server model that took over from the mainframe was inexpensive and not powerful enough to run multiple operating systems. These limitations meant that the new client-server systems could not support virtualization, and the idea of virtualization would disappear for many years. Eventually, hardware performance increased to a point where significant savings could be realized by virtualizing x86. The concepts of virtualization that were developed on the mainframe were eventually ported over to x86 servers by VMware in 1998, and a new era of virtualization began.

Types and Major Players in Virtualization

Although some form of virtualization has been around since the mid-1960s, it has evolved over time, while remaining close to its roots. Much of the evolution in virtualization has occurred in just the last few years, with new types being developed and commercialized. It can be difficult to restrict the types of virtualization to just a few areas with the release of so many different types and no true standard definition. Therefore, the definition of virtualization can be limited "to make many from one," and also limited to the most popular types of virtualization that are used in business today. For the purposes of this article, the different types of virtualization are confined to Desktop Virtualization, Application Virtualization, Server Virtualization, Storage Virtualization, and Network Virtualization.

The virtualization of the desktop, which sometimes is referred to as Virtual Desktop Infrastructure (VDI), is where a desktop operating system, such as Windows 7, will run as a virtual machine on a physical server with other virtual desktops. The processing of multiple virtual desktops occurs on one or a few physical servers, typically at the centralized data center. The copy of the OS and applications that each end user utilizes will typically be cached in memory as one image on the physical server.

If you go back to the IBM mainframe era, each user would use the mainframe to do the centralized processing for their terminal session, so the user's environment consisted of a monitor and a keyboard with all of the processing happening back on the centralized mainframe. The monitor was not in color, which meant programs that used color graphics were not available on a terminal connected to a mainframe. However, in the 1990s, IT started to migrate to the inexpensive desktop system where each user would have a physical computer. The PC would consist of a color monitor, keyboard, and mouse, with much of the processing and the operating system running locally, using the physical desktop's central processing unit (CPU) and physical random access memory (RAM) instead of using the centralized mainframe to do the processing.

In today's VDI marketplace, there are two dominant vendors, VMware Horizon View and Citrix XenDesktop, vying to become the leader in the desktop virtualization marketplace. Both vendors have the ability to project graphic displays with rapid response from the centralized data center. The desktops also come with a mouse, and both solutions make the end user's experience feel as if the remote desktop were local. Thus, the performance of the remote desktop and how the end user accesses their applications should be no different than if they were using a physical desktop.
VMware Horizon View and Citrix XenDesktop each have a strong footprint and are the most-utilized choices for desktop virtualization in business today.

Application virtualization uses software to package an application into a "single executable and run anywhere" type of application. The software application is separated from the operating system and runs in what is referred to as a "sandbox." Virtualizing the application allows things like the registry and configuration changes to appear to run in the underlying operating system, although they really are running in the sandbox. There are two types of application virtualization: remote applications and streamed applications. A remote application runs on a server, and the client uses some type of remote display protocol to communicate back to the client machine. Since a large number of system administrators and users have experience running applications remotely, it can be fairly easy to set up remote displays for applications. With a streaming application, you can run one copy of the application on the server, and then have many client desktops access and run the streaming application locally. By streaming the application, the upgrade process is easier, since you just set up another streaming application with the new version and have the end users point to the new version of the application. Some of the application virtualization products in the marketplace are Citrix XenApp, Novell ZENworks Application Virtualization, and VMware ThinApp.

Server virtualization allows many virtual machines to run on one physical server. The virtual servers share the resources of the physical server, which leads to better utilization of the physical server's resources. The resources that the virtual machines share are CPU, memory, storage, and networking. All of these resources are provided to the virtual machines through the hypervisor of the physical server. The hypervisor is the operating system and software that operate on the physical box. Each virtual machine runs independently of the other virtual machines on the same box. The virtual machines can have different operating systems and are isolated from each other. Server virtualization offers a way to consolidate applications that used to run on individual physical servers; with hypervisor software, they now run on the same physical server as separate virtual machines. Server virtualization is what most people think of when they think of virtualization, due to VMware's vSphere, which has a large percentage of the marketplace. In addition, some of the other vendors are Citrix XenServer, Microsoft's Hyper-V, and Red Hat's Enterprise Virtualization.

Storage virtualization is the process of grouping physical storage using software to represent what appears to be a single storage device in a virtual format. Correlations can be made between storage virtualization and traditional virtual machines, since both take physical hardware and resources and abstract access to them. There is a difference between a traditional virtual machine and virtual storage, however. The virtual machine is a set of files, while virtual storage is created using software and typically runs in memory on the storage controller. A form of storage virtualization has been incorporated into storage features for many years. Features such as snapshots and RAID take physical disks and present them in a virtual format.
These features can provide a format to help with performance or add redundancy to the storage that is presented to the host as a volume. The host sees the volume as a big disk, which fits the description of storage virtualization. The storage array vendors have implemented storage virtualization within the operating system of their respective arrays. This type of storage virtualization is called internal storage virtualization. In addition, there is external storage virtualization, which is implemented by Veritas and many other storage vendors.

Network virtualization uses software to perform network functionality by decoupling the virtual networks from the underlying network hardware. Once you start using network virtualization, the physical network is only used for packet forwarding, so all of the management is done using the virtual or software-based switches. When VMware's ESX server grew in popularity, it included a virtual switch that allowed enough network management and data transfer to happen inside of the ESX host. This paradigm shift caught the eye of Cisco, so when VMware was upgrading to vSphere 4.0, Cisco helped to write the code for VMware's new Distributed Switch. This helped Cisco learn how to design and work with network virtualization, and an internal movement was started to make Cisco's switches software-based administrative entities. The network virtualization marketplace is really in its infancy, with many startups and options to choose from at this time. Cisco and many startup companies are vying for control in this area of virtualization, which has huge potential. The most common offerings in network virtualization are the hypervisors' internal virtual switches. In addition, third-party vendors, such as Cisco and IBM, have developed virtual switches that can be used by hypervisors such as ESXi.

The reasons to virtualize might have begun with saving money, but there are other good reasons to virtualize, such as better resource utilization and the ability to quickly add new virtual machines. Fortunately, the ability to save money makes it easier to get approval for virtualization. The reasons for virtualizing have increased since IBM began to incorporate the concept into the mainframe system in the 1960s. Once the client-server age began, there was a period of time when virtualization was not utilized outside of the mainframe systems. Eventually, the need for virtualization made it a viable solution again. VMware started the rise in popularity of virtualization by virtualizing the server. As server virtualization grew in popularity, other IT areas were also seen as virtualization possibilities, such as virtualizing the desktop.
As seems to be my wont at the moment, today I have another disaster-oriented column for you. Remember the recent meteor event in Russia? On February 15 this year a rock estimated to be 50 meters wide and weighing an estimated 10,000 tons entered the atmosphere over the city of Chelyabinsk at a speed of around 40,000 mph and exploded. The shock wave was estimated as having the power of 50 kilotons of TNT and injured about 1,500 people. It also rather pointedly raised the question of how well prepared we are for dealing with meteors.

The answer is that we really aren't capable of doing much at all (although we have lots of ideas that probably aren't practical). For a start, we really don't know what's out there. While there are some pretty impressive programs for finding and identifying asteroids, for example, NASA's Near Earth Object Program, none are funded nearly well enough given that these chunks of rock could cause immense damage to our planet, up to and including wiping out life on earth. While forewarned can be forearmed, when it comes to asteroids, even if we knew that some piece of space debris was heading our way, other than trying to evacuate people in its path (imagine trying to get people out of New York or, worse still, Mumbai), the current reality is that we'd be pretty much out of luck.

To get a better handle on the degree of risk we face from meteors, it's worth looking back to see what kind of (if you'll excuse the pun) impact they've had on the earth in the past. A new mashup using data on known significant meteor strikes over the last 100 years does just that. Fireball from Outer Space by Sebastian Sadowski plots 606 eye-witnessed events worldwide, which you can filter by country and date.

The Fireball historical meteor strike visualization

The biggest meteors over the whole time period covered by Fireball are Sikhote-Alin, 23 metric tons (Russia, 1947), Jilin, 4 metric tons (China, 1976), Allende, 2 metric tons (Mexico, 1969), and Norton County, 1.1 metric tons (United States, 1948). But none of those compare to the 1908 Tunguska event, which was not witnessed by anyone as it occurred near the desolate Podkamennaya Tunguska River in what is now Krasnoyarsk Krai, Russia. The reason we know about the Tunguska event is the incredible damage it caused (770 square miles of forest was flattened), and it's estimated that the blast was equivalent to between 10 and 15 megatons of TNT (that's about 40 percent of the blast of the Tsar Bomba I discussed in my recent post What would a nuclear blast do to my town? concerning another mashup that simulates nuclear bomb explosions).

The aftermath of the Tunguska event

Another excellent meteor visualization is 500 Years of Witnessed Meteors by Adam Pearce. As its name suggests, it covers five centuries of observed meteor strikes. This interactive presentation actually has photos of the meteors (or fragments thereof).

500 Years of Witnessed Meteors visualization

Fireball and 500 Years of Witnessed Meteors are beautiful pieces of work that were created as entries to visualizing.org's Visualizing Meteorites competition, and the actual winner of the competition was Macrometeorites by Roxana Torre. Macrometeorites is stunning and its timeline animation is way cool. Keep in mind that the visualization starts from 1399, so the apparent speed-up of witnessed meteor events is due to better communications and a growing population, and not to an increase in the rate of meteor strikes.
After playing with these visualizations you might conclude that meteor strikes of any significant size are pretty rare events (from a total of more than 45,700 recorded meteorite landings, only around 3,800 have a mass larger than 1 kg), and you'd be right as long as you're talking about those that are actually witnessed by people. If you take a more realistic view you'll realize that there are huge tracts of the earth's land where nobody lives (depending on who you believe, that's roughly 90% of the planet), so the apparent clustering you'll note is undoubtedly related to population density. So, let's assume that because of this clustering something like 50% of land impacts are not witnessed ... that makes the total of significant strikes for the last century around 1,200. But wait ... there are also the oceans, where impacts undoubtedly happen with the same frequency as land events. Given that oceans cover 70% of the earth's surface, it would seem reasonable that more than twice as many events occurred at sea as on land over the last hundred years, making the real total at least 4,000 significant events. The risk of significant meteor impact events may be much higher than is generally appreciated.
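That back-of-the-envelope estimate can be written out explicitly. The sketch below follows the assumptions stated above (about 600 witnessed events in a century, half of land impacts unwitnessed, oceans covering 70 percent of the surface) and reproduces the "at least 4,000" figure; the inputs are the article's assumptions, not independent data.

# Back-of-the-envelope estimate of significant meteor events per century,
# using the assumptions stated in the article.

witnessed_land_events = 606        # eyewitnessed events in the Fireball data set
unwitnessed_land_fraction = 0.5    # assume half of land impacts go unseen
ocean_fraction = 0.7               # oceans cover ~70% of the earth's surface

land_events = witnessed_land_events / (1 - unwitnessed_land_fraction)
total_events = land_events / (1 - ocean_fraction)

print(f"Estimated land events per century: {land_events:.0f}")     # ~1,200
print(f"Estimated total events per century: {total_events:.0f}")   # ~4,000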
USGS to merge National Atlas and National Map programs

The National Atlas of the United States and the National Map will be combined into a single source for geospatial and cartographic information, the U.S. Geological Survey announced. The purpose of the merger is to streamline access to information from the USGS National Geospatial Program, which will help set priorities for its civilian mapping role and "consolidate core investments," the USGS said.

The National Atlas provides a map-like view of the geospatial and geostatistical data collected for the United States. It was designed to "enhance and extend our geographic knowledge and understanding and to foster national self-awareness," according to its website. It will be removed from service on Sept. 30, 2014, as part of the conversion. Some of the products and services from the National Atlas will continue to be available from The National Map, while others will not, the agency said. A National Atlas transition page is online and will be updated with the latest news on the continuation or disposition of National Atlas products and services.

The National Map is designed to improve and deliver topographic information for the United States. It has been used for scientific analysis and emergency response, according to the USGS website. In an effort to make the transition an easy one, the agency said it would post updates to the National Map and National Atlas websites during the conversion, including the "availability of the products and services currently delivered by nationalatlas.gov."

"We recognize how important it is for citizens to have access to the cartographic and geographic information of our nation," said National Geospatial Program Director Mark DeMulder. "We are committed to providing that access through nationalmap.gov."

Posted by Mike Cipriano on Mar 14, 2014 at 8:56 AM
What is ITIL?

ITIL (short for Information Technology Infrastructure Library) is a widely accepted series of five books that outline best practices for the management of IT services. It is praised for offering a simple and practical framework for the alignment of IT services with the larger needs of a business or organization. ITIL is recognized for its ability to bring smart improvements to any IT service management process. Originally offered by the British government, the ITIL approach is now the cornerstone of IT management in organizations ranging from Disney and HSBC to NASA and Britain’s National Health Service.

The ITIL Core library covers five key stages in the IT service management lifecycle: Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement. The guidance offered by the library is not specific to any organization or industry; rather, it provides a baseline from which any sort of IT initiatives can be planned, implemented and evaluated.

What is the ITIL Foundation Certification?

The Foundation Certification is the entry-level ITIL qualification. It offers candidates a general awareness of the key elements, concepts and terminology used in the ITIL service lifecycle. It is a prerequisite for all subsequent ITIL certifications.

What is the format for the ITIL Foundation exam?

The exam is a one-hour, multiple-choice test. It’s a closed-book exam, with no electronic devices allowed. There are 40 questions to the exam, and each is worth one mark. A passing grade is 26 marks, or 65%.

What are the ITIL Certification Levels?

The ITIL Foundation Certification is the starting point for any ITIL learning path. Once that entry-level qualification is achieved, learners can proceed to four subsequent levels of certification: The diagram below shows the five main levels of the ITIL Certification path, along with the options and opportunities along that path.

Why is ITIL so popular?

The ITIL approach to IT service management offers a number of significant benefits. Because ITIL was not designed for any specific industry or organization, it offers a welcome flexibility in terms of how and where it is used. And because ITIL is based on its authors’ practical experience (as opposed to academic theory), it is immediately useful in real-world situations.

ITIL is noted for its ability to promote communication and clarity between IT and the business it serves. With ITIL, roles and responsibilities are clearly defined, customer value is measured at every stage, and organizations are prompted to be proactive rather than reactive. The vagueness of traditional communications between IT and business is simply not allowed under ITIL.

ITIL is also lauded for integrating financial management throughout the service lifecycle. Under ITIL, costs are planned and controlled, and can be justified at any stage. And ITIL increases customer satisfaction in ways that technical excellence alone cannot. Thanks to ITIL’s constant measurements and improvements, quality assurance does not end with the customer acceptance test, but continues throughout the service lifecycle. This emphasis on an ongoing relationship contributes to ITIL’s reputation for increased customer satisfaction.

Simply stated, ITIL offers several key benefits:
- Improved IT services
- Reduced costs
- Improved customer satisfaction
- Improved productivity
- Improved use of skills and experience
- Improved delivery of third-party services
A leading UK professor expects a battle to ensue for the future of travel…with driverless cars leading the way.

Professor John Miles, of the department of engineering at the University of Cambridge, was speaking at the Internet of Things Forum in Cambridge yesterday, where he gave a detailed overview of the opportunity for driverless cars as we move to a future likely to be dominated by the shared economy. In his presentation, entitled “The near future for connected transport…from self-driving cars to the Hyperloop”, the professor outlined the opportunity for cars to do more than they currently do, saying that increasing the capacity and complexity of cars could lead to less traffic and smarter travel.

He said that there are currently 224,000 miles of UK road network, but only 10,000 miles of railway network, something he says shows that roads remain a “powerful, existing asset”.

“Maybe we’re too quick to rubbish the car…maybe we should observe what we’ve got here, and ask ourselves if we should be concentrating on making them even better.”

And despite many ‘demonizing’ the car over continually congested roads and high emissions, he said it is invariably cheaper than competitors like rail and bus, both of which have faults when it comes to capacity, usage and cost. For example, he compared the M1 with the corresponding railway lines and found that the car would cost £30 million per mile in each direction on the M1, while the railway would cost £50m per mile in the same area. Furthermore, cars and railways deliver roughly the same number of people (9,000-10,000) in this area, while Miles says that rail upgrades can be expensive.

Miles instead pushes for more ‘headroom’ on the strategic road network. He says that while capacity is an issue (roads can’t be built quickly enough), there is more to be done to reduce minor incidents and increase lane occupancy. He believes that current minor incidents (80 percent of which are apparently caused by driver inattention) account for around 30 percent of congestion on all roads. He adds that having cars travel closer together could ultimately lead to four lanes rather than three, representing a capacity increase of 33 percent.

“If we could increase lane occupancy, we could increase the number of people moving down those roads, without any increase in [financial] output.”

“What we need to do is to fill the vehicles we have, not just have big empty vehicles driving around because they are deemed to be ‘good’. What we need is a scalable bus…but if I was being cynical I’d say that this is the car.”

He believes in on-demand systems – perhaps part of the shared economy trail-blazed by Uber and Airbnb – and urges us to move away from ‘yesterday’s thinking’ of fixed travel for a fixed group at a fixed moment in time. The future, he says, is all about spontaneous on-demand booking services, cloud-based booking and billing, and vehicles which still provide a comfortable and reliable journey that is guaranteed to arrive within a set time-frame. He says driverless cars are the driver for this and uses his previous example of the M1 to show that the car-based model should in future be able to deliver “six or seven times the amount we can do on the train.”

“This is why we should be interested in self-driving vehicles; it’s a very big step forward.”

Driverless cars improve road capacity

This capacity matching is already being pushed by the UK government and a number of academics.
For example, he describes L-SATS as perhaps the closest thing to ‘last mile’ automated devices, with these currently being tested in Milton Keynes and Cambridge too. There are other examples of driverless cars and other automated vehicles; the Bullet is an electric driverless 120mph vehicle where vehicles couple together (not wholly dissimilar to those imagined in Tom Cruise’s Total Recall), while the Mercedes F015 Luxury in Motion concept car was seen by IoB at MWC.

“This is all about convenience and facility for the user, and it’s provided by the self-driving car. It’s a whole new dimension to travel, a dimension where we don’t mind being stuck in traffic because we’ve got better things to do. And most of the time we’re not stuck in traffic because the roads are optimised.”

Yet he later suggested, after a question from the audience, that self-driving cars will also always have manual modes for the person to take over. “You don’t need to force anybody to do anything; if you want to drive your car you will be able to.”

Yet he tempered his praise for driverless cars by suggesting that they could well face a battle against Elon Musk’s (and Tesla’s) next great invention – the Hyperloop, a 700mph subsonic train that is aiming to take passengers from London to Birmingham in 12 minutes. The Hyperloop is in essence a futuristic train that Musk calls “a cross between a Concorde, a railgun and an air hockey table”. It’s based on the very high speed transit (VHST) system proposed in 1972, which combines a magnetic levitation train and a low-pressure transit tube. Musk has likened it to a vacuum tube system in a building used to move documents from place to place. Musk has previously said that all Tesla cars will be autonomous by 2018.

“Hyperloop is a fantastic idea; we’ve done some work on it,” added Miles.
Environment control systems can help prevent E. coli outbreaks
Thursday, Jun 20th 2013

Frozen pork and beef stocks in the United States reached all-time highs at the end of May, about 19 percent above their five-year averages, Dow Jones Newswires reported. With pork and beef stocks in cold storage at record levels, environmental monitoring is paramount in ensuring the meat is optimally stored and safe conditions are maintained.

One recent recall of ground beef products highlights the potential problems that can arise from improper storage and ineffective environmental monitoring. The U.S. Food Safety and Inspection Service recently issued a Class 1 recall for 22,737 pounds of beef that are feared to have been contaminated with E. coli. The cases of contaminated beef originated from National Beef Packing Co. in Liberal, Kansas, and include a variety of ground beef products:
- 10 pound packages of "National Beef" 80/20 Coarse Ground Chuck with package code "0481"
- 10 pound packages of "National Beef" 91/19 Coarse Ground Beef with package code "0421"
- 10 pound packages of "National Beef" 80/20 Fine Ground Chuck with package code "0484"

According to the report, these recalled beef products were produced on May 25 and have a sell by/use by date of June 14, 2013. While there have been no known deaths or illnesses reported that are associated with the contaminated beef products, it is important to now take extra precautions, as E. coli infections can have very serious consequences. While infection symptoms generally tend to last only a week and cause no future problems, this is not always the case. In some cases of E. coli infection, people have developed severe blood and kidney problems within two weeks of the onset of symptoms like diarrhea, stomach cramps, nausea and vomiting. In rare cases, these complications can lead to kidney failure, long-term disability and even death, especially for children and older adults who are more susceptible to infection.

Utilizing environmental control systems to combat bacterial growth

The recall of the beef products is an all too common scenario, and both meat producers and consumers need to use controls like a temperature monitor to ensure that meat and other food products are stored in an environment that is not conducive to bacterial growth. "FSIS advises all consumers to safely prepare their raw meat products, including fresh and frozen, and only consume ground beef that has been cooked to a temperature of 160° F," FSIS stated in its recall announcement. "The only way to confirm that ground beef is cooked to a temperature high enough to kill harmful bacteria is to use a food thermometer that measures internal temperature."

There are numerous other best practices for safe beef consumption, including thoroughly washing hands both before and after handling raw meat. While this is a smart preventative measure for people to take, temperature plays a particularly important role in the growth of pathogens, and proper temperature monitoring should be implemented from the initial production phase through various storage facilities to consumption. When preparing meat to be cooked, the FSIS noted that using hot or warm water is especially helpful for dissolving fats or foods, which can also make it easier to deactivate existing pathogens on meat during the cleaning process. After the meat is cooked, use a temperature sensor or thermometer to test that the beef has been cooked fully and reached an internal temperature of 160 degrees Fahrenheit.
Any leftover, cooked meat from a meal will also need to be refrigerated within two hours of being cooked.
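A minimal sketch of the kind of threshold check an environmental monitoring system performs on cold-storage readings is shown below: each sensor reading is compared against a safe ceiling and flagged if it drifts out of range. The thresholds, sensor names and readings are illustrative assumptions, not a specific product's interface.

# Illustrative cold-storage temperature check; thresholds are assumptions
# (roughly 0°F for frozen storage, 40°F for refrigerated storage).

FREEZER_MAX_F = 0.0
COOLER_MAX_F = 40.0

def check_reading(location, temp_f, max_f):
    """Return an alert string if a reading exceeds its safe ceiling."""
    if temp_f > max_f:
        return f"ALERT: {location} at {temp_f:.1f}°F exceeds {max_f:.1f}°F limit"
    return f"OK: {location} at {temp_f:.1f}°F"

readings = [("freezer-1", -2.5, FREEZER_MAX_F),
            ("freezer-2", 4.0, FREEZER_MAX_F),
            ("cooler-1", 38.2, COOLER_MAX_F)]

for location, temp_f, limit in readings:
    print(check_reading(location, temp_f, limit))

A real monitoring deployment would sample sensors continuously and send notifications rather than printing, but the core logic of comparing readings against storage thresholds is the same.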
Hackers, malware writers and attackers use a variety of methods, sophisticated techniques and malware vectors to spread their malicious programs. They rely heavily on social engineering in order to infect computers. Spam emails are used by attackers in an attempt to trick the user into opening the email and clicking on links within it or opening a malicious email attachment. Attackers have been known to use exploit packs in order to craft Web pages that exploit vulnerabilities in system and application software and spread the threat in drive-by downloads.

Hackers and malware writers come from different age groups, backgrounds, countries, education and skill levels...with varying motivations and intents. Most malware writers and cyber-criminals today treat it as a business venture for financial gain, while "script kiddies" typically do it for the thrill and to boost their reputation as a hacker among their peers. Below are a few articles which attempt to explain who these individuals are and why they do what they do.
- Who is Making All This Malware — and Why?
- Who creates malware and why?
- Who Writes Malicious Programs and Why
- What goes through the minds of hackers?
- Why do people write viruses?
- Meet The Hackers Who Sell Spies The Tools To Crack Your PC (And Get Paid Six-Figure Fees)
- What Makes Johnny (and Janey) Write Viruses?

Keep in mind that the severity of infection will vary from system to system, some causing more damage than others, especially when dealing with rootkits. The longer malware remains on a computer, the more opportunity it has to download additional malicious files and/or install malicious extensions for Internet browsers, which can worsen the infection, so each case should be treated on an individual basis. Severity of system infection will also determine how the disinfection process goes.

Rogue security programs are one of the most common sources of malware infection. They infect machines by using social engineering and scams to trick a user into spending money to buy an application which claims to remove malware. They typically use bogus warning messages and alerts to indicate that your computer is infected with spyware or has critical errors as a scare tactic to goad you into downloading a malicious security application to fix it. The alerts can mimic system messages so they appear as if they are generated by the Windows operating system. It is not uncommon for malware writers to use the names of well-known security tools and legitimate anti-virus programs as part of the name for bogus and fake software in order to trick people into using them. There were at least two rogues that used part of or all of the Malwarebytes name, including a fake and bundled Malwarebytes Anti-Malware 2.0. There also were rogues for SmitfraudFixTool, VundoFixTool, Spybot Search and Destroy, Avira AntiVir and many more. Even Microsoft has been targeted by attackers using such names as MS Anti-virus and Windows Defender in naming schemes for rogue applications. Rogue antispyware programs are responsible for launching unwanted pop-ups, browser redirects and downloads of other malicious files, so the extent of the infection can vary to include backdoor Trojans, Botnets, IRCBots and rootkits which compromise the computer and make the infection more difficult to remove. For more specific information on how these types of rogue programs and infections install themselves, read:
- Anatomy of a malware scam
- How does rogue security software get on my computer?
- Sunbelt: How to Tell If That Pop-Up Window Is Offering You a Rogue Anti-Malware Product
- GFI: How to tell if that pop-up window is offering you a rogue anti-malware product
- Social engineering in action: how web ads can lead to malware

Ransomware is a sophisticated form of extortion in which the attacker either locks the computer to prevent access and demands money (ransom) to unlock it, or encrypts personal information (data files) and then demands money in exchange for a decryption key that can be used to retrieve the encrypted files. In most cases the greatest challenge to recovering the encrypted data has been breaking the code of how the data is scrambled so it can be deciphered. Some forms of ransomware act like rogue security software, generating bogus infection alerts and warnings to scare their victims. Older versions of ransomware typically claim the victim has done something illegal with their computer and that they are being fined by a police or government agency for the violation.

There are basically two types of ransomware. 1) File-encrypting ransomware incorporates advanced encryption algorithms designed to encrypt data files and demands a ransom payment from the victim in order to decrypt their data. 2) Locker ransomware locks the victim out of the operating system so they cannot access their computer or its contents, including all files, personal data, photos, etc. Although the files are not actually encrypted, the cyber-criminals still demand a ransom to unlock the computer. Master Boot Record ransomware is a variation of Locker ransomware which denies access to the full system by attacking low-level structures on the disk, essentially stopping the computer's boot process and displaying a ransom demand. Some variants will actually encrypt portions of the hard drive itself.

As noted above, crypto malware (file-encryptor ransomware) uses some form of encryption algorithm that prevents users from recovering files unless they pay a ransom or have backups of their data. Once the encryption of the data is complete, decryption is usually not feasible without contacting and paying the developer for a solution. Crypto malware typically encrypts any data file that the victim has access to, since it generally runs in the context of the user that invokes the executable and does not need administrative rights. It typically will scan and encrypt whatever data files it finds on computers connected to the same network that have a drive letter, including removable drives, network shares, and even DropBox mappings...if there is a drive letter on your computer, it will be scanned for data files and they will be encrypted. US-CERT Alert (TA13-309A) advises that many ransomware infections have the ability to find and encrypt files located within network drives, shared drives (mapped network paths), USB drives, external hard drives, and even some cloud storage drives if they have a drive letter. Some crypto malware will scan all of the drive letters for files that match certain extensions and, when it finds a match, encrypt them. Other crypto malware will use a white list of excluded folders and extensions that it will not encrypt. By using a white list, such ransomware will encrypt almost all non-system and non-executable files that it finds.

Most security experts will advise against paying the ransom demands of the malware writers because doing so only helps to finance their criminal enterprise and keep them in business.
One of the reasons that folks get infected is because someone before them paid the bad guys to decrypt their data. The more people that pay the ransom, the more cyber-criminals are encouraged to keep creating ransomware for financial gain. Further, there is no guarantee that paying the ransom will actually result in the restoration (decryption) of your files. Since many victims know there is no guarantee with paying the ransom, some cyber-criminals offer customer support and live support chat to help with decryption. Then the question becomes...should I trust that support?
- K7 Computing: Why You Should Not Pay That Ransom Demand
- Avira: CryptoLocker-style File Encryptors – Should you pay the ransom?
- Kaspersky Lab: To pay or not to pay – the dilemma of ransomware victims
- Before You Pay that Ransomware Demand...
- Ask Leo: Decrypting files encrypted by ransomware...should you pay?

Crypto malware ransomware typically propagates itself as a Trojan horse which the developers use to target a wide audience for financial gain rather than a specific individual. Numerous variants of encrypting ransomware have been reported between 2013 and 2016.
- Heimdal: What is Ransomware - History and evolution
- Sophos: The Current State of Ransomware
- Symantec: Ransomware and Businesses 2016
- Symantec: Ransomware A Growing Menace
- The ascension of Crypto-Ransomware and what you need to know to protect yourself

Crypto malware and other forms of ransomware are typically spread and delivered through social engineering (trickery) and user interaction...opening malicious email attachments (usually from an unknown or unsolicited source) or clicking on a malicious link within an email or on a social networking site. Crypto malware can be disguised as fake PDF files in email attachments which appear to be legitimate correspondence from reputable companies such as banks and other financial institutions, or phony FedEx and UPS notices with tracking numbers. Attackers will use email addresses and subjects (purchase orders, bills, complaints, other business communications) that will entice a user to read the email and open the attachment. Another method involves tricking unwitting users into opening Order Confirmation emails by asking them to confirm an online e-commerce order, purchase or package shipment. Social engineering has become one of the most prolific tactics for distribution of malware, identity theft and fraud.

Crypto malware can also be delivered via malvertising attacks, exploit kits and drive-by downloads when visiting compromised web sites...see US-CERT Alert (TA14-295A). An exploit kit is a malicious tool with pre-written code used by cyber criminals to exploit vulnerabilities (security holes) in outdated or insecure software applications and then execute malicious code. Currently the Angler, Magnitude, Neutrino, and Nuclear exploit kits are the most popular, but the Angler EK is by far the largest threat and has been used for some of the more widely distributed ransomware infections.
- Angler EK Drops TeslaCrypt Via Recent Flash Exploit
- Angler Exploit Kit pushes a new variant of TeslaCrypt/AlphaCrypt ransomware
- TeslaCrypt Distribution...exploit kits such as Angler, Sweet Orange, and Nuclear
- CryptoWall 4.0 being distributed by Angler Exploit Kit as part of large Malware Campaign
- CryptoWall 4 being distributed as a NSIS installer through Exploit Kits
- Exploit Kit Infrastructure Activity Jumps 75 Percent in 2015

RaaS (Ransomware as a Service) is ransomware hosted on the TOR network that allows "affiliates" to generate a ransomware variant and distribute it any way they want. The RaaS developer will collect and validate payments, issue decrypters, and send ransom payments to the affiliate, keeping 20% of the collected ransoms.

Another scenario has involved attackers installing and spreading ransomware by targeting Remote Desktop or Terminal Services, especially on servers. The attacker brute forces weak passwords on computers running Remote Desktop (RDP) or Terminal Services. Once the attacker gains access to a target computer, they download and install a package that generates the encryption keys, encrypts the data files, and then uploads various files back to the hacker via the terminal services client. Kaspersky has reported that brute force attacks against RDP servers are on the rise.
- Ransomware using Remote Desktop to spread itself
- Ransomware spreads through weak remote desktop credentials
- Ransomware and RDP – Close those RDP ports now and stay vigilant!

There also have been reported cases where crypto malware has spread via YouTube ads and on social media, a popular venue where cyber-criminals can facilitate the spread of all sorts of malicious infections.
- Ransomware Tops List of Social Media Security Threats
- How ransomware scams on social media often work
- Experts Warn of Mobile Ransomware Deluge on Social Media

Infections are also spread by malware writers and attackers exploiting unpatched security holes or vulnerabilities in older versions of popular software such as Adobe, Java, Windows Media Player and the Windows operating system itself. Software applications are a favored target of malware writers who continue to exploit coding and design vulnerabilities with increasing aggressiveness.
- Kaspersky Lab report: Evaluating the threat level of software vulnerabilities
- Time to Update Your Adobe Reader
- Malware exploits Windows Media Player vulnerabilities
- Eight out of every 10 Web browsers are vulnerable to attack by exploits

Another PDF sample that exploits an unpatched vulnerability in Adobe Reader and Acrobat has been spotted in the wild...
...your machine may still be vulnerable to attacks if you never bother to uninstall or remove older versions of the software...a malicious site could simply render Java content under older, vulnerable versions of Sun's software if the user has not removed them....
Hole in Patch Process
Ghosts of Java Haunt Users

BlackHole toolkit enables attackers to exploit security holes in order to install malicious software

If a website has been hacked or displays malicious ads, they can exploit the vulnerable software on your computer. The majority of computers get infected from visiting a specially crafted webpage that exploits one or multiple software vulnerabilities. It could be by clicking a link within an email or simply browsing the net, and it happens silently without any user interaction whatsoever.
Exploit kits are a type of malicious toolkit used to exploit security holes found in software applications...for the purpose of spreading malware. These kits come with pre-written exploit code and target users running insecure or outdated software applications on their computers.
Tools of the Trade: Exploit Kits

To help prevent this, install and use Secunia Personal Software Inspector (PSI), a FREE security tool designed to detect vulnerable and outdated programs/plug-ins which expose your computer to malware infection.

A large number of infections are contracted and spread by visiting gaming sites and porn sites and by using pirated software (warez), cracking tools, hacking tools and keygens, where visitors may encounter drive-by downloads through exploitation of a web browser or an operating system vulnerability. Security researchers looking at World of Warcraft and other online games have found vulnerabilities that exploit the system using online bots and rootkit-like techniques to evade detection in order to collect gamers' authentication information so attackers can steal their accounts.

Dangers of Gaming Sites:
The design of online game architecture creates an open door for hackers...hackers and malware hoodlums go where the pickings are easy -- where the crowds gather. Thus, Internet security experts warn game players that they face a greater risk of attack playing games online because few protections exist....traditional firewall and antimalware software applications can't see any intrusions. Game players have no defenses...Online gaming sites are a major distribution vehicle for malware....
MMO Security: Are Players Getting Played?
Malware Makers Target Online Games to Spread Worms
Microsoft warns game developers of cyber thieves
online game + online trade = Trojan Spy
Real Flaws in Virtual Worlds: Exploiting Online Games

Dangers of Cracking & Keygen Sites:
...warez and crack web pages are being used by cybercriminals as download sites for malware related to VIRUT and VIRUX. Searches for serial numbers, cracks, and even antivirus products like Trend Micro yield malcodes that come in the form of executables or self-extracting files...quick links in these sites also lead to malicious files. Ads and banners are also infection vectors...
Keygen and Crack Sites Distribute VIRUX and FakeAV

Dangers of Warez Sites:
...warez/piracy sites ranked the highest in downloading spyware...just opening the web page usually sets off an exploit, never mind actually downloading anything. And by the time the malware is finished downloading, often the machine is trashed and rendered useless.
University of Washington spyware study

Infections also spread through the use of torrent, peer-to-peer (P2P) and file sharing programs. They are a security risk which can make your computer susceptible to a smörgåsbord of malware infections, remote attacks, exposure of personal information, and identity theft. In some cases the computer could be turned into a virus honeypot or zombie. File sharing networks are thoroughly infected and infested with malware, according to a Senior Virus Analyst at Norman ASA. Malicious worms, backdoor Trojans, IRCBots, and rootkits spread across P2P file sharing networks, gaming, porn and underground sites.
- US-CERT: Risks of File-Sharing Technology
- A Study of Malware in Peer-to-Peer Networks
- SANS Institute Peer-to-Peer File-Sharing Networks: Security Risks
- More malware is traveling on P2P networks these days
- File Sharing, Piracy, and Malware

Users visiting such pages may see innocuous-looking banner ads containing code which can trigger pop-up ads and malicious Flash ads that install viruses, Trojans, and spyware. Ads are a target for hackers because they offer a stealthy way to distribute malware to a wide range of Internet users.

Hackers are also known to exploit Flash vulnerabilities which can lead to malware infection. When visiting a website that hosts an HTML page requiring a Flash script, users may encounter a malicious Flash redirector or a malicious script specifically written to exploit a vulnerability in the Flash interpreter, causing it to execute automatically and infect the computer.

- What is Malvertising
- Malvertising: The Use of Malicious Ads to Install Malware
- malvertisement (malicious advertisement)
- Analyzing and Detecting Malicious Flash Advertisements

Keep in mind that even legitimate websites can display malicious ads and be a source of malware infection.

...Internet users are 21 times more likely to become infected by visiting a legitimate online shopping site than by visiting a site used for illegal file-sharing...The problem isn't in the sites themselves; it's in the ads...

...According to Cisco's annual 2013 Security Report, internet users are 182 times more likely to get malware from clicking on online ads than visiting a porn site...
Clicking Online Ads More Likely To Deliver Malware Than Surfing Porn Sites
Cisco Annual Security Report: Threats Step Out of the Shadows

Infection can also spread by visiting popular social sites and through emails containing links to websites that exploit security holes in your web browser. When you click on an infected email link or spam, Internet Explorer launches a site that stealthily installs a Trojan so that it runs every time you start up Windows and downloads more malicious files. Email attachments ending with .exe, .com, .bat, or .pif from unknown sources can be malicious and deliver dangerous Trojan downloaders, worms and viruses which can utilize your address book to perpetuate their spread to others.

At least one in 10 web pages are booby-trapped with malware...The tricks include hacking into a web server to plant malware, or planting it within third-party widgets or advertising...About eight out of every 10 Web browsers are vulnerable to attack by exploits...Even worse, about 30% of browser plug-ins are perpetually unpatched...
One in 10 web pages laced with malware
Bulk of browsers found to be at risk of attack

Researchers at the Global Security Advisor Research Blog have reported finding pornographic virus variants on Facebook. The Koobface Worm has been found to attack both Facebook and MySpace users. Virus Bulletin has reported MySpace attacked by worm, adware and phishing. Some MySpace user pages have been found carrying the dangerous Virut. Malware has been discovered on YouTube, and it continues to have a problem with malware ads. MSN Messenger, AIM and other Instant Messaging programs are also prone to malware attacks.

- Conficker worm's copycat Neeris spreading over IM
- IM attacks get nastier
- MSN Most Dangerous IM Client in 2007
- IM attacks up nearly 80%
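Whether a malicious file arrives as an email attachment or over an instant-messaging link, the filename itself is often the first warning sign. Below is a minimal, illustrative Python sketch of the kind of extension check described above. The extension list and sample names are examples only; real mail and IM filters rely on far more than filenames (content scanning, reputation, sandboxing).

```python
# Illustrative sketch only -- flags attachment names that end in commonly abused
# extensions (the .exe/.com/.bat/.pif types mentioned above, plus a few others).
# It is not a substitute for antivirus or mail-gateway filtering.

RISKY_EXTENSIONS = {".exe", ".com", ".bat", ".pif", ".scr", ".vbs", ".js"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment name ends in a commonly abused extension."""
    return any(filename.strip().lower().endswith(ext) for ext in RISKY_EXTENSIONS)

if __name__ == "__main__":
    # Hypothetical sample names for demonstration
    for sample in ("invoice.pdf", "tracking_number.pdf.exe", "photo.scr", "report.docx"):
        print(f"{sample}: {'RISKY' if is_risky_attachment(sample) else 'ok'}")
```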
Infections can also spread when using a flash drive. In fact, one in every eight malware attacks occurs via a USB device. This type of infection usually involves malware that modifies/loads an autorun.inf (text-based configuration) file into the root folder of all drives (internal, external, removable) along with a malicious executable. Autorun.inf can be exploited to allow a malicious program to run automatically without the user knowing, since it is a loading point for legitimate programs. When removable media such as a CD/DVD is inserted (mounted), autorun looks for autorun.inf and automatically executes the malicious file so it runs silently on your computer. For flash drives and other USB storage, autorun.inf uses Windows Explorer's right-click context menu so that the standard "Open" or "Explore" command starts the file. Malware modifies the context menu (adds a new default command) and redirects to the malicious file when the "Open" command is used or the drive icon is double-clicked. When a flash drive becomes infected, the Trojan will infect a system as soon as the removable media is inserted if autorun has not been disabled. Keeping autorun enabled on USB and other removable drives has become a significant security risk, as they are one of the most common infection vectors for malware, which can transfer the infection to your computer. To learn more about this risk, please read:

- When is AUTORUN.INF really an AUTORUN.INF?
- Nick Brown's blog: Memory stick worms
- USB-Based Malware Attacks
- Microsoft Security Advisory (967940): Update for Windows Autorun
- Microsoft Article ID: 971029: Update to the AutoPlay functionality in Windows

Note: If using Windows 7, be aware that in order to help prevent malware from spreading, the Windows 7 engineering team made important changes and improvements to AutoPlay so that it will no longer support the AutoRun functionality for non-optical removable media.
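To make the autorun mechanism above more concrete, here is a small, illustrative Python sketch that simply reports whether a drive's root contains an autorun.inf and which program it would launch (via the common open= and shellexecute= entries). It is a reading aid, not a removal tool, and the drive letters in the example are hypothetical.

```python
# Illustrative only -- inspects removable drives for an autorun.inf file and reports
# which program it points at. It does not disable autorun or remove anything.
import os

AUTORUN_KEYS = ("open", "shellexecute")  # entries that commonly name an executable

def inspect_autorun(drive_root: str) -> None:
    autorun_path = os.path.join(drive_root, "autorun.inf")
    if not os.path.exists(autorun_path):
        print(f"{drive_root}: no autorun.inf found")
        return
    print(f"{drive_root}: autorun.inf present -- treat the drive with suspicion")
    with open(autorun_path, "r", errors="ignore") as config:
        for line in config:
            key, _, value = line.partition("=")
            if key.strip().lower() in AUTORUN_KEYS and value.strip():
                print(f"  would launch: {value.strip()}")

if __name__ == "__main__":
    for root in ("E:\\", "F:\\"):  # hypothetical removable drive letters
        inspect_autorun(root)
```

The real defense, as the links above explain, is to disable AutoRun/AutoPlay for removable media and keep Windows patched; a check like this only helps you notice a suspicious drive before opening it.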
- SQL Injection Overview
- Taxonomy of Online Security and Privacy Threats
- Malicious HTML Tags Embedded in Client Web Requests
- Vulnerabilities Allow Attacker to Impersonate Any Website
- Threat and Vulnerability Mitigation: SQL Injection

...More than 90 percent of these webpages belong to legitimate sites that have been compromised through hacking techniques such as SQL Injection...Hackers are apparently planting viruses into websites instead of attaching them to email. Users without proper security in place get infected by simply clicking on these webpages.
One webpage gets infected by virus every 5 seconds

Phishing is an Internet scam that uses spoofed email and fraudulent web sites which appear to come from, or masquerade as, legitimate sources. The fake emails and web sites are designed to fool respondents into disclosing sensitive personal or financial data which can then be used by criminals for financial or identity theft. The email directs the user to visit a web site where they are asked to update personal information such as passwords and user names, and to provide credit card, social security, and bank account numbers that the legitimate organization already has.

Spear Phishing is a highly targeted and coordinated phishing attack using spoofed email messages directed against employees or members of a certain company, government agency, organization, or group. These fraudulent emails and web sites may also contain malicious code which can spread infection.

Pharming is a technique used to redirect as many users as possible from the legitimate commercial websites they intended to visit and lead them to fraudulent ones. The bogus sites, to which victims are redirected without their knowledge, will likely look the same as a genuine site. However, when users enter their login name and password, the information is captured by criminals. Pharming involves Trojans, worms, or other technology that attack the browser and can spread infection. When users type in a legitimate URL address, they are redirected to the criminal's web site. Another way to accomplish this scam is to attack or "poison" the DNS (domain name system) rather than individual machines. In this case, everyone who enters a valid URL will instead automatically be taken to the scammer's site.

Finally, backing up infected files is a common source of reinfection if they are restored to your computer. Generally, you can back up all your important documents, personal data files and photos to a CD or DVD, rather than a flash drive or external hard drive, as those may become compromised in the process. The safest practice is not to back up any executable files (*.exe), screensavers (*.scr), autorun (.inf) or script files (.php, .asp, .htm, .html, .xml), because they may be infected by malware. Avoid backing up compressed files (.zip, .cab, .rar) that have executables inside them, as some types of malware can penetrate compressed files and infect the .exe files within them. Other types of malware may even disguise themselves by hiding a file extension or by adding double file extensions and/or spaces in the file's name to hide the real extension, as shown here (click Figure 1 to enlarge), so be sure you look closely at the full file name. If you cannot see the file extension, you may need to reconfigure Windows to show file name extensions.

Now that you know How malware spreads, you may want to read Best Practices for Safe Computing - Prevention, which includes tips to protect yourself against malware infection.
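As a closing illustration of the double-extension trick mentioned above, here is a minimal Python sketch that flags filenames hiding an executable extension behind another extension or a run of spaces. The extension list and sample names are illustrative assumptions, not a complete rule set.

```python
# Illustrative only -- flags names like "invoice.pdf.exe" or "photo.jpg      .scr"
# that disguise an executable behind a harmless-looking extension or padding spaces.
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".com", ".bat", ".pif"}

def looks_deceptive(filename: str) -> bool:
    name = filename.lower()
    base, dot, ext = name.rpartition(".")
    if dot + ext not in EXECUTABLE_EXTENSIONS:
        return False  # final extension is not executable, nothing is being hidden
    # Deceptive if the executable extension follows another extension or trailing spaces
    return "." in base or base != base.rstrip()

if __name__ == "__main__":
    for sample in ("invoice.pdf.exe", "photo.jpg      .scr", "setup.exe", "notes.txt"):
        print(sample, "->", "deceptive" if looks_deceptive(sample) else "plain")
```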