McLaughlin C.L., Blake S., Hall T., Harman M. (WRc Plc.) and 3 more authors.
Water and Environment Journal | Year: 2011
A well-known use of perchlorate is as a rocket fuel propellant; however, more widespread uses include munitions and fireworks, and it also occurs naturally. Perchlorate suppresses the thyroid, which can lead to a variety of adverse effects. It is a widespread contaminant in the United States, but limited occurrence data exist in the United Kingdom, and even less for drinking water. Monitoring of 20 raw and treated drinking water sites in England and Wales, covering four seasonal periods, showed that perchlorate is a low-level background contaminant of raw and treated drinking water. Low concentrations (treated drinking water: <0.020-2.073μg/L, mean 0.747μg/L) were detected at every higher-risk site. The concentrations were comparable in each of the four sampling exercises and no significant trends were apparent relating to the time of year, the type of risk or the method of chlorination. Limited data showed that removal by ion exchange and granular activated carbon may occur. © 2010 WRc plc. Water and Environment Journal © 2010 CIWEM.
Brown L.E. (University of Leeds), Mitchell G. (University of Leeds), Holden J. (University of Leeds), Folkard A. (Lancaster University) and 27 more authors.
Science of the Total Environment | Year: 2010
Several recent studies have emphasised the need for a more integrated process in which researchers, policy makers and practitioners interact to identify research priorities. This paper discusses such a process with respect to the UK water sector, detailing how questions were developed through inter-disciplinary collaboration using online questionnaires and a stakeholder workshop. The paper details the 94 key questions arising, and provides commentary on their scale and scope. Prioritisation voting divided the nine research themes into three categories: (1) extreme events (primarily flooding), valuing freshwater services, and water supply, treatment and distribution [each >150/1109 votes]; (2) freshwater pollution and integrated catchment management [100-150 votes]; and (3) freshwater biodiversity, water industry governance, understanding and managing demand and communicating water research [50-100 votes]. The biggest demand was for research to improve understanding of intervention impacts in the water environment, while a need for improved understanding of basic processes was also clearly expressed, particularly with respect to impacts of pollution and aquatic ecosystems. Questions that addressed aspects of appraisal, particularly incorporation of ecological service values into decision making, were also strongly represented. The findings revealed that sustainability has entered the lexicon of the UK water sector, but much remains to be done to embed the concept operationally, with key sustainability issues such as resilience and interaction with related key sectors, such as energy and agriculture, relatively poorly addressed. However, the exercise also revealed that a necessary condition for sustainable development, effective communication between scientists, practitioners and policy makers, already appears to be relatively well established in the UK water sector. © 2010 Elsevier B.V.
Hayes C.R. (University of Swansea), Hydes O.D. (Drinking Water Inspectorate)
Journal of Water and Health | Year: 2012
At the zonal scale (e.g. a city or town), random daytime (RDT) sampling succeeded in demonstrating both the need for corrective action and the benefits of optimised orthophosphate dosing for plumbosolvency control, despite initial concerns about sampling reproducibility. Stagnation sampling techniques were found to be less successful. Optimised treatment measures to minimise lead in drinking water, comprising orthophosphate at an optimum dose and at an appropriate pH, have succeeded in raising compliance with the future European Union (EU) lead standard of 10 μg/L from 80.4% in 1989-94 to 99.0% in 2010 across England and Wales, with compliance greater than 99.5% in some regions. There may be scope to achieve 99.8% compliance with 10 μg/L by further optimisation coupled to selective lead pipe removal, without widespread lead pipe removal. It is unlikely that optimised corrosion control, which includes the dosing of orthophosphate, will be capable of achieving a standard much lower than 10 μg/L for lead in drinking water. The experience gained in the UK provides an important reference for any other country or region that is considering its options for minimising lead in its drinking water supplies. © IWA Publishing 2012.
McLaughlin C.L., Blake S., Hall T., Harman M. (National Center for Environmental Toxicology) and 3 more authors.
Water and Environment Journal | Year: 2011
There has been increasing interest in the widely used perfluorinated chemicals such as perfluorooctane sulphonate (PFOS). PFOS has been shown to be toxic, persistent and bioaccumulative in the environment and is a focus for restriction within the European Union. Limited monitoring data, especially in the United Kingdom, are available for PFOS in environmental waters, and even less for its detection in drinking water. Data available in the United Kingdom indicate that PFOS contamination of environmental waters has only occurred following specific incidents. Monitoring of 20 raw and treated drinking water sites in England, covering four seasonal periods, showed that PFOS is not a widespread background contaminant of raw and treated drinking water in England. Low levels of PFOS (0.012-0.208μg/L) were detected at four specific sites, which were at a higher risk for contamination. At three of these sites, where PFOS was detected in both raw and final drinking water, treatment processes [chlorination, ozonation and granular activated carbon (GAC)] did not appear to remove PFOS. The findings of this work are pertinent to risk assessments now required by the drinking water quality regulations. © 2009 WRc plc. Water and Environment Journal © 2009 CIWEM.
Carmichael C. (Public Health England), Odams S. (Public Health England), Murray V. (Public Health England), Sellick M. (Drinking Water Inspectorate) and 2 more authors.
Journal of Water and Health | Year: 2013
Water shortages as a result of extreme weather events, such as flooding and severe cold, have the potential to affect significant numbers of people. Therefore, the need to build robust, coordinated plans based on scientific evidence is crucial. The literature review outlined in this short communication was conducted as part of a joint Drinking Water Inspectorate and Health Protection Agency (now Public Health England) report which aimed to review the scientific evidence base on extreme events, water shortages and the resulting health impacts. A systematic literature review was undertaken to identify published literature from both peer-reviewed and grey literature sources. The retrieved literature was then assessed using the Scottish Intercollegiate Guidelines Network quality assessment. The authors found very few scientific studies. However, a great deal of valuable grey literature was retrieved and used by the research team. In total, six main themes of importance were identified by the review and discussed: health impacts, water quantity and quality, alternative supplies, vulnerable groups, communication with those affected and the emergency response. The authors conclude that more research needs to be conducted on the health impacts of water shortages caused by extreme events in order to build the future knowledge base and support the development of resilience. © IWA Publishing 2013.
Here we go again. I decided to write another article concerning some overall security aspects of installing and running linux. To keep it short and simple, here are some good pointers for enhancing your system's security. But remember, there's no absolute security, so keep your eyes open, subscribe to a few good mailing lists, and keep your software up to date.
Good partitioning does a lot of good for your system's security, as it greatly simplifies your admin duties in case of a system crash and data recovery. You can create various partitions and have them mounted as read-only, nosuid or similar. By having a partition mounted as nosuid you can simply address the SUID issue, generally connected to buffer overflows, obtaining a root shell or other possible security-compromising flaws. More about the SUID issue can be read here. If you plan to run an FTP server, setting its partition this way would save you a lot of trouble in the future, as it stays in read-write mode but no SUID programs can be run from it. The same can be said for mounting a partition read-only, or ro. You can always alter these settings, which are located in /etc/fstab, for any of your block devices. Of course, 'man fstab(5)' and 'man mount(8)' are your good friends to get a grip on all possible options when mounting a filesystem. /etc/fstab is human readable, so you'll easily get the hang of it.
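Purely as an illustration of those mount options, a hypothetical /etc/fstab fragment might look like the following (the device names, filesystems and mount points are assumptions, not recommendations):

# device      mount point   type   options                 dump pass
/dev/hda2     /usr          ext2   defaults,ro             1    2
/dev/hda3     /home         ext2   defaults,nosuid         1    2
/dev/hda4     /home/ftp     ext2   defaults,nosuid,nodev   1    2

After editing /etc/fstab you can apply a change without rebooting, for example 'mount -o remount,ro /usr'.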
Generally speaking, putting linux, or any other OS, on a single partition is a major administration no-no and, with any multiuser, multitasking OS, is asking for trouble sooner or later. So, whenever possible, create at least these partitions, with sizes of your choice:
- / – needs little space, but will house all of your other directories if you do not create them as stand-alone partitions, so consider that before deciding on its size
- /usr – houses most of your software, so you might consider allocating a lot of space here
- /home – is the starting point for all users on your system, so allocate space according to the number of users you plan to have
- /var – holds all the administrative logs, mail, Usenet news and other spool data.
From a security point of view, a good thing would be to consider at least having separate / and /home partitions. This way you can restrict access to some partitions, easily repair a damaged filesystem while keeping the system running, and so on. It might even be good to keep your temporary data and logs on a separate RAM partition. That way, no information about your system and the events that took place can be traced, because everything on the RAM disk is lost when the system is rebooted or shut down. But you might also consider tarring the files before shutting down and copying them elsewhere for later safe reading, if necessary. It's up to you.
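A minimal sketch of the RAM disk idea, assuming a kernel with tmpfs support (the mount point, size and archive path are placeholders):

# /etc/fstab entry for a RAM-backed log area
none    /var/log/volatile    tmpfs    defaults,size=32m    0 0

# in your shutdown routine: archive the volatile logs before they disappear
tar czf /root/logs-backup.tar.gz /var/log/volatile

Everything left only on the tmpfs filesystem vanishes at reboot, which is the point, so the tar archive is your only permanent copy.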
You can always partition your system prior to the installation of whatever flavour of linux you have chosen, and most distributions come already equipped with a partitioning tool. However, if you wish to do it yourself before installing linux, you can always do it with fdisk ('man fdisk(8)', or run fdisk and type 'm' to review the list of all commands). Of course, there are other tools for partitioning, such as Disk Druid that comes with Red Hat, or Parted, a tool from the GNU Project that you can find here.
Relying solely on decent passwords is not a good security measure in itself, but using good passwords reduces the risk of a security breach. So, use password generation utilities and, most of all, educate your users about the significance of good passwords. Sadly, the best passwords are the ones you'll hardly ever remember right, so it's always a trade-off between security and usability. Usually, this means horrible passwords, written on paper.
There are various proactive password checking utilities that can simplify your job and force users to pick strong passwords. The shadow password system hardly needs mentioning; it's a must. A good practice would be to run a dictionary attack yourself from time to time, just to check for easily retrievable passwords. Make sure all users create a separate password for every system they access. All passwords are vulnerable to dictionary and brute force attacks; it's up to you to make the attacker's job more difficult.
Services and daemons running at boot time
All that could be briefly said is: disable anything you don’t need, or don’t plan on using and also don’t install anything you don’t need.
One thing is certain: if you need a certain service, like telnet or FTP, think about it. Is it really needed? Is it safe to use, and is there a more reliable replacement? For instance, SSH replaces telnet perfectly, and FTP is pretty much obsolete, with all those web forms these days and, yes, even SCP from the SSH package.
Need an MTA? Why not consider qmail or some other alternative instead of sendmail? A lot of issues arise when planning what services you will provide and, more importantly, how.
Think about how you're going to organize your machines in production, as it's pretty much useless to set up a perfect firewall and lose a lot of time perfecting it, just to put an FTP server behind it. Deploy servers rationally, exposing the least possible number of services to the outside of your LAN, no matter how simple or harmless a service might be. If you really need services that have known past security issues, a wise idea would be to put them in a DMZ, separated from all other machines in any possible way.
If you plan to use LILO as your boot loader, some things can be achieved by adding some extra lines to your /etc/lilo.conf, namely 'restricted' and 'password="somepassofyourchoice"'. After making any alterations to /etc/lilo.conf make sure to re-run lilo by typing '/sbin/lilo' to have them take effect at the next boot. Adding the restricted line makes it necessary for the user to provide a password when trying to pass additional boot parameters to lilo. The password option restricts booting linux to local users who have the password, but the password isn't encrypted, so make /etc/lilo.conf owned by root and set to mode 600. That's 'chmod 600'. As always, you can 'man lilo.conf' to find out more about additional options. The ultimate choice is to make lilo boot from a floppy, so nobody without that floppy can boot the system. Nothing like a dose of physical security measures! 🙂 But still, be sure to have a backup lying somewhere safe because floppies aren't that reliable…
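A sketch of how those lines could fit into /etc/lilo.conf (the password, devices and image paths are placeholders):

# /etc/lilo.conf (fragment)
boot=/dev/hda
prompt
timeout=50
restricted
password=somepassofyourchoice

image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    read-only

With 'restricted' set, the password is only demanded when someone tries to pass extra parameters, such as booting 'linux single'. Remember to re-run /sbin/lilo and to 'chmod 600 /etc/lilo.conf' afterwards.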
Of course, there are other ways of booting linux, so make sure that you read more documentation on the subject, so that you can make some good choices to enhance the security of your system.
Think about running a scanner on your system to check it for vulnerabilities, wrong file permissions, SUID or other wrongly set UIDs, open services, ports, etc. Network scanners test your host the way a possible attacker would, looking for any open services and ports and searching for any known vulnerability. Most scanners are easy to use and configure, so I'd recommend starting with the commonly used ones.
There are so many of them that I'd need to write another article just to name them all, but the commonly used ones have all the functions and options you may need. Use them cautiously, and remember, trying to scan other hosts may be in violation of some law, or could cause an unintentional denial of service.
On the other hand, scanning can be logged, so beware of any consequences involved in scanning other systems. Stick to scanning only your own system, checking for possible exploits and for running services you don't need.
Consider running a scan detector or logger, in conjunction with tools that trigger certain events upon detection of scanning.
If you plan on deploying a web server with CGIs, I urge you to use a CGI vulnerability scanner, as it will save you from a lot of harm; CGI vulnerabilities pose a serious threat. If you use poorly made CGI scripts, you'll undermine the safety of your web server, no matter how hard you tried and worked on it.
Logging is one of the great advantages linux has to offer. Logging by default includes reporting errors and their reasons, users logged in, the duration of their login time, traces of scanning and other valuable information. That can also be misused, but that's an issue too long to be discussed here.
System and kernel messages are handled by syslogd and klogd, and the output is located in the /var/log/messages file. A good thing to do is to customize /etc/syslog.conf to suit your needs and to make tracking information easier. Typing 'man syslogd(8)' and 'man syslog.conf(5)' can bring you up to speed with syslogd and its configuration. Just as an example, let's say you wanted to separate all warning and error messages into a different file; you'd do it by entering the following lines in /etc/syslog.conf:
# all error and warning messages logged
*.warn;*.err                /var/log/errmsg
Everything can be logged up to some point. Read, develop your ideas, and implement them. Log everything. Logging is good. 🙂
The downside is that an attacker can learn about your system from your logs, so think about that RAM disk mentioned at the beginning, or a separate partition with restricted access. You could also encrypt that partition, but that could cause some problems if not done with care.
Don't underestimate the importance of logging. You can learn a lot about your system and network by reading logs, and logs are sometimes your only hope of finding information about possible intrusions that have occurred on your system. You can find all sorts of logging utilities just lying around, waiting for you to pick them up and put them to good use.
So, you did a stealth scan and you think you can get away with it? Nope, quite wrong. In fact, most of your activities are being logged and carefully examined by someone right now as we speak. That, my fellow readers, is known as intrusion detection. Intrusion detection is the real-time detection of intrusion attempts or any other information gathering activities. An IDS is an extremely useful tool for any sysadmin, so grab one and play around. I'd suggest Snort, as it is very versatile.
How does an average IDS work? IDSs commonly use rule-based systems, meaning that certain events trigger other events, as described in the rules they use. Naturally, many rules can be made: you can write your own or download pre-set rules, and that's why I recommend getting Snort, as it comes with an enormous set of rules. An IDS listens to your network traffic and, upon noticing suspicious activity (that's what the rules are for), takes appropriate steps; it can also do so by analyzing your logs (you did log everything, did ya?). Of course, this approach is not fault free, and many false alarms can be generated, but nevertheless…
IDSs are still being developed and as such are not bug-free. Dealing with an IDS may mean a lot of hard work setting it up right and writing your own rules, and you may ask yourself whether it is really worth the effort. Well, it is. In conjunction with a decent firewall, an IDS, when set up properly, can prove to be a real time and nerve saver, not to mention a boost to your system's security.
Most of you are well aware of problems with privacy. And most of you mind your data and information being read by other people. Well, encryption is the answer you're looking for. Use encryption tools whenever possible. Encrypt your files and your mail, using OpenPGP, PGP, or other tested and proven encryption tools. You might consider encrypting entire folders, or even partitions containing personal data.
When accessing remote systems, use OpenSSH, not telnet, and protect your data in transit. Network sniffers and other tools make the use of a secure shell a must these days. If you're building a network, use SSH and discard telnet, as it is insecure. SSH uses several algorithms, like RSA, IDEA and Triple DES, which makes it an ideal choice for protecting your data in transit. Naturally, it also has some security history, so it would be good to keep your SSH up to date by applying patches and upgrading regularly.
Keep your system clean
Maintain your system clean and trojan free by using a tool like Tripwire. Viruses are not that big of an issue when it comes to linux, as they try to do some damage to the system and, in order to do so, must first obtain root access. Trojans, on the other hand, are a common thing on any system, including linux. What is a trojan? A trojan is a program pretending to do something, but in fact doing something else and, you guessed it, that 'something else' is a no-no. Nowadays, with everything being downloaded from the internet, a major security issue has come up concerning trojans. One could make sure by compiling everything on their own system and skimming through the source code prior to compiling, but that's not something most people have in mind when thinking about installing new software. Nor is it easy. Hell, I wouldn't do it, unless I was extremely bored and had a looooot of spare time…
You might try to keep your system clean by making a fresh linux installation and afterwards using Tripwire to preserve a snapshot of your system. Tripwire maintains a checksum database of everything installed, and if you should notice any odd activities, you can compare what exactly has changed since the last check and therefore find out about any suspicious files on your system. Neat… Another must for any system administrator. But remember, make a fresh install; it can't help you with a system that has been up and running for who knows how long and could already contain malicious code.
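With the open-source version of Tripwire the basic workflow looks roughly like this (command options and policy file locations vary between versions, so treat it as a sketch):

# build the baseline database right after a clean install
tripwire --init

# later, compare the current filesystem against that baseline
tripwire --check

Any file that has been added, removed or modified since the baseline shows up in the integrity report, which is why the snapshot must be taken while the system is still known to be clean.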
Let's delve further into this subject. A firewall is basically a device, program or script that prevents others from accessing your network or your services. Different types of firewalls exist, both hardware and software based. I'll deal with the software solution here. A firewall is not the ultimate protection for your network, but it can do a lot of good if used properly. Configuring a firewall is not easy and is time consuming, but when set up right it can make your life much easier.
Linux is perfect for such duties. Depending on your kernel, or your distribution, you may find ipchains, or the older ipfwadm, installed to do the magic. Kernel 2.4.x supports a new utility called iptables. They all do the same thing, more or less efficiently, but as with anything security related, the newer the better. What they all do is known as 'packet filtering': they analyze incoming packets and decide what to do with them, based on the rules that you set up. By using several variables, such as the port number, protocol or IP address, you can set up various rules for various situations.
When constructing firewall rules yourself, a good idea is to use the policy 'drop' instead of 'reject' when controlling unwanted IP packet traffic, as rejecting will let a possible attacker know about the firewall. With the 'drop' policy he will not be aware of what exactly happened to the packet and will be forced to guess. Try it out.
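As a rough illustration with iptables on a 2.4 kernel (the allowed service is just an example; adapt the rules to what you actually run):

# default policy: silently drop anything not explicitly allowed
iptables -P INPUT DROP

# let replies to connections we initiated back in
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow incoming SSH only
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

Because the policy is DROP rather than REJECT, probes to closed ports simply time out instead of receiving an ICMP error, which tells a scanner far less about your setup.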
You can always go for commercial software solutions, or even hardware, but it's a good idea to try and create your own little firewall, no matter how small or even inefficient it may be, just to see how it works. Actually, free firewalls and firewall tools, as well as commercial ones, will probably do the trick better than any of us could in a short amount of time, so it's a good idea to stick with a pre-made one; just make sure it has no significant security history.
Remember, relying solely on a firewall is asking for trouble. Use it in conjunction with other tools.
One of the fine things about linux is the kernel. You can re-model and re-fit it to suit your needs, whatever those may be. A plain and ordinary kernel, re-compiled for your specific needs, is good, but there are a lot of kernel add-ons or patches that can only do good for your system. Various scheduling, tune-up and similar patches are available.
A number of kernel patches exist designed solely to spice up your system's security, fix possible problems, or even holes in the kernel. A good example of such a patch is Openwall, which deals with various issues that may prove to be of some significance, like a non-executable user stack area (for those pesky buffer overflows), restricted links in /tmp, a restricted /proc and so on. I suggest you visit the Openwall website and see what you make of it.
Aside from kernel patches, there are a couple of good scripts lying around designed to harden the security of your system, especially default installations maintained by novice linux users. Some of them may not do much that an experienced system administrator couldn't do by hand, but they are a helpful thing for the inexperienced.
Usually, the vendor has an archive for the patches, and there are sites dedicated to such things, so you can track down the latest patch with ease. Apply them ASAP. It's bad enough that your system has a security hole, let alone keeping it like that for some time.
Last but not least, do not underestimate the importance of physical security. You may have created the perfect fortress, but think twice about its location. Who will have access? Where will it be located? A lot of questions will arise during this process, questions that should make you think twice before giving any answers.
But, to keep a short story short, we’ll stop here.
By combining many different utilities and aspects of keeping your system secure, you'll be able to reap multiple benefits, not to mention keep your nerves in good shape. I hope you found good information in this one, 'till next time…
An effective security warning is concrete and clear, appeals to authority, and doesn’t pop up too often, say the results of a study into the psychology of malware warnings conducted by Cambridge University researchers.
“We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?” asked Cambridge University’s Head of Cryptography Professor Ross Anderson and research associate David Modic, and decided to investigate.
The unfortunate reality is that Internet users are faced with a number of security warnings, and that they ignore most of them. The things that can influence their behaviour in that respect are the ones that influence them in their day-to-day, offline life: authority, group pressure or influence, and risk preferences.
The researchers surveyed over 500 men and women whom they recruited via Amazon Mechanical Turk. The respondents were presented with five different malware warnings.
The control group saw the typical Google Chrome warning; others were shown variations of warnings that:
- contained an appeal to authority (“The site you were about to visit has been reported and confirmed by our security team to contain malware.”)
- elements of social influence (“The scammers operating this site have been known to operate on individuals from your local area. Some of your friends might have already been scammed.”)
- a concrete threat (“The site you are about to visit has been confirmed to contain software that poses a significant risk to you, with no tangible benefit. It would try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.”)
- a vague threat (“We have blocked your access to this page. It is possible that it might contain software that might harm your computer.”)
The respondents were also asked to choose a reason for turning off browser warnings, and to indicate what kind of information would make them heed the warning more (e.g. information on how a particular scam works, the average amount of money lost in such a scheme, etc.)
The research showed that users mostly turn off the warnings because of the high rate of false positives, but that the overwhelming majority of all users keep the warnings on (and women are a bit more likely to do so).
“Our analysis showed that the more familiar our respondents were with computers, the more likely they were to keep the malware warnings on,” the researchers noted in their paper. “Risk assessment is possibly more accurate in the population familiar with various cyber threats. This result indicates that the ability for premeditation outweighs the need for convenience to some extent.”
Some of the users who turn off the browser malware warnings cite the inability to understand them as the reason, which implies that the warnings are not written in a clear enough manner.
“When individuals have a clear idea of what is happening and how much they are exposing themselves, they prefer to avoid potentially risky situations,” the researchers note. Also, users respond more to soft power techniques (expert opinion) than harsh ones (threats, coercion).
A Mobile Phone That Spots Depression
Scientists at the Northwestern University Feinberg School of Medicine say they’re developing a mobile phone, Mobilyze!, designed to detect when the phone’s user is feeling blue.
A small trial has shown the phone has been effective in reducing depression, Mashable and others have reported.
The phone features sensor data that “interprets your location, activity level (via an accelerometer), social context and mood, ultimately detecting signs of depression,” according to Mashable. “The phone learns your typical lifestyle patterns, and notices if you are making calls and getting emails. If it thinks you are creating an isolating environment, it will suggest that you call or see friends.”
Perhaps it also suggests you not look at your cell phone bill? Can you imagine if Apple’s virtual assistant Siri could also be your therapist?
Kidding aside, the scientists say that tests on eight volunteers have found that the Mobilyze! phone boosted their moods.
“They all had a major depressive disorder when they started, and they were all both clinically and statistically better at the end of the treatment,” psychologist David Mohr told CBS.
To increase recycling in municipalities, sometimes it takes more than just encouraging citizens to be environmentally friendly. Incentive-based recycling may have a strange ring to it -- especially to the ears of city officials -- but that's what some cities are doing to discourage citizens from throwing away recyclables. By reducing the amount of waste cities must dump at landfills, they save money in tipping fees while encouraging residents to be environmental stewards. Tipping fees are charges paid to landfills based on the volume of waste.
Adding an incentive to recycling was the idea Ron Gonen had in 2005 when he co-founded RecycleBank -- a program that tracks how many pounds a household recycles in order to offer incentives, like coupons and discounts at local businesses and restaurants, to residents.
Gonen, also RecycleBank's CEO, said the inspiration behind the program was developing a business model that showed people that being environmentally conscious was not just the right thing to do, but also a good way to save money.
RecycleBank installs bar codes or radio frequency identification (RFID) tags on recycling carts, which enable them to be scanned and linked to the coordinating address. Participating cities' recycling pickup trucks are retrofitted with a mechanical arm that includes a scale and bar code/RFID scanner. "It picks up the cart, reads the chip, identifies how much your home recycled, and that's translated into RecycleBank Points," Gonen said. "They can log on to our Web site [www.recyclebank.com], and it's just like looking at your bank statement: It tells you how much they recycled each week and how many RecycleBank Points they earned."
Gonen said more than 75 cities participate in the program and service was recently launched in the UK.
In June, Westland, Mich., implemented RecycleBank as its first curbside recycling program and has already found a significant increase in how much its citizens recycle. "The last couple years we have had a drop-off recycling center program that was averaging about 90 tons a month of recyclables, and that had pretty much leveled off," said the city's mayor, William Wild. "Our first month with RecycleBank picking up recyclables at the curb, that number went up to 550 tons."
However, Westland launched curbside recycling and RecycleBank simultaneously so no data exists regarding what tonnage curbside recycling would have generated without RecycleBank. Wild added that the city pays about $30 per ton to landfill trash, and in the first month the city diverted about one-third of its solid waste from the landfill.
Wild was looking for a new recycling program because single-stream recycling -- where different recyclables, like glass, plastic and paper, can be mixed in one container -- became available in Westland. "I knew that it was easy now, and I knew that we needed some way to give our residents an incentive to do it. And that was where the RecycleBank program really helped us out," he said. "Also what was beautiful about it was that we're able to give our residents an incentive to recycle and help our local economy."
Westland deployed approximately 28,000 bar-coded recycling carts to single-family households. In the first month, 99 percent of the possible participating residents registered their carts with RecycleBank, he said.
The program tracks the amount recycled for each pickup route and averages the number per household to evenly distribute the points to all participating residences on the route. Wild said this is to garner maximum participation. "I wanted folks to be rewarded just for participating," he said. "What I didn't want was for a senior citizen who maybe puts out five pounds a week and gets 2.5 RecycleBank Points per pound to be discouraged by a family of five next door when they put out 25 or 30 pounds."
The program has been called "the city of Westland's economic stimulus plan," Wild said. If each household earns the maximum points per year, he said there's the opportunity to inject more than $11 million into the local economy annually.
Although RecycleBank manages the Web site and rewards program, it provides the city with a breakdown of how much households are recycling. Wild said this lets the city focus educational efforts on specific neighborhoods or pickup routes.
Recycling from home wasn't a new concept to North Miami, Fla., citizens, who separated plastic and paper into two recycling totes. However, in 2008 the city replaced the totes with 96-gallon carts to take advantage of the new single-stream recycling facility in Miami-Dade County, according to Pam Solomon, public information officer for the city.
RecycleBank outfitted 9,200 of the new carts with bar codes. As of press time, approximately 29 percent of the eligible residents had registered their carts with the program's Web site. Solomon said the city is increasing its marketing efforts to boost activation, but one barrier it faces is educating its diverse community. She praised the program for having customer service support and education materials in Spanish and English, but one-third of North Miami's citizens speak Creole. Outreach materials are being created for the Creole community, but Solomon said the RecycleBank concept can be difficult to explain in another language.
According to Solomon, before implementing the program, North Miami picked up about 30 tons of recyclables per month. Now on average 170 tons per month are picked up. However, the city hasn't seen the savings in landfill fees that it was anticipating. "We haven't really seen a large decrease in the garbage, which is what we were expecting when the recycling materials are removed from the waste stream," she said. "We are removing 170 tons from our garbage technically, but we still haven't seen a large decrease in the amount of garbage that we're picking up."
Solomon attributed this to residents being able to recycle materials that they weren't able to before the single-stream recycling plant opened, like phone books.
North Miami officials elected to weigh the recyclables at each residence, instead of averaging the total amount per pickup route like Westland. "In those deployments, the equity is not there because you could have a super recycler who's really trying to go green, and then you have somebody who's not even participating and they're getting the same amount of points," Solomon said. "We wanted to make them an incentive and have it really up to each household."
RecycleBank handles the technology cost and installation, and in return the cities pay the program a percentage of how much they saved by reducing the amount of waste taken to the landfill. "We look at the amount of waste we've diverted from the landfill and how much that city saved," Gonen said. "And we get a percentage of the savings."
North Miami entered into a five-year contract with RecycleBank, and when the contract was created, a baseline was set to serve as the basis for the amount of money the city saves by diverting waste from the landfill. The city's baseline is 40 tons of recyclables, Solomon said. For example, in July the city collected 170 tons of recyclables, the baseline is subtracted from the month's total, producing 130 tons of increased recycling. The 130 tons is multiplied by the $57 per ton tipping fee that the city would've paid the landfill -- North Miami saved $7,410 on tipping fees in July. RecycleBank receives a percentage of that savings -- in the first two years it gets 50 percent, 40 percent in years three and four, and 35 percent in year five.
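To make the split concrete with those July figures: $7,410 in avoided tipping fees at the first-year 50 percent share works out to roughly $3,705 for RecycleBank and $3,705 retained by the city; by year five, at 35 percent, the city would keep about $4,817 of the same monthly savings.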
What stops participants from filling their carts with heavy, nonrecyclable materials to earn extra points? Gonen said there's no reason to cheat because participants can earn a maximum of 450 points, and if they recycle they'll reach that number.
But just in case a resident dares to throw trash in the recycling cart, RecycleBank installs a button on the pickup truck that sanitation workers can press if the cart has nonrecyclables in it. When the button's pushed, the system is informed to send a letter to the rule breaker notifying him or her that nonrecyclable material was found. "We've actually helped cities reduce the amount of contamination by automating the regulating of decontamination," Gonen said.
BIOS is an acronym for Basic Input/Output System. The BIOS is a key component of the boot process. It is responsible for addressing and mapping the various hardware components of the computer system into memory so that the operating system can communicate with them. The BIOS also performs the Power-On Self Test of the hardware, known as POST. Without the BIOS, a computer would not be able to boot into its operating system.
Each BIOS and main board model combination is custom-designed to work with specific hardware components and hardware versions, which is largely dictated by the processor and chipsets that are incorporated onto the main board. This would imply that a BIOS would work across main boards that use the same processor and chipset. However, slight design differences from one main board to another make this not true.
Note: The main board is commonly referred to as a motherboard in the PC industry. In Apple® computers, it is referred to as the logic board.
Note: Boot speed, boot efficiency and size in Bytes are also taken into consideration when a BIOS is ported.
Historically, the BIOS and its settings were stored into CMOS (complementary metal-oxide semiconductor) and was commonly referred to as the CMOS Setup. The CMOS and Real-Time Clock (RTC) required an electric charge to maintain their settings. This was typically performed by an on-board battery. As the battery aged, the electrical charge that maintained the CMOS settings diminished. The BIOS and RTC would then revert to its default settings, resulting in, "Press F1 to enter the CMOS setup."
Note: The terms CMOS setup and BIOS setup were frequently used interchangeably in the 1990s and far into the 2000s.
Beginning in the late 1990s, main board manufacturers started to store the BIOS in flash memory. There are two main benefits to doing this.
Note: Although the BIOS settings are stored in Flash memory, a battery is required to maintain the main board's RTC settings.
UEFI is the new version of the BIOS. UEFI is an acronym for Unified Extensible Firmware Interface. UEFI firmware performs all the tasks of the BIOS, but also allows users to access the boot settings once the operating system has fully loaded.
Note: For more information on UEFI, please visit the UEFI FAQ.
No, there is no back door BIOS password. If you forgot your BIOS password, you'll need to contact your board manufacturer or computer manufacturer for the proper instructions on how to reset the BIOS password.
All computers ship with a BIOS or UEFI firmware. American Megatrends (AMI) is a BIOS and UEFI firmware developer. The AMI Logo is hidden from view if the BIOS/UEFI firmware is set to QUIET BOOT or SILENT BOOT. If that setting is changed, the AMI Logo will appear during the boot sequence. For more information click here.
Follow the BIOS/UEFI Flash instruction provided in the main board manual.
Visit this website. CPU-Z is freeware that may be able to identify your computer.
Error codes can differ between main board models. Contact the main board manufacturer to obtain the proper error code definitions.
You can also visit the BIOS and UEFI Firmware Support. Scroll down to For End Users: Multiple Support Options and download the status codes or beep codes document.
The MegaRAID division was acquired by LSI Logic in 2001 (now Avago). Please contact Avago for support.
There are many hardware manufacturers located all around the world. For more information, click here.
How to Avoid a Cyber Disaster
Planning for a cyber disaster makes recovering from one much easier. Still, as important as disaster planning is, it's often overlooked or put off until it is too late. In this webinar, Global Knowledge instructor Debbie Dahlin discusses planning for the unexpected -- whether the unexpected means a simple power outage, a network security breach, or a major natural disaster. She'll discuss risk analysis and risk management techniques and explain the importance and process of creating a business continuity plan.
Using a fictional company as an example, Debbie will walk you through the disaster planning process a security professional should use, and she will provide simple tricks to reduce your company's downtime before, during, and after a disaster.
- What a disaster is
- How to plan
- Techniques to reduce the impact of a disaster
- What a BCP is
- Five rules for creating a good disaster plan
- Testing your disaster plan
- Funding the plan
- How and where to get help with your disaster planning process
TCP, short for Transmission Control Protocol, is one of the core protocols of the TCP/IP suite. Its function is to allow two hosts to first set up a connection over the internet and then exchange data in the form of a stream. TCP's reliability comes from its guarantee of data delivery: it ensures that each packet arrives complete and in the same order in which it was sent.
In other words, TCP is a reliable protocol in the TCP/IP internet protocol suite that provides an ordered, continuous flow of bytes from a program on one host to a program on another. Major Internet applications, such as the WWW, e-mail and remote administration, depend on this protocol.
TCP Functionality over the Network
TCP is used when an internet application needs to send data as a continuous stream, rather than in small independent chunks, over a wide area network such as the internet. In that case TCP takes charge, sitting between IP and the application program. There are several reasons for preferring TCP over using IP directly: it retransmits lost data, helps minimize network congestion, delivers data in an organized way and, if segments arrive out of order, rearranges them.
Technically, TCP passes data in the form of segments, and each segment is divided into a header section and a data section. TCP accepts data from a stream, breaks it into chunks and adds a header to each chunk to create a TCP segment. Each TCP segment is then encapsulated into an IP datagram. The data section, which carries the payload for the application, always follows the header section. TCP keeps track of the individual segments throughout the transmission.
TCP Life Cycle
TCP operations are divided into three different phases.
- Connection establishment phase (uses a multi-step handshake procedure)
- Data transfer phase
- Connection termination phase (closes all established virtual circuits and releases all allocated resources)
Moreover, during its lifetime a TCP connection passes through a series of states: LISTEN (server side), SYN-SENT (client), SYN-RECEIVED (server), ESTABLISHED (both ends ready to send and receive data), FIN-WAIT-1 and FIN-WAIT-2 (one side is waiting for the connection to close), CLOSE-WAIT, LAST-ACK (the server is waiting to finish connection termination from its side), TIME-WAIT and finally CLOSED. Connection establishment is based on a three-way handshake. The server must first have a port open and listening for a client that wants to connect (a passive open). The client then performs an active open to establish the connection using the three-way handshake: SYN, SYN-ACK, ACK.
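The handshake itself is carried out by the operating system's TCP implementation, not by the application. A minimal Python sketch of the two roles, run as two separate programs (the host name and port are placeholders):

# server.py - passive open: bind a port and listen for clients
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))
server.listen(1)
conn, addr = server.accept()   # returns once a client's handshake completes
data = conn.recv(1024)         # bytes arrive as an ordered, reliable stream
conn.close()

# client.py - active open: the kernel sends SYN, receives SYN-ACK, replies ACK
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("server.example.com", 5000))
client.sendall(b"hello over a reliable byte stream")
client.close()

Calling connect() triggers the three-way handshake described above; once it returns, both ends are in the ESTABLISHED state and data transfer can begin.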
The diagnosis of a disease is part science, part intuition and artistry. The medical model trains doctors and healthcare specialists using an apprentice system (in addition, of course, to long schooling and lab work). The hierarchical nature of disease diagnosis has long invited automation using computers and databases. Early expert medical systems such as MYCIN at Stanford or CADUCEUS at Carnegie-Mellon University were initially modest sized arrays of if-then rules or semantic networks that grew explosively in resource consumption, time-to-manage, and cost and complexity of usability. They were compared in terms of accuracy and speed with the results generated by real world physicians. The matter of accountability and error was left to be worked out later. Early results were such that automated diagnosis was as much work, slower, and not significantly better - though the automation would occasionally be surprisingly "out of the box" with something no one else had imagined. One lesson learned? Any computer system is better managed and deployed like an automated co-pilot rather than a primary locus of decision making or responsibility.
Work has been ongoing at universities and research labs over the decades and new results are starting to emerge based on orders of magnitude improvements in computing power, reduced storage costs, ease of administration, and usability enhancements. The case in point is IBM's Watson, which has been programmed to handle significant aspects of natural language processing, play jeopardy (it beat the humans), and, as they say in the corporate world, other duties as assigned.
Watson generates and prunes back hypotheses in a way that simulates what human beings do in formulating a differential diagnosis. However, the computer system does so in an explicit, verbose, and even clunky way using massive parallel processing, whereas the human expert distills the result out of experience, years of training, and unconscious pattern matching. Watson requires about eight refrigerator size cabinets for its hardware. The human brain still occupies a space about the size of a shoe box.
Still, the accomplishment is substantial. An initial application being considered is having Watson scan the vast medical literature on treatments and procedures to match evidence-based outcomes to individual persons or cohorts with the disease in question. This is where Watson's strengths in natural language processing, formulating hypotheses, and pruning them back based on confidence level calculations - the same strengths that enabled it to win at Jeopardy - come into play. In addition, oncology is a key initial target area because of the complexity of the underlying disorder as well as the sheer number of individual variables. Be ready for some surprises as Watson percolates up innovative approaches to treatment that are expensive and do not necessarily satisfy anyone's cost containment algorithm. Meanwhile, there are literally a million new medical articles published each year, though only a tiny fraction of them are relevant to any particular case. M.D.s are human beings and have been unable to "know everything" there is to know about a specialty for at least thirty years. In short, Watson just could be the optimal technology for finding that elusive needle in a haystack - and doing so cost effectively.
A medical differential diagnosis is a set of hypotheses that subsequently have to be exploded, pruned, and finally combined based on confidence and prior probability to yield an answer. This corresponds to the so-called Deep Question and Answering architecture implemented in Watson. Within five years, similar technologies will have been licensed and migrated to clinical decision support systems from standard EMR/EHR vendors.
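As a toy illustration of that generate-and-prune pattern (not Watson's actual algorithms; the conditions, findings and threshold below are invented purely for illustration): each candidate hypothesis receives a confidence score from the evidence, weak candidates are pruned, and the survivors are ranked.

# hypothetical sketch of scoring and pruning a differential diagnosis
evidence = {"fever", "cough", "weight_loss"}

hypotheses = {
    "influenza":    {"fever", "cough", "myalgia"},
    "tuberculosis": {"fever", "cough", "weight_loss", "night_sweats"},
    "common_cold":  {"cough", "congestion"},
}

def confidence(findings, observed):
    # naive overlap score standing in for real evidence weighting
    return len(findings & observed) / len(findings)

scored = {dx: confidence(findings, evidence) for dx, findings in hypotheses.items()}

# prune low-confidence hypotheses, then rank what survives
differential = sorted(
    ((dx, score) for dx, score in scored.items() if score >= 0.5),
    key=lambda item: item[1],
    reverse=True,
)
print(differential)   # ranked hypotheses with confidence values

The point is the shape of the computation: many hypotheses generated cheaply, scored against the evidence in parallel, and combined or discarded based on confidence.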
While your clinical data warehouse may not be running 3,000 Power 750 cores and terabytes of self-contained data in a physical footprint about the size of eight refrigerators, some key lessons learned are available even for a modest implementation of clinical data warehousing decision support:
- Position the clinical data warehouse as a physician's assistant (think: co-pilot) to answer questions, provide a "sanity check," and fill in the gaps created by explosively growing treatments.
- Plan on significant data preparation (and attention to data quality) to get data down to the level of granularity required to make a differential diagnosis. ICD-10 (currently mandated for 10/2013 but likely to slip) will help a lot, but may still have gaps.
- Plan on significant data preparation (and more attention to data quality) to get data down to the level of granularity required to make a meaningful financial decision about the effectiveness of a given treatment or procedure. Pricing and cost data are dynamic, changing over time. New treatments start out expensive and become less costly. Time series pricing data will be critical path. ICD-10 (currently mandated for 10/2013 but likely to slip) will help but will need to be augmented significantly with new pricing data structures, and even then may still have gaps.
- Often there is no one right answer in medicine - it is called a "differential diagnosis" - prefer systems that show the differential (few of them today do, though reportedly Watson can be so configured) and trace the logic at a high level for medical review.
- Continue to lobby for tort and liability reform as computers are made part of the health care team, even in an assistant role. Legal issues may delay, but will not stop implementation in the service of better quality care.
- Look to natural language interfaces to make the computing system a part of the health care team, but be prepared to work with a print out to a screen till then.
- Advanced clinical decision support, rare in the market at this time, is like a resident in psychiatry, in that it learns from its right and wrong answers using machine learning technologies as well as "hard coded" answers from a database of semantic network.
- This will take "before Google (BG)" and "after Google (AG)" in medical training to a new level. Watson-like systems will be available on a smart phone or tablet to residents and attendings at the bedside.
Finally, for the curious, the cost of the hardware and customized software for some 3,000 Power 750 cores (commercially available "off the shelf"), terabytes of data and including time and effort of a development team of some 25 people with Ph.D.s working for four years (the later being the real expense), my back of the envelope pricing (after all this is a blog post!) weighs in at least in the ball park of $100 million. This is probably low, but I am embarrassed to price it higher. This does not include the cost of preparing the videos and marketing. One final thought. The four year development time of this project is about the length of time to train a psychiatrist in a standard residency program.
- "Wellpoint's New Hire. What is Watson?" The Wall Street Journal. September 13, 2011. http://online.wsj.com/article_email/SB10001424053111903532804576564600781798420-lMyQjAxMTAxMDEwMzExNDMyWj.html?mod=wsj_share_email
- IBM: "The Science Behind and Answer": http://www-03.ibm.com/innovation/us/watson/
Posted September 14, 2011 4:06 PM
Hadoop is a software solution developed to solve the challenge of rapidly analyzing vast, often disparate data sets, commonly known as big data. The results of these analytics, especially when produced quickly, can significantly improve an organization’s ability to solve problems, create new products and even cure diseases. One of the key tenets of Hadoop is to bring the compute to the storage instead of the storage to the compute. The fundamental belief is that the network in between compute and storage is too slow, impacting time to results.
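To make the model concrete, here is a minimal word-count sketch in the style of Hadoop Streaming, which lets mappers and reducers be written in ordinary Python. The sample input and the local simulation are purely illustrative and not taken from the paper.

```python
from collections import defaultdict

def mapper(lines):
    """Emit (word, 1) pairs; in Hadoop this runs on the node that already holds the data block."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum the counts per word; Hadoop delivers the mapper output grouped and sorted by key."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Local stand-in for data that would normally be spread across the cluster's disks.
sample = ["big data moves compute to the data", "the data stays put"]
print(reducer(mapper(sample)))
```

Because the mapper runs where the data already sits, only the small intermediate (word, count) pairs cross the network, which is the point of bringing compute to the storage.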
Download the full paper to read Storage Switzerland’s analysis on why moving to shared storage, and software-defined storage in particular, is the right architecture to support Hadoop going forward. | <urn:uuid:b514e705-a557-4331-8f28-67aa3d39881e> | CC-MAIN-2017-04 | http://www.hedviginc.com/press-releases/whitepaper-storage-switzerland-hadoop-storage-das-versus-shared | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00273-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962301 | 155 | 3.015625 | 3 |
Ruby on Rails is the name of the Web framework created by David Heinemeier Hansson. Ruby is the language that RoR is based on. The name Ruby on Rails is the concoction thought up when rails.com wasn't available. It became a joint moniker to sell the framework.
Why Ruby on Rails is to application development what Apple is to desktops. | <urn:uuid:9123efe0-2cb2-4451-87cc-4aa7ea02dedb> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/12-Things-You-Need-to-Know-About-Ruby-on-Rails/9 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950117 | 75 | 2.796875 | 3 |
Near Field Communication (NFC) is a short-range wireless connectivity technology (also known as ISO 18092) that is built upon Radio Frequency Identification (RFID) technology. Examples of contactless smart card communications are ISO/IEC 14443 and FeliCa, which allow communications at distances up to 4 cm. NFC operates at 13.56 MHz and transfers data at up to 424 Kbits/second. NFC readers will be able to interrogate tags based on the ISO 15693 standard, provided that the tags employ the NFC Data Exchange Format (NDEF), which sets a common data-exchange format for NFC Forum-compliant devices and tags.
NFC has three modes:
- Read and write tags (open mode):
An NFC enabled device can read an NFC tag that is embedded within physical material. For example, the device can read a sticker, with an NFC tag embedded, in order to download an application. An NFC enabled device can also write to an NFC tag.
- Tap to connect and share (open mode):
When two NFC enabled devices are brought within four centimeters of one another, a handshake is performed between two devices to establish a connection. Once a connection is established, data can be transferred without the need for manual configuration.
- Emulate card (secured mode):
An NFC enabled device can be used for retail payments by using a mobile wallet app or for accessing a secured building.
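The NDEF format mentioned earlier is compact enough to parse by hand. The sketch below decodes a single short-format NDEF text record in Python purely to illustrate the record layout; the sample bytes are made up, and in practice the Android, BlackBerry and Windows Phone APIs hand applications equivalent objects directly.

```python
def parse_ndef_text_record(record: bytes) -> str:
    """Decode a short-format (SR) NDEF record of Well Known Type 'T' (text)."""
    flags = record[0]
    assert flags & 0x10, "only short records are handled in this sketch"
    type_length = record[1]
    payload_length = record[2]
    record_type = record[3:3 + type_length]
    assert record_type == b"T", "not an NDEF text record"
    payload = record[3 + type_length:3 + type_length + payload_length]
    status = payload[0]                         # bit 7: UTF-16 flag, bits 0-5: language code length
    language_length = status & 0x3F
    encoding = "utf-16" if status & 0x80 else "utf-8"
    language = payload[1:1 + language_length].decode("ascii")
    text = payload[1 + language_length:].decode(encoding)
    return f"[{language}] {text}"

# 0xD1 = message begin + message end + short record, TNF 0x01 (Well Known Type).
sample = bytes([0xD1, 0x01, 0x08, 0x54, 0x02]) + b"en" + b"Hello"
print(parse_ndef_text_record(sample))   # -> [en] Hello
```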
NFC will make tapping a new consumer practice. The simplicity of tapping and the high-volume deployment of NFC devices will give consumers the ability to interact with both the real world and the virtual world. NFC-enabled devices will be used in a wide array of business applications. ABI Research expects 285 million NFC-enabled devices to be shipped in 2013 and believes NFC will come out of its "trial phase". The increase in the potential user base is making investment in NFC applications more justifiable. (ABI, 2012)
In order to be ready for the wave of opportunity that NFC will bring with it, it is important that developers understand the use cases and are familiar with the current APIs for NFC technology. Android, BlackBerry and Windows Phone 8 all support NFC. | <urn:uuid:f45e655d-d715-45f7-b975-b47192168527> | CC-MAIN-2017-04 | https://developer.att.com/technical-library/device-technologies/near-field-communication | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00089-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9027 | 451 | 3.015625 | 3 |
Reprinted with permission from the May 2005 issue of Government Technology's Public CIO
Criminal justice leaders have long envisioned how technology can expand and improve information sharing, only to be frustrated in their efforts. Now the justice community has extensible markup language (XML) in its sights, which will allow police, prosecutors, court clerks, judges and corrections officials to exchange information in a timely manner without breaking the bank.
Within the Department of Justice (DOJ), the Office of Justice Programs put together a task force of 32 federal, state, local and international organizations to design an XML standard specifically for criminal justice.
Their hard work seems to be paying off -- funds are beginning to flow toward pilot projects, and more than 50 justice information-sharing projects now use XML.
In February, the National Governors Association awarded six states -- Colorado, Kansas, Kentucky, Nebraska, Pennsylvania and Wisconsin -- each a $50,000 grant to run pilots to improve existing justice systems.
Also in February, the U.S. departments of Justice and Homeland Security announced a new partnership encouraging use of XML throughout federal, state and local government. Government officials believe this is a major step in broadening how the public sector uses XML standards, especially the Global Justice XML Data Model (Global JXDM) being developed by the DOJ.
XML is a markup language that marks the meaning of content within a document or form. Unlike another markup language known as HTML, which has to do with the appearance of documents and forms on the Web, XML specifies what the information is with tags that identify categories of information.
These categories are called objects and consist of tagged data elements. A "person" object may contain elements that are physical descriptors (eye and hair color, weight, height, etc.), biometric descriptors (DNA, fingerprints) and social descriptors (marital status, occupation). A vehicle object would contain other types of elements, such as make, model, registration number or title. XML can then address the relationship between the objects (Is the person the owner of the vehicle?).
The key to XML is that objects have their own vocabulary -- described in a data dictionary -- making it possible to identify and exchange the information objects from one computer to another without having to use the same operating systems or application software.
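As a rough illustration of the idea (the element names below are simplified and are not the actual Global JXDM vocabulary), a "person" and a "vehicle" object might be exchanged and read like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified element names -- not the real Global JXDM schema.
document = """<IncidentReport>
  <Person>
    <EyeColor>brown</EyeColor>
    <HairColor>black</HairColor>
    <MaritalStatus>single</MaritalStatus>
  </Person>
  <Vehicle>
    <Make>Ford</Make>
    <Model>Taurus</Model>
    <RegistrationNumber>ABC-1234</RegistrationNumber>
  </Vehicle>
</IncidentReport>"""

root = ET.fromstring(document)
print(root.find("Person/EyeColor").text)             # brown
print(root.find("Vehicle/RegistrationNumber").text)  # ABC-1234
```

Because the receiving system only needs the shared vocabulary, not the sender's database or application software, the same document can be consumed by any agency that understands the tags.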
Because the justice community is riddled with incompatible legacy systems, it has embraced XML as a basis for quick and inexpensive document exchange. For the first time, various justice and public safety agencies can develop a common vocabulary so documents and information can be exchanged quickly and efficiently.
Global Justice XML Data Model
Global JXDM allows different agencies to organize a justice-based data dictionary within their separate databases, which identifies content and gives it meaning. Besides the dictionary, Global JXDM is also a data model that defines structures and a repository of reusable software components.
By making the standard independent of vendors, operating systems, storage media and applications, JXDM is fast emerging as a key technology for assisting how criminal and judicial organizations exchange information.
Despite all the kudos XML has received, problems with the standard lurk beneath the surface. A report issued by the Government Accountability Office (GAO) in 2002 warned that XML lacks maturity because of the scarcity of public-sector leadership for implementing it. While technical standards, such as the Global JXDM, are in place, the GAO determined that mature business standards necessary to make XML ready for extensive use are lacking. For example, there are no standards for identifying potential business partners for transactions, exchanging precise technical information about the nature of proposed transactions and executing transactions in a formal, legally binding manner.
Another problem is performance. XML takes up to 10 times the processing power used by other data formats, according to some reports. By storing information as text, XML creates large files, in part, because each element within a document must be tagged. That strains servers and computer networks.
Plenty of jurisdictions are going down the Global JXDM path. Kentucky proposed to electronically transmit data collected at the time of booking from the Automated Fingerprint Identification System (AFIS) to state prosecutors' case management system. Wisconsin's Department of Justice will use Global JXDM to provide justice personnel access to current conditions of probation and parole.
The Unified Port of San Diego, the Los Angeles Port Police and the Los Angeles County Sheriff now share data with the San Diego Harbor Police Department, as part of an initiative undertaken with the U.S. Department of Homeland Security's Office of State and Local Government Coordination and Preparation. The project is the first commercialized use of the Global JXDM standard, thanks to a partnership with Crossflo Systems.
A host of other jurisdictions are using Global JXDM, including North Carolina's Department of Justice, Arizona's state courts, Arkansas' Integrated Justice Information Systems Program, Colorado's Integrated Criminal Justice Information System, Pennsylvania's JNET system and Minnesota's Department of Public Safety. | <urn:uuid:bf2e1ba1-3a37-4b0f-a0a6-863bfc296b7a> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/How-It-Works-XML--Justice.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00235-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911968 | 1,005 | 2.546875 | 3 |
Good Viruses Simply Don't Exist
21 Aug 2003
"Welchia" only offers a false sense of security
The appearance of the "Welchia" network worm has provoked lively debate over the legitimacy of malware programs that battle other malware. Unfortunately many users have failed to properly weigh the relative benefits and disadvantages of "Welchia". Kaspersky Lab feels it is important to shed light on the situation.
There is no such thing as a good virus. The side effects caused by "Welchia" in deleting "Lovesan" and its attempts to update Windows are just the tip of the iceberg. Users need to be aware of the vital issues lying hidden just beneath the water line.
Firstly, "Welchia" is guilty of breaking into computers, an unambiguously criminal act. The worm makes every effort to hide itself and even attacks IIS servers, leaving them vulnerable. Moreover, the worm only installs the Windows patch, but does not reboot computers. Until a reboot is done a system is still vulnerable, and in the case of servers and machines which are rarely rebooted, the "beneficial" effect of the worm is nil.
Secondly, the network worm modifies infected systems and downloads potentially dangerous objects (an FTP server module and a carrier-file containing the malicious program). These objects can lead to operating system malfunctions and open breaches that can be exploited by evildoers. For example, using an FTP server makes it easy to steal sensitive information from infected systems.
Thirdly, "Welchia" creates malicious data streams that compromise the owners of infected machines and which require additional payments for network traffic. These data streams clog up Internet channels and can potentially provoke a global Internet catastrophe. If the number of infected systems passes a certain threshold, the volume of virus traffic could overload data transmission channels and lead to an Internet-wide slowdown.
Finally, the worm gives users a false sense of security and promotes passivity with regard to self-security. Such user apathy and inaction can lead to unpredictable consequences. The Internet could turn into a virus battlefield where network traffic is soaked up by a pack of malicious programs battling each other for supremacy.
Kaspersky Lab stresses that there is no such thing as a good virus. There are destructive viruses and seemingly harmless viruses. Nevertheless, all viruses commit cyber crimes in that they conduct unauthorized activities and have negative side effects. Additionally, rather than hope for an "anti-virus virus", it is far better for users to actively protect their own machines. This is the only way to significantly prevent malicious programs from penetrating computer security systems and to avert increasing Internet chaos. | <urn:uuid:1c356539-dc5b-48f9-947f-2da6a6c07f18> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2003/Good_Viruses_Simply_Don_t_Exist | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92139 | 545 | 2.859375 | 3 |
Is Open Source Out to Take Over the World?
A more accurate question, though, might be: Is Open Source Out to Stop the World Takeover?
“Open source” refers to developmental practices, primarily in software but some hardware, that give access to end products’ source materials, usually the source code. This access allows coexisting use of various methods in production, whereas commercial software tends to be restrictive and secretive.
Basically, open-source software is free and not only does it let you keep, modify and redistribute it at will, it encourages you to do so. Commercial or proprietary software isn’t free, but you can keep it. Chances are, though, it will be obsolete in a few years.
Open-source experts are quick to point out that their interests and mission are based on the greater good and the needs of humanity.
“The thing about open source is, most of us are not out to undermine proprietary software, bring it to its knees and make sure nobody ever uses anything but open software,” said Danese Cooper, Intel senior director of open-source strategy. “The open-source people want to see the proprietary community forced to a level playing field, see standards adopted so that developers can follow what’s going on and write meaningful add-ons to existing software.”
Cooper is also the secretary and treasurer of the Open Source Initiative (OSI) board of directors and is a self-described “open-source diva.”
The benefits of open-source software to the regular computer user are best described through examples. One of the most obvious is OpenOffice.org, a free office suite comparable to Microsoft’s.
Cooper said she sees it as another example of how open-source software helps everyone and is not just free, knockoff software.
“So, open source even helps people who don’t use the software by them applying pressure on a competitor, which previously had no competitor, to fix the problems with the software they were shipping,” she said. “Better software, easily tried at no risk/cost, creates market pressure to force the dominant player to be more responsive.”
Open source arguably has made a bigger impact overseas, where software is still largely pirated because of high prices. An open-source program solves the cost problem but also helps to break the language barrier.
“Young college students in Romania localized OpenOffice into Romanian in three days,” Cooper said. “It was the first time the Romanian people had software that didn’t require that they also learn a foreign language.”
Open-source software offers a third option for anyone in the world to have access to programs without breaking the bank or breaking the law, and as a result, open-source software might end up making a difference.
“We recognize, in open source, everyone is motivated by self-gain, so we let people do things that benefit themselves and the rest of the world at the same time,” Cooper said. | <urn:uuid:9e06b258-c837-4efc-b48c-796367c93500> | CC-MAIN-2017-04 | http://certmag.com/is-open-source-out-to-take-over-the-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957732 | 638 | 2.875 | 3 |
If you spend time in or around the programming community you probably hear the term “P versus NP” rather frequently. Unfortunately, even many with formal computer science training have a weak understanding of the concept.
So here’s a simple and concise explanation:
- P vs. NP
- The P vs. NP problem asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer.
So let’s figure out what we mean by P and NP.
P problems are easily solved by computers, and NP problems are not easily solvable, but if you present a potential solution it’s easy to verify whether it’s correct or not.
All P problems are also NP problems: if a problem is easy for the computer to solve, its solution is easy to verify. So the P vs. NP problem is just asking whether these two problem types are the same, or whether they are different, i.e., whether there are some problems that are easily verified but not easily solved.
It currently appears that P ≠ NP, meaning we have plenty of examples of problems that we can quickly verify potential answers to, but that we can’t solve quickly. Let’s look at a few examples:
- A traveling salesman wants to visit 100 different cities by driving, starting and ending his trip at home. He has a limited supply of gasoline, so he can only drive a total of 10,000 kilometers. He wants to know if he can visit all of the cities without running out of gasoline.
- A farmer wants to take 100 watermelons of different masses to the market. She needs to pack the watermelons into boxes. Each box can only hold 20 kilograms without breaking. The farmer needs to know if 10 boxes will be enough for her to carry all 100 watermelons to market.
All of these problems share a common characteristic that is the key to understanding the intrigue of P versus NP: In order to solve them you have to try all combinations.
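A small sketch makes the asymmetry visible. Checking a proposed packing of watermelons into boxes takes one pass over the items, while finding a packing by brute force means trying every assignment; the instance below is tiny so the search actually finishes.

```python
from itertools import product

def verify(assignment, weights, box_limit=20, boxes=3):
    """Fast: one pass over the items to check that no box is overloaded."""
    totals = [0] * boxes
    for box, weight in zip(assignment, weights):
        totals[box] += weight
    return all(total <= box_limit for total in totals)

def solve(weights, box_limit=20, boxes=3):
    """Slow: boxes ** len(weights) candidate assignments in the worst case."""
    for assignment in product(range(boxes), repeat=len(weights)):
        if verify(assignment, weights, box_limit, boxes):
            return assignment
    return None

weights = [9, 8, 7, 6, 6, 5, 5, 4]   # 8 items is fine; 100 items would mean 3**100 candidates
print(solve(weights))                # prints the first packing that fits
```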
This is why the answer to the P vs. NP problem is so interesting to people. If anyone were able to show that P is equal to NP, it would make difficult real-world problems trivial for computers.
- P vs. NP deals with the gap between computers being able to quickly solve problems vs. just being able to test proposed solutions for correctness.
- As such, the P vs. NP problem is the search for a way to solve problems that require the trying of millions, billions, or trillions of combinations without actually having to try each one.
- Solving this problem would have profound effects on computing, and therefore on our society.
[ 03.02.14: Cleaned up some of the explanation based on the possibility of confusion. ]
- There is a class of NP problems that are NP-complete, which means that if you can solve one of them quickly, you can use the same method to solve any other NP problem quickly.
- This is a highly simplified explanation designed to acquaint people with the concept. For a more complete exploration, check out the Wikipedia article or the numerous resources online. | <urn:uuid:6cb73a07-30ea-4920-8caa-f88dcb1444e6> | CC-MAIN-2017-04 | https://danielmiessler.com/study/pvsnp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961156 | 662 | 3.65625 | 4 |
Net-worm:W32/Allaple.A is a powerful polymorphic worm that can spread over the Internet and over Local Area Networks (LAN).
Once detected, the F-Secure security product will automatically disinfect the suspect file by either deleting it or renaming it.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for further assistance.
Disinfection of Network Worms
For instructions on how to eliminate a local worm infection, please see Eliminating a Local Network Outbreak
To propagate, it is able to scan for computers vulnerable to a number of exploits to spread itself; it can also perform a dictionary attack on network share passwords.
Additionally, the worm performs a Denial of Service (DoS) attack on a number of websites based in Estonia.
The worm copies itself multiple times to a hard drive and also affects HTML files.
The worm's file is polymorphically encrypted, which means every copy of the worm is different. The only constant aspect of the worm's code is the size of its executable file - 57856 bytes.
The worm creates a different CLSID for every copy of itself that it creates on the hard drive. The number of these copies can be quite large. The names of the worm's files are random. For example:
Execution & Propagation
After the worm's file is run it goes through the polymorphic decryptor and then proceeds to the static part of the code that allocates a memory buffer and extracts the main worm's code into it. Then the control is passed directly to the extracted worm's code.
After getting control, the worm creates a few threads. One thread scans for vulnerable computers (on TCP ports 139 and 445) and sends exploits there in order to infect them.
The other thread scans for .HTM and .HTML files on all local hard disks and infects them by prepending a reference to the worm's CLSID.
One of the remaining threads performs a DoS attack on three websites located in Estonia. The following TCP ports are used during the DoS attack:
The worm also tries to brute-force network share passwords by performing a dictionary attack on them. The following passwords are used:
F-Secure Anti-Virus detects this malware with the following updates: | <urn:uuid:d1fddf8e-3aca-4f49-98e7-0306ba8fa236> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/allaple_a.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.841171 | 494 | 2.53125 | 3 |
IP Version 6 Address Types
In 1998, the Internet Engineering Task Force (IETF) released RFC 2460, outlining the technical specifications of IPv6, which addressed the shortcomings of the aging IPv4 protocol. As with any evolution of technology, new elements exist in the protocol that may seem strange and unfamiliar. This certainly includes address representation, space, and so forth, but also includes a number of different types of addresses as well. A subset of these new addressing types has corresponding types in IPv4, but many will seem significantly different. The purpose of this white paper is to examine addressing classifications in detail and outline their functions within the context of the protocol.
To begin with, using the word types immediately implies the existence of numerous possible categories, and that impression is absolutely correct. Since it would be understandable to think of IPv4 addressing as somewhat monolithic, the existence of more than one addressing type can sound daunting to remember and understand. To put this concept into perspective, consider Figure 1 below, depicting various types of cellular telephones. The various "species" of devices displayed have differing colors, form factors, manufacturers, and capabilities, but in reality all are phones. In our discussion of IPv6 addressing types, just think of the different classifications as having specific characteristics, but still serving the same basic purpose: communication!
Three major addressing classification types exist in IPv6, with some subtypes as well. These consist of multicast, anycast, and unicast addresses, and each is technically worthy of separate consideration.
In the classful addressing world of IPv4, multicast addressing was neatly confined to the Class D range, from 224.0.0.0 to 239.255.255.255. Within this group were specific allocations, such as the 239.0.0.0/8 administratively scoped range. Unicast represented one-to-one communication between hosts, and broadcast represented one-to-every communication. Multicast revolves around a concept not unlike being part of a club; communication and inclusion is a matter of whether or not you belong to that particular group. In IPv6, multicast takes on an even greater significance: first, because all broadcasts have been removed from protocol operations, and second, because a variety of protocol functions take place by means of multicast. Understandably, the address itself has several distinct features, as reflected in Figure 2.
Indicator or Prefix
Multicast addresses begin with eight bits, indicating the function involved, composed of all ones (1s), and yielding the hexadecimal characters of FF. The general designation for the entire multicast range is FF00::/8, which is readily recognizable with even casual observation.
While the RFC specifies several values for the flag field, at present only one (the T bit) is in current use and identifies the lifetime of the address. 0 indicates a permanent address, while 1 denotes a temporary one.
The 4-bit scope field describes to what degree the multicast address may be forwarded throughout the network, though not all values of this field have been defined. A partial list of these values is as follows:
1 Interface-Local
2 Link-Local
4 Admin-Local
5 Site-Local
8 Organization-Local
E Global
The remaining 112 bits of the IPv6 multicast address identify the multicast group itself. In fact, some multicast addresses are already defined, and every network professional should be able to readily identify:
FF02::1 All Hosts
FF02::2 All Routers
FF02::5 OSPFv3 Routers
FF02::6 OSPFv3 Designated Routers
FF02::A EIGRP Routers
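The structure described above is easy to pull apart programmatically. Here is a small, purely illustrative Python sketch using the standard ipaddress module:

```python
import ipaddress

address = ipaddress.IPv6Address("ff02::5")   # the well-known OSPFv3 routers group
print(address.is_multicast)                  # True: the address starts with the FF prefix

second_octet = address.packed[1]
flag_bits = second_octet >> 4                # 0 -> permanent, well-known assignment (T bit clear)
scope = second_octet & 0x0F                  # 0x2 -> link-local scope
print(f"flags=0x{flag_bits:X} scope=0x{scope:X}")
```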
The anycast addressing type deserves a special explanation, not only because it may appear unfamiliar, but also because it seems contrary to fundamental networking principles. To begin with, anycast employs the use of identical IP addressing to multiple devices within an internetwork, with nodes relying on routing protocols to determine which is physically closest. While not deployed extensively even in IPv4, this type of approach could be applied to shared resources such as DNS servers, for example. In IPv6, anycast is called out as a distinct addressing type, even though it is still not widely implemented.
No Distinct Format
As if the anycast concept itself is not confusing enough, the issue becomes more complicated by the fact that any IPv6 unicast address can be used as an anycast address. Unlike multicast addresses, there is no easily distinguishable format by which to recognize them. The most obvious distinguishing characteristic of an anycast address is whether it exists on more than one device within a routing domain.
As an engineer, I frankly have had a difficult time really grasping the concept of anycast addressing, though I have read many detailed explanations of the topic. The basic idea is to use identical addressing with shared resources, and have the natural process of IP routing select the closest resource, but that was as far as my understanding went. During the course of using a Global Positioning System program on my Android Phone, I inadvertently discovered a completely new method of understanding this previously ambiguous topic. | <urn:uuid:7421b1b2-aa69-49d2-883d-dd2299392215> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/ip-version-6-address-types/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938943 | 1,038 | 3.46875 | 3 |
“Effective encryption is in the midst of becoming the default way that many communications occur on the Internet,” reads the paper by Peter Swire, the former chief counselor for privacy in the U.S. Office of Management and Budget during the Clinton Administration, and now a law professor at Ohio State University. He highlights an Internet landscape where people have options such as virtual private networks, which give them secure pathways to browse the Internet, and encrypted email services from free email providers, such as Gmail.
The rise of encryption tools will create a chasm between sophisticated intelligence agencies and less tech-savvy law enforcement divisions and widen the “separation between ‘have’ and ‘have not’ agencies” that are able to tap encrypted data, Swire writes.
As it becomes more difficult for some agencies to intercept data, “government access to communications thus increasingly relies on a new and limited set of methods, notably featuring access to stored records in the cloud,” Swire argues.
He argues that cloud services -- such as online data storage systems -- “very often” have access to the contents of communications. “It will thus very often be technically possible for the companies to respond to lawful access requests,” the paper notes. | <urn:uuid:38b1f562-00c5-4410-ba18-6f1ed082fad5> | CC-MAIN-2017-04 | http://www.nextgov.com/cloud-computing/2012/06/encryption-could-drive-government-break-your-cloud/56233/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955008 | 261 | 2.75 | 3 |
Malicious software programs like TDSS (another term for TDL), as detected by Kaspersky Lab products, are the most advanced and perfected tools today in the arsenal of cybercriminals. This particular example of malware uses sophisticated methods to infect a system, hide its tracks, control the respective PC remotely and prepare it for installation of other malicious programs. The diverse capabilities of TDL have allowed its author to create a botnet made up of millions of personal computers.
Experts at Kaspersky Lab investigated the behavior of a new version of the TDL-4 malicious program and evaluated its new capabilities, among which are the use of peer-to-peer networks for controlling infected computers, and functions for opening a proxy-server. The analysis of TDL-4 undertaken by Kaspersky Lab experts Sergey Golovanov and Igor Sumenkov has allowed them to determine the new capabilities of the malware and to estimate the number of infected PCs. Changes in TDL-4 have been aimed at building a botnet which is as well-hidden from competitors and anti-virus companies as possible, and which theoretically provides access to infected machines even upon closing all the command centers.
In particular, TDL-4 can now delete around 20 of the most popular competing products on an infected machine, among them such widespread programs as Gbot, ZeuS, Optima and others. In addition, TDSS itself installs around 30 utilities on an infected PC, including fake anti-virus programs and systems for both inflating advertising traffic and distributing spam. One of the most significant new additions to TDL-4 is the ability to infect 64-bit operating systems. To control the botnet – besides the command servers – the Kad public file exchange network is being used for the first time. Another new function of TDL-4 is the ability to open a proxy server. Cybercriminals offer anonymous access services via infected computers, charging around 100 dollars per month for such a service.
Like previous versions, TDL-4 is distributed mainly with the use of so-called partner programs. The authors of the malware do not expand the network of infected computers themselves; instead they pay third parties for that. Depending on the particular terms and conditions, partners are paid from 20 to 200 US dollars for the installation of 1000 malicious programs.
Despite the protective measures in place on the controlling servers, the experts of Kaspersky Lab managed to extract general statistics on the number of infected computers. Analysis of the obtained data shows that in just the first three months of 2011 TDL-4 helped infect more than 4.5 million computers around the world, with a large proportion of those being situated in the US. Taking into account the above-mentioned price rates for the distribution of malware, one can estimate the approximate expenditure of cybercriminals on the creation of a botnet made up of American users: around 250 000 dollars. “We don’t doubt that the development of TDSS will continue,” said the experts who carried out the investigation. “Malware and botnets connecting infected computers will cause much unpleasantness - both for end-users and IT-security specialists. Active reworkings of TDL-4 code, rootkits for 64-bit systems, the launch of a new operating system, use of exploits from the Stuxnet arsenal, use of p2p technologies, proprietary “anti-virus” and much much more make the TDSS malicious program one of the most technologically developed and most difficult to analyze.”
The full version of the TDL-4 investigation can be found at the site securelist.com. | <urn:uuid:b93f8de2-c704-42e7-9b2c-fee20bd4a9c8> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Promotion_of_TDSS_Botnet_in_US_Costs_Cybercriminals_250 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00034-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938842 | 746 | 2.546875 | 3 |
Significant vulnerabilities have been found in WhatsApp Web, the web-based extension of the popular WhatsApp application for phones.
The exploit can allow attackers to trick victims into executing malware on their machines in a new, sophisticated way.
Check Point security researcher Kasif Dekel found that to exploit the vulnerability, an attacker simply needs to send a WhatsApp user a seemingly innocent vCard contact card, containing malicious code. Once opened in WhatsApp Web, the executable file in the contact card can run, further compromising computers by distributing malware including ransomware, bots, remote access tools (RATs), and other types of malicious code.
To target an individual, all an attacker needs is the phone number associated with the account. WhatsApp Web allows users to view any type of media or attachment that can be sent or viewed by the mobile platform/application, including images, videos, audio files, locations and contact cards.
In September 2015, WhatsApp announced they had reached 900 million active users a month, and at least 200M are estimated to use the WhatsApp Web interface. WhatsApp Web mirrors all messages sent and received (includes images, videos, audio files, locations and contact cards), and fully synchronizes users’ phones and desktop computers so that users can see all messages on both devices.
WhatsApp has verified and acknowledged the security issue and has developed a fix. The fix started rolling out on August 27th, 2015; however, users should update their WhatsApp Web software immediately to ensure they are protected.
All versions of WhatsApp Web after v0.1.4481 contain the fix for the vulnerability. | <urn:uuid:3aba57a5-5ee4-4989-8d57-c2059fb11f40> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2015/09/08/vulnerabilities-in-whatsapp-web-affect-200-million-users-globally/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00520-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922775 | 317 | 2.578125 | 3 |
FDDI – Fiber Distributed Data Interface – is a 100 Megabit-per-second technology that uses a timed token over a dual ring of trees. The FDDI standard was developed by the American National Standards Institute's X3T9.5 standards committee. FDDI was designed to run over fiber cables, transmitting light pulses to convey information between stations, but it can also run on copper using electrical signals.
FDDI can transport data at up to 100 Mbps in a local area network (LAN) that can extend in range up to 200 kilometers (120 mi), a reach that often places FDDI in the metropolitan area network (MAN) category. An extension to FDDI, called FDDI-2, supports the transmission of voice and video information as well as data. Another variation of FDDI, called FDDI Full Duplex Technology (FFDT), uses the same network infrastructure but can potentially support data rates up to 200 Mbps. Work is underway to connect FDDI networks to the developing synchronous optical network (SONET).
The FDDI components of FDDIXPress and the accompanying FDDI board conform to the ANSI and ISO FDDI standards. The specific FDDI components (and the ANSI and ISO standards on which they are based) are listed below: | <urn:uuid:c09934c8-8add-4fac-8cd6-689fd31a7e0b> | CC-MAIN-2017-04 | http://www.fs.com/blog/fddi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920224 | 267 | 2.84375 | 3 |
Human-machine Interfaces: Opportunities for Great Experiences
13 Aug 2015
The graphical user interface (GUI) has dominated computing and mobile device interaction, using icons along with a computer mouse or touchscreen to navigate the system. The growth of smart accessories, wearables, and the concept of connected devices will be influenced by other types of user interface (UI). As the vision of the Internet of Things (IoT) starts to take hold, how do people engage and interact with the plethora of new connected devices? Will each thing have its own interface or will it rely on some common, external method, such as a mobile device, a mounted display, or a public kiosk? | <urn:uuid:fd3ab1e7-e886-4fb2-9a6b-40e3c014ec26> | CC-MAIN-2017-04 | https://www.abiresearch.com/whitepapers/Human-machine-Interfaces-Opportunities-for-Great/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00246-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901714 | 139 | 2.5625 | 3 |
This week’s #broaddata is a breakdown of how a botnet attack works. For those of you who have already heard of a botnet, jump right in and enjoy the graphic!
But if you’re one of the many asking yourself, “What’s a botnet and why do I care?”, we’re here to help.
A botnet is a collection of Internet connected devices communicating with each other in order to perform tasks in concert. As a group, these devices are capable of causing serious havoc if they’re all simultaneously instructed to attack a single target like a bank or government office. A criminal can take control of thousands of computers – even your computer – and create a botnet. Before your computer can be taken over, it often has to have a piece of malware installed on it. This can be a program with a virus that you accidentally downloaded or it could be installed remotely, without you knowing about it.
However it happens, once your computer is infected, it’s possible that it can be controlled and used in a botnet attack.
There are things you can do to help prevent your computer form becoming infected and used by a criminal. Check out this article in PCWorld to learn how to prevent malware and what to do if you think your computer has malware on it.
And know that your ISP is there to help you keep your computer safe. If they detect that you or something on your home network is communicating with a compromised server or with a known botnet, they can alert you.
To learn more about how your ISP can help you prevent a botnet attack, go to http://transition.fcc.gov/bureaus/pshs/advisory/csric3/CSRIC-III-WG7-Final-Report.pdf
GCN LAB REVIEWS
A breakdown on common RAID configurations
What’s a RAID?
- By Greg Crowe
- Aug 31, 2009
A Redundant Array of Independent Disks is a way to make a computer treat a number of physically distinct drives as one big drive. There are several different RAID configurations, each with advantages and disadvantages. The most common types of RAID are 0, 1, and 5. Here are the configurations mentioned in this review.
In this GCN Lab comparison report:
NAS appliances cover the middle ground of extra storage
What Is A RAID?
Buffalo TeraStation III
LaCie 5big Network
Sans Digital EliteNAS
Seagate BlackArmor NAS 440
Gaining Virtual V-locity
JBOD span: Not usually considered a RAID configuration, JBOD stands for “just a bunch of disks” and is exactly that. In this mode, disks of equal or varying sizes are linked and treated as one big, single drive. This configuration has the advantage of being able to use drives of different sizes and utilizing nearly all their combined capacity. However, a JBOD span has no fault tolerance — if one drive fails the whole thing fails — which is why it isn’t used often.
RAID 0: This configuration uses striping, which is one of two basic RAID techniques. Each disk's space is split into stripes of a certain length, which can vary based on different RAID management programs but is always the same within a single RAID. The software then writes to each stripe before moving onto a stripe in the same position on the next drive. So the first stripe on disk one would be followed by the first stripe on disk two, and so on. This configuration allows you to use all the drives' capacity. But with RAID 0, if a single drive fails, the entire RAID fails, and you will likely lose all your data. It does offer better disk access times than JBOD.
RAID 1: This RAID uses the other basic technique, called mirroring. The disks are paired, and the second disk in each pair is an exact copy of the first. Each pair is treated as a separate drive. This has maximum redundancy, as both drives in a pair would need to fail for that logical drive to fail. Also, RAID 1 tends to have faster access times than other configurations. However, the capacity is half that of RAID 0.
RAID 1+0 (or 10): This approach combines the techniques of RAIDs 0 and 1. Each pair of drives is mirrored, and the mirrored pairs are striped together so that the RAID is treated as one drive. This is just as secure as RAID 1 and, like RAID 1, offers only half the raw capacity. The difference is that the pairs are not treated as separate drives but as one contiguous drive.
RAID 5: This more secure RAID uses striping but does something different with the stripes. For each set of stripes — such as the first stripe of every disk in the RAID — one of the disk's stripes contains parity data corresponding to the data on the other disks. The disk that has the parity information alternates with each set of stripes. The advantage is that if one disk fails, the RAID continues to function, and the failed drive can be rebuilt onto a replacement from the information on the other disks. Still, RAID 5 isn't 100 percent bulletproof in protecting data; only the mirroring of RAID 1 can offer that. But RAID 5 is highly reliable. The advantage it has over RAID 1 is increased capacity. With only one disk in the array being used for storing parity, the capacity of a four-disk array would be the total size of the remaining three disks.
RAID 6: More secure than RAID 5 at the cost of some drive space, RAID 6 uses two drives for the parity of each stripe. The two drives used for parity alternate with each stripe. For instance, if drives 3 and 4 in a four-disk array have the parity for the first stripe, then drives 2 and 3 might have it for the next stripe, and so on. This has the increased advantage that data would remain intact even if two drives fail. Of course, this means that there is one less drive’s worth of storage capacity, as the equivalent of two of the drives are used for parity. It’s often only available on high-end RAID configurations.
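The parity at the heart of RAID 5 and RAID 6 is simply an XOR across the data blocks in a stripe, which is why a single lost drive can be rebuilt from the survivors. Here is a toy illustration; real controllers work on much larger blocks and rotate the parity position with each stripe.

```python
import functools

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks -- the parity calculation used per stripe."""
    return bytes(functools.reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One stripe on a four-disk RAID 5: three data blocks plus one parity block.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity_block = xor_blocks(data_blocks)

# Simulate losing the second disk: XOR the surviving blocks with the parity to rebuild it.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity_block])
print(rebuilt == data_blocks[1])   # True
```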
Greg Crowe is a former GCN staff writer who covered mobile technology. | <urn:uuid:10d380f6-6cdc-4e9d-b716-d0b0af4f0e97> | CC-MAIN-2017-04 | https://gcn.com/articles/2009/08/31/gcn-lab-review-nas-sidebar-raid.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00274-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952811 | 916 | 2.921875 | 3 |
While there is a universal desire in the HPC community build the world’s exascale system, the achievement will require a major breakthrough in not only chip design and power utilization but programming methods, NVIDIA chief scientist Bill Dally said in a keynote address at ISC 2013 last week in Leipzig, Germany.
In last Monday’s speech, titled “Future Challenges of Large-scale Computing,” Dally outlined what needs to happen to achieve an exascale system in the next 10 years. According to Dally, who is also a senior vice president of research at NVIDIA and a professor at Stanford University, it boils down to two issues: power and programming.
Power may present the biggest dilemma to building an exascale system, which is defined as delivering 1 exaflop (or 1,000 petaflops) of floating point operations per second. The world’s largest rated supercomputer is the new Tianhe-2, which recorded 33.8 petaflops of computing capacity in the latest Top 500 list of the world’s largest supercomputers, while consuming nearly 18 megawatts of electricity. It has a theoretical peak of nearly 55 petaflops.
Theoretically, an exascale system could be built using only x86 processors, Dally said, but it would require as much as 2 gigawatts of power. That’s equivalent to the entire output of the Hoover Dam, Dally said, according to an NVIDIA blog post on the keynote.
Using GPUs in addition to X86 processors is a better approach to exascale, but it only gets you part of the way. According to Dally, an exascale system built with NVIDIA Kepler K20 co-processors would consume about 150 megawatts. That’s nearly 10 times the amount consumed by Tianhe-2, which is composed of 32,000 Intel Ivy Bridge sockets and 48,000 Xeon Phi boards.
Instead, HPC system developers need to take an entirely new approach to get around the power crunch, Dally said. The NVIDIA chief scientist said reaching exascale will require a 25x improvement in energy efficiency. So the 2 gigaflops per watt that can be squeezed from today’s systems needs to improve to about 50 gigaflops per watt in the future exascale system.
Relying on Moore’s Law to get that 25x improvement is probably not the best approach either. According to Dally, advances in manufacturing processes will deliver about a 2.2x improvement in performance per watt. That leaves an energy efficiency gap of 12x that needs to be filled in by other means.
Dally sees a combination of better circuit design and better processor architectures to close the gap. If done correctly, these advances could deliver 3x and 4x improvements in performance per watt, respectively.
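The arithmetic behind those targets is straightforward; the sketch below simply restates the figures quoted in the talk and is not additional data from NVIDIA.

```python
current_gflops_per_watt = 2.0      # roughly what today's accelerated systems deliver
target_gflops_per_watt = 50.0      # at this efficiency, an exaflop fits in a 20 MW envelope

required_gain = target_gflops_per_watt / current_gflops_per_watt   # 25x overall
process_gain = 2.2                                                 # expected from manufacturing alone
remaining_gap = required_gain / process_gain                       # ~11.4x left to find
circuits_times_architecture = 3 * 4                                # 3x circuits * 4x architecture = 12x

print(required_gain, round(remaining_gap, 1), circuits_times_architecture)
```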
According to NVIDIA’s blog, Dally is overseeing several programs in the engineering department that could deliver energy improvements, including: utilizing hierarchical register files; two-level scheduling; and optimizing temporal SIMT.
Improving the arithmetic capabilities of processors will only get you so far in solving the power crunch, he said. “We’ve been so fixated on counting flops that we think they matter in terms of power, but communication inside the system takes more energy than arithmetic,” Dally said. “Power goes into moving data around. Power limits all computing and communication dominates power.”
Besides addressing the power crunch, the way that supercomputers are programmed today also serves as an impediment to exascale systems.
Programmers today are overburdened and try to do too much with a limited array of tools, Dally said. A strict division of labor should be instituted among the triumvirate of programmers, tools, and the architecture to drive efficiency into HPC systems.
The best result is delivered when each group “plays their positions,” he said. Programmers ought to spend their time writing better algorithms and implementing parallelism instead of worrying about optimization or mapping, which are better off handled by programming tools. The underlying architecture should just provide the underlying compute power, and otherwise “stay out of the way,” Dally said according to the NVIDIA blog.
Dally and his team are investigating the potential for items such as collection-oriented programming methods to make programming supercomputers easier. Exascale-sized HPC systems are possible in the next decade if these limitations are addressed, he said. | <urn:uuid:a51a4795-39ce-4b36-adb3-6f04fb407f98> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/06/24/exascale_requires_25x_boost_in_energy_efficiency_nvidias_dally_says/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00274-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942472 | 937 | 3.03125 | 3 |
New research from Incapsula yielded a few interesting facts about RFI attacks. The data for the report was collected by monitoring billions of web sessions over a 6-month period. The RFI link’s lifespan information is based on a sampled data from a group of 1000 RFI links, which carry 226 different types of backdoor shells and shell variants.
All data was aggregated from a dedicated crowdsourced database, developed for ongoing research of RFI attacks and backdoor shell behavior.
RFI attack definition
Remote File Inclusion (RFI) attacks abuse user-input and file-validation vulnerabilities to upload a malicious payload from a remote location. With such shells an attacker’s goal is circumvent all security measures by gaining high-privileged access to website, web application and web hosting server controls.
Typically, RFI attacks are fairly simple processes. Initially, the attacker will use a scanner or search engine to identify vulnerable targets. Once detected, the targets will be compromised, either by the scanner itself or by an automated script, which will be used for a mass-scale attack – exploiting a group of similarly vulnerable targets. With the scanner (or script) an attacker will exploit an RFI vulnerability to upload a backdoor shell or “Dropper” – a small, single-function shell used to upload the actual malicious payload.
With the backdoor in place the attacker can use it for the worst sorts of malicious activities. High-visibility RFI attacks will lead to defacement of the website and deletion of its content. However, the attacker is more likely to prefer the less-suspicious approach, turning a compromised website into a long-term resource used to distribute malware, steal visitor data and unwillingly participate in DDoS attacks.
RFI is an overlooked menace
RFI is no joke. Although often overlooked in favor of the more “popular flavors”- DDoS, Cross Site Scripting (XSS) and SQL injections – RFI attacks are more widespread than most assume. To put it in numbers, our study shows that RFI attacks are today’s most common security threat, accounting for more than 25% of all malicious sessions, far surpassing XSS (12%) and even exceeding SQLIs (23%).
The reason behind these numbers is obvious. With its relative ease of its execution and extremely high damage potential, RFI offers an attacker the best “return on investment” – providing a direct control over the target’s website and even the whole hosting server for almost no-effort.
Kept alive through negligence
Thankfully, for all their damage potential, RFI attacks are mostly zero-day threats – very dangerous in their early stage but also rapidly disarmed, as soon as they are discovered and patched.
However, not all RFIs die young. Our numbers show that even today, a healthy 58% of all scanners are still hunting for the good-old TimThumb exploit. From a security point of view, these are nothing more than naïve attempts to make use of a two-year old vulnerability, probably looking for unpatched WP sites or old WP templates that could be compromised to recruit new foot-soldiers for DDoS botnet armies.
Of course such outdated attacks pose very little threat to vigilant website owners. Still, even today, such relentless efforts eventually yield some successes. This should come as no surprise, as every security professional has at least one campfire story to tell about the disastrous results of security negligence.
Leveraging RFI links longevity
Discovered RFI attack vectors pose few challenges to most security experts, as they can be thwarted by simple signature-based techniques. But what about the next, yet undiscovered, RFI exploit?
This is the question that we are answering with our new reputation based techniques. To protect our clients from zero-day attacks we had to find a constant factor in an unpredictable RFI equation.
Going in, we had a pretty good hunch that the RFI links, which supply the malicious payloads, could provide the reliable constant we needed. Our data proved us to be correct. The research showed that – even when dealing with different attack vectors – the same RFI links were being re-used for multiple assaults on different targets. Moreover, we also found that the lifespan for most of these links averages over 60 days, making them perfect tell-tale signs of an RFI attack and great candidates for long-term intelligence gathering.
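In spirit, reputation-based screening of this kind reduces to a very small check: does any request parameter point at a remote script hosted somewhere already seen delivering RFI payloads? The sketch below is purely illustrative; the host names are invented, and a real reputation feed would be far larger and constantly updated.

```python
from urllib.parse import urlparse, parse_qs

# Invented entries for illustration; a production feed would hold thousands of observed RFI links.
KNOWN_RFI_HOSTS = {"evil-shell-host.example", "payload-cdn.example"}

def looks_like_rfi(query_string: str) -> bool:
    """Flag a request whose parameters reference a remote script on a known-bad host."""
    for values in parse_qs(query_string).values():
        for value in values:
            parsed = urlparse(value)
            if parsed.scheme in ("http", "https", "ftp") and parsed.hostname in KNOWN_RFI_HOSTS:
                return True
    return False

print(looks_like_rfi("page=http://evil-shell-host.example/shell.txt?"))  # True
print(looks_like_rfi("page=about-us"))                                   # False
```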
Zero-day is every day
With our new reputation-based rules, we are now using this information as a backbone for an effective early warning system, allowing us deal with the most extreme scenarios of absolutely unique zero-day threats. As we see it, zero-day is every-day and we have to be ready for it. | <urn:uuid:09d68027-6831-4e80-af6f-944d9307e217> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/08/15/where-rfi-attacks-fall-in-the-security-threat-landscape/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00484-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950623 | 983 | 2.703125 | 3 |
The Google Datastore in Action
Behind the scenes of these GAE applications is a type of database that Google created called the Datastore. Early in the development of GAE, it didn't take long to find posts listing a number of concerns about the Datastore and how limited it seemed to be. The Datastore is a radical departure from the usual RDBMS (relational DBMS) we've all come to know inside and out. Google describes it as a multidimensional sorted map. For hardcore database developers, that sounds pretty limiting. I don't know if Google anticipated the resistance, but blog comments usually revolved around, "Yes, but I can't do such-and-such." Typically somebody would post such a question, and then others would respond by showing just how you really could do such-and-such, with a little bit of a shift in thinking.

However, the approach isn't to start with a relational design and come up with workarounds, but rather to redesign from scratch under the new approach. When you do that, you end up with a database system that is, some bloggers have said, "blindingly fast." The blog post linked here also suggests that writing data isn't as fast as it is in relational databases. That may be true, but does it matter? When you write data to a database through a Web application, you often don't have to sit and wait for the results. For example, if you are composing an e-mail in a site such as Gmail, you can click the send button. Google can immediately take you to the next page, showing the mail as sent. Behind the scenes, Google may not have finished saving the data to your sent box. Either way, you can continue with your next task. (Readers of this article will certainly think of specific examples where you do need to immediately find out the results of a database write, but I'm talking about general-purpose Web applications used by the masses.) When you're reading a database, on the other hand, you typically don't want to wait. Type something into Google and notice how fast you get your response. It's pretty much instant. Then think of the millions of other queries that were likely taking place at the same time yours was, and it's really impressive. If there's a tradeoff between write speed and read speed, read speed should win from a usability standpoint.

Final thoughts

What, then, are my final thoughts about Google App Engine? I saw some really cool applications, and I saw some that weren't at all impressive. None of the applications demonstrated any massive power. However, there are some things to consider here. For one, the applications are all running on Google's servers together; they aren't hosted individually. That means while many of them might not have a lot of activity at any given instant, all the applications combined may together have a great deal of activity. They all seemed just as fast as any other good sites out there. Combine this with the defense that Google's Datastore is incredibly fast, and the fact that these applications are running on Google's tried-and-proven servers. I would conclude that even though there might not be many really powerful applications written in Google App Engine yet, as more developers start using GAE, we are sure to see some. (And those existing ones that I found may become immensely popular.) I would guess these applications will, quite likely, perform very well compared with non-GAE apps like Facebook and Twitter. My conclusion, then, is that Google App Engine is a totally viable platform for large Web-based applications.
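As a footnote for readers who have never seen the Datastore in code, the classic App Engine Python API looks roughly like the sketch below; the entity kind and property names are invented for illustration, and the code runs only inside the App Engine environment.

```python
from google.appengine.ext import db

class Greeting(db.Model):                      # an entity kind, not a relational table
    author = db.StringProperty()
    content = db.TextProperty()
    date = db.DateTimeProperty(auto_now_add=True)

def post_greeting(author, content):
    Greeting(author=author, content=content).put()      # the write returns quickly

def recent_greetings():
    # Reads are where the Datastore shines; note the 1,000-result ceiling per query.
    return Greeting.all().order("-date").fetch(1000)
```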
Senior Editor Jeff Cogswell can be reached at jeffrey.cogswell@ZiffDavisEnterprise.com.
| <urn:uuid:69519bb7-240a-46fd-a5d6-2025dd1a2626> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/Scaling-Apps-on-the-Google-App-Engine/2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968624 | 815 | 2.765625 | 3 |
A few days ago I wrote about an artificial intelligence startup, Vicarious, which demonstrated software that breaks the widely used - and much disliked by users - CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) used to prevent software run by the bad guys from automating the creation of, and hacking into, accounts on Web sites.
The reason CAPTCHA is disliked by users is that it's become hard even for humans to pass the test; the distorted images employed have become so difficult to read that most people have significant trouble decoding the text and, as a consequence, often give up when creating accounts and using services.
A new company, Keypic, may have the answer to these problems by doing away with CAPTCHA altogether and replacing it with their own eponymously named verification system. In fact Keypic can not only rate how human a user is but can also detect spam submitted as comments.
Keypic works by presenting whatever form you please along with an image. The image can be as minimal as a single transparent pixel, or it can be a logo or even an advertising banner. The purpose of the image is to ensure that it's retrieved (most hackers' automation won't bother with graphical elements; it will usually just retrieve the form, fill it in and then submit it).
Whether the image is retrieved is just one of the ten or so data points Keypic checks. Other data points include how long it takes for the form to be submitted (which reveals software that tries to submit at a high rate), what order the fields are filled in, what the IP address is, what browser is being used, how many requests are received per minute from a single IP address, and the characteristics of any text entered into fields other than name and password.
The data points are analyzed by comparing them with Keypic's database of thousands of other form submissions, and a score is calculated indicating how likely the submission is to be fake. You can then decide, based on that score, whether to accept and act on the form data or reject the submission.
For a program to get past Keypic, it would have to behave in a very human way: taking enough time to respond, downloading all page content, limiting the submission rate from any single IP address, and so on. Defeating that range of tests would require some pretty creative coding, and that's the key to detecting non-human interactions.
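Keypic's backend is closed source, so its exact scoring is unknown; but as a rough sketch of the general idea, a server-side check built on the same kinds of signals might look like this (the thresholds and field names are invented for illustration):

import time

def bot_score(submission):
    """Return a 0-100 score; higher means more likely to be automated."""
    score = 0
    elapsed = submission['submitted_at'] - submission['form_served_at']
    if elapsed < 2:                              # humans rarely finish a form in under 2 seconds
        score += 40
    if not submission['image_requested']:        # bots often skip graphical elements
        score += 30
    if submission['requests_per_minute'] > 10:   # high submission rate from one IP
        score += 20
    if 'http://' in submission.get('comment', ''):  # link spam in free-text fields
        score += 10
    return min(score, 100)

# Example use: reject anything scoring above a chosen threshold.
example = {
    'form_served_at': time.time() - 1.2,
    'submitted_at': time.time(),
    'image_requested': False,
    'requests_per_minute': 25,
    'comment': 'Buy now http://spam.example',
}
print(bot_score(example))  # prints 100 -> reject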
The client side of Keypic is free and open source, while the backend that actually determines the score is proprietary and closed source. Keypic is currently available as a plugin for WordPress, Drupal, Joomla, and TYPO3, as well as a REST Web service, a PHP class, and support for ASP and ASP.NET.
My only reservation about Keypic is that although the company is based in the US (in Walnut, CA, in Silicon Valley) their Web site is a horrible mess of poor design, misspellings, weak explanations, and broken links.
So, is Keypic more effective than CAPTCHA? That all depends on what you value. If you believe that you're losing traffic and users because CAPTCHA tests put them off then there's a very good reason to use Keypic. As of writing over 5,800 sites are using the system and over 113.5 million spam messages have been blocked without CAPTCHAs.
On the other hand if you are adamant that you can't tolerate any non-humans at all accessing your site you might want to stick with CAPTCHA ... remembering, of course, that the test has been shown to be broken at a level that will eventually (and, in fact, sooner rather than later) render it useless. I think my money is on Keypic. | <urn:uuid:ccf97fd2-2ebf-457a-83e6-c57ea40a6f9f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225794/security/keypic--replacing-captcha-without-annoying-users--updated-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00410-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963875 | 763 | 2.5625 | 3 |
Technology Primer: Cloud and Virtualization
- Course Length:
- 0.5 Day Instructor Led
This half-day Technology Primer introduces the audience to the concepts of Cloud Computing and Virtualization. Cloud Computing is generally characterized by its Service Model types. The course first introduces the audience to the ideas of Virtualization, Virtual Machines, Hypervisors and Containers. These are the first building blocks of Cloud Computing. The course then introduces the second set of building blocks of Cloud Computing - the Cloud Computing Service Models - and presents a high-level comparison of the three primary Service Models and where they may fit into a wireless networking environment. The final building block introduces the audience to OpenStack and wraps up the discussion with a simple example of Cloud Computing implemented using OpenStack.
This technology primer is designed for a wide range of audiences including operations, engineering, and performance personnel, as well as other personnel interested in understanding the basics of Cloud Computing and Virtualization in the context of a wireless service provider’s network.
After completing this course, the student will be able to:
• Describe Virtualization
• Describe Virtual Machines
• List the role and tasks of a Hypervisor
• Describe Containers
• Describe Cloud Computing
• Explain Cloud Computing in the context of a Wireless Network
• Describe OpenStack
• Illustrate an example implementation of the Cloud using OpenStack
1. Virtualization
1.1. What is Virtualization?
1.2. Types of Virtualization
1.3. Physical Network Functions
1.4. Virtual Network Functions
2. Virtualization Technology
2.1. Virtual Machine
3. Cloud Computing
3.1. What is the Cloud?
3.2. Cloud Computing
3.3. Applicability to the wireless domain
4. Cloud Computing Technology
4.1. What is OpenStack?
4.2. OpenStack architecture
4.3. OpenStack as a Cloud enabler | <urn:uuid:6ffe2cc0-44a1-4d46-95a2-5469ece29208> | CC-MAIN-2017-04 | https://www.awardsolutions.com/portal/ilt/technology-primer-cloud-and-virtualization?destination=ilt-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.798587 | 404 | 3.265625 | 3 |
The Lean Six Sigma Green Belt course is aimed at teaching students to identify, analyse and solve quality problems within improvement projects using the Six Sigma D-M-A-I-C (Define, Measure, Analyse, Improve, and Control) methodology. A Lean Six Sigma Green Belt professional possesses a comprehensive understanding of all phases of D-M-A-I-C.
The Lean Six Sigma Green Belt course aims to teach the following:
- Project identification
- Process analysis
- Data collection and summarising
- Tools for planning and management | <urn:uuid:b49b8fb4-7171-4fad-a542-b1fe73aa4998> | CC-MAIN-2017-04 | https://www.itonlinelearning.com/course/lean-six-sigma-green-belt/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00035-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.881006 | 113 | 2.546875 | 3 |
You cannot create a backup of the database. The error message that you see on the screen is, 'Database backup failed'.
You cannot create a backup of the database because of one of the following reasons:
You are required to identify why you are not able to create the database backup. Follow the relevant resolution:
Adding the required folders to the exception list of your antivirus program
Check whether an antivirus program is running on the computer where the Desktop Central server is running. If it is, add the folder named DesktopCentral_Server to the exception list of the antivirus program. This folder is typically located in the installation directory, for example, C:\Program Files.
Adding Folders to the Exception List of Norton Antivirus
Assume that you have Norton Antivirus 2009 installed in your computer. To add a folder to the exception list of the Norton antivirus program, follow the steps given below:
You've added the required folder to the exception list of the antivirus program installed on your computer.
Specifying a valid folder name that is not write-protected
Specify a valid folder name for the backup directory path. If you're using a network share, the directory should have write permission for everyone in the network.
Assume that you have created a folder called Database Backup in a computer (not the computer where the Desktop Central server is installed) in your network. This folder will be used to store all the database backup that is created every week. To enable all the users in your network to access this folder to store their database backup, you must give them share permissions. To share a folder and give share permissions to everyone in the network, follow the steps given below:
Note: Repeat steps 10 and 11 to complete the process.
You've shared a specific folder with permission for everyone in the network to access and use it.
Ensuring that there is enough space in the required drive
Check the amount of free space in the drive. If there isn't enough space, do one of the following:
Changing the backup directory to another drive
To change the backup directory to another drive, follow the steps given below:
You've changed the backup directory to another drive. Follow the steps given in the follow up section below, to check if you can create a backup.
Clear Temp Folder
Ensure that you clear up the temp folder in the Server machine before trying to take a back up. To clear the folder, follow the steps mentioned below.
After applying the relevant solution mentioned in the section above, you can manually check whether the database backup completes successfully. To create a backup of the database manually, follow the steps given below:
Note: This is where the folder DesktopCentral_Server is stored. For example, the path could be D:\DesktopCentral_Server.
Note: Desktop Central first estimates how much space is available in the specified drive where you want to store the backup file.
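If you want to make a similar space check yourself before starting a manual backup, a few lines of Python are enough (the drive letter and the 2 GB threshold below are arbitrary examples):

import shutil

# Free space on the drive that will hold the backup file.
total, used, free = shutil.disk_usage('D:\\')
required = 2 * 1024 ** 3   # assume we want at least 2 GB of headroom

if free < required:
    print('Not enough space: only %.1f GB free' % (free / 1024 ** 3))
else:
    print('OK to start the backup')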
You have created the backup file. The backup file is stored at the location you chose. The naming format is buildnumber-date-time.zip. For example, 70117-Jul-23-2010-11-26.zip
Applies to: Database backup, Creation
Keywords: Database backup-creation failure, Version incompatibility
Unable to resolve this issue?
If you feel this KB article is incomplete or does not contain the information required to help you resolve your issue, upload the required logs, then fill out and submit the form given below. Include details of the issue along with your correct e-mail ID and phone number. Our support team will contact you shortly and give you priority assistance and a resolution for the issue you are facing.
24/5 Support
Support is available 24 hours a day, five days a week (Monday through Friday), excluding USA & India public holidays.
Tel : +1-888-720-9500
Speak to us | <urn:uuid:fbbe8208-c0dc-4cf7-8ee8-e53a92732311> | CC-MAIN-2017-04 | https://www.manageengine.com/products/desktop-central/backup-creation-failed.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00035-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.88858 | 821 | 2.546875 | 3 |
When migrating to 64-bit Windows, traditional "unmanaged" applications can pose challenges. That is because unmanaged binaries contain hardware-dependent CPU instructions – and the view of the hardware differs between 32- and 64-bit mode. But .NET? It should be unaffected by a system's bitness, since "managed" binaries contain instructions in a so-called intermediate language that is executed in a virtual machine at run-time and only then translated to machine language. But is it really? This article is about .NET programs that are dependent on OS bitness.
When you create a new C# or VB application in Visual Studio it is created as a bitness-independent program by default. That means that it will run as a 32-bit process on 32-bit Windows, and as a 64-bit process on 64-bit Windows, optimally leveraging each platform’s capabilities. That sounds good, and indeed it is a great achievement of the .NET framework to make this possible. But there are caveats, otherwise this article would be quite pointless.
Under the Hood
Each .NET binary (correct: assembly, an EXE or DLL) stores flags marking it as compiled for either:
- x86
- x64
- AnyCPU [default]
There is also Itanium, but let us ignore that soon-to-be obsolete platform.
These flags are relevant at runtime. AnyCPU assemblies are independent of the OS bitness. AnyCPU DLLs can be loaded into 32-bit and 64-bit processes while AnyCPU EXEs are started as 32-bit processes on Windows x86 and as 64-bit processes on Windows x64.
x86 DLLs can be loaded into 32-bit processes only, and x86 EXEs are always started as 32-bit processes.
x64 DLLs can be loaded into 64-bit processes only, and x64 EXEs are always started as 64-bit processes.
Why the Distinction?
The obvious question at this point: why even create three different types of .NET assemblies if .NET code is platform-independent? The answer is simple: While .NET code is platform-independent, legacy code is not. And a lot of .NET applications rely on unmanaged code.
Consider the following (fictional) scenario: You are writing a nice program for transferring files from one computer system to another. At some point you discover that compressing the files transferred by your program would greatly enhance speed and usability. So you decide to compress files. But how? Write a packing algorithm yourself? No way! The easiest solution is to use something existing. So you might end up with the free 7-Zip library, written in unmanaged C++ and available as 32-bit only. And here the problem starts.
You call the packing routines from the unmanaged 32-bit DLL, test very thoroughly on your 32-bit development machine (where everything works great) and deliver the resulting application to your customers. Another job well done! But then one of your customers decides to migrate his systems to Windows x64 – and discovers that your application does not work there.
Why? On 64-bit Windows your AnyCPU assembly runs as a 64-bit process – and a 64-bit process cannot load 32-bit DLLs. But your application tries to do exactly that and fails.
What to Do – As a Programmer
If you are the guy developing the application, you have two options if you want your program to run on both 32-bit and 64-bit Windows:
- Compile your managed code as "AnyCPU" but have the installer determine the target system's bitness and install any unmanaged DLLs in the appropriate bitness. This means you need additional logic during setup.
- Compile your managed code as “x86” – that way it will always run as a 32-bit process, regardless of the bitness of the OS. Since your code is always 32-bit, it is safe to only distribute 32-bit versions of unmanaged DLLs.
What to Do – As an Administrator
If you want to get the application to run on 64-bit Windows, the safest way to do so would be to contact the vendor and ask him to make his software compatible with x64. Only by going back to the vendor will your configuration be supported, which is a requirement especially in larger enterprises.
If vendor support is not paramount you can try the following hack. It is made possible by the fact that all three flavors of .NET assemblies described above are basically identical except for two flags in the PE header of the binary file storing whether this is a x86, x64 or AnyCPU assembly. And flags can be changed easily…
First of all you might want to check your .NET assemblies. There are several ways to do that:
- CorFlags.exe from the .NET Framework SDK, which prints an assembly's PE and 32BIT flags
- System.Reflection.AssemblyName.GetAssemblyName (use the property "ProcessorArchitecture" of the returned object)
If you find managed AnyCPU EXE files on a 64-bit system, next check if they came with unmanaged 32-bit DLLs. For that you first need to determine whether a DLL is managed or unmanaged, which can be done with CorFlags.exe (see above) – it will complain if it is fed an unmanaged binary. Secondly you need to check if your unmanaged DLL is 32-bit or 64-bit – information that, again, is stored in the PE header. So you need a PE header dumper like PEDump from Matt Pietrek, an oldie but goldie. Unfortunately it crashes when analyzing 64-bit DLLs on 64-bit Windows 7, but only after printing the information we are interested in:
D:\Tools\PEDump>PEDUMP.exe c:\Windows\notepad.exe
Dump of file C:\WINDOWS\NOTEPAD.EXE
File Header
  Machine:             8664 (unknown)   <<<<<<<<< This means "x64"
  Number of Sections:  0006
  TimeDateStamp:       4A5BC9B3 -> Tue Jul 14 01:56:35 2009
  ...

D:\PEDump>PEDUMP.exe c:\Windows\syswow64\notepad.exe
Dump of file C:\WINDOWS\SYSWOW64\NOTEPAD.EXE
File Header
  Machine:             014C (i386)
  Number of Sections:  0004
  TimeDateStamp:       4A5BC60F -> Tue Jul 14 01:41:03 2009
  ...
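If PEDump is not at hand, the Machine field can also be read directly from the PE header with a few lines of Python (the file path below is just an example):

import struct

MACHINE_TYPES = {0x014c: 'x86', 0x8664: 'x64', 0x0200: 'Itanium'}

def pe_machine(path):
    with open(path, 'rb') as f:
        f.seek(0x3C)                                 # offset of e_lfanew in the DOS header
        pe_offset = struct.unpack('<I', f.read(4))[0]
        f.seek(pe_offset)
        if f.read(4) != b'PE\x00\x00':               # PE signature
            raise ValueError('not a PE file')
        machine = struct.unpack('<H', f.read(2))[0]  # first field of the COFF header
        return MACHINE_TYPES.get(machine, hex(machine))

print(pe_machine(r'C:\Windows\notepad.exe'))         # 'x64' on 64-bit Windows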
Once you have verified that you indeed have a managed EXE compiled as AnyCPU that needs 32-bit unmanaged DLLs, you can change the EXE’s type from AnyCPU to x86 with CorFlags:
CorFlags.exe TheApp.exe /32BIT+
If you want to revert:
CorFlags.exe TheApp.exe /32BIT-
Please note that modifying a binary file invalidates its digital signature (if available). Reverting back to the original state reverses this – the signature is valid once again. | <urn:uuid:2ea10c13-bb3a-44d4-b2bf-8942856693b7> | CC-MAIN-2017-04 | https://helgeklein.com/blog/2010/03/net-applications-on-windows-x64-easy-yes-and-no/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00521-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909245 | 1,487 | 2.859375 | 3 |
Chile is one of the world's fastest growing energy markets. The Chilean solar market is making its ascent, with nearly one-fifth of all renewable energy coming from solar PV. Northern Chile has the highest solar incidence in the world (the Atacama Desert receives some of the planet's steadiest concentrations of direct sunlight and has an area of 40,000 square miles, presenting ideal conditions for solar power generation). Chile has the potential to produce all of the electricity used in the country through solar power, and it is building the largest solar power plant in Latin America, with 400 MW of solar photovoltaic generation under construction - more than any other nation in the region. As has been proven in other nations, large-scale deployment of solar inevitably spurs greater market adoption, which brings down costs for communities. The future looks promising for solar in Chile and Latin America.
Global annual solar power production is estimated to reach 500 GW by 2020, up from 40.134 GW in 2014, making the solar power market one of the fastest growing. The Chile Solar Power Market is estimated to reach $XX billion in 2020.
With fossil fuel prices fluctuating continuously and disasters like Fukushima and Chernobyl raising serious questions about nuclear power, renewable sources of energy are the answer to the world's growing need for power. Hydro power has environmental concerns of its own, so apart from water the other renewable source of energy available in abundance is solar. The Earth receives about 174 petawatts of incoming solar radiation, making it the largest energy source available on Earth. Resources like oil, gas, coal and water require a lot of effort and many steps to produce electricity, whereas solar farms can be established easily to harness electricity that is fed straight to the grid.
Falling costs; government policies and private partnerships; downstream innovation and expansion; and various incentive schemes for the use of renewable energy for power generation are driving the solar power market at an exponential rate.
On the flip side, high initial investment, the intermittent nature of the energy source, and the large installation area required to set up solar farms are restraining the market's growth.
In recent years, a lot of research has gone into making solar panel production easier and cheaper, and into making panels smaller, sleeker and more customer friendly. Much effort is also being put into increasing the efficiency of solar panels, which used to have a very meagre efficiency percentage. Techniques like nano-crystalline solar cells, thin-film processing, metamorphic multijunction solar cells, polymer processing and many more will help the future of this industry.
This report comprehensively analyzes the Chile Solar Power Market by segmenting it based on type (Concentrating type, Non Concentrating type, Fixed Array, Single Axis Tracker, and Dual Axis Tracker) and by Materials (Crystalline Silicon, Thin Film, Multijunction Cell, Adaptive Cell, Nano crystalline, and others). Estimates in each segment are provided for the next five years. Key drivers and restraints that are affecting the growth of this market were discussed in detail. The study also elucidates on the competitive landscape and key market players. | <urn:uuid:55b72d67-ae3f-4bf4-b25b-eaa57878ac5b> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/chile-solar-power-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935556 | 636 | 2.828125 | 3 |
SPARQL (SPARQL Protocol and RDF Query Language) is the query language most widely supported by graph database vendors (which is not the same as most widely used; it may well not be), regardless of how we define database in this context—that is, irrespective of whether data is actually stored in graph format as triples or as a property graph or using some other storage mechanism.
As you might infer from its linguistic similarity to SQL, SPARQL is a declarative language. That is, you don’t have to know where the data is in order to create and run queries. However, just as with SQL and relational databases, the performance of said queries is therefore dependent upon the database and, in particular, the database optimiser. Unfortunately, while relational databases have sophisticated optimisers, graph databases typically do not (Neo4j is an exception). The same, it has to be said, applies to NoSQL databases in general—you may be able to run SQL (or HiveQL) against Hadoop, for example, but without an optimiser performance is still going to suffer.
The second issue with SPARQL is that, as the "R" implies, it was designed for RDF (resource description framework), which is the basis of the semantic web. It wasn’t designed for business intelligence and analytics. Moreover, while RDF stores may have their place in supporting Web 3.0, for most commercial applications of graph technology there is a clear shift towards property graphs.
The difference between a property graph and a triple store is that in a property graph the edges and nodes of the graph may have values associated with them. As a result, they are much more practical for general-purpose business uses: they are much more compact and nodes do not grow like Topsy every time you add a new attribute (or value).
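To see the difference in miniature, here is how the same facts might be held in each model; this is plain Python used as pseudo-data for illustration, not any particular database's API:

# Triple store: everything is a (subject, predicate, object) statement.
# Giving the relationship its own attribute means adding more triples
# (typically by reifying the statement).
triples = [
    ('alice', 'knows', 'bob'),
    ('alice', 'name', 'Alice'),
    ('bob', 'name', 'Bob'),
]

# Property graph: nodes and edges carry key/value properties directly,
# so the relationship itself can hold data such as when it started.
nodes = {
    'alice': {'name': 'Alice'},
    'bob': {'name': 'Bob'},
}
edges = [
    ('alice', 'KNOWS', 'bob', {'since': 2010}),
]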
So, property graphs are becoming the popular option. But that means that SPARQL, developed to support RDF or triple stores, is not particularly well suited to support property graphs: so what language do you use?
Generally speaking the answer is to use a procedural language such as Gremlin (which is a scripting language based on Groovy). However, this has all the drawbacks of being procedural and there are also portability issues associated with Groovy and Gremlin. As far as I know the only company that has a declarative language is Neo Technology, which has developed Cypher alongside its database optimiser.
The problem, from my point of view, is that Cypher is proprietary. Neo4j is considering—and assures me that it re-evaluates on a regular basis—making it open, but that's not going to happen anytime in the immediate future. While it may be good for Neo to be the only vendor in this position, my opinion (and I know that Neo disagrees with me on this: its view is that it doesn't want the language bogged down in standards discussions at this stage) is that it would serve the market well if Cypher were made open and more widely available sooner rather than later. Neo4j would still have the advantage of a database optimiser, but I think that the general availability of a declarative language would help to drive the market. | <urn:uuid:4250b3a6-34a2-4d3e-b115-f9825641d6bc> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/the-language-of-graphs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964093 | 679 | 2.703125 | 3 |
If you’ve been following the US exascale roadmap, then chances are you’ve been following the work of William (“Bill”) J. Harrod, Division Director for the Advanced Scientific Computing Research (ASCR), Office of Science with the US Department of Energy (DOE). In January, Harrod asserted that the DOE’s mission to push the frontiers of science and technology would require extreme-scale computing with machines that are 500 to 1,000 times more capable than today’s computers, albeit with a similar size and power footprint.
In line with Harrod’s assessment, DOE officials established a 10-year roadmap for achieving exascale computing. An associated study laid out the top 10 technical challenges, including – number one – energy efficiency that is 40 times better than today, and – number two – interconnect technology that fosters more efficient data movement.
A revised version of this report, Big Data and Scientific Discovery, zeros in on the challenges of the post-petascale era as they relate to the ubiquitous data explosion. As Alok Choudhary has observed, “Very few large scale applications of practical importance are NOT data intensive.” This new paradigm calls for major new advances in computing technology and data management.
In this updated report, Harrod maintains that it is no longer possible to handle computing and data challenges simply by scaling up from or modify existing solutions. The issue is further compounded by the need to share data and research across national and international borders. “Collaboration is inherently a ‘big data’ issue,” notes Harrod.
The DOE Office of Science outlines the four main scientific data challenges as follows:
- Workflows for computational science must drive fundamental changes in computer architecture for exascale systems.
- Breaking with the past: traditional scientific workflow – simulate or experiment, saving the data to disk for later analysis.
- Worsening I/O bottleneck and energy cost of data movement combine to make it impossible to save all of the data to disk.
- in situ data analysis, occurring on the supercomputer while the simulation is running.
To address these challenges as they relate to data management, analysis and visualization, DOE computer scientist Lucy Nowell has compiled a twelve-point approach, reproduced below:
1. Data structures and traversal algorithms that minimize data movement.
2. Methods for data reduction/triage that support validation of results and data repurposing.
3. Maintaining the ability to do exploratory analysis to discover the unexpected despite severe data reduction.
4. Knowledge representation and machine reasoning to capture and use data provenance.
5. Coordination of resource access among running simulations and data management, analysis and visualization technologies that run in situ.
6. Methods of in situ data analysis that minimize reliance on a priori knowledge
7. Data analysis algorithms for high-velocity, high-volume multi-sensor, multi-resolution data.
8. Methods for comparative and/or integrated analysis of simulation and experimental/observational data.
9. Design of sharable in situ scientific workflows to support data management, processing, analysis and visualization.
10. Maintaining data integrity in the face of error-prone systems.
11. Methods of visual analysis for data sets at scale and metrics for validating them.
12. Improved abstractions for data storage that move beyond the concept of files to more richly represent the scientific semantics of experiments, simulations, and data points.
The proposed exascale computing initiative timeline has also been enhanced to include more precise deliverables, with a node prototype (P0) planned for early 2018, a petascale prototype scheduled for early 2019, and an exascale prototype on track for 2022. | <urn:uuid:903ac1f3-2c5a-47af-ba10-9e598e6f2694> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/04/07/doe-exascale-roadmap-highlights-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.87696 | 819 | 2.984375 | 3 |
Thinking About the Data Center on Earth Day
Earlier this month, government agencies in the U.S., Japan and across Europe agreed to use Power Usage Effectiveness (PUE) as their official metric for data center efficiency. When it comes to energy efficiency in the data center, servers usually get the most attention. However, network equipment vendors have begun to emphasize energy efficiency features in their wares, notes Computerworld.
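For reference, the metric is simply PUE = total facility power / power delivered to IT equipment; a value of 1.0 would mean every watt entering the building reaches the computing gear, and the further the ratio rises above 1.0, the more energy is being lost to cooling and power distribution overhead.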
Networking can account for about 15 percent of the total power budget, and gear isn't equipped with power management controls like servers. Vendors have begun using high-efficiency power supplies and variable-speed cooling fans and are working to improve the efficiency of switches, which can use 40 percent to 60 percent of maximum operating power when idle. | <urn:uuid:aa873d3b-bcbc-470a-843c-66b2357b98ea> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/datacenter/datacenter-blog/thinking-about-data-center-earth-day | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906564 | 153 | 2.578125 | 3 |
Chip makers are moving to new manufacturing processes to help boost the performance and availability of multicore chips.
Manufacturers have begun tapping new processes that are capable of producing smaller, more power-efficient processors, in order to help dual-core chips proliferate in 2006.
Dual-core chips, which include two processor cores in place of one, have been limited to a relatively small number of high-end desktops and servers to date.
But now manufacturers such as Intel Corp. and Advanced Micro Devices Inc. are looking to new 65-nanometer chip manufacturing processes (the means by which manufacturers knit together the transistors that make up the circuits inside their chips) to help them expand the market for the chips in the coming year.
Chip manufacturers generally move to new and successively smaller manufacturing processes every two years.
The shift, which costs billions and takes years of development, allows them to produce chips with greater numbers of transistors, but still make them smaller by packing those features more tightly together.
The cycle, as dictated by Moore's Law (which observes that chip transistor counts will double every two years), has allowed the chip makers to drive up performance with each generation of manufacturing technology.
However, with the coming generation's move from 90 nanometers to 65 nanometers, the chip makers will emphasize their dual-core designs.
Thus, the new crop of 65-nanometer dual-core chips will run faster, incorporate larger onboard memory caches, and still have space to add circuitry to support virtualization or other on-chip features, while fitting within power budgets similar to those of today's dual-core chips.
Click here to read about Intel's quest for less power-hungry processors.
"The really big challenge in any [manufacturing] technology transition is getting it right in smaller geometries," said Nick Kepler, vice president of logic technology development at AMD.
However, once there, he said, "You could produce a [65-nanometer] chip that's the same size [as a 90-nanometer chip] and put two cores on it. You can just fit more in it."
Big-name chip makers such as AMD, Intel and IBM all report that, at a minimum, they have begun the early stages of 65-nanometer production. That means businesses and consumers can expect a new crop of 65-nanometer chips over the course of 2006.
For its part, Intel appears to be the first brand-name chip maker to hit the new mark. Intel said it's shipping Presler, a 65-nanometer, dual-core desktop processor, for revenue, and aims to ship hundreds of thousands of the chips by the end of this year.
Read more here about Intel's two 65-nanometer manufacturing processes.
Presler, which will come out in systems in early January, just about two years after Intel's first 90-nanometer Pentium 4 chip, will be joined by Yonah, a dual-core processor for notebooks that's also due in January, and a Xeon server chip, dubbed Bensley, that will also arrive in the first quarter of 2006.
The 65-nanometer mark "equals high volume production of dual core in all three segments; that's the bottom line," said George Alfs, a spokesperson for Intel.
Presler, in keeping with 65-nanometer manufacturing's advantages, is expected to offer more clock speed as well as extra cache. However, it's expected to fit within the current dual-core Pentium D chips' envelopes for power consumption.
The first Preslers are expected to top out at 3.4GHz and offer twin 2MB caches. Intel's Pentium D, on the other hand, hits 3.2GHz and offers two 1MB caches. Intel will offer the chips for both corporate desktops and consumer machines.
Costly chip investments will pay off. | <urn:uuid:692e93c3-c366-4061-a26a-886fdc010de4> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Desktops-and-Notebooks/Competing-Chip-Tech-Proves-Smaller-Is-Better | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00393-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94387 | 791 | 3.34375 | 3 |
Ben-Gurion University of the Negev (BGU) researchers have discovered a new method to breach air-gapped computer systems, called "BitWhisper," which enables two-way communication between adjacent, unconnected PCs using heat.
The research, conducted by Mordechai Guri, Ph.D. is part of an ongoing focus on air-gap security at the BGU Cyber Security Research Center. Computers and networks are air-gapped when they need to be kept highly secure and isolated from unsecured networks, such as the public Internet or an unsecured local area network. Typically, air-gapped computers are used in financial transactions, mission critical tasks or military applications.
According to the researchers, “The scenario is prevalent in many organizations where there are two computers on a single desk, one connected to the internal network and the other one connected to the Internet. BitWhisper can be used to steal small chunks of data (e.g. passwords) and for command and control.”
BGU’s BitWhisper bridges the air-gap between the two computers, approximately 15 inches (40 cm) apart that are infected with malware by using their heat emissions and built-in thermal sensors to communicate. It establishes a covert, bi-directional channel by emitting heat from one PC to the other in a controlled manner. By regulating the heat patterns, binary data is turned into thermal signals. In turn, the adjacent PC uses its built-in thermal sensors to measure the environmental changes. These changes are then sampled, processed, and converted into data.
“These properties enable the attacker to hack information from inside an air-gapped network, as well as transmit commands to it,” the BGU researchers explain. “Only eight signals per hour are sufficient to steal sensitive information such as passwords or secret keys. No additional hardware or software is required. Furthermore, the attacker can use BitWhisper to directly control malware actions inside the network and receive feedback.”
Here’s a video demonstration: | <urn:uuid:89aec288-817f-4e0f-af35-f8d51ab133a7> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2015/03/24/hack-air-gapped-computers-using-heat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00541-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928649 | 430 | 3.3125 | 3 |
“There are many different silos of information that have been painstakingly collected; and there are a number of existing tools that bring some strands of data into relation. But there is no overarching tool that can be used across silos.”
The sentiments behind this quote could apply to a wide range of scientific disciplines, not to mention to enterprises that have collected vast amounts of data but are still piecing together the puzzle of how to integrate and make sense of it.
In fact, the above quote came from quantitative biologist, Michael Schatz, as he reflected on the need for massive data integration for scientists worldwide—and the computational models needed to produce connected information sets.
Schatz is one of several biologists involved in Systems Biology Knowledgebase, also known as Kbase. This DOE project was started in 2008 to make data more accessible and integrated for biological researchers. Just last year the research and development required to design and implement the Kbase effort was completed by the Genomic Science program—but there is still plenty of work ahead.
As Ariella Brown noted, “Kbase should be a boon both for those who want to gain better understanding of such life forms for the sake of pure science and to those who would apply the Kbase data, metadata, and tools for modeling and predictive technologies to help the production of renewable biofuels and a reduction of carbon in the environment.”
Brown goes on to describe the Kbase program and its goals:
“The plan is for Kbase to start off with seven data centers on ESnet (the Department of Energy Energy Sciences Network). That is one for each of the six defined scientific objectives of Kbase; the seventh is devoted to coordinating the infrastructure development of the project. According to the current timetable, it should take 12 months to get the Kbase hardware platform operational. Version 1.0 is anticipated to be accessible after 18 months and version 2.0 after 36 months; five years is the estimated time to achieve operation and support at target levels.
The idea is to implement a system that can grow as needed and be easily used by scientists without extensive training in applications. It should produce understandable results based on clear scientific assumptions, engage all members of the scientific community, and encourage further discovery, with findings that inspire “new rounds of experiments or lines of research.”
While Kbase is an ongoing project, the model for its integration and collaboration developments will extend to other disciplines, allowing greater, more open access to scientific data across the world. As the graphic below shows, the need for such integration is clear—but it is a slow climb to full data integration, sharing and use for biology researchers.
Image Source: Genomic Science Program, US Dept. of Energy | <urn:uuid:1217af85-b966-415b-ad14-fa56e3798e88> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/08/10/doe_focuses_on_scientific_data_integration/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00385-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947363 | 558 | 2.875 | 3 |
In Russ White's "Working with IP Addresses" article (IPJ Volume 9, Number 1), he presents an example subnetting problem ("The Hardest Subnetting Problem") together with a worked solution. While useful as a reinforcement exercise for the rest of the article, care should be exercised before using the steps in the solution "as-is" in a real-world network configuration.
The main problem is that by packing subnets tightly together as shown, growth is restricted in order to guarantee that no address space is wasted. Worse, growth of host numbers on all but the smallest subnet requires renumbering of the subnet or all the smaller subnets allocated after it.
For example, the /26 subnet with 58 hosts will not accommodate more than another four hosts, less than 10-percent growth, without being renumbered.
Since renumbering a network is a nontrivial task even with the tools at our disposal, it is desirable to make it as infrequent as possible.
Allowing for growth will likely but not necessarily waste some address space, but it is preferable to frequent renumbering. It turns out that this example has alternative arrangements of subnets that would permit growth of some subnets without the need to renumber and would lessen the amount of renumbering when it is required.
Using realistic estimates of future hosts rather than current numbers is a simple measure to decrease the frequency of renumbering required. This would also make it obvious that the entire allocation is close to exhaustion and can be exhausted by the need to accommodate as little as six hosts on two subnets that are near full capacity.
Constraints on the supply of IPv4 address space limits how much growth can be accommodated and requires taking a shorter-term rather than longer-term view of growth. For private RFC 1918 IP allocations (such as the one used in the example), this applies in only very large organisations, allowing a long-term view to be accommodated.
Unfortunately, the future is hard to predict with any degree of accuracy. In most cases needs for subnet allocation become gradually known over time rather than all at once. The consequences of incorrect estimation can be minimised by using an allocation scheme that allows for as much growth as possible in existing subnets while leaving as much room as possible for future allocations.
This scenario can be achieved by distributing the subnets evenly, weighted by size, across the available address space. The larger the subnet, the more room that needs to be left between it and other large networks. This is particularly important for subnets that are near to capacity. At least the sum of the sizes of neighbouring networks should be allowed. Space close to a network should be reserved for it to grow into, and the remaining space between can be allocated to smaller networks in a recursive fashion. Any allocations in the areas of likely growth should be reclaimable, and preferably these networks should be sparsely populated in order to limit the impact of re numbering on these networks. Working with a diagram of the address space, for example, a linear graph or a binary tree of the address space is a helpful aid.
A more systematic way of distributing the subnets evenly is to use mirror-image (MI) counting for allocating subnet numbers. This process is described in RFC 1219, but note that some aspects of subnet addressing have changed since that RFC was written (see RFC 1878), so the description of mirror-image counting and the procedure text there exclude subnet numbers that are now valid.
Using mirror-image counting is like normal counting starting from zero, except that the binary digits of the number are reversed. These numbers can be allocated as subnet numbers, starting from the most significant bit. Contrary to the example in RFC 1219, leading zeros (including the solitary zero in zero itself) should always be removed before the number is reversed.
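As a small illustration of the counting rule only (a sketch, not the full allocation procedure from RFC 1219):

def mirror_image(i, subnet_bits):
    # Take the counter's minimal binary representation (no leading zeros),
    # reverse the digits, and place them starting at the most significant
    # bit of the subnet field.
    if i == 0:
        return 0
    digits = bin(i)[2:][::-1]
    return int(digits, 2) << (subnet_bits - len(digits))

# With a 4-bit subnet field the sequence 0, 1, 2, 3, 4, 5 becomes
# 0000, 1000, 0100, 1100, 0010, 1010 -- spreading subnets evenly
# across the available space instead of packing them together.
print([format(mirror_image(i, 4), '04b') for i in range(6)])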
Simplifying greatly, new subnets are allocated by incrementing the subnet number until a number is reached where a subnet of the required size can be accommodated, or the subnet prefix becomes so long that no subnets of the required size remain. If the prefix matches a common but shorter prefix, the subnet may be able to be allocated if we can lengthen the mask of the matching subnet prefix, freeing space from a previous allocation by reducing its maximum possible size. If the longest mask is always used when allocating subnets, it is sufficient just to skip matching prefixes. Note that the null prefix is common with all subsequent prefixes until its subnet mask is made smaller, extending the prefix.
The mask chosen is preferably the longest for the required subnet size—but can be as short as the length of the subnet prefix, because it can be adjusted later: made shorter if the subnetwork grows beyond its mask (if no later allocation has been made) or longer if a subnet sharing its prefix is allocated or increases size. The host number ignoring the subnet part must be allocated from 1.
As the number is incremented it grows from right to left, progressively enumerating subnets in smaller sizes. Since subnet numbers grow from right to left and host numbers from left to right, collision is delayed between the two. Allocating subnets in descending order of size is preferable in this procedure because it tends to reduce fragmentation of the address space.
The following table shows an example allocation using the sorted number of hosts in the example:
Note that the /28 and the /29 can grow simply by changing their netmask. A better allocation is possible if the third and fourth hosts in the sorted list are inter changed. In this case the three smallest networks would be able to grow without renumbering. Shortening a netmask is a much simpler operation than renumbering.
Of course in the real world, needs for subnet allocation do not conveniently arrive sorted in ascending order. If it happened that one of the two largest subnets was the fifth requiring allocation, fragmentation of the address space would require renumbering one of the three smallest networks to recover an address block of the necessary size.
Another point that may be worth mentioning is that most modern hosts and routers allow for multiple subnets to share the same physical subnet, allowing two smaller subnets to cover a range of addresses that would otherwise receive a single larger allocation. For example, a 40-host subnet can be allocated a /27 and a /28 rather than a /26.
—Andrew Friedman, Sydney, Australia
Ed: Readers may wish to also peruse RFC 3531.
P. Ferguson and H. Berkowitz, "Network Renumbering Overview: Why Would I Want It and What Is It Anyway?" RFC 2071, January 1997.
Y. Rekhter, B. Moskowitz, D. Karrenberg, G. J. de Groot, and E. Lear, "Address Allocation for Private Internets," RFC 1918, February 1996.
P. F. Tsuchiya, "On the Assignment of Subnet Numbers," RFC 1219, April 1991.
T. Pummill and B. Manning, "Variable Length Subnet Table for IPv4," RFC 1878, December 1995.
M. Blanchet, "A Flexible Method for Managing the Assignment of Bits of an IPv6 Address Block," RFC 3531, April 2003.
The author responds:
Andrew is correct in stating that it is often better to try to account for future growth when assigning address space. There are many viable ways to allow for growth when allocating address spaces; hopefully, this topic will be covered more fully in a future article. I used the method in the article to illustrate how to employ the technique for working with IP addresses, rather than as an absolute best practice for allocating addresses.
—Russ White, Cisco Systems | <urn:uuid:ac379e26-950b-40da-a59e-7e7575c94a79> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-16/letters-editor.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00560-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936889 | 1,713 | 2.703125 | 3 |
Producing documentation and reusing information in XML, Part 1, Document publishing using XML
Create, format, and publish documents using XML standards and open source tools
From the developerWorks archives
Date archived: December 6, 2016 | Last updated: July 07, 2009 | First published: March 24, 2009
XML provides a way to identify data items and subcomponents within any structured data set, but has its roots in documentation development and production. Robust, open standards for XML document markup and a rich set of freely available tools for XML document parsing and format conversion make it easy to install and configure a complete documentation development and formatting environment on any UNIX® or Linux® system.
This content is no longer being updated or maintained. The full article is provided "as is" in a PDF file. Given the rapid evolution of technology, some steps and illustrations may have changed. | <urn:uuid:354efc5c-832b-44cc-ab77-ab4fd477f8b0> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/library/x-reuseinfo1/index.html?S_TACT=105AGY75 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00560-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.81791 | 180 | 2.671875 | 3 |
FTTH (Fiber To The Home) has changed a lot in the way we live and work. When planning an installation, many factors should be taken into consideration, such as regulation, implementation cost, the need to future-proof investment and so on. This blog will mainly focus on two main FTTH architectures–point to point (P2P) and passive optical network (PON) as one of the suggestions for FTTH deployment.
Currently, requirements for higher Internet access speeds are being driven up by applications such as cable TV, movie streaming, multiplayer gaming, video conferencing, 3D and so on. The transmission capacity of copper cables is limited and can't meet the need for higher bandwidth, so fiber cables have quickly become the substitute for copper. FTTH technology uses optical fiber cable from a central point directly to individual buildings such as residences, apartment buildings and businesses to provide unprecedented high-speed Internet access. FTTH dramatically improves the network speeds available to computer users compared with the technologies now used in most places.
Before deploying FTTH networks, let's take a look at the two main FTTH infrastructure types, P2P and PON. In short, a P2P architecture uses active components throughout the chain, while a point-to-multipoint (P2M) PON architecture uses passive optical splitters at the aggregation layer.
In a PON network architecture, an optical line terminal (OLT) will be deployed in the Point of Presence (POP) or central office. One fiber cable connects the passive optical splitter and the fan-outs connect end users (a maximum of 64) with each one having an Optical Networking Unit (ONU) at the point where the fiber cable terminates.
A P2P architecture, by contrast, is more complex. It has a core switch at the central office, which connects over optical fiber cables to an aggregation switch at the distribution point (typically located at a street corner). These aggregation switches have many fiber ports, and each port connects directly to an Optical Network Termination (ONT), which is located inside or outside the user's residence or business premises.
Deciding which kind of architecture to choose requires more detail. Each type has its own advantages and disadvantages, and the following lists the strengths and weaknesses to inform the decision.
The above content covers the advantages and disadvantages of the FTTH P2P and PON architectures. When designing an architecture, network operators should balance the strengths and weaknesses of both types. If you need a future-proof infrastructure, P2P is the better choice. Cost and network efficiency are also factors in deciding which architecture is more suitable. In practice, the design may depend on many other circumstances. Hope this article is helpful for you. | <urn:uuid:5f948380-02b7-4e27-8ce4-34b616ea4e86> | CC-MAIN-2017-04 | http://www.fs.com/blog/ftth-architecture-p2p-and-pon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00192-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930176 | 568 | 2.78125 | 3 |
Windows Font files can become corrupt. When this happens, an error could occur when our software attempts to use the corrupted font file. It is usually a style within the font that is affected (bold, italics, or regular).
Example Error Messages:
Font 'Tahoma' doesn't support style 'Regular'.-Util:main
Font 'Tahoma' doesn't support style 'Italic'.-Util:main
Replace the corrupt Font on your computer with one from another computer that is running the same Operating System. Windows fonts are located in the C:\Windows\Fonts\ directory.
An alternative solution: Navigate to the C:\Windows\Fonts\ directory. Locate the corrupt font file and copy it to another directory. Right-click on the copied file and, from the pop-up menu, select Install. Repeat with all copied font files.
If this solution does not work, please refer to Solution 1. | <urn:uuid:9d80cec3-6538-4d52-b4ea-2b48bbc67646> | CC-MAIN-2017-04 | https://www.boson.com/support/153-Error-Message-Font-Error.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.864916 | 201 | 2.625 | 3 |
Part 2 | By David F. Carr | Posted 2007-12-20
Will this be the year that the Web development architecture known as REST invades the enterprise, derailing or fundamentally altering the orthodox approach to web services and service oriented architecture (SOA) that's been built up over the past several years?
When you use the web, your browser issues HTTP GET commands every time you enter a new address or click on a link and HTTP POST commands for data entry forms. REST suggests that these standard operations and a couple of others (such as PUT and DELETE) are also a natural way of designing machine-to-machine communications. For example, a program can explore the programming interface of a remote service with a series of GET commands that bring back links to other resources.
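As a toy illustration of that mapping (the endpoint below is entirely made up, and the requests library is assumed to be available), the verbs line up with resource operations like this:

import requests

base = 'https://api.example.com/orders'

# GET retrieves a resource (or a list of links to related resources).
order = requests.get(base + '/42').json()

# POST creates a new resource under a collection.
created = requests.post(base, json={'item': 'widget', 'qty': 3})

# PUT replaces the resource at a known address; DELETE removes it.
requests.put(base + '/42', json={'item': 'widget', 'qty': 5})
requests.delete(base + '/42')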
Where SOAP is a specific XML protocol, REST is an architectural style a set of rules for designing networked application. Applications can be described as more or less "RESTful" depending on how closely they adhere to those principles, but there's no official standard to implement, other than the basic protocols of the Web. That's one of the attractions for web-centric businesses because it means REST applications can run on the same caching, load balancing, and security infrastructure as other web applications, without requiring additional middleware.
Fielding finds it annoying that SOAP and related standards became synonymous with web services, given that their workings aren't particularly web-like. "I never considered them web services at best, they're XML services," he says.
SOAP, which was originally known as the Simple Object Access Protocol, became less "simple" as the result of efforts to create sophisticated enterprise services on top of it. SOAP started out as a way of using XML and invoke the functions of remote software objects or components over the web's Hypertext Transfer Protocol (HTTP) and followed in the tradition of other types of remote procedure call, distributed object computing, and message oriented middleware systems. The SOAP specification itself defines the format of an XML "envelope" that wraps around the actual message or "payload" and specifies its destination. Since it was introduced in the late 1990s, with the backing of both Microsoft and the vendors of Java-based middleware, SOAP has been at the center of the marketing of the concept of web services and, more recently, SOA.
The promise of these technologies has always been that they would bring new levels of reuse, flexibility, and agility to enterprise systems. But even though there are many success stories to be told, Gartner's Gall says enterprise architects are starting to feel some jealously for what's happening outside the corporate firewall. "These guys are looking over their shoulders at the true web at what's going on with Google and Amazon and mashups and saying, 'Hey, wait a minute, how come we're not getting that level of flexibility out this stuff you sold us called 'web services,' " he says.
Clients who are disappointed by the payoff from their SOA efforts have often created too much unnecessary complexity, Gall suggests. One of the reasons that web-oriented architectures like REST have an advantage is that they're simpler and therefore easier to reuse and mash up, he says.
Even if it's not practical for an enterprise to make a wholesale shift from SOAP to REST, the organization can use the contrasting example of public web services as an impetus for simplification. For example, SOAP services are typically described with WSDL (Web Services Description Language) files. By asking how many WSDL files a client has created, Gall says he often discovers the enterprise has created hundreds of "one-off" WSDL files. One reason the underlying services aren't particularly reusable is that they lack common data definitions. One way of changing that is to combine SOA efforts with master data management and make sure services, for example, use only master lookup keys to identify customers.
Tokoro M.,Asada Ladies Clinic Medical Corporation |
Fukunaga N.,Asada Ladies Clinic Medical Corporation |
Yamanaka K.,RIKEN |
Itoi F.,Asada Ladies Clinic Medical Corporation |
And 5 more authors.
PLoS ONE | Year: 2015
Generally, transportation of preimplantation embryos without freezing requires incubators that can maintain an optimal culture environment with a suitable gas phase, temperature, and humidity. Such incubators are expensive to transport. We reported previously that normal offspring were obtained when the gas phase and temperature could be maintained during transportation. However, that system used plastic dishes for embryo culture and is unsuitable for long-distance transport of live embryos. Here, we developed a simple low-cost embryo transportation system. Instead of plastic dishes, several types of microtubes, usually used for molecular analysis, were tested for embryo culture. When they were washed and attached to a gas-permeable film, the rate of embryo development from the 1-cell to blastocyst stage was more than 90%. The quality of these blastocysts and the rate of full-term development after embryo transfer to recipient female mice were similar to those of a dish-cultured control group. Next, we developed a small warm box powered by a battery instead of mains power, which could maintain an optimal temperature for embryo development during transport. When 1-cell embryos derived from BDF1, C57BL/6, C3H/He and ICR mouse strains were transported by a parcel-delivery service over 3 days using microtubes and the box, they developed to blastocysts with rates similar to controls. After the embryos had been transferred into recipient female mice, healthy offspring were obtained without any losses except for the C3H/He strain. Thus, transport of mouse embryos is possible using this very simple method, which might prove useful in the field of reproductive medicine. © 2015 Tokoro et al. Source
CCITCP is supported on Windows 95, Windows 98, Windows NT and UNIX systems.
CCI over TCP/IP (CCITCP) allows an application to connect to another application (on another machine, usually, although it can be another process on the same machine) in two ways: directly, by specifying the TCP/IP address and port being used by the application you want to connect to (this is described in the Advanced Features section); or indirectly, by specifying a logical "Server Name" of the service you wish to connect to.
This second method allows the most flexibility: the applications can be moved from machine to machine without being altered; and the details of the machine they are running on are unimportant (as long as they support CCI and TCP/IP, of course). This flexibility is achieved by having an intermediate program which CCI can query when a connection request is made. This program contains or can obtain the necessary details to allow an application to establish a connection with another application of a given Server Name. This program is called the CCITCP2 registration daemon.
Before using CCI over TCP/IP (with or without using CCITCP2) there is some configuration that is necessary, described in the sections below.
In order to enable CCI support for TCP/IP three executable modules are provided:
Before the CCI modules can work effectively the appropriate TCP/IP network hardware and software drivers must be already installed, fully configured, and working. Once the TCP/IP software is installed and configured, the network connections should be fully checked out using suitable utilities, such as the TCP/IP standard commands "ping" and "ftp", before attempting to use CCI over TCP/IP. Ensure that if hostnames are going to be used (as opposed to only dotted decimal IP addresses) that the Domain Name Server (DNS) configuration is correct and that the service is working.
Where a network is connected to other networks, ensure that TCP/IP is aware of the gateway(s) to these networks by running "route add" or a similar command. Please see your TCP/IP vendor's installation and configuration guides for details of these steps.
Once you are certain that TCP/IP is fully configured and functional and that the machines which you wish to communicate with are similarly set up and reachable, then the only other configuration issue that needs to be resolved is where (and whether) a CCITCP2 daemon will be run to enable these machines to contact each other.
This section does not apply to UNIX installations. Please refer to CCI Configuration for UNIX users
If a CCITCP2 daemon is used to allow CCITCP clients and servers to establish contact then the "CCI Configuration" utility needs to be run first. It can be started by going to the "Configuration" group under the product Start menu item and selecting "CCI Configuration". Alternatively just run CCIINST from any Net Express or Mainframe Express Command Prompt.
The CCI Configuration Utility asks for the address of the machine in the reachable network which is running a CCITCP2 daemon. You can enter either a hostname or a dotted decimal format address. If you do not know the TCP/IP hostname or address of the system that the CCITCP2 daemon is running on, contact your system/network administrator.
The CCI Configuration Utility also updates the TCP/IP services file with values necessary for CCI to work over TCP/IP.
If you leave the hostname field empty, CCI assumes that the CCITCP2 daemon will be running on the local machine, and the changes to the services file are still made.
If the machine running the CCITCP2 daemon changes, you must rerun the CCI Installation Utility and change the hostname to the new value. If there are any MERANT processes running which were started before the value was changed (e.g. Command Prompt, Integrated Development Environment, etc.) then these will need to be re-started before they pick up the new value.
Prior to using CCITCP with CCITCP2, an entry for the mfcobol service used by CCI must be added to the /etc/services file.
Without this entry in the /etc/services file, CCITCP2 cannot function.
On UNIX systems the user is required to set an environment variable "CCITCP2" to select the location of the machine hosting the CCITCP2 executable in the network that is being used.
An example of the setting of the CCITCP2 environment variable for Korn shell users is: CCITCP2=hostname; export CCITCP2 (where hostname is the TCP hostname or IP address of the machine running the CCITCP2 daemon).
The method for setting environment variables in other UNIX shells differs slightly from the examples given here, please use the appropriate method for the user shell of your choice.
Please refer to the section Environment Variables and the CCI.INI File for more details of the advanced configuration of CCITCP.
There only needs to be one CCITCP2 process active on a network in order for CCI over TCP/IP to work, as long as all machines have been configured (using the CCI Configuration Utility) to use it. Alternatively, multiple CCITCP2 processes running in the network can coexist and can communicate between each other so that a machine configured to use one daemon can find an application registered with a different daemon in the network. For instance, it is possible for every machine on the network to be running a copy of CCITCP2 and be configured to use their own local copy to search for connections. This would, however, generate a lot more network broadcast traffic compared to having just one daemon which all machines are configured to use.
The CCI Configuration Utility needs to be run on a machine before CCITCP2 can be successfully started on it (enter a blank hostname if the daemon is run locally). Once started, the daemon will accept service registrations until the effective RAM available to the process is exhausted, or until it is shut down.
CCITCP2 can be started manually at any time (as long as it is not already running) by entering ccitcp2
at a Net Express or Mainframe Express command prompt, or it can be started automatically at system start-up by including it in the startup folder. Alternatively, on NT, it can be installed as a service (see CCITCP2 as an NT Service).
On UNIX systems CCITCP2 can only be started by a superuser (root).
If both the Server Name and Machine Name (see Application Configuration for a definition of these terms) are specified by the client application, the CCITCP2 process returns the machine and port address of the service specified to the client so that it can establish a connection with the named service. If the service is not found, an error is returned.
However, if the Server Name is specified but the Machine Name is not, the CCITCP2 process searches all the registered Server Names in the reachable network in the following order until either the named service is found or an error is returned:
You can avoid the potential problem of producing an undesired connection from the third level search by closely observing the rules followed at the first two search levels or by using a unique name for each server process on the network.
This section provides information that may be useful if you experience problems when using CCI over TCP/IP with CCITCP2.
"mfcobol port entry not found in services file" error
Run the CCI Configuration Utility on the machine receiving this error: this should automatically update the TCP/IP services file with the necessary entries.
"Cannot find CCITCP2" error
Run the CCI Configuration Utility on the machine receiving this error. Make sure that the hostname or TCP address value matches that of the machine running the CCITCP2 process you expect to use, and that the CCITCP2 process is running on that machine. Use "ping" and "ftp" to make sure that the machine is reachable using the address you have specified. Check that there are no CCITCP environment variables or CCI.INI file entries over-riding the value specified by the Configuration Utility. If the value has been changed recently, make sure that the failing process has been subsequently re-started.
"Registered service found but could not make a connection" error
There are two main possible causes of this message:
To check whether this is what is happening, close and re-start the CCITCP2 daemon that the client is pointed at. This will clear it of all registered Server Names, including any "orphan" servers. (If CCITCP2 is running in debug console mode, you can press F2 and it will list which CCI servers are registered with it.) Check to see that the CCI service is not abnormally terminating.
"A CCITCP call has timed out" error
This most commonly occurs if the server application that the client is attempting to contact is not running at the time the client is started. However, it can also be produced in a network topology which includes routers or bridges that have not been configured to allow CCITCP2 daemons to communicate with each other; in that case a client may fail to connect with a service that is registered with a different CCITCP2 daemon.
The CCITCP2 modules are designed to communicate with each other using the TCP/IP broadcast address mechanism. If this is not correctly configured, CCITCP2 cannot locate services registered with other CCITCP2 modules. If this happens, you should advise your system administrator.
System administrators should be aware of the following:
For CCITCP2 modules to communicate correctly, your network must be configured to pass broadcast packets to all areas that want to use CCITCP.
On Windows NT you can run the CCITCP2 daemon as an NT service by entering:
at the Net Express or Mainframe Express command prompt. (You must be logged on as a user with Administrator privilege to do this.)
Once installed as an NT service, CCITCP2 can be started and stopped via the Control Panel in the usual way.
You can use the -c option to install CCITCP2 as a service and run it in debug mode, so that a console shows services being registered and connections being made: ccitcp2 -c
Note that if CCITCP2 has already been installed as a non-console service, it will need to be uninstalled first. To uninstall CCITCP2 as a service, use the -u option: ccitcp2 -u
The -? option lists the available CCITCP2 startup modes.
Note: If CCITCP2 is installed as an NT service, it will be started automatically when the NT system is re-started. If a non-service CCITCP2 has previously been added to the Windows Startup group, it should be removed; otherwise, Windows will try to start a second instance of CCITCP2.
If CCITCP2 is started from a command line with the -d (debug) option or installed as a service with the -c option, then pressing the F2 key will list the CCI services that are registered with the daemon. F4 toggles the echoing of the console output to the ccitcp2.log file in the current directory of the daemon.
A CCITCP2 registration daemon is not necessary if the CCI client application and the CCI server application establish a connection directly between each other. This can only be done when the server application starts up on a specific TCP port, and the client knows which address and port the server is using.
In a relatively static enterprise application environment where the server application location is well-defined and persistent, using direct connection can have advantages over using a CCITCP2 registration process to help establish a connection.
Allowing a server application to listen on a known, fixed port means that clients can contact the server through a security firewall if a gap has been configured for that port and address.
A CCI server can support both direct connection and normal connection via a CCITCP2 daemon simultaneously. If a server starts on a fixed port, and a CCITCP2 daemon is present, then it will register a Server Name with it as normal, but in addition clients will be able to connect to it directly because the port it is using is known. If the CCITCP2 daemon is not present or reachable then a fixed port server will still start successfully, but only clients who attempt to connect directly will be able to contact it.
A variety of environment variables and CCI.INI file entries can be used to customize CCI behavior either on a machine-wide or per-process basis. Most users should not need to use either of these methods, but they are provided in case a greater degree of application control is required.
If more than one of the above methods is used for the same purpose or feature, the order of precedence that determines the behavior is:
In the following sections UNIX users should use the appropriate method of setting environment variables for the user shell which is being used. The examples given are guides using the manner of setting environment variables found on the PC.
Instead of using the CCI Configuration Utility to set the TCP address of the machine running the CCITCP2 registration daemon, the environment variable "CCITCP2" can be used. This may be useful if you need different processes on the same machine to contact different registration daemons. The value can be set from the command prompt by typing SET CCITCP2=hostname
where hostname is the TCP hostname or dotted decimal IP address of the machine running the CCITCP2 daemon you wish to contact from that session.
The environment variable value will always take precedence over any value set using the Configuration Utility. To restore a process to using the value set by the Configuration Utility, simply set the environment variable to an empty string, e.g. SET CCITCP2=
Alternatively, if this environment variable is set system-wide (by creating a system variable in the system environment settings, or by using a CONFIG.SYS file) then this value will always take precedence over any value set using the Configuration Utility.
If you want to start a CCI server on a fixed port, then you can associate the Server Name with the port value by using the CCITCPS_ environment variable (instead of appending the information to the Server Name itself, as described in the CCITCP Server Name section). If the Server Name is server_name, and you want it to use the fixed TCP port 3000, this can be specified by typing, for example, SET CCITCPS_server_name=MFPORT:3000
Note that this will only work if the server application process is started in the same session or process that has this environment variable set.
If a client is known to be trying to connect to a server with Server Name server_name, and the TCP address (server_hostname) and port (e.g. 3000) that the server is using is known, then the client can be made to connect directly to it by setting an environment variable as follows:
Note that this can be used instead of setting the client Machine Name value (see section CCITCP Machine Name). This is useful if the Machine Name value the client specifies cannot be altered by an application defined method.
The CCI.INI file is only used by CCITCP if direct connection or a fixed port server is required. In addition, this configuration method is only recommended when the Server Name or Machine Name cannot be modified by the application that is using CCI, and the environment variable method described in the section above is also not considered suitable.
There are two possible sections in the CCI.INI file which are specifically used by CCITCP:
which can have entries of the following form:
This is used exclusively to match a given Server Name server_name (see CCITCP Server Name for a description of valid values) with a fixed TCP port value xxxxxx.
The other CCI.INI section used is:
which expects entries of the following form:
This is used exclusively by clients to associate a target Server Name server_name with a Machine Name value to the right of the "=" character (see CCITCP Machine Name for valid values and uses).
CCITCP Client/Server applications use the CCI Server Name and Machine Name parameters to enable the CCI Client to specify the CCI Server with which to communicate.
The CCI Server identifies itself on the network by using the Server Name parameter. The CCI Client specifies the CCI Server using the Server Name. Additionally, if it only wants to find this Server running on a particular machine in the network, it needs to specify the Machine Name parameter.
Both CCI server and CCI client applications need to have a Server Name specified: server applications need to register themselves as available under this name (or what fixed port to use); and clients need to specify which server they wish to contact.
This can be any valid alphanumeric string (up to 127 characters in length). Names beginning with a ',' character, or including the consecutive characters ",MF" are reserved for MERANT use.
If you wish to have the server start on a fixed or known TCP port (see Direct Connection and Starting Servers on Fixed Ports), then this can be achieved by appending the string ",MFPORT:xxxxx" to the Server Name, where xxxxx is the decimal value of the port. This string is not used as part of the Server Name, but is interpreted by CCI so that the server uses the specified port for listening for incoming clients. If you do not wish the server to have any name (that is, clients cannot use CCITCP2 to contact the server, but can only specify it directly by using the Machine Name parameter) then the Server Name need only consist of this string.
Note: It is recommended that you do not use port numbers below 2000 as these may already be in use by standard TCP services. The maximum port value normally supported is 65535. Care should be taken not to use a port value that is already in use by other standard applications: check with the system or network administrator on valid port value ranges that should be used.
This is a parameter that only needs to be specified by CCI client applications.
This should be the TCP address (either the hostname if DNS is enabled, or the dotted decimal address if not) of the machine you wish to restrict the client to contacting. If you know that the Server Name of the application the client is trying to contact is unique in the reachable network, or you do not care which server of this same name it connects to, then this parameter is optional.
If you do not wish to use a CCITCP2 daemon to resolve the TCP address and port that the server is using (see Using CCI over TCP/IP without CCITCP2), but want the client to contact the server directly, then the Machine Name must have the following form:
where server_hostname is the hostname or dotted decimal address which the server application is using, and xxxxxx is the decimal port value.
Copyright © 1999 MERANT International Limited. All rights reserved.
This document and the proprietary marks and names used herein are protected by international law.
Researchers say new studies show that taking part in online social networks like the 1 billion-member-strong Facebook can lead to users exhibiting less self-control.
According to a new study in the Journal of Consumer Research: "Using online social networks can have a positive effect on self-esteem and well-being. However, these increased feelings of self-worth can have a detrimental effect on behavior. Because consumers care about the image they present to close friends, social network use enhances self-esteem in users who are focused on close friends while browsing their social network. This momentary increase in self-esteem leads them to display less self-control after browsing a social network."
For more on this research: Keith Wilcox (Columbia University) and Andrew T. Stephen (University of Pittsburgh). "Are Close Friends the Enemy? Online Social Networks, Self-Esteem, and Self-Control." Journal of Consumer Research: June 2013. For more information, contact Keith Wilcox (email@example.com) or visit http://ejcr.org/. | <urn:uuid:cc1e8edd-3838-4162-b35f-44ec88cf6d4c> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2223665/how-facebook-can-make-you-fat.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00394-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898058 | 232 | 2.515625 | 3 |
One-third of college students and young professionals consider the Internet to be a vital daily resource and requirement, just like air, water, food and shelter, according to a new Cisco study. Indeed, over half of the respondents to the Cisco study said the Internet is integral to their lives, so much so that they can't live without it... and that they'd take it over cars, dating and partying.
The 2011 Cisco Connected World Technology Report is based on surveys of college students and professionals 30 years old and younger in 14 countries. There were 2,800 respondents in all, 200 from each country.
The study found that, if forced to make a choice between one or the other, 64% of college student respondents would choose an Internet connection instead of a car. Forty percent said the Internet is more important to them than dating, going out with friends, or listening to music.
And 27% said staying updated on Facebook was more important than partying, dating, listening to music, or hanging out with friends. This indicates that young people are opting more for online socialization than live social interaction.
And their interface of choice? Two-thirds of students and more than half of employees say a mobile device -- laptop, smartphone or tablet -- is "the most important technology in their lives," the study found. And smartphones may soon surpass desktops as the most prevalent tool for accessing the Internet as 19% of college students consider smartphones as their "most important" device used on a daily basis, compared to 20% for desktops.
Says Cisco about this trend:
This finding fans the debate over the necessity of offices compared to the ability to connect to the Internet and work anywhere, such as at home or in public settings. In the 2010 edition of the study, three of five employees globally (60%) said offices are unnecessary for being productive.
And not surprisingly, the study found that the importance of TV and newspapers is decreasing among college students and young employees in favor of mobile devices. Only 4% of those surveyed said the newspaper is their most important tool for accessing information. And fewer than one in 10 college students and employees said the TV is the most important technology device in their daily lives.
And sadly, so sadly, one in five students has not bought a physical book -- excluding textbooks -- in a bookstore in more than two years, if ever. One of the downsides of technology.
And in the "Just Say No" or "Just Turn It Off" Dept., college students reported constant online interruptions while doing projects or homework, such as instant messaging, social media updates and phone calls. In a given hour, 84% said they are interrupted at least once; 19% said they are interrupted six times or more; and 12% said they lose count of how many times they are interrupted while they are trying to focus on a project.
There is an "off" button on a smartphone, isn't there?
UK anti-bullying charity BeatBullying has produced a safeguarding handbook with Securus, which will be distributed to schools this week.
Securus provides child protection software for schools to protect them against cyber bullying, harmful websites and explicit images. The company monitors inappropriate activity on a network and alerts schools if there is anything that could put a child at risk.
The new partnership will see the two organisations working closely together to raise awareness of both preventative and responsive approaches to tackling cyber bullying and other challenges commonly faced by schools.
The safeguarding handbook will provide preventative measures and responsive approaches to help curb the number of cyber bullying incidents in UK schools.
"BeatBullying is pleased to be partnering with Securus, which has a decade of experience in helping schools adapt to new technologies," said Emma-Jane Cross, CEO and founder of BeatBullying. "Protecting children has always been their clear focus and I'm sure that the Safeguarding Handbook, which is just the start of our collaboration, will be a very useful tool for schools. The Handbook supports the objectives of our popular BeatBullying training programmes which seek to create a culture in which bullying is unacceptable both on and offline."
The new initiative follows a report by Nominet Trust which revealed earlier this year that even online games were a source of cyber bullying.
The research found that 27% of British primary school students were experiencing bullying while playing games online.
Unmonitored use of technology at an early age was found to be one of the main problems with online bullying.
Almost 62% of children aged 8-11 have their own phone, personal computer, tablet, or gaming device that connects to the internet which means children are constantly susceptible to online bullying.
Nearly 50% of children surveyed said they felt schools should teach them more about how to protect themselves from online bullying. Another 34% said they wanted parents and teachers to learn more about cyber bullying in order to properly teach about protecting themselves from online harassment.
"Providing a safe and secure environment has never been more challenging for schools," said Russell Hobby, General Secretary of the National Association of Head teachers (NAHT). "New technologies have provided multiple platforms through which a range of threats to children can be played out, often publicly and usually indelibly. What schools need is to be made aware of any issues before they become serious – and to know how to respond effectively to each situation." | <urn:uuid:54cb96c6-1e04-44b5-968b-bfc15c88e8a0> | CC-MAIN-2017-04 | http://www.cbronline.com/blogs/cbr-rolling-blog/beatbullying-teams-up-with-securus-software-to-stop-cyber-bullying-in-schools-031212 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972839 | 492 | 2.71875 | 3 |
Implementing proper IT security policies, software, and equipment is vital to any enterprise. All enterprises have something of value that needs to be protected, regardless of the size of the company or their industry. At the very least, companies have services and infrastructure that hackers can exploit. Common exploits include using phone systems to place expensive long distance calls and using the network storage to host illegal Web sites. Governments and industry regulatory groups are aware of the importance of implementing a sound IT security system and many have put security requirements in place.
What is Security?
IT information security is the process by which enterprises protect their information, internal systems, and platforms from unauthorized use, theft, deletion, and unauthorized changes. Security is not about eliminating risks to the enterprise; it is about mitigating these risks to acceptable levels.
Photo: Vivane Reding, EU commissioner for the Information Society and Media
Imagine an electronic safety system in your car that would automatically call emergency services if you had an accident, begins an EC report released today. Supposing that, even if you were unconscious, the system would inform rescue workers of your exact whereabouts. The ambulance and the fire brigade could be on their way in minutes, without the need for any human intervention. And imagine that this system would work anywhere in Europe -- whatever the local language. Such a system is not only possible; it is currently being rolled out across the European Union. The system is called "eCall" and it is one of the most important road safety actions under the European Union's "e-safety" initiative.
The European Commission outlined today new plans to accelerate the drive for safer, cleaner and smarter cars. The Commission will start negotiations with European and Asian automotive industry associations later this year to reach an agreement on offering the pan-European in-vehicle emergency call system (eCall) as a standard option in all new cars from 2010. It will also further promote the take-up of other life-saving technologies and investigate how technology can help make cars greener and smarter.
"Technology can save lives, improve road transport and protect the environment," said Viviane Reding, the EU's commissioner for the Information Society and Media. "The EU must spread this good news among consumers and continue to put pressure on stakeholders to ensure Europeans benefit from these winning technologies sooner rather than later. If we are serious about saving lives on European roads, then all 27 member states should set a deadline to make eCall and Electronic Stability Control (ESC) standard equipment in all new cars. At the same time we need to clear administrative obstacles to innovations that will make cars safer and cleaner. For example, making sure radio frequencies are available for cooperative driving systems that will cut accidents, reduce congestion and lower CO2 emissions. If fast progress cannot be made voluntarily, I stand ready to intervene."
Jacques Barrot, commissioner for transport, said: "In our fight to halve the number of road casualties by 2010, we are taking action on all fronts -- safer drivers, safer infrastructure and safer vehicles. With this action on intelligent cars, the Commission is pushing to ensure that cutting-edge technology finds its way into our cars as soon as possible where it will help save lives and reduce the environmental impact of transport." | <urn:uuid:82397d72-2205-4fe8-bca2-d4f70d0037ed> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/European-Commission-Leads-Drive-for-e-Safe.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00568-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94348 | 494 | 2.625 | 3 |
From 1980 through 2002, the number of Americans diagnosed with diabetes more than doubled -- from 5.8 million to 13.3 million -- according to the Centers for Disease Control and Prevention (CDC). Another 5 million people may be undiagnosed with the chronic disease, according to national statistics.
Every state in the nation has a Diabetes Prevention and Control Program (DPCP), but the California Diabetes Program
(CDP), under the Department of Health (DHS), is using technology to respond to the epidemic.
The state is launching its Diabetes Information Resource Center (DIRC), a Web-based system to collect and communicate best practices, share resources and data, and promote collaboration on diabetes-related issues throughout California. The site may be the first of its kind in the nation, its creators said.
The online information clearing-house is operated by the CDP, which is charged with reducing the burden of diabetes in California.
"That's kind of a tall order. We look out for those who are at risk and for those who have it, so that's an even taller order," said Susan Lopez Mele, CDP administrative manager, and media and marketing specialist for the organization. "We try to really look at the big picture and figure out how we can be more helpful."
The Web-based DIRC fills gaps in health care and encourages collaboration among organizations fighting diabetes throughout the state, Mele said.
"Providing a 'one-stop' shopping experience, organizations will easily find and use what is available, thereby reducing duplications of effort," she said. "This means more people with diabetes would have a better chance to prevent costly complications such as blindness, amputations, kidney failure, heart disease and strokes. By improving health and health-care delivery, we ultimately will reduce health-care costs."
DIRC's launch comes when more than 2 million Californians are diagnosed with diabetes, and at least that many more risk developing the disease during their lifetimes.
Organizations submit profiles of their work to DIRC, and other groups can search the online data to find programs that complement their work or fill a current need. That model may be unique among state government health programs, Mele said.
However, similar technology could soon link multiple state programs, providing even broader information sharing among these organizations, she added.
The Diabetes Division at the CDC has an online reporting system that state diabetes government programs, like California's, use for reporting surveillance, epidemiology and program evaluation activities, Mele said, adding that the CDC can search the system for information submitted by all states and territories. "A new feature will be added soon where state programs can search all other diabetes programs. We are told this will be ready in 2005."
In 2002, the CDP created California's Plan for Diabetes: 2003-2007,
which emphasized four specific areas for special attention: increased access to care, improved quality of care, promoting primary prevention and guiding public policy.
"For the last several years, we've been working really hard to step back and assess what we're doing, and also assess what's going on in California to make sure we're doing the right thing -- that we make good decisions, that we're filling gaps that need to be filled," Mele said.
In 2003, the CDP conducted a statewide diabetes assessment, and after getting input from industry partners, boiled down responses and created goals to make the CDP most useful.
"One of the major gaps was communication," Mele said. "We recognize there's a lot of really good work being done in the state -- tiny projects having an impact in a small community to huge health plans doing great work -- but what's clear is those folks don't regularly communicate with each other, and sometimes they reinvent the wheel because they don't know the other guy already did it."
DIRC was born of that realization. The idea started years ago, but grew after the assessment, and gained support from industry partners and within the DHS.
Some partners participating in the development of DIRC include Aventis, the California HealthCare Foundation, the California Optometric Association, the California Public Health Workforce Training and Technology Coalition, the Diabetes Coalition of California, and Lifescan, a company that produces blood glucose monitors.
The Flip Side
Though DIRC targets everyone connected with diabetes -- health professionals, community-based organizations, coalitions, advocacy groups, local/county/state health departments, health plans/medical organizations, support groups, educational institutions, media, funding agencies, policy-makers, and people who have or are at risk for diabetes -- Mele said its primary audience is organizations rather than individuals with diabetes.
"Anybody can visit DIRC, but the ADA [American Diabetes Association] has a great site for the individual," she said, adding that DIRC directs individuals to the ADA site. "DIRC is designed to help organizations in their diabetes work."
The guiding theme is meeting a need, said Karen Black, DIRC project manager and evaluation lead for the CDP, because what's missing right now is a Web site for organizations that work with those who have, or are at risk for, diabetes.
"With DIRC, [those organizations] won't have to waste their time sifting through numerous Web sites for information," Black said. "We want folks to quickly find what they're looking for, so they can continue doing the important work of helping people with diabetes."
The initial stages of DIRC are being funded by the CDP with a grant from the CDC. David Levin, president of the Web design company Angry Sam Productions, is building DIRC from scratch.
Levin coordinates the entire project with the CDP staff, and provides technical assistance to help them turn their ideas and goals into a functional Web site.
"The Web site is meant to be an online learning community where organizations large and small, urban and rural, throughout California can find each other and share ideas and resources," Levin said.
Because nothing else like it exists, Levin said DIRC's creation has been time consuming.
"We've been doing our best to create an extensive and all inclusive diabetes resource center where organizations can find models, resources and data about diabetes," he said. "At the same time, we want DIRC to be extremely easy to use and administer."
Benefits and Rewards
The first phase of DIRC was completed in February and set to launch March 1, said Mele, who expects full implementation by the end of 2005.
At that point, Levin said, the resource center will have two key resources.
One function is designed for all types of organizations to search and find diabetes community intervention models, educational resources and data categorized by intended recipients of the data, whether it be for caregivers, patients or community groups.
The second function is a back-end administration system that will allow the CDP staff to keep the site running smoothly, and will prompt partner organizations to upload and maintain content related to their particular project.
Besides enabling groups to share what they've done, Black said DIRC has turned into much more.
"There's information about how local groups can apply for grants. [DIRC] accepts different data and statistics at both the statewide and county level. It includes national level tools and resources; educational opportunities for those who work with people who have diabetes; a bulletin board where people can have a moderated discussion about diabetes; as well as a feature about laws and regulations that will show what kind of legislation is pending, what has been passed related to diabetes and what kind of regulations have come out of that," she said, adding that the CDP is working to make DIRC a portal that leads to already available information in addition to providing original content.
"A lot of it already exists, and we need to pull it together," she said. "Some of it doesn't exist, and we'll need to create it. It's going to be pretty all encompassing." | <urn:uuid:69323fbf-c003-48cb-ba75-af2c40b6c59f> | CC-MAIN-2017-04 | http://www.govtech.com/health/Fighting-Diabetes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958112 | 1,635 | 3.078125 | 3 |
Startup Sewbo has announced that it has used a robot to sew a T-shirt, producing the world's first robotically-sewn garment, Jonathan Zornow, inventor, told Apparel magazine.
"Many people are surprised to learn that this is the very first time that a robot has sewn a piece of clothing," he said. Despite decades and millions of dollars worth of industrial research, the big hurdle has been that robots can't reliably handle fabrics. Sewbo figured out how to temporarily stiffen materials, making it easy for industrial robots to assemble clothes. You can view the video here.
This is a significant change for the global garment industry, with this technology potentially allowing manufacturers to automate the production of hundreds of billions of dollars worth of clothing each year. | <urn:uuid:b642713c-6e31-4826-a780-fd49405cd27b> | CC-MAIN-2017-04 | http://apparel.edgl.com/news/Robot-Sews-Entire-Garment-for-First-Time107438?googleid=107438 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00469-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957734 | 158 | 2.875 | 3 |
Creating User Services
70-305 – Developing and Implementing Web Applications with Microsoft Visual Basic .NET and Microsoft Visual Studio .NET
Objective: Creating User Services
SubObjective: Use and edit intrinsic objects. Intrinsic objects include response, request, session, server and application.
Item Number: 70-305.1.13.1
Single Answer, Multiple Choice
Joan has a Web application developed in ASP.NET that allows the user to set many preferences for the user interface. Joan includes this code in the Page_Load event for her Web Form: (Line numbers are included only for reference.)
01 If Request.Cookies ("UserPreferences") Is Nothing Then
02 Dim MyCookie As HttpCookie
03 MyCookie=New HttpCookie ("UserPreferences")
04 MyCookie.Values.Add ("ForeColor", "black")
05 MyCookie.Values.Add ("FontSize", "10pt")
06 'INSERT CODE HERE
07 End If
After Joan added the code and tested the application, she expected a cookie to be deposited on the client's browser, but she found that this was not the case.
Which statement should be added at Line 06 to place the new cookie on the user’s computer?
- Request.AppendCookie (MyCookie)
- Cookies ("UserPreferences").Transfer
- Request.Cookies ("UserPreferences").Append
- Response.Cookies ("UserPreferences").Append
- Response.Cookies.Add (MyCookie)
You should use the line of code that calls the Add method of the Response object’s Cookies collection. Cookies are files that store small amounts of volatile or non-volatile data on the users’ computers. This code checks to see if the user’s computer contains a cookie named “UserPreferences”. If it does not, a new one is created. A couple of preferences are set in the cookie, but the code is missing the line to write the cookie to the user’s computer. The new cookie was created with the variable name MyCookie. This variable name is passed to the Add method of the Response object’s Cookies collection, which is responsible for transferring the cookie to the client.
You should not use either of the lines of code that reference the Request.Cookies object. The Request.Cookies object transfers cookie information from the client to the server, not the other way around.
You should not add the line of code that calls Cookies.Transfer or Response.Cookies.Append. Neither of these is a valid object/property combination.
1. MSDN Library Visual Studio .NET – Search
– Visual Basic and Visual C# Concepts – State Management Concepts | <urn:uuid:d7415154-8bc7-4137-adb5-c70cc37efe23> | CC-MAIN-2017-04 | http://certmag.com/creating-user-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.801884 | 608 | 2.5625 | 3 |
Glossary | By David F. Carr | Posted 2003-07-01
Glossary: Getting To Know Open Source
One of the first obstacles in getting comfortable with open source is the terminology. If your company wants to write or sell software based on open-source code, make sure everyone is speaking the same language.
LAMP
Shorthand for Linux-Apache-MySQL-and (your choice of) Perl, Python or PHP.
Kernel
The heart of an operating system (OS); manages memory, processes, etc. Linux is actually a kernel, though it comes packaged with code that qualifies it as an OS.
GNU
A recursive acronym: "GNU's Not Unix." The code whose presence makes a kernel an OS. GNU components round out the Linux system; GNU has its own kernel as well.
Free
In open source, "free" means "freely available," not "without cost." Equal access to source code was first advocated by the Free Software Foundation, begun in 1983.
Open source
When lowercase, open source usually refers to freely available code that is developed collaboratively by volunteers. Capitalized, Open Source is a certification from the nonprofit Open Source Initiative.
GNU Public License
The GPL, which governs the Linux kernel, is the most frequently used open-source license. It requires that any modifications to the source be shared.
Source: Adapted by Baseline research from The Business and Economics of Linux and Open Source (Martin Fink; ISBN: 0-13-047677-3; Prentice Hall PTR 2003) | <urn:uuid:12722e82-69fe-498f-94cb-94a8344d0b6b> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/Projects-Networks-and-Storage/Primer-LAMP/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.881296 | 322 | 3.25 | 3 |
Choosing where to build a Data Center is a major decision. The location of the facility affects everything from how well-protected your company's computing resources and critical data are to how much it costs to run the Data Center, potentially raising or lowering operating expenses by hundreds of thousands of dollars per year.
The site you choose for a Data Center also dictates how green the facility can be. That's important because green Data Centers are more energy efficient, have lower utility bills, and get more effective use from their power and cooling resources (and therefore can support more hardware) than equivalent conventional Data Centers — not to mention have less impact upon the environment.
Here, then, are 10 questions to ask about any property you're considering for your next Data Center. The corresponding answers can help you choose a suitable site and put you on your way to have a productive, green server environment.
1. Is the area prone to any natural disasters? A Data Center's primary mission is to safeguard your critical data, so it's wise to identify whether earthquakes, fires, flooding, hurricanes, ice storms, landslides or tornadoes have threatened a site in the past. Avoid high-risk areas whenever possible, but if other factors cause your Data Center to be built in a hazardous location, knowing what disasters are likely can allow you to design mitigation technologies into the facility.
2. Are there man-made hazards nearby? Not all threats to Data Centers are naturally-occurring. Determine what man-made sources of electromagnetic interference, pollution, or vibrations might exist near a site, as they can interfere with the proper functioning of your Data Center hardware. Investigate regional airports and try to avoid locations within their flight path. Also be aware of political instability — turmoil in an area can delay delivery of Data Center equipment and supplies, make utility services unreliable, and threaten employee safety.
3. What is the cost of electricity? Electrical costs are the largest operational expense for a Data Center, so local utility rates can have a big impact upon your company's bottom line. (Microsoft, Yahoo and other companies began constructing major Data Centers in the rural town of Quincy, Washington a few years ago thanks to the availability of power at about 2 cents per kWh — a fraction of the national average of about 9 cents per kWh or Silicon Valley rates of about 14 cents per kWh.) A rough annual-cost comparison at these rates appears just after this list.
4. What's the mix of that electricity? Electricity comes from many sources, each of which generate different quantities of carbon dioxide and other greenhouse gases. Turning coal into electricity produces more carbon dioxide than natural gas, for instance. Choosing a site whose commercial power company provides a greener mix of power — incorporating more wind power and solar power, for instance — reduces the carbon footprint of the Data Center.
5. How's the weather? The more days per year that a site has cool temperatures, the more frequently that energy-saving technologies such as air economizers and heat wheels can cool the Data Center, thereby reducing your utility bills, maintenance costs and carbon footprint.
6. What other costs does the site involve? If your company does business online and the hardware in your Data Center handles those transactions, the location of the Data Center can have potential tax implications. Be sure to explore those implications before choosing a site. Also research what financial incentives might be available from area utility companies and governments for energy-efficient Data Center designs.
7. Are dual power sources and dual service providers available? If you want your Data Center to be highly available, then you want its power to come from two different electrical substations and its connectivity to involve two different service providers.
8. Does the site contain any pre-existing infrastructure that can be re-used? Just because you want a new Data Center doesn't mean you have to build it entirely from the ground up. Choosing a site with an existing building — or even better an old computing environment you can upgrade — will allow you to complete construction faster and less expensively. It's also a greener choice, as fewer materials are consumed thanks to re-using the building.
9. What are local building codes like? Confirm that the building designs and technologies you want to implement for your project are allowed by local building codes for any sites you're interested in. For instance, if you hope to reduce water consumption by collecting and using rainwater for landscaping, it's useful to know if the practice is prohibited (as is the case in some regions).
10. How far must workers commute? The closer the Data Center to its workforce, the less carbon that is generated when they commute to and from the facility every day.
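Returning to question 3 above: to see why a few cents per kilowatt-hour matter, here is a small back-of-the-envelope calculation. The 1 MW average load is an assumed figure chosen only to show the scale of the difference, not a number from the article.

    # Rough annual electricity cost for an assumed 1 MW average facility load.
    rates_per_kwh = {
        "Quincy, WA (~2 cents)": 0.02,
        "US average (~9 cents)": 0.09,
        "Silicon Valley (~14 cents)": 0.14,
    }
    load_kw = 1000                  # assumed average draw, in kilowatts
    hours_per_year = 24 * 365
    for site, rate in rates_per_kwh.items():
        annual_cost = load_kw * hours_per_year * rate
        print(f"{site}: ${annual_cost:,.0f} per year")

At that assumed load, the spread between the cheapest and the most expensive rate above works out to roughly a million dollars a year.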
It's unlikely for a site to have all of the conditions you want, so decide which are the most important factors for the success of your Data Center and weigh them accordingly as you evaluate each site.
Douglas Alger is an IT Architect at Cisco and a member of Cisco's Network and Data Center Services – Architecture (NDCS-Arch) team. His latest book is Grow a Greener Data Center: A Guide to Building and Operating Energy-Efficient, Ecologically Sensitive Server Environments. | <urn:uuid:10dfba33-28a7-456d-b1ca-df0726cc66c7> | CC-MAIN-2017-04 | http://www.ciscopress.com/articles/article.asp?p=1338345 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00249-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935765 | 1,040 | 2.703125 | 3 |
Mapping from coast to coast, two centimeters at a time
Survey USA Day contributes precise GPS data to national database
- By William Jackson
- Mar 22, 2011
Hundreds of surveyors using Global Positioning System technology hit the fields across the country Saturday to provide detailed positioning information for the online database maintained by the National Geodetic Survey (NGS).
Survey USA Day was, first of all, a public relations event for the National Society of Professional Surveyors, said Joe Evjen, a geodesist with the NGS. But the data submitted to the Online Positioning User Service (OPUS) will contribute to the reference system that is the basis for most mapping and surveying in the country.
The National Spatial Reference System consists of about 1.5 million passive markers installed by the NGS over the last 200 years, as well as about 1,700 Continuously Operating Reference Stations that provide streams of real-time GPS data.
Geodesy is a specialized area of surveying that involves accurately measuring factors such as the shape of the Earth and its gravitational fields, which can affect positioning information. For the first 200 years of its history, field workers with NGS, now a part of the National Oceanic and Atmospheric Administration, surveyed and placed the markers for the spatial reference system.
“We don’t really do anything anymore,” Evjen said. “All of this work is being cloudsourced now.”
Since the 1970s, individual professional surveyors have placed the markers and submitted survey information to NGS. With the advent of professional-grade GPS equipment, this survey data has become more accurate, and OPUS makes it easier for surveyors to use and submit data.
GPS satellites orbit around the center of the Earth’s mass, which now has been located to within less than a centimeter. These precisely measured orbits enable the current generation of consumer GPS devices to be accurate to within a few meters on the Earth’s surface instantaneously. But more sophisticated equipment used by surveyors to take measurements over a matter of hours can provide data on horizontal location to within about two centimeters, and within three to eight centimeters in height.
The surveyor’s equipment receives precise timing signals from GPS satellites every 30 seconds. With 15 minutes of data, a surveyor can submit it to OPUS and receive an e-mail response detailing the location of the spot surveyed. If the surveyor collects four hours of data, it can be submitted for inclusion in the database, to become part of the National Spatial Reference System and used by other surveyors.
Evjen said OPUS has about 1,200 users a day and about 1 percent of them choose to submit data. These submissions not only establish new points in the spatial reference system, but they also can track changes in previously charted points.
With the accuracy of GPS data, “we’re seeing geophysical phenomena we’ve never seen before,” Evjen said. “Plate tectonics was a theory,” but the NGS now can observe the movement of continental plates, even detect them buckling upward when the moon passes over them.
Results from Saturday’s Survey USA Day have not yet been submitted, but the event's chairwoman, Debi Anderson, said surveyors were taking part in all but one or two states. She said the event was intended in part as a morale booster for the profession, “since the economy has devastated so many of us.”
Evjen said surveyors in Pennsylvania were charting the position of a number of original markers along the Mason-Dixon Line, the boundary between Pennsylvania and Maryland and the traditional dividing line between North and South. In Washington, D.C., a stone now on the National Mall near the Washington Monument, placed by Thomas Jefferson to mark the prime meridian for the United States, was surveyed. This was only the second time the position of the stone has been officially checked by NGS, Evjen said.
“We ignored it for most of 200 years,” he said, before positioning it about 10 years ago. Saturday’s survey was a revisit to check for accuracy.
Evjen himself surveyed a point in the District of Columbia’s Fort Reno, designated unofficially by the National Park Service as the highest point in the District, at 409 feet above sea level. He said he thought it was a shame the point had not been officially surveyed and added that he has wanted to do the job for years.
“This gave me the impetus to get out of bed on a Saturday and do it,” he said. He has not yet gotten the results of his measurements on the official height of the point.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:8c80e907-394a-4f02-b980-15222c894f31> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/03/22/survey-day-gps-ngs-database.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960329 | 1,005 | 2.890625 | 3 |
New research from ESET shows the depth of the problem among the UK’s schoolchildren, where 50% of kids aged 9 to 16 have had no formal internet safety training in school. They probably get little training outside of school either, since one in four parents admit they lack the confidence to initiate a conversation – believing, possibly incorrectly, that their children have a better understanding of security than they do.
Mark James, technical director at ESET UK, calls it the taboo subject of the modern world. “Online safety is the modern day ‘birds and bees’ conversation; it evokes dread and nervousness in parents who feel ill-prepared to teach their child the dos and don’ts of the online world. The research shows that two thirds of parents believe it’s primarily their role to educate children about Internet safety, above schools, the police or the Government, however their own online behaviors are questionable.”
It does indeed involve a similar cat-and-mouse dynamic, with parents watching and children hiding. According to the research, three-quarters of parents monitor their children’s online activity, with 23% doing so without their children’s knowledge. The children, however, may not be so naive about their parents' behavior. Forty percent of children clear their browsing history to keep it hidden, and almost a third have created online accounts they keep hidden from their parents.
For themselves, the majority of children (84%) believe they should be able to browse the internet without parental oversight – and that includes 70% of those aged between 9 and 16. But believing you are safe and being safe without any formal instruction are two different things.
“The Internet has brought a tremendous benefit to every aspect of daily life,” comments James, “and we want to encourage people of all ages to engage, explore, learn and experience the value it can bring – however education is fundamental to keep everyone armed with the knowledge of how to browse safely.”
That is the purpose behind the new CyberSmart Awards, “to recognize individuals and organizations across the UK that are leading initiatives to educate others about Internet safety.” | <urn:uuid:b187b8c5-543d-46e8-a0df-e6ee717b4edf> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/cybersmart-awards-2013-launched-5000-grant-on/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963682 | 446 | 3.109375 | 3 |
Remote Access Trojan (RAT)
Remote Access Trojans are programs that provide the capability to allow covert surveillance or the ability to gain unauthorized access to a victim PC. Remote Access Trojans often mimic similar behaviors of keylogger applications by allowing the automated collection of keystrokes, usernames, passwords, screenshots, browser history, emails, chat logs, etc. Remote Access Trojans differ from keyloggers in that they provide the capability for an attacker to gain unauthorized remote access to the victim machine via specially configured communication protocols which are set up upon initial infection of the victim computer. This backdoor into the victim machine can allow an attacker unfettered access, including the ability to monitor user behavior, change computer settings, browse and copy files, utilize the bandwidth (Internet connection) for possible criminal activity, access connected systems, and more.
While the full history of Remote Access Trojans is unknown, these applications have been in use for a number of years to help attackers establish a foothold onto a victim PC. Well-known and long established Remote Access Trojans include the SubSeven, Back Orifice, and Poison-Ivy applications. These programs date to the mid to late 1990s and can still be seen in use to this day.
The successful utilization of such applications led to a number of different applications being produced in the subsequent decades. As security companies become aware of the tactics being utilized by Remote Access Trojans, malware authors are continually evolving their products to try and thwart the newest detection mechanisms.
Common infection method
Remote Access Trojans can be installed in a number of methods or techniques, and will be similar to other malware infection vectors. Specially crafted email attachments, web-links, download packages, or .torrent files could be used as a mechanism for installation of the software. Targeted attacks by a motivated attacker may deceive desired targets into installing such software via social engineering tactics, or even via temporary physical access of the desired computer.
There are a large number of Remote Access Trojans. Some are more well-known than others. SubSeven, Back Orifice, ProRat, Turkojan, and Poison-Ivy are established programs. Others, such as CyberGate, DarkComet, Optix, Shark, and VorteX Rat have a smaller distribution and utilization. This is just a small number of known Remote Access Trojans, and a full list would be quite extensive, and would be continually growing.
Remote Access Trojans are covert by nature and may utilize a randomized filename/path structure to try to prevent identification of the software. Installing and running Malwarebytes Anti-Malware and Malwarebytes Anti-Exploit will help mitigate any potential infection by removing associated files and registry modifications, and/or preventing the initial infection vector from allowing the system to be compromised.
Remote Access Trojans have the potential to collect vast amounts of information against users of an infected machine. If Remote Access Trojan programs are found on a system, it should be assumed that any personal information (which has been accessed on the infected machine) has been compromised. Users should immediately update all usernames and passwords from a clean computer, and notify the appropriate administrator of the system of the potential compromise. Monitor credit reports and bank statements carefully over the following months to spot any suspicious activity to financial accounts.
As in all cases, never click email or website links from unknown locations or install software at the urging of unknown parties. Using a reputable antivirus and anti-malware solution will help to ensure Remote Access Trojans are unable to properly function, and will assist in mitigating any collection of data. Always lock public computers when not in use, and be wary of emails or telephone calls asking to install an application. | <urn:uuid:ea23b88e-6d90-4c0d-8b00-fc101ed73972> | CC-MAIN-2017-04 | https://blog.malwarebytes.com/threats/remote-access-trojan-rat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914014 | 770 | 2.578125 | 3 |
Even though mobile malware is growing rapidly, malware still remains a small proportion of the threats on mobile devices. By far the greatest mobile threat is from “legitimate” applications downloaded from the official Apple Store or from Google Play – apps that undertake “risky” behaviours, such as location tracking, identifying the user’s ID (UDID), accessing the user’s contact list, and sharing sensitive user data.
As an example, consider the app currently ranked 28th among the most popular free apps on the Google Play Store – a flashlight application called Super-Bright LED Torch. On installation, this app requires the following permissions:
– Storage: modify or delete the contents of your SD card
– Camera: take pictures and videos
– Your applications information: retrieve running apps
– Phone calls: read phone status and identity
– Network communication: full network access
– System tools: modify system settings
On the surface, all this app does is shine a light – it’s a simple flashlight. Ask yourself, why does it need all these permissions? They indicate that the app performs “risky” behaviour silently in the background while the user is unaware. Many users do not carefully consider the app’s permissions when installing – they simply accept all permissions on good faith. There have already been over 50 million downloads of this flashlight app.
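As a toy illustration of this gap between requested permissions and stated purpose (the permission names below are descriptive placeholders, not real Android permission identifiers), one could compare the two sets directly:

    # Toy sketch only: descriptive permission names, not real Android identifiers.
    requested = {
        "modify_storage", "take_pictures_and_video", "retrieve_running_apps",
        "read_phone_identity", "full_network_access", "modify_system_settings",
    }
    # A flashlight arguably only needs the camera permission, to drive the LED flash.
    needed_for_stated_purpose = {"take_pictures_and_video"}

    excessive = requested - needed_for_stated_purpose
    print(f"{len(excessive)} permissions beyond the app's stated purpose:")
    for permission in sorted(excessive):
        print(" -", permission)

Every entry the comparison flags is a capability the app can exercise silently once the user taps "accept".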
Research conducted by Appthority found that less than 0.4% of apps have malware, while 79% pose other risks indicated by their permissions. Overall, iOS apps undertake more “risky” behaviours than Android apps. Appthority found that 95% of the top free apps performed at least one “risky” behaviour. The figure for the top paid apps was still 80%.
However, carefully considering an app’s permissions before installing, still does not mean that the user is safeguarded. Mobile applications can expand their permissions after installation, through permission leaks and pileup flaws.
Permission leaks arise from the use of customised permissions, where applications grant permissions to other applications. Trend Micro has demonstrated how a malicious application can tap into the permissions of another app via customised permissions. The company has identified almost 10,000 mobile apps that are at risk of this vulnerability, including an online store, a chat application and a social networking application.
Pileup flaws are another threat. Indiana University researchers recently discovered that malicious apps are able to expand their permissions during an operating system upgrade. A user may install an app based on its limited stated permissions; however, when the user later upgrades the version of Android OS, the app may escalate its permissions without the knowledge of the user. This enables the app to engage in “risky” behaviour after its permissions have been accepted by the user during installation. This vulnerability, called a pileup flaw, utilises Android’s Package Management Service (PMS).
The mobile threat environment is certainly different from the PC threat environment. On the PC platform, threats come from malware, whereas on mobile, most of the threats are from legitimate apps downloaded from the official app stores, as indicated by permissions beyond that required for their stated purpose. Unsuspecting mobile users are granting widespread permissions, and malicious apps are using techniques to escalate their permissions. The mobile user, and their private, sensitive data, is at risk. If the permissions of an app makes you suspicious, don’t install it. | <urn:uuid:16716598-53cb-4ce7-8232-d29ddc0719ff> | CC-MAIN-2017-04 | https://dwaterson.com/2014/03/31/mobile-app-permissions-leaks-and-pileup-flaws/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00359-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927662 | 710 | 2.578125 | 3 |
The difference between the revenue and costs in the As Is (before) scenario and those in the To Be (after) scenario determines the bottom-line impact, and the overall cash flow of the project.
To calculate the opportunity, the analysis first starts with an understanding of revenue and cost projections over the analysis period. The team gathers current revenue and costs, and projections of how the revenue and costs are expected to grow without implementing the proposed solution.
It should be noted that in many cases the project may be focused only on cost savings, in which case revenue is not relevant. Therefore, only the cost portion of the equation is tallied for these cost-savings projects and analyses.
Next, the impact of the proposed project is simulated on the As Is revenue / cost projections to determine:
- What incremental investment is required
- What are the benefits – savings in IT and Business Costs, or improvements in Revenue
The simulation lets the business tally the level and duration of required investments, which add to costs initially, usually as capital investment such as hardware / software, and over time, usually in the form of incremental operating expenses such as service agreements, support and management costs.
The simulation also estimates the magnitude of benefits such as cost avoidance, savings and revenue improvements, and importantly, how quickly the benefits can be realized.
Comparing the incremental costs versus the benefits, the difference between As Is and To Be, creates a cash flow over time.
To make sure the cash flows make sense financially, and to compare the project with other projects and investment options, an ROI analysis typically summarizes the cash flow into key financial indicators (a small worked example follows the list below). These indicators are typically:
- Return on Investment (ROI) – a ratio of the net benefits divided by the total investment. A higher ratio means that the projects net benefits are much higher than the investment, and the project is often judged as less risky as a result. To calculate the value, ROI = net benefits / total investment, where net benefits are equal to total benefits – total investment.
- Net Present Value (NPV) Savings – a calculation that measures the net benefit of a project in today’s dollar terms using a discount rate to discount future cash flows. Many times a project requires up-front investment, and this is more expensive in time value of money terms compared to future benefits, so looking at the cash flows over time assures that all cash flows over time are made equivalent. Sometimes a project may have a positive cash flow, but because of a large upfront investment and a long time to accumulate benefits, may actually have a negative NPV savings. A high NPV savings indicates that the project can deliver real bottom-line impact to the organization.
- Payback Period – The payback period is the time frame needed for the project to yield a positive cumulative cash flow, which is typically specified in months. The payback period starts by comparing cumulative costs versus cumulative benefits by month from the beginning of a project until the point when the cumulative benefits exceed the cumulative costs. A quick payback on a project usually is a sign of less risk.
- Internal Rate of Return (IRR) – The IRR calculates the effective interest rate that the project generates. A higher interest rate than competitive projects means that the project has a higher return and generates more effective interest on the investment. In mathematical terms, the Internal Rate of Return is calculated as the projected discount rate that makes the Net Present Value calculation equal to zero. The method of calculation involves a series of guesses, making it the most difficult to understand, but when comparing projects, one of the most effective metrics in selecting the best comparative project. | <urn:uuid:48394d5d-6991-45aa-92b9-e8b730b3b2fb> | CC-MAIN-2017-04 | http://blog.alinean.com/2010/08/is-roi-good-way-to-make-case-for-change.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935329 | 739 | 2.84375 | 3 |
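To make these indicators concrete, here is a small worked example in Python. The investment, benefit and discount-rate figures are invented purely for illustration and do not come from any particular analysis:

    # Hypothetical figures for illustration only.
    investment = 100_000                                # up-front investment (year 0)
    net_benefits_by_year = [60_000, 70_000, 80_000]     # incremental net benefits, years 1-3
    discount_rate = 0.10                                # assumed cost of capital

    # ROI = net benefits / total investment
    net_benefit = sum(net_benefits_by_year) - investment
    roi = net_benefit / investment

    # NPV: discount each year's cash flow back to today's dollars
    cash_flows = [-investment] + net_benefits_by_year
    npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    # Payback period: first year in which cumulative cash flow turns positive
    cumulative, payback_year = -investment, None
    for year, benefit in enumerate(net_benefits_by_year, start=1):
        cumulative += benefit
        if payback_year is None and cumulative >= 0:
            payback_year = year

    # IRR: the discount rate that drives NPV to zero, found here by bisection
    def npv_at(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    low, high = 0.0, 10.0
    for _ in range(100):
        mid = (low + high) / 2
        low, high = (mid, high) if npv_at(mid) > 0 else (low, mid)
    irr = (low + high) / 2

    print(f"ROI {roi:.0%}, NPV ${npv:,.0f}, payback in year {payback_year}, IRR {irr:.1%}")

For these made-up numbers the script reports an ROI of 110%, an NPV of roughly $72,500, payback during year two and an IRR of about 46%; a real analysis would of course use monthly cash flows and the organization's actual cost of capital.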
Kaspersky Lab, a leading developer of secure content management solutions, has published “‘Instant’ Threats” by Denis Maslennikov and Boris Yampolsky, two of the company’s virus analysts. The article analyzes the spread of malware via instant messengers.
Instant messaging programs are very attractive to malicious users of all kinds, and because of this the problem of malware distribution via IM clients is a serious one. New versions of IM clients contain as yet unknown vulnerabilities, which can be identified first by hackers and only afterwards by program developers. Such situations can easily lead to mass epidemics. Some users are also extremely tired of getting unwanted messages (IM spam).
The article uses the example of ICQ – a popular IM client in many countries – to demonstrate the most widespread types of attack used by cybercriminals against instant messengers.
The widespread theft of ICQ numbers using various malicious programs – primarily, the Trojan PSW.Win32.LdPinch family – has posed a threat to users for several years now. LdPinch not only steals passwords to ICQ and other IM clients but also to email accounts, various FTP programs, online games, etc.
ICQ is used most commonly to spread the following malware: IM worms that use the client as a base for self-propagation; Trojan programs for stealing passwords, including those for ICQ numbers (in the vast majority of cases, it is Trojan-PSW.Win32.LdPinch); and malicious programs created to fraudulently obtain money from users (e.g., Hoax.Win32.*.*).
While IM worms usually spread with little or no help from the user, in the other cases cybercriminals use a variety of social engineering ploys to provoke a potential victim into clicking on a link and, if the link downloads a malicious program, opening the file.
Sometimes the vulnerabilities that are exploited to carry out such attacks may be present in the instant messaging programs themselves. In many cases these vulnerabilities can lead to buffer overflow and the execution of arbitrary code on a system, or enable remote access to a computer without the knowledge or consent of the owner.
The number of unwanted messages received by a user in any given period of time depends on the ICQ number. Users with six-digit UINs receive an average of 15 to 20 unwanted messages every hour. Users with unremarkable nine-digit numbers receive an average of 10 to 14 such messages every day, while users with 'attractive' numbers get 2 to 2.5 times more spam.
Currently, there are no methods or solutions designed specifically to protect IM clients. However, observing the simple rules of ‘computer hygiene’, and using a well-configured anti-spam bot combined with a healthy dose of common sense can help users enjoy worry-free chat via the Internet.
The full version of the article “‘Instant’ threats” can be found at www.viruslist.com. | <urn:uuid:4fb3a4c3-bfa1-48ee-9a12-601d5e67ccb1> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2008/Kaspersky_Lab_announces_the_publication_of_the_analytical_article_8220_8216_Instant_8217_Threats_8221_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923266 | 623 | 2.703125 | 3 |
Median household incomes are down for the second straight year, according to the U.S. Census Bureau, but technology is the saving grace in many areas of the nation. Cities with booming technology industries continue to do well, while those that rely on retail stores and agriculture are languishing.
Poor areas continue to become poorer as low-wage jobs accumulate in higher concentration, explained Elizabeth Kneebone, a fellow at the Brookings Institution. And overall, the U.S. made a net loss, she said. “That is something we’ve seen in this recovery. Over the decade, the kinds of jobs we are growing are in lower-skill, low-wage industries like hospitality, food service and retail. Those are jobs that don’t tend to pay the kinds of wages that the jobs we lost did.”
While even some of the wealthiest cities show falling numbers, census data from 2011 found that the cities with the highest incomes tended to be technology centers. Of the 10 richest cities, nine have a larger proportion of workers in the professional, scientific and management sectors than the national average of 10.7 percent.
The top 10 richest U.S. metropolitan regions by median income:
For a more detailed analysis of the richest cities in the nation, visit 24/7 Wall St. | <urn:uuid:a786567c-af60-4841-b9e9-53f0c2709dcd> | CC-MAIN-2017-04 | http://www.govtech.com/education/Americas-Richest-Cities-Tech.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942175 | 269 | 2.6875 | 3 |
Researchers behind the Intelligence Advanced Research Projects Activity (IARPA) want to gather computer scientists, engineers and physicists to define the challenge of “encoding imperfect physical qubits into a logical qubit that protects against gate errors and damaging environmental influences.”
In the quantum computing realm, computers use quantum bits, or qubits, instead of the usual bits representing 1s or 0s. Ultimately, quantum computing efforts should result in super-fast, super-secure computers, the experts say. [For a good article on why quantum computing can be so damn confusing and why its development is critical, go here.]
According to IARPA, while quantum information processing has witnessed tremendous advances in high-fidelity qubit operations and an increase in the size and complexity of controlled quantum computing systems, “it still suffers from physical-qubit gate and measurement fidelities that fall short of desired thresholds, multi-qubit systems whose overall performance is inferior to that of isolated qubits, and non-extensible architectures—all of which hinder their path toward fault-tolerance.”
That’s why LogiQ, a program IARPA will detail next month, will build a logical qubit from physical qubits and push for higher fidelity in multi-qubit operations, which the agency hopes will lead to robust quantum processors.
“In order to capture the full range of issues that impact the development of a larger quantum processor, it is important to focus on improving all of these aspects concurrently. Moreover, it is essential to develop relevant error budgets and statistics that can robustly estimate system performance from isolated performance metrics,” IARPA stated.
In trying to define LogiQ, IARPA listed a number of challenges the program will address. Among them LogiQ aims to:
- Experimentally address the true complexity and requirements for encoding quantum information into a logical subspace;
- Ascertain the complete error environment in a system of over ten connected physical qubits;
- Characterize temporal and spatial correlations affecting system performance;
- Choreograph and optimize open loop control of individual qubits throughout gate operations;
- Implement the closed loop feedback needed to compensate for errors and determine the required level of synchronicity and operational scheduling;
- Characterize crosstalk and determine the level of mitigation required for high- fidelity logical-qubit operations;
- Establish the logical-qubit requirements of classical control infrastructure, latencies, speed and fidelity of measurement, etc.;
- Develop classical controllers and measurement devices to manipulate physical qubits appropriate for logical-qubit processing;
- Include extensibility in the logical qubit design to provide flexibility for future, more complex systems.
IARPA will host a Proposers' Day on May 19 at the University of Maryland Stamp Student Union if you are interested. For more details look here.
Chromebooks, iPads, Edmodo, network printers, USB drives, Androids, cloud storage, Prezi, virtual reality, LCD projectors and apps.
The vocabulary words above that were at one time nonexistent now fill the conversations of educators and students today. Technology is now at the heart of the classroom experience, but it can be the bane of a teacher’s existence. If he or she needs to capture students’ attention and ensure they have mastered core concepts, then Chromebooks and apps can get in the way. Teaching can be a frustrating and challenging occupation all by itself. Adding in all of those gadgets and gizmos can be totally overwhelming at times.
Despite the headaches, there is no denying that technology has brought some wonderful positives into the classroom. So raise your SMART board remotes, smile at your dual monitors and give three cheers for these reasons to be thankful for technology in the classroom:
Thank you technology… for new teaching tools.
Teaching tools have come such a long way over the years. Imagine trying to illustrate a scientific experiment with only chalk and a chalkboard or teaching a new concept without an updated textbook. Remember filmstrips, mimeographs and slide projectors?
Teachers used to have to wait years to get new tools in their classrooms. But now that we have the Internet with free applications that download in seconds, we can change our lessons and activities on the fly. Have a kid that just isn’t getting it? Just Google a new tool to teach in a different way.
Check out “The Evolution of Classroom Technology” by Edudemic here for a little teaching nostalgia.
Thank you technology… for time savings.
Speaking of changing up a lesson plan: With Internet tech at their fingertips, teachers can save so much time planning by looking for other lessons online. Technological resources like screen sharing between student and teacher computers can get students to the right place in an online textbook quickly and easily.
Cloud storage resources, such as Google Drive, allow teachers to access files ASAP instead of searching through a file cabinet for an old lesson. Need a quick video to help illustrate a concept? Youtube it! For years, teachers had to present the same lessons with the same activities over and over because it took so much time to change anything. Now, we have a zillion resources at our fingertips to find new ideas or concepts to make our own.
Thank you technology… for better communication.
Before we had technology that informed parents about their kids’ grades, the message was delivered in a note sent home in a backpack. Inevitably, a mischievous student would “lose” the note and keep the parent uninformed of what was going on in class. Now, there is email. A teacher can send a note to a parent any time during the day and inform him or her of everything going on in class.
Because of technology, teachers can now communicate with students more easily, too. Apps like Edmodo allow teachers and students to connect on the Internet from anywhere. The Remind app lets teachers send safe text message reminders to students about tests, permission slips or upcoming events. Network monitoring software allows teachers and students to instant message each other through their laptops and desktop computers in classrooms so that questions can be asked privately without disrupting class. Because of technology, students and teachers are more connected and communicative than ever.
Thank you technology… for fun!
As stated in the introduction, technology in classrooms can be frustrating. But it also can be so much fun! How cool is it that we can project a live view of the Eiffel Tower onto a huge screen in our classrooms because of Google Earth? We can now have a lively game of educational Jeopardy by using smart phones as clickers. We can give kids a virtual reality experience of climbing Mount Everest.
Thanks to technology, we are able to show students how vast and amazing the world is. We can open their eyes to new experiences and give them the tools to educate themselves. We can instill the excitement of discovery by providing fun ways of learning. Thanks technology, for making learning FUN!
This Thanksgiving, Impero Software would like to thank all of our clients for believing in our software solution and for supporting our business. We appreciate you. | <urn:uuid:cd8fd97d-a76a-49ae-8f59-596dc8936764> | CC-MAIN-2017-04 | https://www.imperosoftware.com/4-reasons-to-be-thankful-for-technology-in-the-classroom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943654 | 885 | 3.03125 | 3 |
Interesting post from the curator of the Computer History Museum, who says that one of the historical items he'd most like to get his hands on for the museum is a computerized, pseudo-guitar device called the SynthAxe.
A Wikipedia entry defines the device, which really looks like some sort of deformed guitar as "a fretted, guitar-like MIDI controller that was created by Bill Aitken, Mike Dixon, and Tony Sedivy and manufactured in England in the middle to late 1980s. It is a musical instrument that uses electronic synthesizers to produce sound and is controlled through the use of an arm resembling the neck of a guitar in form and in use. Its name comes from the words synthesizer and axe, a slang term meaning guitar. The SynthAxe itself has no internal sound source; it is purely a controller and needs synthesizers to produce sound. The neck of the instrument is angled upwards from the body, and there are two independent sets of strings."
Computer History Museum curator Chris Garcia wrote: "Many musicians recorded using a SynthAxe, though only about 100 were made. Part of the reason for that was the price tag: ten thousand UK pounds; about thirteen thousand US dollars at the time. That put the instrument outside the reach of all the but the most well-heeled musicians. In addition to the hefty price tag, they were hard for even experienced players to play, and they were delicate. While the SynthAxe did not have a major impact on the mainstream music industry, it was a fascinating sidebar in the history of MIDI and electronic music, and an excellent example of the effect of a musician working with a technologist."
A good video demonstrating the wide range of sounds the SynthAxe could produce can be seen here:
So if you have one sitting around you might contact Garcia.
Trusteer’s research has shown that vulnerable media players are constantly targeted by malicious actors. Since in most environments media players exist on users’ desktops for their own personal use, IT and security administrators ignore these applications and the content files they use. After all, you want to keep your employees productive and happy, and allow them to listen to their harmless music while they work. However, because these applications are not controlled, and users are not in a rush to patch these applications, most installations are vulnerable to exploits.
A Media Player is a software program designed to play multimedia content as it streams in from a website, or from local storage or other resources. Some employees use the media player that arrives with the operating system – like Windows Media Player – while others prefer to download a different media player and install it on their workstation. But both the OS-provided players and the downloaded players contain vulnerabilities that can be exploited to deliver malware and infect the user machine.
According to the National Vulnerabilities Database (NVD), over 1,200 vulnerabilities were discovered in media players since 2000. Most of these vulnerabilities were discovered in popular media players like Apple Quicktime, iTunes, RealPlayer and Adobe Shockwave.
Media Players are popular yet vulnerable applications, and can be found on many user endpoints. Because they are designed to process and play files that originate from an external source, they become a top target for exploit attacks. By developing weaponized media content, i.e. an audio or video file that contains an exploit that takes advantage of a media player vulnerability, an attacker can effectively deliver malware to the user’s machine.
All that is left for the attacker is to send the weaponized file to the target user, or convince a target user to view the content from a compromised website using phishing and social engineering schemes. Typical examples include “promotional videos”, links to “free” song downloads and more.
Exploits targeting media players exist in the wild
This is not a theoretical threat. Over the past few years we have seen exploits targeting both known vulnerabilities and unknown, zero-day vulnerabilities in media players. It is important to note that many exploits target known vulnerabilities for which a patch exists. As long as the patch is not deployed to mitigate the vulnerability and no other controls are implemented to prevent the exploit, the media player remains vulnerable to exploits and drive-by download attacks.
For example, here is a story about a drive-by download attack that exploited a known critical vulnerability in Windows Media Player:
On January 10th, 2012, Microsoft released a security fix to address the MIDI Remote Code Execution Vulnerability (CVE-2012-0003) in Windows Media Player as part of its monthly patch cycle. Microsoft explained at the time that “An attacker who successfully exploited this vulnerability could take complete control of an affected system.”
A couple of weeks later, security researchers found an active drive-by download attack that exploited the known vulnerability. The attack used a malicious HTML page to load the malformed MIDI file as an embedded object for the Windows Media Player browser plug-in. If successful, the exploit silently downloaded a Remote Access Trojan (RAT) on the user’s machine, without the user’s knowledge.
Author: Dana Tamir, director of enterprise security at Trusteer. | <urn:uuid:0e22dbc5-ddb0-4516-a7f3-f930c742946c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/03/17/exploiting-vulnerabilities-in-media-players-to-spread-advanced-malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946827 | 683 | 2.65625 | 3 |
Check Point found a critical vulnerability in MediaWiki, a popular open source Web platform used to create and maintain ‘wiki’ Web sites. Sites powered by MediaWiki include Wikipedia.org, which has over 94 million unique visitors per month.
Check Point researchers discovered that this critical vulnerability left MediaWiki (version 1.8 onwards) exposed to remote code execution (RCE), where an attacker can gain complete control of the vulnerable web server.
This vulnerability has been assigned CVE-2014-1610 by the MITRE organization. In order for a site to be vulnerable, a specific non-default setting must be enabled in the MediaWiki settings. While the exact extent of affected organizations is unknown, this vulnerability was confirmed to impact some of the largest known MediaWiki deployments in the world.
They alerted the WikiMedia Foundation about the vulnerability, and after verification, the Foundation issued an update and patch to the MediaWiki software.
Prior to the availability of a patch for this vulnerability, an attacker could have injected malware infection code into every page in Wikipedia.org, as well as into any other internal or Web-facing wiki site running on MediaWiki with the affected settings.
Since 2006, this is only the third RCE vulnerability found in the MediaWiki platform.
“It only takes a single vulnerability on a widely adopted platform for a hacker to infiltrate and wreak widespread damage. The Check Point Vulnerability Research Group focuses on finding these security exposures and deploying the necessary real-time protections to secure the Internet. We’re pleased that the MediaWiki platform is now protected against attacks on this vulnerability, which would have posed great security risk for millions of daily ‘wiki’ site users,” said Dorit Dor, vice president of products at Check Point Software Technologies. | <urn:uuid:6cab3495-7f41-47ee-9fe7-a374d154d9b2> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/01/30/check-point-discovers-critical-vulnerability-in-mediawiki/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00460-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.866534 | 365 | 2.734375 | 3 |
Any questions? Ask Google. In this time of authority crisis, it’s the only place left where people go to seek answers. It is as if an unimaginable amount of knowledge were hidden behind a magical gate, and the only key to it were placed right next to the colorful “g” letter in the corner of our web browsers. But, as in most cases, the Internet has a second depth, and it’s deeper than we have ever imagined.
Last month, I read an interesting article about Ian Clarke and his invention. This modest student of Artificial Intelligence and Computer Science at the University of Edinburgh devised a smart way for web users to stay completely anonymous. The idea was that all shared data would be encoded in such a way that no user could ever tell from whom they received information or to whom they sent it. That’s how Freenet was invented: a dark, practically uncontrollable division of the Internet. What’s interesting is that it is now far bigger than its original prototype, and access to it is protected by mysterious meta-browsers.
Of course, it is clear what kind of data can be found on Freenet. All kinds of agents, weirdos and criminals exchange such interesting documents as, for instance, “The Handbook Of Terrorism: The Practical Guide For Explosive Materials” or “The Companion Of Animal Rights Defender: How To Deal With Fire”:) But it’s not the content that really drew my attention. The fact that really shocked me was the size of this phenomenon.
It turns out that, using standard web browsers, we only get access to a tiny part of the web’s resources. The characteristic window next to the “g” letter mentioned at the beginning of this article skims only the surface of the vast sea of stored information. Some sources say that, using Google, we see only 0.003% of all Internet data!
So how big is the Internet? Is it possible to measure? If so, is there a mind that could imagine its real size? When you consider those questions, an analogy to the Universe comes easily to mind. Whenever anyone has tried to draw its borders, the attempt has ended in nothing. It seems that humans, in their race to rule the world, have created another parallel reality that can no longer be controlled.
Photo: OLPC XO-1
One Laptop per Child (OLPC) is working on a new version of its XO laptop. The XO-2 will incorporate an enhanced touch screen and remain affordable for volume purchase by developing nations. XO laptops are sufficiently inexpensive to provide every child in the world access to knowledge and modern forms of education, which is the goal of OLPC. These XO laptops are rugged, open source, and so energy efficient that they can be powered by a child manually. Mesh networking can give many machines Internet access from one connection.
"The delivery of the first generation XO laptop has sparked tremendous global interest in the project and provided valuable input on how to make the XO laptop an even better learning tool moving forward,"said Nicholas Negroponte, founder and chairman of One Laptop per Child."Based on feedback from governments, educators and most important, from the children themselves, we are aggressively working to lower the cost, power and size of the XO laptop so that it is more affordable and useable by the world's poorest children," Negroponte said.
"One Laptop per Child and the XO laptop are crucial to the fulfillment of the proposed UN Ninth Millennium Goal: to ensure that every child between the ages of 6 and 12 has immediate access to a personal laptop computer by 2015," said Nirj Deva, Member of the European Parliament. "It's only through access to education that young people will be able to develop the skills necessary to compete globally and to develop the solutions required to break the cycles of poverty, disease and malnutrition. Learning unites the child with the world, binds thevillage into a community, and joins that community to the globalvillage."
Key goals for the XO-2 include reducing the cost, power consumption and size of the machine. The original target price of the XO laptop, set in early 2005, was $100; and although that target has not yet been met (it is now at $188), it is clear that OLPC must aim for an even lower target price of $75. New developments in display, processor and other hardware and software technologies will make it possible to achieve that low cost in the future.
Another goal of the next generation XO laptop is to reduce its power consumption. While the first generation XO laptop already requires just one-tenth (2-4 watts versus 20-40 watts) of the electrical power necessary to run a standard laptop, the XO-2 will reduce power consumption even further to 1 watt. This is particularly important for children in remote and rural environments where electricity is scarce or non-existent. Lowering the power consumption will reduce the amount of time required for children to generate power themselves via a hand crank or other manual mechanisms.
The XO-2 will also be about half the size of the current model and approximate the size of a book. The new design will make the XO laptop lighter and easier for children to carry with them to and from school or wherever they go. The XO-2 will continue to be in a green and white case and sport the XO logo in a multitude of colors that allow children to personalize the laptop.
The new laptop will also have dual-touch sensitive displays that will enhance the e-book experience, with a dual-mode display similar to the current XO laptop. The design provides a right and left page in vertical format, a hinged laptop in horizontal format, and a flat two-screen wide continuous surface that can be used in tablet mode. Younger children will be able to use simple keyboards to get going, while older children will be able to switch between keyboards customized for applications as well as for multiple languages.
The dual-touch display is being designed by Pixel Qi, which was founded in early 2008 by Mary Lou Jepsen, former chief technology officer of One Laptop per Child and a leading expert on display technology.
The XO-2 is planned for delivery in 2010. An XO-1.5 will be released in the spring of 2009 with the same design as the first generation but with fewer physical parts and at a lower cost than XO-1. | <urn:uuid:23da4be5-92bd-49f2-b6b6-837a12d0feb4> | CC-MAIN-2017-04 | http://www.govtech.com/products/One-Laptop-Per-Child-Planning-Next-Generation_of_its_PC.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942536 | 855 | 2.9375 | 3 |
With the rate of innovation, it’s challenging to keep up with emerging technology. Although the price of 3D printers is coming down, most households don’t have one yet. So it was surprising to come across news that makes it appear almost as if 3D printing is old news. Meanwhile, DARPA wants self-destructing tech, a bit like the messages in Mission Impossible. A potential way to go about this might be shapeshifting 4D-printed technology.
DARPA -- Your device will poof in 5 seconds
When you purchase electronics, you hope it lasts longer than the warranty. In fact, you probably hope it lasts until it’s practically obsolete and has been replaced with a newer, faster and all-around better product. What if electronics simply disappeared when no longer needed . . . as in poof, dissolving into the environment? Those wild DARPA scientists created a Vanishing Programmable Resources (VAPR) program earlier this year “with the aim of revolutionizing the state of the art in transient electronics or electronics capable of dissolving into the environment around them.”
In a post called, “This web feature will disappear in 5 seconds,” DARPA explained that electronics used on the battlefield are often scattered around and could possibly be captured and reverse-engineered by the enemy "to compromise DoD’s strategic technological advantage." The electronics still need to be rugged and maintain functionality, but “when triggered, be able to degrade partially or completely into their surroundings. Once triggered to dissolve, these electronics would be useless to any enemy who might come across them.”
DARPA program manager Alicia Jackson said, “The commercial off-the-shelf, or COTS, electronics made for everyday purchases are durable and last nearly forever. DARPA is looking for a way to make electronics that last precisely as long as they are needed. The breakdown of such devices could be triggered by a signal sent from command or any number of possible environmental conditions, such as temperature.”
How could anyone pull off DARPA’s James Bond-esque self-destructing tech that would more or less vaporize when triggered? The Self-Assembly Lab at MIT may be heading in that direction with shapeshifting technology, otherwise called 4D printing.
3D printing is old news: Welcome to 4D printing & shapeshifting tech
3D printing is exciting and has become increasingly more sophisticated, but what if a 3D-printed object could morph into another object? Computer scientist, MIT Department of Architecture faculty member and TED senior fellow Skylar Tibbits is working in collaboration with Stratasys Inc. on 3D printing with a twist; the goal is to make it more adaptive and more responsive so it can change from one thing into another thing.
During a TED talk called “The Emergence of 4D Printing," Tibbits stated, “The idea behind 4D printing is that you take multi-material 3D printing -- so you can deposit multiple materials -- and you add a new capability, which is transformation, that right off the bed, the parts can transform from one shape to another shape directly on their own. And this is like robotics without wires or motors. So you completely print this part, and it can transform into something else.”
A 3D-printed object would have a “program embedded directly into the materials” that would allow it “to go from one state to another.” Describing a potential 4D printed object in a CNN video, Tibbits said it has invented within it a “potential energy, that activation, so it can transform on its own.” He gave potential environmental activation examples of water, heat, vibration, sound or pressure. Tibbits suggested that 4D printing could have practical applications in fashion, or perhaps in sports equipment, and in “things that need to respond as the conditions are changing.”
If you can’t take eight and a half minutes to watch the TED video, then the following links will jump you directly to the cool demonstrations. In the first demonstration, Tibbits showed “a single strand dipped in water that completely self-folds on its own into the letters M I T.” In the second, a single strand self-folds into a three-dimensional cube without human interaction.
He said, “We think this is the first time that a program and transformation has been embedded directly into the materials themselves. And it also might just be the manufacturing technique that allows us to produce more adaptive infrastructure in the future.” He also suggested that a potential use of self-assembly 4D tech in space might include a highly functional system that can transform into another highly functional system.
In an example about more adaptive infrastructure in the future, Tibbits stated, “Let's go back to infrastructure. In infrastructure, we're working with a company out of Boston called Geosyntec. And we're developing a new paradigm for piping. Imagine if water pipes could expand or contract to change capacity or change flow rate, or maybe even undulate like peristaltics to move the water themselves. So this isn't expensive pumps or valves. This is a completely programmable and adaptive pipe on its own.”
That could be great, but our infrastructure is currently a bit of a mess and very hackable . . . the same as many embedded medical devices. Let's hope that if 4D printing is used in infrastructure that it would be more secure.
Detect heart-rate via webcam and app
Lastly, while less sensational, this next one is more attainable for most geeks. If you have ever wanted to detect your heart-rate using a webcam and a Python app, then you are in luck. The project is on GitHub and interested parties might want to start with the README.
The Changelog states:
webcam-pulse-detector is a cross-platform Python application that can detect a person’s heart-rate using their computer’s webcam. I could write 1,000 words about it, or just show you this: | <urn:uuid:107ee2ce-f5f4-4006-a516-97ea56ef65ca> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2475113/emerging-technology/cool--darpa-self-destructing-tech---4d-printed-tech-that--shapeshifts-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00169-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955995 | 1,272 | 2.8125 | 3 |
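The repository contains the real implementation; purely as a sketch of the underlying idea (and not the project's actual code), a heart rate can be estimated by tracking the average brightness of a patch of facial skin frame by frame and picking the dominant frequency in the plausible heart-rate band:

    import numpy as np

    def estimate_bpm(brightness_per_frame, fps):
        """Rough heart-rate estimate from mean skin brightness sampled once per frame."""
        signal = np.asarray(brightness_per_frame, dtype=float)
        signal -= signal.mean()                     # remove the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        band = (freqs >= 0.75) & (freqs <= 4.0)     # roughly 45-240 beats per minute
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0

    # Synthetic check: a 1.2 Hz (72 bpm) pulse buried in noise, sampled at 30 frames/s
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    fake = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
    print(f"Estimated heart rate: {estimate_bpm(fake, fps):.0f} bpm")

With the synthetic input the estimate comes out at 72 bpm; the real application's complexity lies mostly in locating the forehead with the webcam and keeping the frame rate steady.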
A research group at IBM has come up with a prototype parallel storage system that they claim is an order of magnitude faster than anything demonstrated before. Using a souped-up version of IBM’s General Parallel File System (GPFS) and a set of Violin Memory’s solid-state storage arrays, the system was able to scan 10 billion files in 43 minutes. They say that’s 37 times faster than the last time IBM topped out GPFS performance in 2007.
The idea behind the 10-billion-file scan is to demonstrate that GPFS can keep pace with the enormous flood of data that organizations are amassing. According to IDC, there will be 60 exabytes of digitized data this year and these data stores are expected to increase 60 percent per year. In a nutshell, we’re heading for a zettabyte world.
But it’s not just the aggregate size of storage. Individual businesses and government organizations will soon be expected to actively manage 10 to 100 billion files in a single system. The HPCS DARPA program requires a trillion files in a single system.
That’s certainly beyond the capabilities of storage systems today. Even parallel file systems designed for extreme scalability, like GPFS and Lustre, currently top out at about 2 billion files. But the limit is not storage capacity; it’s performance.
While hard drive capacity is increasing at about 25 to 40 percent per year, performance is more in the range of 5 to 10 percent. That’s a problem for all types of storage I/O, but especially for operations on metadata. Metadata is the information that describes file attributes, like name, size, data type, permissions, etc. This information, while small in size, has to be accessed often and quickly — basically every time you do something with a file. When you have billions of files being actively managed, the metadata becomes a choke point.
Typically metadata itself doesn’t require lots of capacity. To store the attributes for 10 billion files, you only need four 2TB disks; they just aren’t fast enough for this level of metadata processing. To get the needed I/O bandwidth, you’d actually need around 200 disk drives. (According to IBM, their 2007 scanning demo of 1 billion files under GPFS required 20 drives.) Using lots of disks to aggregate I/O for metadata is a rather inefficient approach, considering the amount of power, cooling, floor space and system administration associated with disk arrays.
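A back-of-envelope calculation shows why so many spindles would be needed; the per-drive throughput figure below is my own assumption for a small, scattered metadata workload, not a number from IBM:

    # Rough estimate only; the effective per-drive throughput is an assumption.
    files = 10_000_000_000              # 10 billion files
    metadata_bytes = 6.5e12             # ~6.5 TB of metadata, as in the demo
    scan_seconds = 43 * 60              # the 43-minute scan

    bytes_per_file = metadata_bytes / files          # ~650 bytes of metadata per file
    required_bw = metadata_bytes / scan_seconds      # ~2.5 GB/s sustained

    assumed_drive_bw = 13e6             # ~13 MB/s effective on small, scattered reads
    drives_needed = required_bw / assumed_drive_bw

    print(f"{bytes_per_file:.0f} B/file, {required_bw / 1e9:.1f} GB/s, ~{drives_needed:.0f} drives")

That lands in the same ballpark as the roughly 200 drives cited above, while the four flash arrays described below cover the required bandwidth with room to spare.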
The obvious solution is solid-state storage, and that is indeed what the IBM researchers used for their demo this week. In this case, they used hardware from Violin Memory, a maker of flash storage arrays. According to the IBM researchers, the Violin gear provided the attributes needed for the extreme levels of file scan performance: high bandwidth; low I/O access time, with good transaction rate at medium sized blocks; sustained performance with mixing different I/O access patterns; multiple access paths to shared storage, and reliable data protection in case of NAND failure.
When I asked the IBM team why they opted for Violin in preference to other flash memory offerings, they told me the Violin storage met all of these requirements as well or better than any other SSD approach they had seen. “For example, SSDs on a PCI-e card will not address the high availability requirement unless it replicates with another device,” they said. “This will effectively increase the solution cost. Many SSDs we sampled and evaluated do not sustain performance when mixing different I/O access patterns.”
The storage setup for the demo consisted of four Violin Memory 3205 arrays, with a total raw capacity of 10 TB (7.2 TB usable), and aggregate I/O bandwidth of 5 GB/second. The four arrays can deliver on the order of a million IOPS with 4K blocks, with a typical write latency of 20us and read latency of 90us.
Driving the storage were ten IBM 3650 M2 dual-socket x86 servers, each with 32 GB of memory. The 3650 cluster was connected with InfiniBand, with the Violin boxes hooked to the servers via PCIe.
All 6.5 TB of metadata for the 10 billion files was mapped to the four 3U Violin arrays. No disk drives were required since, for demonstration purposes, the files themselves contained no data. To provide a more or less typical file system environment, the files were spread out across 10 million directories. Scaled up to 100 billion files, the researchers estimated that just half a rack of flash storage arrays would be needed for the metadata, compared to five to ten racks of disks required for the same performance.
It’s noteworthy that the researchers selected Violin gear for this particular demo, especially considering that IBM is currently shipping Fusion-io PCI-based flash drives with its System X servers. Even though the work described here was just a research project, with no timetable for commercialization, it’s not too big a stretch to imagine future IBM systems with Violin technology folded in. The larger lesson, though, is that solid-state storage is likely to figure prominently in future storage systems, IBM or otherwise, when billions of files are in the mix.
They’re calling it ‘the missing link’ of the Internet of Things. This is HitchHike, the first self-sufficient WiFi system to enable the transmission of data using just microwatts of energy.
While many futurists have predicted a world not far from now in which every device – across industry, smart cities and consumer technology – taps into and communicates via an interconnected Internet of Things, putting that vision in place is easier said than done.
To bring about this connected, automated future, IoT developers and engineers are going to need to find a way to power it all. Low-power solutions such as LPWA are making headway and working towards solving the problem, and now a group of Stanford engineers have taken the concept one step further.
The team at Stanford was led by Sachin Katti, an associate professor of electrical engineering and computer science, and Pengyu Zhang, a postdoctoral researcher. The group has developed HitchHike, a tiny, ultra-low-energy wireless radio system.
The video below features an off-the-shelf Intel WiFi transmitter ( the black box on right) and an Apple MacBook Pro on the left, acting as the WiFi receiver. In the middle is a prototype of the HitchHike device. It’s connected to a heart rate sensor. HitchHike samples the heart rate data and piggybacks it on the WiFi signal from the Intel WiFi router. The Apple laptop receives, extracts and displays the piggybacked signal in real-time.
“HitchHike is the first self-sufficient WiFi system that enables data transmission using just microwatts of energy – almost zero,” Zhang said. “Better yet, it can be used as-is with existing WiFi without modification or additional equipment. You can use it right now with a cell phone and your off-the-shelf WiFi router.”
Hitchhike could boost the adoption of IoT technology
According to a research paper presented at the Association for Computing Machinery’s SenSys Conference, HitchHike technology is so low-power that it could be driven for a decade or more by a tiny battery. Devices could even harness energy from radio waves to power themselves indefinitely.
The team’s HitchHike transmitters have a range of up to 50 metres and can transmit around 300 kilobits of date per second.
“HitchHike could lead to widespread adoption in the Internet of Things,” Katti said. “Sensors could be deployed anywhere we can put a coin battery that has existing WiFi. The technology could potentially even operate without batteries. That would be a big development in this field.”
The Stanford team’s technology has been named ‘HitchHike’ because it essentially jumps on board incoming radio waves, translates incoming signals and retransmits its own data on a different WiFi channel. So HitchHike projects WiFi signals out with a slightly different message to when they came in, a type of signal called backscatter.
To send out a meaningful message and communicate with other devices, the HitchHike team has developed a system they call Codeword Translation.
“HitchHike opens the doors for widespread deployment of low-power WiFi communication using widely available WiFi infrastructure and, for the first time, truly empower the Internet of Things,” Zhang said. | <urn:uuid:653cc504-66ff-4c1c-ba5c-3390264c9b4e> | CC-MAIN-2017-04 | https://internetofbusiness.com/stanford-low-power-hitchhike/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00287-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929991 | 702 | 3.171875 | 3 |
Researchers at Tohoku University in Sendai, north-eastern Japan, announced on Wednesday that they had broken a batch of performance records on their NEC SX-9 supercomputer, as measured on the HPC Challenge Benchmark test. Hiroaki Kobayashi, director the university’s Cyberscience Center, said the SX-9 had achieved the highest marks ever in 19 of 28 areas the test evaluates in computer processing, memory bandwidth and networking bandwidth. The scores were matched against those previously achieved on the same independent benchmark test by other leading supercomputers, including IBM’s Blue Gene/L, Cray’s XT3/4 and SGI’s Altix ICE, with the SX-9 coming out on top 64 percent of the time.
The news comes at a good time for NEC. The Tokyo-based manufacturer of vector-based supercomputers is battling in a market that has been moving away from its expensive high-performance vector processing models to systems that use more modestly priced commodity-type superscalar CPUs. These cheaper chips can be coupled tightly together or used in clusters of computers to achieve similar or better results than vector competitors — at least in some areas of supercomputing.
At Tohoku University, however, a stronghold of vector computing since it installed its first SX-1 in 1985, Director Kobayashi argues that vector computing is essential for certain types of applications and will only increase in importance as advances are made in parallel processing.
“In the future, data parallel processing will become more important in high performance computing,” says Kobayashi. “And vector processing provides a very efficient model for it.” This is why, he adds, Intel, which has long provided short vector SIMD code extensions for its x86 architecture, is employing wider vector operations in its upcoming Larrabee graphics processing chip. “Regarding parallel processing, at the instruction-set level, vector instruction sets are the key to future processors, no matter what kind of micro-architecture is used,” says Kobayashi.”
In addition, he emphasizes that for the kind of programs that the 1,500 paying supercomputer users of the University’s Cyberscience Center want to run, vector is still king. Most of these users are involved in government and academic research programs in areas like aerospace, environmental simulations, structural analysis and nanotechnology. “They want to conduct very large simulations, so are looking for an efficient handling mechanism to process extremely large amounts of data in a single operation,” says Kobayashi. “Vector processing is best suited to this kind of application.”
The SX-9 employs a single-chip vector processor capable of reaching 102 GFLOPS. Up to 16 CPUs sharing 1 TB of memory can be incorporated on a single node, combing to produce 1.6 TFLOPS of peak performance. The Tohoku University SX-9 set-up, which began operations this April, consists of 16 nodes, each of 16 CPUs, producing an overall peak performance of 26 TFLOPS. On a sustained performance bases, the Cyberscience Center’s test results show a single SX-9 CPU outperforms that of the previous SX-8R by between four to eight times, depending on the application.
Much of the new CPU’s improved performance can be accounted for by the addition of an arithmetic unit and raising the number of vector pipelines — all integrated on a single chip that is the first to surpass 100 GFLOPS.
But Kobayashi notes that a new feature of the SX-9, the inclusion of an assignable data buffer or ADB, has also helped boost performance significantly. “ADB is software-controllable cache memory,” he explains. “It lets the user assign the data to be cached, which prevents it from being evicted.”
In a simulation used to detect the presence of land mines with electromagnetic waves, for instance, performance increased by 20 percent when ADB was used. In another simulation, which tracked the movement of tectonic plates (the cause of earthquakes), the use of ADB improved performance by 75 percent, while a simulation involving the physics of plasma under certain conditions saw performance jump two times when employing ADB.
Despite such gains, Kobayashi has a gripe with the current ADB design: the cache space is limited to just 256 kilobytes. This means users cannot place all the target data in the cache; rather, they must select only the portion that they judge will work most effectively in ADB. To determine the optimum amount of cache memory, the Cyberscience Center, which is developing a software simulator based on the SX-9 architecture to design future supercomputer models, ran simulations using real application code. To achieve the highest performance, the researchers found that a minimum of 8 MB of ADB memory is necessary. NEC has been so advised.
Regarding the HPC Challenge Benchmark results, it was no surprise that the SX-9, the architecture of which is particularly designed to produce efficient processing of large data amounts, came out on top in memory performance and did well in networking bandwidth. But Kobayashi was also keen to point out that when it came to computing performance, despite the relatively small size of the Center’s SX-9 set-up, it still competed well against much larger configured systems.
“In the case of global-FFT testing, for instance, we still made second place to Cray’s XT3, which is a huge system, with maybe 100 times more processors,” says Kobayashi. “And while the XT3’s peak performance was five times higher (than our system) its global-FFT result was only 20 percent higher. So if we could add even just one more lane (consisting of four nodes) we would expect to do much better.”
In recent years NEC has had to relinquish its No. 1 position in the TOP500 list of best performing supercomputers to scalar-based systems from Cray, IBM and other competitors when it comes to sheer peak speeds. As a result, it has turned to emphasizing efficient sustained performance and productivity. But now there is belief within the company that given a large enough SX-9 installation, NEC could once again challenge for the top performance spot, which it held from 2002 to 2004 with its SX-6 generation.
“Next March JAMSTEC (Japan Agency for Marine-Earth Science Technology) will begin operations of its Earth Simulator II,” notes Rie Toh, manager of NEC’s HPC marketing promotion division. The system, used to forecast global climate changes, typhoons and other extreme weather conditions, as well as predict earthquakes, volcano activity and the like, will use NEC supercomputer technology, as did the previous Earth Simulator I. The new system will incorporate 160 SX-9 nodes, each containing eight CPUs, making a total of 1280 CPUs. NEC says this would produce a peak performance of 131 TFLOPS. “Given that Cray’s XT3 holds the HPC Challenge Benchmark’s highest score for G-FFT system performance with 124.4 TFLOPS,” says Toh, “we are eager to see what the SX-9-based Earth Simulator II will achieve when it’s up and running.”
But NEC’s window of opportunity to win speed-king bragging rights may not be open for long. In the endless game of breaking supercomputer performance records, Cray has just announced it plans to ship its next-generation XT5 model at about the time the Earth Simulator II is to begin operations. | <urn:uuid:61102c11-27f3-44f5-b818-56157692afd8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2008/11/19/latest_benchmark_results_on_nec_super_highlights_sx-9_performance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946139 | 1,600 | 2.578125 | 3 |
10 Things You Should Know About Safer Social Networking
News Analysis: Although Facebook and other social networks won't cause sexually transmitted diseases, there are a number of threats to personal security that individuals and enterprise users need to pay attention to. No matter what you do with your friends on the Internet, here is what you should know about safer social networking.It's not often that a tech story pops up relating to syphilis. According to researchers in the U.K., they can draw a link between an upswing in syphilis cases and increased Facebook use that has resulted in more strangers meeting up offline. For its part, Facebook says that the claim is downright ludicrous and using the social network will in no way increase a user's chances of contracting a sexually transmitted disease. That's probably true. And it's more than likely that users who connect with friends on the social network won't need to worry about getting syphilis.
But the story highlights two things that are center to the social networking world: security and privacy.
Security and privacy are extremely important issues to both consumers and companies that log in to social networks and communicate with others. As recent history has shown, malicious hackers are doing their part to capitalize on user desire to access social networks by stealing sensitive data through phishing attacks and other scams. And those threats just keep coming.
That's why we've decided to give a little refresher course on things that users need to know as they continue to jump feet first into the social world. Here is what they are:
1. The security threats are real
Although some folks scoff at the security threats posed by social networks, they are real and they can do significant damage. Security problems related to social networks might not compare to those found on Windows, but they are still troublesome. Security firm Sophos recently witnessed an uptick in malware resulting from social networking use. It's a real issue. If users want to maintain security going forward, they need to be more aware of the potential flaws that exist in social networks. If that doesn't happen, even more trouble could erupt.
2. Employers aren't too fond of social networking
The enterprise isn't very inviting when it comes to social networking. The issue at most firms is that users are attempting to access social networks from corporate computers. Because of the aforementioned security issues and the inherent trust some folks have in social networks, malware can break out across a corporate network. That's precisely why employees need to be more careful accessing social networks in the workplace. If trouble erupts, it's the employee who could face the most trouble.
3. Phishing scams galore
Malicious hackers love that more and more people are joining social networks. As millions of people around the globe connect with others and continue to receive e-mail requests from their favorite social networks, malicious hackers have found a way to capitalize. They simply look at the design and wording of a message sent by a social network, mimic it and send it to peoples' e-mail addresses. If a person clicks a link in the e-mail and is redirected to a malicious site, the hackers can potentially steal sensitive information. Going forward, users need to be more careful about what they click on in e-mails. There are some telltale signs that an e-mail is a phishing scam, and users need to be aware of them.
4. Privacy isn't guaranteed
It's nice to think that as users communicate with friends on social networks, all of their information will be kept private from others. But the reality is, that doesn't happen. Social networks are becoming increasingly less private, due to user desire to share more content than ever before. Years ago, the Web was a place of anonymity where users would rarely share anything beyond their usernames. Today, their lives are out in the open for anyone to see. For example, Bing features real-time Facebook status updates and a feed of tweets from Twitter. If a user is saying something they don't want folks to know, putting it into a status update or tweet is probably not the best place for it. | <urn:uuid:c67d5585-1d46-478a-8785-0ba8775898a0> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Enterprise-Applications/10-Things-You-Should-Know-About-Safer-Social-Networking | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955615 | 832 | 2.546875 | 3 |
Heartbleed Bug: What Risks Remain?A Progress Report on Ongoing Mitigation Efforts
In the more than a month since news of the OpenSSL vulnerability known as the Heartbleed bug first surfaced, many organizations have made progress in remediating the risks. Yet the vulnerability still exists on many systems.
"Since the announcement, we're seeing Heartbleed vulnerabilities in the wild on the vast majority of penetration tests we do for clients that run a platform that could be susceptible to this bug," says Mike Weber, vice president at Coalfire Labs, a forensic investigations firm.
Although many of the risks associated with Heartbleed have been mitigated, some gaps still need to be addressed, especially patching internal systems that are using vulnerable OpenSSL versions, security experts say.
In dealing with the risks, organizations are continuing to monitor their networks, patch systems and conduct risk analyses to identify unforeseen issues, such as devices that are more difficult to patch or require consumer action to update.
"Due to the nature of the Heartbleed bug, it's difficult for people to know if they've been compromised or not," says David Chartier, CEO of Codenomicon, the Finland-based security vendor that discovered the bug, along with a researcher at Google Security.
Heartbleed exposes a flaw in OpenSSL, a cryptographic tool that provides communication security and privacy over the Internet for applications such as e-mail, instant messaging and some VPNs (see Heartbleed Bug: What You Need to Know).
"The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software," a statement from Codenomicon notes.
While great strides have been made to eliminate the bug from Internet-facing systems, internal systems are still vulnerable, Weber says.
"That is, there are internal systems that are using the vulnerable OpenSSL libraries to secure communication," he says. "Enterprises are justifying the presence of this vulnerability through 'exposure' - since the internal systems can't be accessed by the Internet at large, the systems are at much lower risk of attack. While that may be mathematically true, those that can do the most damage - the insider threat - are able to exploit these systems for a much more targeted and damaging attack."
Additionally, if an unpatched internal system establishes an SSL connection outbound to a server, the server could initiate the heartbeat request and subsequently exploit it, Weber explains. "This could be the result of connecting to a compromised server, or if that server were to be impersonated via man-in-the-middle attacks."
Many organizations reacted very quickly to address the Heartbleed bug in their environments, says Satnam Narang, researcher at Symantec Security Response. "[Still], we have seen reports stating that out of 500,000 vulnerable sites, [only] 375,000 have been patched," he adds.
The Heartbleed bug is still a significant issue, says David Rockvam of Entrust, a digital certificate provider. He cites a report from Internet research firm Netcraft that identified remaining gaps.
"Although many secure websites reacted promptly to the Heartbleed bug by patching OpenSSL, replacing their SSL certificates and revoking the old certificates, some have made the critical mistake of reusing the potentially compromised private key in the new certificate," according to the Netcraft report.
"Since the Heartbleed bug was announced on April 7, more than 30,000 affected certificates have been revoked and reissued without changing the private key," Netcraft says.
According to the research firm, only 14 percent of affected websites completed all three necessary steps after patching the Heartbleed bug: replacing the SSL certificates, revoking the old ones and making sure to use a different private key.
Another concern is that a new vulnerability like Heartbleed could emerge, says Christopher Paidhrin, security administration manager at PeaceHealth, a healthcare provider in the Pacific Northwest. "The pace of code development and feature enhancement is stressing the security testing and code validation processes," he says. "The complexity of core Web services is daunting. The frequency of announced exploits is a measure of how big a challenge we face."
Organizations need to thoroughly test their critical infrastructure for bugs similar to Heartbleed, Codenomicon's Chartier stresses. "We're trying to get more organizations motivated to do that," he says. "If they don't do that, it's just a matter of time before something else is found and exploited and they may suffer."
Responding to Heartbleed
After learning of Heartbleed, the U.S. Department of Homeland Security worked to create a number of compromise detection signatures for various government systems, Larry Zelvin, director of the National Cybersecurity and Communications Integration Center at the U.S. Department of Homeland Security, said at a May 21 hearing of the House Subcommittee on Counterterrorism and Intelligence and Subcommittee on Cybersecurity, Infrastructure Protection and Security Technologies.
"DHS worked with civilian agencies to scan their .gov websites and networks for Heartbleed vulnerabilities, and provided technical assistance for issues of concern identified through this process," he said. "The NCCIC and its components also began a highly active outreach to cyber researchers, critical infrastructure owners, operators and vendors ... and international partners to discuss measures to mitigate the vulnerability and determine if there had been active exploits."
Zelvin noted, however, that while there was rapid and coordinated federal government response to the Heartbleed bug, "the lack of clear and updated laws reflecting the roles and responsibilities of civilian network security caused unnecessary delays in the incident response."
Meanwhile, Christopher Glyer, technical director at Mandiant, a cybersecurity firm, says the vast majority of his company's clients have patched most of their Internet-facing and internal systems. "The larger risk going forward would be on devices that are more difficult to patch or require a consumer to take an action," he says.
And the National Association of Federal Credit Unions, which educated its members about the vulnerability, has "heard very little impact from our members other than they were working with their IT divisions to work through the fixes," says Anthony Demangone, executive vice president and chief operating officer.
A key step in mitigating the risk, Demangone says, is to conduct a basic risk analysis of operating systems. "See if the Heartbleed vulnerability is found through the various IT systems and, for that matter, your vendors you are utilizing, and then close the loop as quickly as possible," he says. "It's classic risk management."
Glyer of Mandiant says organizations should "prioritize patching devices that would allow remote access into the organization. Most organizations we are working with are actively reaching out to their vendors to determine if their software is vulnerable, and are running vulnerability scanners on internal and Internet-facing systems to help identify what needs to be fixed."
The single most important mitigation step organizations need to do is revoke, reissue and re-install certificates, Entrust's Rockvam says. Additionally, he recommends organizations upgrade systems to a software version that uses OpenSSL 1.0.1g or higher; renew SSL certificates with a new private key; ask users to change their passwords; and notify users if content may have been compromised.
Paidhrin of PeaceHealth says organizations need to complete an exposure assessment and validate that remediations were exhaustive. "If [you're] unsure of your security status, contact one of your major security vendors and ask for guidance and a review of your action plan and progress." | <urn:uuid:f891a7f3-8fc2-4549-8502-3f1b13dd553e> | CC-MAIN-2017-04 | http://www.bankinfosecurity.com/heartbleed-where-are-we-now-a-6872/op-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00279-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954852 | 1,566 | 2.640625 | 3 |
You can tell that complex event processing (CEP) was not initially developed by a company because no marketing man would ever have allowed a technology to be referred to as complex. In fact, the man generally regarded as the father of CEP, David Luckham, is a professor at Stamford University and he invented the term to describe the processing of complex events, as opposed to any idea that event processing is, in itself, particularly complex. He defines a complex event as “an event that could only happen if lots of other events happen”. For example, to use CEP (or any other technology for that matter) to recognise fraud means recognising the pattern of events that indicates that type of fraud: so this pattern consists of the “lots of other events”. To take a more prosaic example, the completion of an online shopping basket is a complex event that is dependent on a whole series of preceding steps.
Also, bear in mind that events are not limited to a single environment: for example, the availability of hotel rooms represents a complex event that will be influenced by such diverse individual events as the weather, time of year, whether there are any conferences in town and how many, the state of the economy and so on.
So, CEP is essentially about monitoring those individual steps or (micro)events or, in some cases, transactions, and then looking to see if they make up the complex event you are looking for. Or, of course, you might be looking for any one of a number of different complex events so there are multiple patterns against which the incoming event data must be tested against.
That’s it, basically. Of course there are bells and whistles that you can add to make the software faster or easier to use or look prettier but, essentially, CEP is about two things: monitoring events and then looking to see if those events fit into patterns that you are looking for or, sometimes, looking for events that don’t fit an expected pattern (anomaly detection). If you think about what the intelligence communities do, those responsible for squashing terrorist threats before anyone is affected, they are looking for a series of events to establish and identify a complex event that might be a threat. That’s the type of things they’ve been doing for years and businesses can leverage the same approach.
So, the question, naturally, is what can CEP do for your business? And the answers are as diverse as industry and commerce. Two generic examples are a) any environment in which you might want to prevent and/or detect fraud or criminal behaviour or any sort of unwanted behaviour (even if it is not actually illegal but is, perhaps, against corporate governance policies); and b) any network that you need to monitor, whether that be a road, rail, pipeline, computer or utility network. In its broadest sense you can even think of a shop-floor production line as a sort of network and certainly CEP has been employed on the shop-floor as it has in airports and by airlines.
An even more generic example is when you want to link events to a process of some sort. It is often the case that many business processes, for example, are embedded within application software while other processes have been formally modelled and are managed within a BPM (business process management) environment. And then, of course, informal processes abound. One of the issues that arises is how to link these together, and one answer to that is to use CEP, treating each step in a business process and each transaction as an event in its own right. However, this is not limited to business processes per se but any sort of environment where processes are involved, including process manufacturing (the shop-floor again), communications processes (not necessarily within communications companies), integration processes (witness Informatica’s acquisition of AgentLogic) and so on.
Finally, there lots of specific use cases: monitoring PC fleets for carbon emissions, monitoring stock ticks within capital markets, monitoring automated number plate recognition systems, monitoring patient heart rates and so on and so forth.
The bottom line is that if you need something monitored then you may need CEP. It might be because you need to detect a problem (typically, something anomalous has happened); or because there is an opportunity to buy stock or up-sell or cross-sell to an existing customer because you have recognised a particular pattern of events that predicts a higher than usual success rate (using CEP in conjunction with predictive analytics); or because you need to prevent, or at least detect, unwanted behaviour such as a security breach or potential fraud. But whatever your requirement is don’t get put off because it is called complex: it isn’t or, at least, it doesn’t have to be (with the right vendor). | <urn:uuid:ebfc38cf-aba4-4ad4-830c-4523ece50516> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/simplifying-cep/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00489-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961225 | 992 | 2.59375 | 3 |
Viral Art: A Gallery Of Security ThreatsVisually, online threats such as viruses, worms, and Trojans can be as beautiful as they are menacing to individual PC users, enterprises, and IT security professionals.
With 94 % of IT professionals expecting to suffer a security breach, and Windows 7 already showing signs of vulnerability to hackers, it's fair to say we're under siege from attackers.
But what does the enemy look like? What color is spyware? What shape and form identify varying strains of malware, worms, and Trojans?
Artists Alex Dragulescu and Julian Hodgson accepted a commission from MessageLabs, now part of Symantec, and set to work to find out.
It turns out the look of online threats can be as beautiful as they are menacing to individual PC users, enterprises, and IT security professionals.
Using pieces of disassembled code, API calls, memory addresses, and subroutines associated with the bane of a security team's existence, they analyzed the data by frequency, density, and groupings. Algorithms were then developed and the artists mapped the data to the inputs of the algorithms, which then generated virtual 3-D entities.
The patterns and rhythms found in the data gave shape to the configuration of the artificial organisms, and the result was a series of images called Malwarez.
In addition to malware, worms, Trojans, the artists also analyzed and created renderings of e-mail spam, phishing attacks, keyloggers, and malicious e-card attacks.
Dragulescu's projects are experiments and explorations of algorithms, computational models, simulations, and information visualizations that involve data derived from databases, spam e-mails, malware, blogs, and video-game assets.
In 2005, his software Blogbot won the IBM New Media Award. Blogbot is a software agent in development that generates experimental graphic novels based on text harvested from blogs. Since 2007, Dragulescu has worked as a researcher in the Social Media Group at the MIT Media Lab.
InformationWeek Analytics has published an independent analysis on what executives really think about security. Download the report here (registration required). | <urn:uuid:8890eacf-bf3c-45b7-8e3a-4d61f07a91c1> | CC-MAIN-2017-04 | http://www.darkreading.com/vulnerabilities-and-threats/viral-art--a-gallery-of-security-threats/d/d-id/1079278 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944662 | 453 | 2.609375 | 3 |
Public librarians find themselves at a crossroads, thanks to the wide array of information available on the World Wide Web. Controversy clouds the role librarians should play in society and what are considered constitutionally appropriate library resources.
In communities across the country, the political and moral climate is becoming more conservative. Debate is heating up over allowing access to sexual material on the Net, particularly by minors. As more public libraries offer connections to the Internet, librarians must grapple with the concepts of intellectual freedom and censorship and worry about protecting themselves from the threat of criminal liability.
Already, President Clinton signed into law the Communications Decency Act, making it a felony punishable by fine and jail time to make available sexually offensive material which may be accessible to children. In June, a special three-judge appellate panel named to hear the case, ruled the act an unconstitutional violation of free speech. The court's ruling has been appealed to the Supreme Court.
The American Library Association (ALA) believes that a public library's mission is to provide patrons equal access to all library resources, and that library policies and procedures should not deny minors equal access. Equal access increasingly means allowing children to navigate the Web on their local library computer terminals.
Today, 45 percent of U.S. public libraries offer connections to the Internet, according to a survey conducted by the U.S. National Commission on Libraries and Information Sciences (NCLIS). This is a 113 percent increase since 1994, when a similar NCLIS survey found 21 percent of public libraries connected. Preliminary survey analysis indicates that this number could exceed 60 percent by 1997.
While libraries can select specific books for their collections, they cannot do the same with Web sites. The Internet comes as a whole collection. Yet citizens who pay taxes for this access feel that libraries are responsible for holding this resource up to community standards. So, public librarians have come up with a
variety of creative solutions that address the political concerns of offering Internet access.
In Michigan, Bev Papai, director of the Farmington Hills Community Library, purchased privacy screens to fit over the monitors of the Adult Department computers. (Computers in the children's room use filtering software.) The screens limit the observation of images on the computer monitors. Papai is pleased with the privacy screens, which make the monitor appear black from an angle, making it difficult for any passerby to see what the user is viewing. Only when a person stands directly behind the monitor is the screen visible.
Some libraries use filtering software to circumvent complaints about children accessing inappropriate material on the Web. Several software companies have created programs that claim to police the Net, preventing access to graphic pornography. Cyber Patrol, SurfWatch and Net Nanny are among the more popular ones available. These programs filter out material the programs' publishers view as offensive by using a database of banned sites. Any request to visit a specific site is compared against the database. If there is a match, the computer blocks the user and fails the access.
The database also may contain a list of words that could lead a user to an objectionable site if entered into a search engine, or as part of a Uniform Resource Locator (URL). Some of these software tools can restrict access to only those sites rated acceptable by the Recreational Software Advisory Council and SafeSurf, two Internet rating groups.
The programs work because search engines look at HTML tags describing origins of a home page in order to create abstracts of the sites. Pornographic Web pages usually advertise their addresses by using sexual terms in these tags. Filtering software can prevent much of the descriptive word searching used to locate this material.
However, these types of filtering programs are far from perfect. Software makers have a hard time keeping up with the 3,000+ new Internet sites posted daily. The programs are only updated on a monthly basis, so new sites will fall through the cracks. Nor are the databases of objectionable words perfect in catching the many synonyms for sexual terms which can link to an indecent site.
Other libraries are offering both filtered and unfiltered access to the Internet. Because of the "nature of the Internet and the World Wide Web, objectionable sites cannot be totally eliminated even through the use of filtering software," said Farmington Community Library Board of Trustee Clark G. Doughty.
The library installed Cyber Patrol, developed by Microsystems Software. Papai was concerned about eliminating access to useful information with the software, but she feels justified in using it. "It is simply a risk we are willing to take to assure the greatest level of comfort to all users of the building," she said. The software costs about $50 and is available on a subscription basis. But Papai cautions that "filtering software will never replace a parent's guidance. Parents have to instill in children certain values."
Lesley Williams, head of Information Services at Evanston Public Library, feels that "by using filtering software, libraries are setting themselves up for liability, due to the presumed protection from graphic sites." Joyce Latham, director of library automation at the Chicago Public Library, agrees. "Libraries are vulnerable," she said. "We should not pursue strategies in the short-term that make us vulnerable in the long-term."
Other libraries are less discriminating about the use of blocking software. Michael Madden, director of Schaumburg Township District Library in Illinois, uses Cyber Patrol at all Internet stations in the library. He likes the software's flexibility. "You can unblock a specific site by overriding the program," he said.
There is concern among many librarians that blocking software will prevent access to valuable information, leading to a form of censorship. Certain words can be targeted and blocked, but it's a hit-or-miss proposition. Earlier this year, SurfWatch, a filtering software, blocked access to the White House home page over the word "couple." The term was used in talking about the Clinton and Gore families.
Meanwhile, Schaumburg's library has been using Cyber Patrol for a few months, and Madden mentioned that a few inappropriate sites have come through. "It's not perfect," he said.
VIEWER DISCRETION ADVISED
Certain libraries forego software tools and risk full Internet access. According to Judith Krug, director of ALA's Office of Intellectual Freedom, "the librarian's role is to bring people together with information, not keep it from them." She believes filtering software is "contrary to what a library stands for, and is definitely not appropriate for a public library."
In Evanston, the library's policy states, in part, that patrons use the Internet at their own discretion and that parents are expected to monitor its use by minors. Some libraries will not restrict access to any material based on the age of the borrower.
Libraries such as Evanston, and advocates of unfettered access to the Web, feel that the amount of indecent material on the Web is exaggerated. "For some reason, people believe the Internet is replete with pornographic and explicit sites," said Krug. "Only three percent of the material out there may be sites you wouldn't want your child to view."
To retrieve much of this material, she feels a user would have to be looking for it, and is not likely to stumble across it. Also, much of the access to hardcore pornography requires a credit card to buy passwords. Libraries that do give full access to the Internet provide a warning statement that the library has no control over the contents of cyberspace. Oklahoma City's public library system uses the following disclaimer: "The Internet is an unregulated medium. It offers access to a wealth of material that is personally, professionally and culturally enriching. It also enables access to some material that may be offensive, disturbing and/or illegal."
Other libraries will not provide access to minors unless parental consent is obtained. At most libraries, when children receive permission to obtain a library card, parents are asked to take responsibility for materials the child reads. Similar policies regarding Internet use are springing up in libraries, requiring parents to be responsible for their child's Internet activity.
Other strategies include offering classes to both parents and children on searching the Internet. At Metropolitan Library Systems, serving Oklahoma City, Donna Morris, director of Public Services, said that because of legal concerns with Internet access the library requires all patrons to become certified either by taking an introductory Internet class or by completing a self-instruction program.
Some libraries try to provide useful listings of Web sites for searching specific types of information. The Simsbury, Conn., Public Library recommends and catalogs Web sites and takes full responsibility for these. Selection is guided by the collection development policy of the library, according to Susan Bullock, director.
Public pressure is mounting on libraries to find acceptable solutions that balance intellectual freedom with the community support that all public libraries need. With the Communication Decency Act's ruling appealed to the Supreme Court, the censorship controversy continues, placing libraries in a very difficult situation.
Karen Jo Gounaud, founder of Family Friendly Libraries -- a national grassroots network of concerned citizens, librarians and library trustees -- believes we need "a return to policies placing libraries under maximum local control with more acknowledgment of taxpayer authority and community standards."
ALA's Krug, however, counters that the Internet is a unique communications medium, echoing the words of District Judge Stewart Dalzell, who, in his supporting opinion against the Communications Decency Act, said: "As the most participatory form of mass speech yet developed, the Internet deserves the highest protection from governmental intrusion."
Krug also reiterated the ALA guidelines which state that "only parents and legal guardians have the right and responsibility to restrict the access of their children -- and only their children -- to library resources."
For more information call Bev Papai at 810/848-4301.
Pat Newcombe is the reference librarian at Western New England College School of Law. E-mail: < firstname.lastname@example.org >. | <urn:uuid:5c8e11bb-1de5-47d6-8c07-95e385b0624a> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Librarians-in-Quandary-Over-Web-Access.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938617 | 2,048 | 2.984375 | 3 |
Many of us would have heard stories of flying objects, in our childhood. This is now becoming a reality with the advent of Drones. Drones are unmanned aerial vehicles that may /may not be remotely controlled through software and GPS.
Drones are generally used by Defense organizations for the purpose of monitoring personnel or activities. In 1990, the U.S. used remote enabled Drones in the Iraq and Afghanistan wars.
Drones for Logistics?
According to a recent survey, 42 per cent of logistics operators believe that they will use unmanned Drones to ship cargo in the future; and most believe that this will happen within 15 years. A survey taken by the National Aeronautical Centre (NAC) found that one third of all 3PLs are interested in investing in Drone deliveries.
A patent granted to Amazon reveals its plans for delivery Drones, the BBC reports. Filed in September last year and granted at the end of the last month, U.S. Patent No. 20150120094 relates to the use of an unmanned aerial vehicle (UAV) configured to autonomously deliver items of inventory to various destinations.
The patent seems to fit well with what we already know of Amazon’s Drone intentions. The UAVs may be able to deliver to a variety of locations, and may even follow customers using GPS to deliver to them no matter where they are.
DHL also plans to start Drone delivery for small shipments. DHL has already commenced trials on Drone delivery in collaboration with Audi. The interesting part is that parcels can be delivered to the car of the customer.
Drone delivery can be implemented for critical small shipments and for medicines.
The 3PL industry could use Drone deliveries for a number of reasons; some include:
- Express delivery
- Ambient intelligence
- Process automation
- Technology upgrades
Drones can also be used in big factories, to deliver parts from the mother warehouse to the factory supermarket or to the production line.
Here are some common, day-to-day operational issues that 3PL companies face:
- Delay in delivery
- Resource productivity
- Manual intervention in SCM
- Quality issues with time bound shipments
- Technology transfer/upgrade
A majority of these issues can easily be fixed through Drone delivery; but it will take some time for its implementation, especially considering the law of the land.
Since Drone delivery is an emerging trend in contract logistics and distribution, hi-tech solutions can radically overcome such challenges faced by 3PLs. HCL is building new logistics solutions and propositions to meet increasing customer expectations, through new age technologies for the 3PL industry. | <urn:uuid:f12850e7-3461-481f-b3c8-a5730f850391> | CC-MAIN-2017-04 | https://www.hcltech.com/blogs/public-services/drones-future-logistics-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00397-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938657 | 541 | 2.78125 | 3 |
With increasing amounts of data being collected by a proliferation of internet-connected devices, and the task of organizing, storing, and accessing that data looming, we face the challenge of leveraging the power of the cloud running in our data centers to make information accessible in a secure and privacy-preserving manner. In other words, for many scenarios we would like a public cloud that we can trust with our private data, while still having that data accessible to us in an organized and useful way.
One approach to this problem is to envision a world in which all data is preprocessed by a client device before being uploaded to the cloud; the preprocessing signs and encrypts the data in such a way that its functionality is preserved, allowing, for example, for the cloud to search or compute over the encrypted data and to prove its integrity to the client (without the client having to download it). We refer to this type of solution as Cryptographic Cloud Storage.
Cryptographic cloud storage is achievable with current technologies and can help bootstrap trust in public clouds. It can also form the foundation for future cryptographic cloud solutions in which an increasing amount of computation on encrypted data is possible and efficient. We will explain cryptographic cloud storage and what role it might play as the cloud becomes a more dominant force.
Applications of the Cryptographic Cloud
Storage services based on public clouds such as Microsoft’s Azure storage service and Amazon’s S3 provide customers with scalable and dynamic storage. By moving their data to the cloud, customers can avoid the costs of building and maintaining a private storage infrastructure, opting instead to pay a service provider as a function of their needs. For most customers, this provides several benefits, including availability (i.e., being able to access data from anywhere) and reliability (i.e., not having to worry about backups) at a relatively low cost. While the benefits of using a public cloud infrastructure are clear, doing so introduces significant security and privacy risks. In fact, it seems that the biggest hurdle to the adoption of cloud storage (and cloud computing in general) is concern over the confidentiality and integrity of data.
While, so far, consumers have been willing to trade privacy for the convenience of software services (e.g., web-based email, calendars, pictures, etc.), this is not the case for enterprises and government organizations. This reluctance can be attributed to several factors, ranging from a desire to protect mission-critical data to regulatory obligations to preserve the confidentiality and integrity of data. The latter can occur when the customer is responsible for keeping personally identifiable information (PII) or medical and financial records. So while cloud storage has enormous promise, unless the issues of confidentiality and integrity are addressed, many potential customers will be reluctant to make the move.
In addition to simple storage, many enterprises will have a need for some associated services. These can include any number of business processes, such as sharing data among trusted partners, litigation support, monitoring and compliance, back-up, archiving, and audit logging. A cryptographic storage service can be endowed with some subset of these services to provide value to enterprises, for example by helping them comply with government regulations for handling sensitive data, address geographic considerations relating to data provenance, mitigate the cost of security breaches, lower the cost of electronic discovery for litigation support, or alleviate the burden of complying with subpoenas.
For example, a specific type of data which is especially sensitive is personal medical data. The recent move towards electronic health records promises to reduce medical errors, save lives and decrease the cost of healthcare. Given the importance and sensitivity of health-related data, it is clear that any cloud storage platform for health records will need to provide strong confidentiality and integrity guarantees to patients and care givers, which can be enabled with cryptographic cloud storage.
Another arena where a cryptographic cloud storage system could be useful is interactive scientific publishing. As scientists continue to produce large data sets that have broad value for the scientific community, demand will increase for a storage infrastructure that makes such data accessible and shareable. To encourage scientists to share their data, scientific journals could establish a publication forum for data sets in partnership with hosted data centers. Such an interactive publication forum would need to provide strong guarantees to authors on how their data sets may be accessed and used by others, and could be built on a cryptographic cloud storage system.
Cryptographic Cloud Storage
The core properties of a cryptographic storage service are that control of the data is maintained by the customer and the security properties are derived from cryptography, as opposed to legal mechanisms, physical security, or access control. A cryptographic cloud service should guarantee confidentiality and integrity of the data while maintaining the availability, reliability, and efficient retrieval of the data and allowing for flexible policies of data sharing.
A cryptographic storage service can be built from three main components: a data processor (DP), that processes data before it is sent to the cloud; a data verifier (DV), that checks whether the data in the cloud has been tampered with; and a token generator (TG), that generates tokens which enable the cloud storage provider to retrieve segments of customer data. We describe designs for both consumer and enterprise scenarios.
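As a rough sketch of that division of labor, the three client-side components might expose interfaces along the following lines (Python is used here and in the sketches below; all class, method, and variable names are illustrative, not taken from any particular product):

```python
from typing import Protocol


class DataProcessor(Protocol):
    def prepare(self, data: bytes, keywords: list[str]) -> dict:
        """Encrypt, index, and encode one item before it is uploaded."""


class DataVerifier(Protocol):
    def check_integrity(self, provider: object) -> bool:
        """Run a proof-of-storage protocol against the provider."""


class TokenGenerator(Protocol):
    def token_for(self, keyword: str) -> tuple[bytes, bytes]:
        """Return a search token for the provider and a decryption key for the reader."""
```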
A Consumer Architecture
Typical consumer scenarios include hosted email services or content storage or back-up. Consider three parties: a user Alice that stores her data in the cloud; a user Bob with whom Alice wants to share data; and a cloud storage provider that stores Alice’s data. To use the service, Alice and Bob begin by downloading a client application that consists of a data processor, a data verifier and a token generator. Upon its first execution, Alice’s application generates a cryptographic key. We will refer to this key as a master key and assume it is stored locally on Alice’s system and that it is kept secret from the cloud storage provider.
Whenever Alice wishes to upload data to the cloud, the data processor attaches some metadata (e.g., current time, size, keywords, etc.) and encrypts and encodes the data and metadata with a variety of cryptographic primitives. Whenever Alice wants to verify the integrity of her data, the data verifier is invoked. The latter uses Alice’s master key to interact with the cloud storage provider and ascertain the integrity of the data. When Alice wants to retrieve data (e.g., all files tagged with the keyword “urgent”), the token generator is invoked to create a token and a decryption key. The token is sent to the cloud storage provider, who uses it to retrieve the appropriate (encrypted) files, which it returns to Alice. Alice then uses the decryption key to decrypt the files.
Whenever Alice wishes to share data with Bob, the token generator is invoked to create a token and a decryption key which are both sent to Bob. He then sends the token to the provider who uses it to retrieve and return the appropriate encrypted documents. Bob then uses the decryption key to recover the files. This process is illustrated in Figure 1.
Figure 1: (1) Alice’s data processor prepares the data before sending it to the cloud; (2) Bob asks Alice for permission to search for a keyword; (3) Alice’s token generator sends a token for the keyword and a decryption key back to Bob; (4) Bob sends the token to the cloud; (5) the cloud uses the token to find the appropriate encrypted documents and returns them to Bob. At any point in time, Alice’s data verifier can verify the integrity of the data.
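A toy end-to-end version of this flow, assuming the third-party `cryptography` package and simulating the provider with an in-memory dictionary, might look as follows; it collapses several details (most notably, real designs encrypt each file under its own key rather than under the master key):

```python
import hashlib
import hmac

from cryptography.fernet import Fernet

# --- Alice's client: data processor and token generator ---
master_key = Fernet.generate_key()                      # never leaves Alice's machine
index_key = hashlib.sha256(master_key + b"index").digest()

def keyword_token(keyword: str) -> bytes:
    # Deterministic tag: the provider can match it without learning the keyword.
    return hmac.new(index_key, keyword.encode(), hashlib.sha256).digest()

cloud = {"files": {}, "index": {}}                      # stand-in for the storage provider

def upload(file_id: str, data: bytes, keywords: list[str]) -> None:
    cloud["files"][file_id] = Fernet(master_key).encrypt(data)
    for kw in keywords:
        cloud["index"].setdefault(keyword_token(kw), []).append(file_id)

upload("report.txt", b"quarterly numbers", ["urgent", "finance"])

# --- Retrieval: Alice asks for everything tagged "urgent" ---
token = keyword_token("urgent")                         # sent to the provider
hits = cloud["index"].get(token, [])                    # provider matches blindly
files = [Fernet(master_key).decrypt(cloud["files"][f]) for f in hits]
print(files)                                            # [b'quarterly numbers']
```

Sharing with Bob works the same way, except that the token generator would hand Bob a token plus decryption keys scoped to the matching files only, rather than Alice’s master key.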
An Enterprise Architecture
In the enterprise scenario we consider an enterprise MegaCorp that stores its data in the cloud; a business partner PartnerCorp with whom MegaCorp wants to share data; and a cloud storage provider that stores MegaCorp’s data. To handle enterprise customers, we introduce an extra component: a credential generator. The credential generator implements an access control policy by issuing credentials to parties inside and outside MegaCorp.
To use the service, MegaCorp deploys dedicated machines within its network to run the components; because these components make use of a master secret key, it is important that the machines be adequately protected. The dedicated machines run a data processor, a data verifier, a token generator and a credential generator. To begin, each MegaCorp and PartnerCorp employee receives a credential from the credential generator. These credentials reflect relevant information about the employee, such as their organization, team, or role.
Figure 2: (1) Each MegaCorp and PartnerCorp employee receives a credential; (2) MegaCorp employees send their data to the dedicated machine; (3) the latter processes the data using the data processor before sending it to the cloud; (4) the PartnerCorp employee sends a keyword to MegaCorp’s dedicated machine ; (5) the dedicated machine returns a token; (6) the PartnerCorp employee sends the token to the cloud; (7) the cloud uses the token to find the appropriate encrypted documents and returns them to the employee. At any point in time, MegaCorp’s data verifier can verify the integrity of MegaCorp’s data.
Whenever a MegaCorp employee generates data that needs to be stored in the cloud, the employee sends the data, together with an associated decryption policy, to the dedicated machine for processing. The decryption policy specifies the type of credentials necessary to decrypt the data (e.g., only members of a particular team). To retrieve data from the cloud (e.g., all files generated by a particular employee), an employee requests an appropriate token from the dedicated machine. The employee then sends the token to the cloud provider, who uses it to find and return the appropriate encrypted files, which the employee decrypts using his or her credentials.
If a PartnerCorp employee needs access to MegaCorp’s data, the employee authenticates to MegaCorp’s dedicated machine and sends it a keyword. The latter verifies that the particular search is allowed for this PartnerCorp employee. If so, the dedicated machine returns an appropriate token, which the employee uses to recover the appropriate files from the service provider. The employee then uses his or her credentials to decrypt the files. This process is illustrated in Figure 2.
Implementing the Core Cryptographic Components
The core components of a cryptographic storage service can be implemented using a variety of techniques, some of which were developed specifically for cloud computing. When preparing data for storage in the cloud, the data processor begins by indexing it and encrypting it with a symmetric encryption scheme (for example, the government-approved block cipher AES) under a unique key. It then encrypts the index using a searchable encryption scheme and encrypts the unique key with an attribute-based encryption scheme under an appropriate policy. Finally, it encodes the encrypted data and index in such a way that the data verifier can later verify their integrity using a proof of storage.
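A condensed sketch of that per-file preparation step, again assuming the `cryptography` package; the attribute-based and proof-of-storage layers are represented only by stand-ins (a key wrapped under a single symmetric key, and a plain hash), since the real primitives are described in the sections below:

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def process_for_upload(data: bytes, keywords: list[str],
                       index_key: bytes, wrapping_key: bytes) -> dict:
    """Client-side preparation of one file before it is sent to the cloud."""
    file_key = AESGCM.generate_key(bit_length=128)       # unique key for this file
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, data, None)

    # Stand-in for attribute-based encryption: wrap the file key so that only
    # holders of the right credential material can recover it.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(wrapping_key).encrypt(wrap_nonce, file_key, None)

    # Entries for the searchable index: one PRF tag per keyword.
    tags = [hmac.new(index_key, kw.encode(), hashlib.sha256).digest()
            for kw in keywords]

    # Stand-in for the proof-of-storage encoding: a simple digest of the ciphertext.
    digest = hashlib.sha256(ciphertext).hexdigest()

    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key,
            "tags": tags, "digest": digest}
```

Here `index_key` and `wrapping_key` are assumed to be symmetric keys derived from the customer’s master key.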
In the following we provide high level descriptions of these new cryptographic primitives. While traditional techniques like encryption and digital signatures could be used to implement the core components, they would do so at considerable cost in communication and computation. To see why, consider the example of an organization that encrypts and signs its data before storing it in the cloud. While this clearly preserves confidentiality and integrity it has the following limitations.
To enable searching over the data, the customer has to either store an index locally, or download all the (encrypted) data, decrypt it and search locally. The first approach obviously negates the benefits of cloud storage (since indexes can grow large) while the second scales poorly. With respect to integrity, note that the organization would have to retrieve all the data first in order to verify the signatures. If the data is large, this verification procedure is obviously undesirable. Various solutions based on (keyed) hash functions could also be used, but all such approaches only allow a fixed number of verifications.
A searchable encryption scheme provides a way to encrypt a search index so that its contents are hidden except to a party that is given appropriate tokens. More precisely, consider a search index generated over a collection of files (this could be a full-text index or just a keyword index). Using a searchable encryption scheme, the index is encrypted in such a way that (1) given a token for a keyword one can retrieve pointers to the encrypted files that contain the keyword; and (2) without a token the contents of the index are hidden. In addition, the tokens can only be generated with knowledge of a secret key and the retrieval procedure reveals nothing about the files or the keywords except that the files contain a keyword in common.
Symmetric searchable encryption (SSE) is appropriate in any setting where the party that searches over the data is also the one who generates it. The main advantages of SSE are efficiency and security while the main disadvantage is functionality. SSE schemes are efficient both for the party doing the encryption and (in some cases) for the party performing the search. Encryption is efficient because most SSE schemes are based on symmetric primitives like block ciphers and pseudo-random functions. Search can be efficient because the typical usage scenarios for SSE allow the data to be pre-processed and stored in efficient data structures.
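To make the idea concrete, here is a minimal sketch of a symmetric searchable index in Python. It is not any particular published SSE scheme, and the function and variable names (keygen, build_index, token, server_search) are purely illustrative, but it shows the essential shape: search tokens are derived from a secret key, and the server matches a token against the encrypted index without ever seeing the keyword. A production scheme would also hide keyword frequencies and randomize the index labels.

import hmac, hashlib, os

def keygen():
    # Secret key known only to the data owner.
    return os.urandom(32)

def build_index(key, files):
    # files: dict mapping file_id -> list of keywords.
    # The index maps a keyed hash of each keyword to the ids of files containing it.
    index = {}
    for file_id, keywords in files.items():
        for w in keywords:
            label = hmac.new(key, w.encode(), hashlib.sha256).hexdigest()
            index.setdefault(label, []).append(file_id)
    return index                      # this is what the cloud provider stores

def token(key, keyword):
    # Only the holder of the secret key can produce a search token.
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def server_search(index, tok):
    # The server matches the token against labels; it never learns the keyword.
    return index.get(tok, [])

if __name__ == "__main__":
    k = keygen()
    idx = build_index(k, {"doc1": ["budget", "q3"], "doc2": ["budget", "hiring"]})
    print(server_search(idx, token(k, "budget")))      # ['doc1', 'doc2']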
Another set of cryptographic techniques that has emerged recently allows the specification of a decryption policy to be associated with a ciphertext. More precisely, in a ciphertext-policy attribute-based encryption scheme each user in the system is provided with a decryption key that has a set of attributes associated with it. A user can then encrypt a message under a public key and a policy. Decryption will only work if the attributes associated with the decryption key match the policy used to encrypt the message. Attributes are qualities of a party that can be established through relevant credentials such as being an employee of a certain company or living in Washington State.
Proofs of Storage
A proof of storage is a protocol executed between a client and a server with which the server can prove to the client that it did not tamper with its data. The client begins by encoding the data before storing it in the cloud. From that point on, whenever it wants to verify the integrity of the data it runs a proof of storage protocol with the server. The main benefits of a proof of storage are that (1) they can be executed an arbitrary number of times; and (2) the amount of information exchanged between the client and the server is extremely small and independent of the size of the data.
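The challenge-response flow can be pictured with the following toy sketch (Python, illustrative names). The client keeps only a short secret key, the server holds the encoded data, and each challenge asks the server to prove it still has a few randomly chosen blocks. Unlike a real proof-of-storage scheme, this toy response returns the sampled blocks themselves, so it does not achieve the constant-size responses described above; it is meant only to convey the shape of the protocol.

import hmac, hashlib, os, random

BLOCK = 4096

def encode(data, key):
    # Client-side encoding: split into blocks and tag each one with a MAC.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, bytes([i % 256]) + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags                # both are handed to the server

def challenge(num_blocks, sample=3):
    # The client sends only a handful of block indices.
    return random.sample(range(num_blocks), min(sample, num_blocks))

def respond(blocks, tags, indices):
    # The server returns the requested blocks and their tags.
    return [(i, blocks[i], tags[i]) for i in indices]

def verify(key, response):
    # The client recomputes each MAC; tampering is detected with high probability.
    return all(
        hmac.compare_digest(
            hmac.new(key, bytes([i % 256]) + b, hashlib.sha256).digest(), t)
        for i, b, t in response)

if __name__ == "__main__":
    key = os.urandom(32)
    blocks, tags = encode(os.urandom(5 * BLOCK), key)
    resp = respond(blocks, tags, challenge(len(blocks)))
    print(verify(key, resp))           # True while the server is honest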
Trends and future potential
Extensions to cryptographic cloud storage and services are possible based on current and emerging cryptographic research. This new work will bear fruit in enlarging the range of operations which can be efficiently performed on encrypted data, enriching the business scenarios which can be enabled through cryptographic cloud storage.
About the Authors
Kristin Lauter is a Principal Researcher and the head of the Cryptography Group at Microsoft Research. She directs the group’s research activities in theoretical and applied cryptography and in the related math fields of number theory and algebraic geometry. Group members publish basic research in prestigious journals and conferences and collaborate with academia through joint publications, and by helping to organize conferences and serve on program committees. The group also works closely with product groups, providing consulting services and technology transfer. The group maintains an active program of post-docs, interns, and visiting scholars. Her personal research interests include algorithmic number theory, elliptic curve cryptography, hash functions, and security protocols.
Seny Kamara is a researcher in the Crypto Group at Microsoft Research in Redmond and completed a Ph.D. in Computer Science at Johns Hopkins University under the supervision of Fabian Monrose. At Hopkins Dr. Kamara was a member of the Security and Privacy Applied Research (SPAR) Lab. Seny Kamara spent the Fall of 2006 at UCLA’s IPAM and the summer of 2003 at CMU’s CyLab. Main research interests are in cryptography and security and recent work has been in cloud cryptography, focusing on the design of new models and techniques to alleviate security and privacy concerns that arise in the context of cloud computing. | <urn:uuid:ce304b06-a55d-47c9-b3ce-ec3d0ef2eacb> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/03/11/considerations_for_the_cryptographic_cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00425-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913988 | 3,282 | 2.859375 | 3 |
There is a link to the manuals at the top of this page. If you follow that link, open the JCL Language Reference manual, and navigate to section 2.1, you can find this:
Jobs placed in a series and entered through one input device form an input stream. The operating system reads an input stream into the computer from an input/output (I/O) device or an internal reader. The input device can be a card reader, a magnetic tape device, a terminal, or a direct access device. An internal reader is a buffer that is read from a program into the system as though it were an input stream. | <urn:uuid:14339446-15d8-497b-b0c7-7720310bea1a> | CC-MAIN-2017-04 | http://ibmmainframes.com/about38426.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00333-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916422 | 131 | 2.609375 | 3 |
Many hear the term “smart grid” yet fail to delve deeper to find out what it really means. Put simply, "smart grid" is an umbrella term for the technologies, equipment, systems and concepts that give the ageing power grid the ability to communicate. The advantages of this, of course, are numerous to both utilities and the consumer.
"The biggest benefit is empowerment," said Patty Durand, executive director of the Smart Grid Consumer Collaborative. "It includes empowerment of the consumer to take control over their energy expenditures. Right now, consumers have no idea what their electricity bill is going to be when it comes. When it does come, they don't know what went into the numbers they have to pay."
Cost savings aside, a smart grid also allows utility infrastructure to detect and report faults or outages. According to Dan Delurey, head of the Demand Response and Smart Grid Coalition, a smart grid even has the potential to reroute power to places in order to prevent outages.
That said, the smart grid isn't necessarily about renewable energy sources as many mistakenly think — though it does offer more efficient use of all sources of power. This leads to greater efficiency, reliability, sustainability and resiliency of the electric grid.
Meanwhile, new smart grid technology is being developed at West Virginia University, while projects are also underway to improve existing technology. The primary struggle at the moment is to develop a standard communication protocol for vendors to adhere to, which would make interoperability much easier for all involved parties.
Though smart grid implementation may seem like a no-brainer, there is an upfront cost associated with implementing any new technology, and this, coupled with the misinformed belief that the smart grid eschews fossil fuels such as coal, has slowed smart grid adoption.
There are many organizations, such as the Smart Grid Consumer Collaborative and Pecan Street Incorporated, hard at work in furthering the smart grid, though, so there is little doubt that it will soon be ubiquitous in many nations around the world.
Want to learn more about the latest in communications and technology? Then be sure to attend ITEXPO Miami 2013, Jan 29- Feb. 1 in Miami, Florida. Stay in touch with everything happening at ITEXPO (News - Alert). Follow us on Twitter. | <urn:uuid:b0af2e48-98e8-4c13-9cc5-5cdbc6f14772> | CC-MAIN-2017-04 | http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/12/24/320632-smart-grid-benefits-extend-utilities-energy-users.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00085-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951675 | 474 | 3.390625 | 3 |
Rosenfeld J.A.,American Museum of Natural History |
Rosenfeld J.A.,Rutgers University |
Reeves D.,New York Medical College |
Reeves D.,HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine |
And 36 more authors.
Nature Communications | Year: 2016
The common bed bug (Cimex lectularius) has been a persistent pest of humans for thousands of years, yet the genetic basis of the bed bug's basic biology and adaptation to dense human environments is largely unknown. Here we report the assembly, annotation and phylogenetic mapping of the 697.9-Mb Cimex lectularius genome, with an N50 of 971 kb, using both long and short read technologies. A RNA-seq time course across all five developmental stages and male and female adults generated 36,985 coding and noncoding gene models. The most pronounced change in gene expression during the life cycle occurs after feeding on human blood and included genes from the Wolbachia endosymbiont, which shows a simultaneous and coordinated host/commensal response to haematophagous activity. These data provide a rich genetic resource for mapping activity and density of C. lectularius across human hosts and cities, which can help track, manage and control bed bug infestations. © 2016, Nature Publishing Group. All rights reserved. Source | <urn:uuid:6585b28a-aea9-4683-bae9-c9cca8aaece5> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bionanogenomics-inc-1437993/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868147 | 304 | 2.65625 | 3 |
Input / Output and I/O Strategies
The Four Major Input / Output Strategies
A Silly Example to Illustrate
A Context for Advanced I/O Strategies
The Four Strategies
Here are the simple definitions of the four I/O strategies.
Program Controlled I/O
This is the simplest to implement. The executing program manages every aspect of I/O processing. I/O occurs only when the program calls for it. If the I/O device is not ready to perform its function, the CPU waits for it to be ready; this is “busy waiting”.
The next two strategies are built upon program controlled I/O.
Interrupt Driven I/O
In this variant, the I/O device can raise a signal called an “interrupt” when it is ready to perform input or output. The CPU performs the I/O only when the device is ready for it.
In some cases, this interrupt can be viewed as an alarm, indicating an undesirable event.
Direct Memory Access
This variant elaborates on the two above. The I/O device interrupts and is sent a “word count” and starting address by the CPU. The transfer takes place as a block.
I/O Channel
This strategy assigns I/O to a separate processor, which uses one of the above three strategies.
I/O Strategies: A Silly Example
I am giving a party to which a number of people are invited. I know exactly how many people will attend.
I know that the guests will not arrive before 6:00 PM.
All guests will enter through my front door. In addition to the regular door (which can be locked), it has a screen door and a doorbell.
I have ordered pizzas and beer, each to be delivered. All deliveries arrive at the back door.
I must divide my time between baking cookies for the party and going to the door to let the visitors into the house.
I am a careless cook and often burn the cookies.
We now give the example, by I/O categories.
The Silly Example
Program Controlled I/O
Here I go to the door at 6:00 PM and wait.
As each one arrives, I open the door and admit the guest.
I do not leave the door until the last guest has arrived; nothing gets done in the kitchen.
Interrupt Driven I/O
Here I make use of the fact that the door has a doorbell. I continue working in the kitchen until I hear the doorbell.
When the doorbell rings, I put down my work, go to the door, and admit the guest.
1: I do not “drop” the work, but bring it to a quick and orderly conclusion. If I am removing cookies from the oven, I place them in a safe place to cool before answering the door.
2: If I am fighting a grease fire, I ignore the doorbell and first put out the fire. Only when it is safe do I attend to the door.
3: With a guest at the front door and the beer truck at the back door, I have a difficult choice, but I must attend to each quickly.
The Silly Example (Part 2)
Direct Memory Access
I continue work in the kitchen until the first guest arrives and rings the doorbell.
At that point, I take a basket and place some small gifts into it, one for each guest.
I go to the door, unlock it, admit the guest and give the first present.
I leave the main door open. I place the basket of gifts outside, with instructions that each guest take one gift and come into the house without ringing the doorbell.
There is a sign above the basket asking the guest taking the last gift to notify me, so that I can return to the front door and close it again.
In the Interrupt Driven analog, I had to go to the door once for each guest.
In the DMA analog, I had to go to the door only twice.
I/O Channel
I hire a butler and tell him to manage the door any way he wants.
He just has to get the guests into the party and keep them happy.
Another Simple Example
We first examine program controlled I/O. We give an example that appears to be correct, but which hides a real flaw. This flaw rarely appears in a high–level–language program.
We are using the primitive command “Input” to read from a dedicated input device. It is the ASCII codes for characters that are read, with a 0 used to indicate no more input.
      Input                    // Get the first character into the AC
      Skip if AC > 0           // Anything to read?  If not, we are done
      Jump Done
Loop: Store X[J]               // Not really a MARIE instruction
      J = J + 1                // Again pseudocode
      Input                    // Get the next character
      Skip if AC == 0          // A zero marks the end of the input
      Jump Loop                // Go back and get another.
Done: Halt                     // All characters stored
What’s wrong? Simply put, what is the guarantee that the dedicated input device has a new character ready when the next Input is executed?
Program Controlled Input and the Busy Wait
Each input or output device must have at least three registers.
A status register. This allows the CPU to determine a number of status issues: Is the device ready? Is its power on? Are there any device errors? Does the device have new data to input? Is the device ready for output?
A control register. This lets the CPU command the device: enable the device to raise an interrupt when it is ready to move data, instruct a printer to follow every <LF> with a <CR>, or move the read/write heads on a disk.
A data register. This holds whatever data is to be transferred.
Suppose our dedicated input device has three registers, including Device_Status, which is greater than zero if and only if there is a character ready to be input.
Busy: Input Device_Status      // Poll the status register
      Skip if AC > 0           // Is a character ready?
      Jump Busy                // No - keep polling (this is the busy wait)
      Input                    // Yes - now read the character as before
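In a high-level language the same busy wait looks like the loop below, here simulated in Python. The device_status and device_data functions are stand-ins for reading the real status and data registers; the point is that every iteration of the polling loop is a CPU cycle spent doing nothing useful.

import random

def device_status():
    # Stand-in for the Input of Device_Status: nonzero means "character ready".
    return random.choice([0, 0, 0, 1])

def device_data():
    # Stand-in for the Input of the character itself.
    return ord("K")

def busy_wait_input():
    wasted_polls = 0
    while device_status() == 0:      # spin until the device reports ready
        wasted_polls += 1            # the CPU does no useful work here
    return device_data(), wasted_polls

if __name__ == "__main__":
    ch, wasted = busy_wait_input()
    print(f"read character {chr(ch)!r} after {wasted} wasted polls")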
When Is Program Controlled I/O Appropriate?
Simply put, it is appropriate only when the I/O action can proceed immediately.
There are two standard cases in which this might be used successfully.
1. The device can respond immediately when polled for input.
For example, consider an electronic sensor monitoring temperature or pressure.
(However, we shall want these sensors to be able to raise interrupts).
2. When a device has already raised an interrupt, indicating that it is ready to transfer data.
In a modern computer, the basic I/O instructions (Input and Output for the MARIE) are considered privileged. They may be issued only by the Operating System.
User programs issue “traps” to the operating system to access these instructions. These traps are system calls, issued in a standardized fashion that is easily interpreted by system programs.
Diversion: Process Management
We now must place the three advanced I/O strategies within the proper context in order to see why we even bother with them.
We use an early strategy, called “Time Sharing”, to illustrate the process management associated with handling interrupts and direct memory access.
In the Time Sharing model, we have
1. A single computer with its CPU, memory, and sharable I/O resources,
2. A number of computer terminals attached to the CPU, and
3. A number of users, each of whom wants to use the computer.
In order to share this expensive computer more fairly, we establish two rules.
1. Each user process is allocated a “time slice”, during which it can run. At the end of this time, it must give up the CPU, go to the “back of the line”, and await its turn for another time slice.
2. When a process is blocked and waiting on completion of either input or output, it must give up the CPU and cannot run until the I/O has been completed.
With this convention, each user typically thinks he or she is the only one using the computer. Thus the computer is “time shared”.
The Classic Process Diagram
Here is the standard process state diagram associated with modern operating systems.
When a process (think “user program”) executes an I/O trap instruction (remember that it cannot execute the I/O directly), the O/S suspends its operation & starts I/O on its behalf.
When the I/O is complete, the O/S marks the process as “ready to run”. It will be assigned to the CPU when it next becomes available.
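One compact way to read the diagram is as a table of legal transitions; I/O is what moves a process into and out of the blocked state. The sketch below uses the common Ready/Running/Blocked naming, which may differ slightly from the labels in the figure.

# Legal transitions in the classic three-state process model.
TRANSITIONS = {
    ("Ready", "Running"): "dispatched by the scheduler",
    ("Running", "Ready"): "time slice expires",
    ("Running", "Blocked"): "process issues an I/O trap and must wait",
    ("Blocked", "Ready"): "O/S sees the I/O complete and marks it ready to run",
}

def move(state, new_state):
    reason = TRANSITIONS.get((state, new_state))
    if reason is None:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    print(f"{state} -> {new_state}: {reason}")
    return new_state

if __name__ == "__main__":
    s = "Ready"
    for nxt in ("Running", "Blocked", "Ready", "Running"):
        s = move(s, nxt)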
The Three “Actors” for Input
User Program              Operating System                      Input Device
(Is blocked)              Block the process
                          Reset the input status register       Status = 0
                          Enable the device interrupt           Interrupt is enabled
                          Command the input                     Input begins
                          Dispatch another process
                                                                Input is complete
                          Input (place data into AC)
                          Place data into buffer for process
                          Mark the process as ready to run
Copy from buffer
Resume processing

Obviously, there is more to it than this.
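The same hand-off can be mimicked in a few lines of Python. The names (interrupt_handler, buffers, ready_queue) are illustrative and everything runs synchronously here, but the division of labor matches the table: the user program is blocked while the O/S and the device do the work, and it only copies from its buffer after being marked ready to run.

from collections import deque

ready_queue = deque()
buffers = {}                     # per-process input buffers

def user_program_requests_input(pid):
    # The I/O trap: the O/S blocks the process and commands the device.
    print(f"P{pid}: blocked, waiting for input")
    command_device(pid)

def command_device(pid):
    # The O/S resets the status register, enables the interrupt, starts the
    # input, and would normally dispatch another process while the device works.
    device_completes(pid, data="K")

def device_completes(pid, data):
    # The device finishes and raises an interrupt; the handler runs in the O/S.
    interrupt_handler(pid, data)

def interrupt_handler(pid, data):
    buffers[pid] = data          # place data into the process's buffer
    ready_queue.append(pid)      # mark the process as ready to run
    print(f"ISR: P{pid} is ready to run, buffer = {data!r}")

if __name__ == "__main__":
    user_program_requests_input(7)
    pid = ready_queue.popleft()
    print(f"P{pid}: resumes and copies {buffers[pid]!r} from its buffer")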
What is DMA?
Remember that Main Memory is accessed through two registers and some control signals
MAR: Memory Address Register
MBR: Memory Buffer Register (holds the data)
READ and WRITE: control signals. If one is true, the memory is either written or read. If both are true, only one action is performed.
The CPU normally issues these signals. This holds for both program controlled I/O and interrupt driven I/O. In DMA, the device controller issues these signals.
A DMA controller is one that can directly manipulate memory.
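The scenario below spells out the sequence for a disk; the sketch here simulates the same idea in Python with illustrative names. The CPU's only involvement is handing the controller a start address and a word count; the controller copies the whole block into memory itself and raises a single completion interrupt.

memory = [0] * 1024                      # stand-in for main memory

def dma_transfer(disk_block, start_address, word_count, on_complete):
    # The controller, not the CPU, drives the address and data lines here.
    for offset in range(word_count):
        memory[start_address + offset] = disk_block[offset]
    on_complete()                        # one interrupt for the whole block

if __name__ == "__main__":
    block = list(range(512))             # a 512-word block read from the disk
    dma_transfer(block, start_address=256, word_count=512,
                 on_complete=lambda: print("interrupt: block transfer complete"))
    print(memory[256:260])               # [0, 1, 2, 3]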
But suppose a fast disk wants to transfer a 512–byte block of data to memory. It would not be efficient to have 512 interrupts.
Scenario:
1. The disk raises an interrupt and the CPU responds.
2. The device driver sends a start address and byte count to the disk controller, which connects the disk to the bus.
3. The disk transfers its block of data directly to memory.
4. The disk again raises an interrupt when it is complete. | <urn:uuid:ba97bf9a-6d3b-4040-aa8d-e31526e52781> | CC-MAIN-2017-04 | http://edwardbosworth.com/My5155_Slides/Chapter12/IO_Strategies.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00078-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912887 | 2,067 | 3.125 | 3 |
The research comes from Michael Hanspach and Michael Goetz of the Fraunhofer Institute FKIE in Germany. It demonstrates how audio signals, hardly if at all discernible to the human ear, can be used to transmit data between computers that do not have a direct network connection.
We demonstrate, say the researchers, "how the scenario of covert acoustical communication over the air medium can be extended to multi-hop communications and even to wireless mesh networks. A covert acoustical mesh network can be conceived as a botnet or malnet that is accessible via near-field audio communications."
It is, they say "a considerable threat to computer security and might even break the security goals of high assurance computing systems based on formally verified micro kernels that did not consider acoustical networking in their security concept."
This in turn means that BadBIOS could be genuine; but the research needs to be taken in context. It demonstrates that a theoretical possibility can be made an actual reality – but not easily. The research does not demonstrate a new vulnerability or malware, nor does it describe a new method for compromising computers; it shows a method of exfiltrating data across an air gap from a machine that is already infected. This fits in with Ruiu's account of his 'infection:' the audio signals only became apparent after a system update, which is the suspected point and method of infection.
Craig Young, security researcher at Tripwire, says that the research reinforces "the plausibility of claims made by Dragos about phantom malware (BadBIOS) capable of communicating between infected systems without using traditional networking. It should serve as a reminder that air-gapped machines should be limited to only the hardware necessary to perform their intended functionality.”
Ken Westin, another Tripwire security researcher, points out that the system is "able to transmit data 20 meters but only at 20 bits per second so this approach is not exactly a key way to exfiltrate data." Acoustic exfiltration simply is not traditional malware: it requires extensive resources and effort for very little data return.
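The arithmetic behind that judgment is easy to check. At roughly 20 bits per second, even modest files take impractically long to move over the covert audio channel, which is why the technique suits command-and-control far better than bulk data theft. A back-of-the-envelope sketch:

def exfiltration_hours(num_bytes, bits_per_second=20):
    # Hours needed to move num_bytes over the covert audio channel.
    return (num_bytes * 8) / bits_per_second / 3600

if __name__ == "__main__":
    for label, size in [("a 2 KB private key", 2 * 1024),
                        ("a 1 MB document", 1024 * 1024),
                        ("a 100 MB database dump", 100 * 1024 * 1024)]:
        print(f"{label}: {exfiltration_hours(size):,.1f} hours")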
Jacob Appelbaum's tweet on BadBIOS takes on new impetus: "I think I know when and why @dragosr was owned. I also think I know who likely did it and many of the details. A hint: #NSA #CSE #GCHQ." Acoustic exfiltration is not suitable for traditional criminal activity; but is entirely suited to highly targeted, patient espionage.
It is also worth considering one other possibility in this era of nation-state cyberwarfare. Stuxnet showed years ago that agencies can infect computers across air gaps. But infecting an air-gapped computer with a trojan is one thing; maintaining control over that trojan going forward is altogether more difficult. Acoustic transmission may not be suitable for large scale exfiltration; but it could, says Westin, "easily be used to transmit commands.” This process may be better suited to control an air-gapped computer than to steal from it. | <urn:uuid:b8811687-deab-4167-9e72-d2d05819ec4f> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/research-shows-air-gap-hopping-super-trojan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942622 | 629 | 2.671875 | 3 |
With this post, I’ll be wrapping up a multi-part discussion about measuring video quality. In the first part, I described Mean Opinion Scores and how these scores provide a quantitative value of multimedia quality. Last time, I spoke about “full-reference” and “reference-free” measurement techniques, and gave some advantages of each.
Today, we discuss ways that mobile video quality problems may be introduced, some of the more common types of quality issues, and some of the causes behind these.
Most video quality problems may be introduced at several points along the video distribution chain:
During video creation. While most professionally-produced video content is created by trained technicians using expensive devices, most user-generated video content is recorded with poor-quality cameras. In addition, the person doing the recording will often have shaky hands. This leads to quality problems that are introduced as soon as the video is created.
During transcoding. While today’s algorithms for encoding and decoding video content are quite sophisticated and very good at preserving quality, the nature of compression causes some loss of quality each time content goes through a coding cycle. Therefore, the number of encodes and decodes should be kept to a minimum. However, in the real world, content must often be translated (or “transcoded”) between algorithms. This can be because different devices support different algorithms, or because one algorithm might be great for display quality while another is better for transport efficiency. In today’s telecommunications networks, it is common for video content to be transcoded several times between creation and delivery; each of these transcodes can contribute to reducing video quality.
During video transmission. IP networks suffer from impairments such as packet loss and jitter. While these problems have affected voice services for years, the human ear can recover from communication gaps better than the human eye. Even a small glitch in a video communications stream can cause distraction for a viewer.
When displaying video on a device. Even the best video content can be unsatisfying if it is displayed on a poor viewing device. Most of today’s mobile devices were designed to provide a high-quality voice experience, while video capabilities may be limited. Higher-quality video displays require a lot of power, so device makers may consider tradeoffs like offering extended battery life by sacrificing display quality in a mobile device.
While the previous paragraphs discussed how quality problems may be introduced during the process of video creation and distribution, the next will describe several of the more common types of problems that may be experienced (and some of the potential causes of each):
I hope that these last few posts have been helpful as a reference to any of our customers (or potential customers) that are considering tools and systems for measuring video quality. The increasing focus on video quality, and particularly the quality of mobile video, is an important step forward for our industry. At Dialogic, we welcome this increased attention to quality, as our business is focused on providing our customers with the highest-quality, highest-performance video solutions. Please stay tuned to this space in the next few months, and you’ll be hearing more about our efforts related to video quality measurement and improvement.
In the meantime, if you would like to learn more about this topic, we have a white paper available for free download here. And please take a moment to let us know what you are thinking about this subject – I’d enjoy the opportunity to continue the discussion! | <urn:uuid:334fff07-6d4a-41c6-9a3a-a54597ff3b73> | CC-MAIN-2017-04 | http://www.dialogic.com/den/d/b/corporate/archive/2009/12/07/measuring-video-quality-part-3.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946257 | 724 | 2.84375 | 3 |
A virtual disk (vDisk) represents a VM’s view of its storage devices, which could include any type of physical disk (such as a file-backed disk image, an ISO image file, a physical hard drive, a CD/DVD device, or a block device) associated to a VM. The vDisk objects are discovered, along with their associated VM, when a Discover VM Images job is run on a repository.
The vDisk is modeled as a Grid object, located as a subordinate to the VM Grid object in the Explorer tree of the Orchestration Console. In the Explorer Tree, a vDisk is given the form vmname_vdisk<n> where <n> represents the numerical order in which this vDisk was discovered, with 1 appended to the name of the first vDisk discovered or created. For example, suse11_vdisk1 would be the name of the first disk discovered for a VM with the Grid ID suse11. Each additional vDisk is incremented by one, so the second vDisk in this example would be named suse11_vdisk2.
You might want to manually create a vDisk in the following scenarios:
When you want to create a “blank” disk image file for the VM. In this scenario, the disk image does not actually reside on the local file system, but a disk image of the specified size (measured in MB) should be created at the location specified for use by the VM. This is essentially a blank file, until it is used by the VM.
When the Orchestration Server might not have discovered the vDisk objects correctly, such as omitting a disk that should exist. You need to manually correct the incorrect discovery.
A VM that already exists needs to have patches applied to it. The patches are delivered through an ISO file, which was not configured to be attached to the VM. This configuration lets the administrator configure the VM with access to the ISO disk image, then apply the patches, and then later delete the vDisk object, returning the VM to its original configuration.
You need to manually add the vDisk, select the Save Config action in the Orchestration Console, then apply the patches to the running VM. Later, you shut down the VM, delete the vDisk object from the Orchestration Server, then select the Save Config action again.
The scenario includes configuring the VM to use the existing ISO file (that is, creating the vDisk object, then selecting Save Config), and then deconfiguring the VM to no longer use the ISO file (that is, deleting the vDisk object, then selecting Save Config again).
In this scenario, only the vDisk object from the Orchestration Server is deleted, not the ISO file.
To create a virtual disk in the Orchestration Console, you can either right-click the VM where you want to create the vDisk and select the option to create a new virtual disk (if you do this, you can skip to Step 4 below), or you can use the following procedure from the Orchestration Console menu:
In the Orchestration Console main menu, select the menu items that display the Create a New Virtual Disk dialog box.
In the drop-down list, select the name of the VM where you want to add a vDisk, then click the button that adds the vDisk.
When you have created all of the vDisks you need, close the dialog box.
Select a newly created vDisk object in the Explorer tree to view the Info/Groups page of the admin view.
On the Info/Groups page, configure the following settings:
Type: Specify the vDisk type as the VM host sees it.
Description: Describe the vDisk with any text that you choose
Healthy: Designates the health state of the vDisk. Do not configure.
Moveable: Specifies whether the disk image can be copied (relocated) with the VM when the VM is moved (relocated) to another repository. For more information, see “Moveable” later in this section.
Mode: Specifies the mode of the vDisk as made available and supported by the provisioning adapter:
r = read only
w = read/write
VM: Specifies the name of the VM that uses this vDisk.
Repository: The repository where this disk location path resides. This setting is important because the Orchestration Server uses it to find a suitable VM host for provisioning, building, or migration actions. The value of this setting can also indicate that the server should ignore this vDisk when locating a suitable VM host.
Physical Disk: The name of the pDisk that this vDisk is associated with.
Location: The path (location) to the disk image.
If you specify a location to a disk that already exists, the existing disk file is used and the VM configuration is modified (according to the value in vdisk.location fact) to use this existing disk.
If you specify a path to a disk that does not exist (that is, if the value in vdisk.location is invalid), the action fails, an empty disk image file of the specified size is created, and an error is reported in the action status or the job log.
For a vDisk created for a Hyper-V VM, you need to provide the complete path of that vDisk file.
To form the path, you need to know the repository path where the VM currently resides and the vDisk name, which is the name you give it plus the .vhd extension. For example, the syntax would be <repository_path>\<vdisk_name>.vhd, where both placeholders are illustrative.
NOTE:Make sure that the .vhd file you designate in this field doesn’t already exist in the path.
Size: The size (measured in MB) of the disk image. Do not configure.
Sparse Disk: Designates whether the vDisk file is a sparse file. Do not configure.
Actual Size: The actual sparse size (measured in MB) of the vDisk file. Do not configure.
Click the Save button in the toolbar to save the fact changes you made.
In the Explorer tree, right-click the VM object where you added the vDisk, then select Save Config to apply the changes to the VM’s configuration.
Sparse disk creation is supported by the Xen, vSphere, and KVM hypervisors. If you want to create your vDisk as a sparse file, you can use the procedure for Creating and Configuring a Virtual Disk (see Step 5 in particular).
You need to set the Sparse Disk fact to true, specify the amount of space (in MB) for the disk in the Size fact, and also specify the path to the repository where the sparse vDisk resides. Make sure that you perform the action to apply the vDisk changes to the VM’s configuration.
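Outside the Orchestration Console, a sparse disk image is simply a file whose blocks are allocated only when they are actually written. The snippet below illustrates the general idea with plain Python file operations; it is not how the provisioning adapters create their images, just a way to see why a blank multi-gigabyte vDisk can occupy almost no space on the repository.

import os

def create_sparse_image(path, size_mb):
    # Seek to the desired length and write a single byte: the file's apparent
    # size is size_mb MB, but unwritten blocks consume no space on file
    # systems that support sparse files.
    with open(path, "wb") as f:
        f.seek(size_mb * 1024 * 1024 - 1)
        f.write(b"\0")

if __name__ == "__main__":
    create_sparse_image("blank_vdisk.img", size_mb=1024)
    st = os.stat("blank_vdisk.img")
    print("apparent size:", st.st_size, "bytes")
    print("blocks actually allocated:", getattr(st, "st_blocks", "n/a"))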
You might want to manually delete a vDisk in at least two scenarios:
When the Orchestration Server might not have discovered the vDisk objects correctly, such as adding a disk that should not exist. The administrator needs to manually correct the incorrect discovery.
A VM that already exists needs to have patches applied to it. The patches are delivered through an ISO file, which was not configured to be attached to the VM. This configuration lets the administrator configure the VM with access to the ISO disk image, then apply the patches, then later delete the vDisk object, returning the VM to its original configuration.
The administrator needs to manually add the vDisk, run the Save Config command from the Orchestration Console, then apply the patches to the running VM. Later, the administrator shuts down the VM, deletes the vDisk object from the server, then performs the Save Config action again.
The scenario includes configuring the VM to use the existing ISO file (that is, creating the vDisk object and selecting the Save Config action), then deconfiguring the VM to no longer use the ISO file (that is, deleting the vDisk object, then selecting the Save Config action again).
In this scenario, only the vDisk object from the server is deleted, not the ISO file.
To delete a virtual disk, you can either right-click the vDisk object in the Explorer tree and select the delete option (if you do this, you can skip to Step 4 below), or you can use the following procedure from the Orchestration Console:
In the Orchestration Console, select the menu items that display the Delete a Virtual Disk dialog box.
In the list of virtual disks, select the name of the vDisk (hold down the Ctrl key to select multiple objects), then click to move these objects to the list of vDisks to be deleted.
When you have selected all of the vDisks you want to delete, click the button that displays the Delete dialog box.
In the dialog box, select the option to delete all of the vDisk objects in the list, indicate whether you also want to delete the files associated with the vDisks you have selected, then confirm the deletion.
In the Explorer tree, right-click the VM object where you deleted the vDisk, then select Save Config to apply the changes to the VM’s configuration.
NOTE: The Save Config action rewrites the configuration file for the VM (for example, config.xen for the xen provisioning adapter), but it does not delete any vDisk files on the file system. In this case, manual deletion of the vDisk file is required.
To delete a VM along with its backing files, use the VM’s delete action instead.
For a VM to be provisionable by other VM hosts, all of a VM’s vDisks must be visible in the same way that the VM’s default repository (resource.vm.repository) is visible to VM hosts. If a VM has multiple vDisks and each vDisk has a different associated repository, these repositories must also be visible from a potential VM host.
When you move a VM to a new repository (see the Move Disk Image action for each hypervisor), all of the VM’s vDisks that are marked as moveable (see Moveable, above) are moved with it so that they are co-located in the same repository. The Orchestration Server uses the aggregated size of each moveable vDisk to determine if the designated repository has enough space for all of the disk images. vDisks that are marked as not moveable stay in place and are not used in the calculation for the VM disk size.
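The sizing check can be pictured with a short sketch (illustrative data structures, not the server's actual implementation): only the moveable vDisks are summed, and the total is compared against the free space in the candidate repository.

def can_move_vm(vdisks, repository_free_mb):
    # vdisks: list of dicts with "size_mb" and "moveable" keys.
    needed = sum(d["size_mb"] for d in vdisks if d["moveable"])
    return needed <= repository_free_mb, needed

if __name__ == "__main__":
    vm_disks = [
        {"name": "vdisk1", "size_mb": 8192,  "moveable": True},
        {"name": "vdisk2", "size_mb": 20480, "moveable": False},  # stays on the shared repository
    ]
    ok, needed = can_move_vm(vm_disks, repository_free_mb=10000)
    print(f"moveable disks need {needed} MB -> move allowed: {ok}")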
The following illustration further explains this concept:
Figure 7-1 Example of Moving Virtual Disks with the VM
VM host 1, VM host 2, and VM host 3 all have their own local storage repositories.
VM host 1 has a vDisk located on it. It is designated as a moveable vDisk.
VM host 1 and VM host 2 are also connected to a shared NAS storage repository.
The local repository connected to VM host 1 has a vDisk located on it. It is designated as a moveable vDisk.
The shared NAS repository has a vDisk located on it. It is designated as a non-moveable vDisk.
NOTE:Shared repositories are not created on discovery. They must be manually created and the sharing (visibility) configured.
VM host 1 has a VM located on it.
VM host 3 cannot communicate with the NAS repository; its vmhost.repositories fact does not include the NAS repository in the array, so that repository is not visible to VM host 3.
If you want to move the VM from VM host 1 to another VM host, the server manifests the following behavior:
The vDisk sizes used by the VM (on local storage and shared storage) are aggregated and compared to free space available on the repositories.
The only vDisk that is allowed to move is the moveable disk. This disk would be copied to either the shared NAS repository or the local storage on VM host 2.
VM host 3 is not considered because it does not have access to the non-moveable disk on the NAS repository. | <urn:uuid:29723826-980e-4203-95d1-67d939856ec7> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/cloudmanager2/ncm2_orch_servcons/data/bnrgqqe.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.873804 | 2,446 | 2.703125 | 3 |
Tsunami prediction technology proves its worth
Underwater sensors fed accurate projections of recent tsunami; above ground, social media got the word out
- By Patrick Marshall
- Mar 05, 2010
What a difference six years can make for applied technologies. Just ask Vasily Titov, senior tsunami modeler at the National Oceanic and Atmospheric Administration’s (NOAA) Pacific Marine Environmental Laboratory in Seattle.
When a magnitude 9.1 temblor struck Sumatra, Indonesia, in 2004 and created a 100-foot tsunami that killed more than 200,000 people, researchers were largely in the dark and there was little warning that the wall of water was racing for the vulnerable shores. When the magnitude 8.8 quake struck Chile this past Feb. 27, on the other hand, Titov’s team was providing detailed and accurate reports in hours about what potentially affected communities should expect.
The difference? A worldwide network of tsunami-detection devices.
In 2008, NOAA deployed the final two of the 39 Deep-ocean Assessment and Reporting of Tsunamis (DART) detection buoys that make up the U.S. tsunami detection system. When the 2004 earthquake struck Sumatra, only six of the buoys were in place and only half of those were actually working, Titov said, and none of the working ones were in the Indian Ocean.
Related: 2004 tsunami spurred development of NOAA warning system
Today, the entire tsunami-detection network – including devices provided by other countries – includes 50 buoys around the world and is in mostly good working order.
During the Indian Ocean tsunami, Titov said, “it was mostly frustration.” His group had to manually plug in data primarily derived from tide gauges, which are “confusing and so difficult to interpret,” he said. The team was unable to deliver a report until eight hours after the disaster had already happened.
Tide gauges can be affected by underwater geography near the shore and don’t give an accurate picture of the strength and direction of tsunamis.
DART buoys, on the other hand, are placed in deep water. A sensor is dropped from the buoy to the ocean floor and by measuring water pressure it can detect movement of a tsunami wave only a centimeter in height.
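The sensitivity comes from simple hydrostatics: the extra pressure from a passing wave is the water density times gravity times the added height, so one centimeter of water changes the bottom pressure by only about 100 pascals, roughly a thousandth of an atmosphere. A quick check of the numbers:

RHO_SEAWATER = 1025.0   # kg/m^3, approximate density of seawater
G = 9.81                # m/s^2

def pressure_change_pa(wave_height_m):
    # Hydrostatic pressure added by an extra column of water of this height.
    return RHO_SEAWATER * G * wave_height_m

def height_from_pressure_m(delta_p_pa):
    return delta_p_pa / (RHO_SEAWATER * G)

if __name__ == "__main__":
    dp = pressure_change_pa(0.01)                 # a 1 cm tsunami wave
    print(f"1 cm of water adds about {dp:.0f} Pa ({dp / 101325:.6f} atm)")
    print(f"100 Pa corresponds to about {height_from_pressure_m(100):.3f} m of water")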
As it happened, when the earthquake struck Chile, the closest buoy – one managed by Chile – was out of commission. So Titov’s team had to wait for the wave to reach the first buoy in the U.S. array, which took approximately three hours. Once it did, the team was able to make forecasts almost instantly. “The forecast played out pretty well – very well, in fact – for all locations, including Hawaii,” he said.
There is still significant work to do on the system, Titov said. When a tsunami approaches the shore, its characteristics – and the amount of damage it can do – depend to a large extent on the underwater geography. Accordingly, forecasters need to profile shorelines to make accurate predictions. So far, NOAA has created 43 out of a planned 75 local profiles. And once researchers have finished with creating the profiles, they’ll need to begin updating them. That’s because underwater geography changes over time, due to such factors as construction at ports and changes in rivers that empty into the ocean.
Titov would also like to see a second set of buoys deployed further out to sea. “Right now we have just this one line of defense,” Titov said. “It would be a good idea to have a second line, not as extensive as the first.” If a second line of buoys is placed further from shore, it would not only provide redundancy, but give researchers an easier time of sorting out the measurements of earthquake waves transmitted through the ground from measurements of tsunami waves transmitted through water.
New technologies also played a significant role in getting the word out to potentially affected communities.
The U.S. Pacific Fleet relied primarily on social networking technologies – most notably, Twitter and Facebook – to keep local media and Navy families informed about the potential effect of the approaching tsunami.
“We immediately first went out with a post on Twitter and Facebook,” said Capt. Jeff Breslau, public affairs officer with the U.S. Pacific Fleet in Honolulu. “What we found is that there was no need to do a press release.” Local media cited the Navy’s Twitter reports, and the Pacific Fleet’s Facebook page experienced a surge in activity, as local residents checked the site for updated tsunami news.
“Over the past 18 months I’ve viewed social media as the centerpiece on how we can communicate with the public and the media,” Breslau said. “But for all that I never envisioned that I just would not do a press release.”
Patrick Marshall is a freelance technology writer for GCN. | <urn:uuid:184321b0-4aac-44fe-b86a-08d06193eadd> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/03/05/noaa-tsunami-warning-system.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00014-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963567 | 1,032 | 3.046875 | 3 |
Biometric authentication is a process of establishing
the identity of a user by measuring
some aspect of that user's physical self. It is one of three basic
approaches to authentication -- the
others being use of a secret (something the user knows) or a device
(i.e., something the user has).
Examples of biometrics include:
- Finger prints -- i.e., image of the ridges on the skin of a finger.
- Hand print -- same as finger print, but whole hand.
- Finger vein scan -- i.e., image of hemoglobin flowing through blood veins inside a finger.
- Hand vein scan -- same as finger vein, but whole hand.
- Voice print -- i.e., measuring characteristics of the spoken voice.
- Face recognition -- i.e., comparing images of faces.
- Typing cadence -- i.e., comparing the pattern of key-press duration and inter-key time interval.
- Iris and retina images -- i.e., images of features of the human eye.
Biometrics are generally considered to be very convenient to use -- users
do not leave their fingers at home or forget how to use them, for example.
Biometrics are often thought of as quite secure, but there are weaknesses:
- Recordings may be replayed into scanners. For example, a finger
print sample may be acquired using a gummy substance, lifted from a glass
or other surface, and offered to a scanner. A voice print may be
surreptitiously recorded and replayed later. A photograph of a user's
face may be presented to a face scanner, etc.
- Biometrics are not revocable. If a user's biometric has been
compromised, he cannot "take it back."
- Users may fear that parts of their bodies may be physically amputated
in order to attack a system that trusts them.
When considering a biometric system, organizations normally take
the following measurements into account:
- False accept rate (FAR) -- the frequency with which the biometric
  system will incorrectly accept the wrong person as the claimed
  user.
- False reject rate (FRR) -- the frequency with which the biometric
system will incorrectly reject the right person.
- Inability to register -- the proportion of users who cannot
enroll for whatever reason (smooth skin on fingers, degenerative
eye disease, unable to speak, amputee, etc.).
Typical values for each of the above three rates are on the order of
0.1% to 2%.
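Given match scores from genuine users and from impostors, FAR and FRR are simple counting exercises at a chosen decision threshold, and raising the threshold trades a lower FAR for a higher FRR. The sketch below uses made-up scores purely for illustration.

def far_frr(genuine_scores, impostor_scores, threshold):
    # FAR: fraction of impostor attempts accepted (score >= threshold).
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts rejected (score < threshold).
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

if __name__ == "__main__":
    genuine  = [0.91, 0.88, 0.95, 0.79, 0.97, 0.85, 0.92, 0.90]
    impostor = [0.35, 0.52, 0.48, 0.81, 0.40, 0.30, 0.44, 0.56]
    for t in (0.6, 0.8, 0.9):
        far, frr = far_frr(genuine, impostor, t)
        print(f"threshold {t:.1f}: FAR {far:.1%}  FRR {frr:.1%}")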
Return to Identity Management Concepts | <urn:uuid:c2efb68b-2cca-482d-a60d-954be71c88a7> | CC-MAIN-2017-04 | http://hitachi-id.com/resource/concepts/biometric-authentication.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00500-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888254 | 545 | 3.546875 | 4 |
Hybrid applications, a combination of native code and HTML5, have become the dominant force in app development – a trend that is expected to continue for the next few years. One of the biggest reasons for this growth is the portability that HTML5 brings to cross-platform development.
Native applications are executables that run on a given device. They’re closely tied to the underlying operating system, so to develop an app for Android and iOS requires two different sources built from the ground up.
The economics of mobile device app development dictates that developers support a cross platform approach, so developing separate native apps for each OS is usually not an option.
According to an article at Slashgear.com, industry research firm Gartner expects explosive growth in the mobile app market. Over half of the mobile app market will consist of hybrid apps.
The use of HTML5 allows app developers to make part of their apps platform independent. As a standardized language, anything developed in HTML5 should appear and behave the same way whether it runs on Android, BlackBerry, Windows Phone or iOS.
This saves significant time from an application development standpoint, as the HTML5 part of the hybrid app only has to be written once.
Unfortunately, there is no getting around the native application disadvantages. Many applications require functionality dependent on the operating system that HTML5 cannot deliver.
HTML5, in its own right, is a growing trend. A Business Insider article cites September 2012 statistics from a Kendo UI study showing significant HTML5 development with room for growth. Sixty-three percent of developers already develop with HTML5 and 31 percent plan to start using it.
Portable code has been attempted in the past, most notably with the C/C++ programming languages, which were designed for that purpose. C++, effectively a superset of C, has been defined according to an ISO/IEC standard since 1998.
The popularity of Windows stood in the way of C/C++ achieving the goal of portability in the PC environment. It took over 100 lines of C code to create a window that said 'Hello World,' which led to the growth of Visual Basic as the preferred application development tool for Windows PCs.
Once Windows became the dominant operating system in the PC market, there wasn't any compelling reason for portability.
The mobile device market is different in 2013. With several different operating systems competing for market share, app developers that want their apps widely distributed must support at least some cross-platform development. Industry data suggests this will continue.
Application development has never been so wide open.
Edited by Braden Becker | <urn:uuid:bd3ebde2-5312-482a-bd1e-9634d4b4216e> | CC-MAIN-2017-04 | http://www.html5report.com/topics/html5/articles/326299-hybrid-applications-growth-popularity-expected-continue.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946142 | 555 | 2.59375 | 3 |
Bernal J.,University of Valladolid |
Garrido-Bailon E.,Centro Apcola Regional |
Del Nozal M.J.,University of Valladolid |
Gonzalez-Porto A.V.,Centro Apcola Regional |
And 5 more authors.
Journal of Economic Entomology | Year: 2010
In the last decade, an increase in honey bee (Apis mellifera L.) colony losses has been reported in several countries. The causes of this decline are still not clear. This study was set out to evaluate the pesticide residues in stored pollen from honey bee colonies and their possible impact on honey bee losses in Spain. In total, 1,021 professional apiaries were randomly selected. All pollen samples were subjected to multiresidue analysis by gas chromatography-mass spectrometry ( MS ) and liquid chromatography-MS; moreover, specific methods were applied for neonicotinoids and fipronil. A palynological analysis also was carried out to confirm the type of foraging crop. Pesticide residues were detected in 42% of samples collected in spring, and only in 31% of samples collected in autumn. Fluvalinate and chlorfenvinphos were the most frequently detected pesticides in the analyzed samples. Fipronil was detected in 3.7% of all the spring samples but never in autumn samples, and neonicotinoid residues were not detected. More than 47.8% of stored pollen samples belonged to wild vegetation, and sunflower (Heliantus spp.) pollen was only detected in 10.4% of the samples. A direct relation between pesticide residues found in stored pollen samples and colony losses was not evident accordingly to the obtained results. Further studies are necessary to determine the possible role of the most frequent and abundant pesticides (such as acaricides) and the synergism among them and with other pathogens more prevalent in Spain. © 2010 Entomological Society of America. Source | <urn:uuid:523b24bf-b644-4cff-af56-5e3db46612e9> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/centro-apcola-regional-1325662/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944225 | 408 | 2.609375 | 3 |
What is the difference between Knowledge and Information?
Knowledge consists of facts, truths, and beliefs, perspectives and concepts, judgments and expectations, methodologies and know-how. Knowledge is accumulated and integrated and held over time to handle specific situations and challenges.
Information consists of facts and data organized to describe a particular situation or condition. We use knowledge to determine what a specific situation means. Knowledge is applied to interpret information about the situation and to decide how to handle it.
Difference between Information & Knowledge
To illustrate the difference between information and knowledge, let us take an example. A customer contacts his broker to conduct a transaction and the distinctions between information and knowledge for this interchange are:
Customer: "I have an account with you, its number is 76190. What is the balance in my account?" (This is INFORMATION)
The customer-care executive possesses KNOWLEDGE on how to operate her workstation, how to talk to the customer, how to verify that the caller is an authorized person, how to interpret the customer's request, how to interpret account data, and how to explain it to the customer. That knowledge may be considered "How-to" knowledge. In addition, the executive possesses (or can obtain from others or from support systems) other kinds of knowledge such as concepts about customers, customer accounts, and brokerage in general. The executive obtains from her system INFORMATION such as: the account holder's name, the needed password, the type of account, account restrictions, and account balances.
The above is a practical example of the difference between information and knowledge. Today's view of knowledge is changing. Knowledge is not something that is stored in the brain. Knowledge is created in a situation, and is never again used in exactly the same way. This is called "situatedness" or "situated action". As an example, think about a ballet dancer dancing on stage. There is no symbolic knowledge about the dance stored in the brain of the dancer. It is created while dancing, listening to the music, feeling the music and the audience. It will never be the same dance again. We can represent knowledge as information (i.e., symbols), but that is *not* the same as knowledge. Knowledge is fluid, tacit, and forever changing. We cannot recall knowledge as we can recall information; we can only experience a situation as similar and react to it in a similar way. A knowledge-based system does not contain knowledge; it represents knowledge as information that can be applied dynamically by the system.
Knowledge vs Information
Information is static; knowledge is information in "knowledge representation" form (conceptual models, objects, frames, constraints, cases, rules, graphs, etc.) combined with different kinds of reasoning (decision making, learning, etc.). Knowledge has an environment and can be shared (information can be shared too, but only as data). Knowledge management also includes organisation, strategy, "corporate" decisions, and a "corporate model".
Information is not knowledge until and unless it is applied effectively.
Information vs Knowledge vs Wisdom
- "Information" is "raw", i.e. un-acted upon by any receiver;
- "Knowledge" is information acted upon cognitively, i.e. transformed into some conceptual framework and hence manipulable and usable for other cognitive uses;
- "Wisdom" is applied knowledge, i.e. knowledge along with the common (or uncommon) sense to know when and how to use it.
The interesting distinction is between knowledge and wisdom. By this notion, "knowledge" connotes a solitary action, capable of being taken in the abstract by any one individual. The addition of wisdom implies the addition of experience. Experience is a cumulative matter; it may refer to an individual's own experience, or to the collective experience of more than one individual.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:7e0b05a5-12c1-47ce-b773-d356617fe2b4> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-914.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00124-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945184 | 867 | 3.421875 | 3 |
Have you ever been curious as to what information the government has stored about you and your travel records? A Passenger Name Record (PNR) is a computerized travel record created by airlines or travel agencies for both domestic and international flights, as well as hotel bookings, car rentals, cruises, and train trips. Your PNR, which is given to U.S. Customs and Border Protection (CBP) if you travel internationally, can include details like your un-redacted credit card number or IP addresses. As Ars Technica’s Cyrus Farivar found out, your PNR is just another example of the government’s “collect it all” mentality.
Farivar submitted a Freedom of Information Act request to CBP for his PNR; he was eventually given 76 pages of data covering his travel from 2005 to 2013. He said his PNRs included “every mailing address, email, and phone number” he ever used, as well as some PNRs listing the IP address he used when buying the ticket, his full credit card number stored in the clear, and notes jotted down by airline call center employees “even for something as minor as a seat change.”
After he consulted travel writer Edward Hasbrouck, Farivar was told, “PNRs like mine are created for domestic flights, too, but that it's only for international travel that data is routinely given to CBP.” He also learned that every notation made by an airline call center employee, for things such as seat changes or even special needs requests, can stay in your permanent file kept by DHS.
Hasbrouck has written extensively about what’s in a PNR and about Computerized Reservation System databases.
If you make your hotel, car rental, cruise, tour, sightseeing, event, theme park, or theater ticket bookings through the same travel agency, Web site, or airline, they are added to the same PNR. So a PNR isn't necessarily, or usually, created all at once: information from many different sources is gradually added to it through different channels over time.
When a ticket is issued, that is recorded in the PNR; if it's an e-ticket, the actual "ticket", as defined by the airline, is the electronic ticket record in the PNR. When you check in, the claim check numbers and the weights of your bags are added to the PNR. If you don't show up for a flight on which you are booked, that fact is logged in the PNR.
Any additions, changes, cancellations, seat assignment or special needs requests can also be added to the PNR. Hasbrouck explained, "The bottom line is that PNRs contain a great deal of confidential and sensitive information deserving of strong privacy protection, but not necessarily even the most basic information needed for positive identification or 'profiling' of travelers."
The amount of personal and sensitive data collected in PNRs has been an area of concern for some privacy watchdogs, like EPIC. The PNR could include "the passenger's full name, date of birth, home and work address, telephone number, email address, credit card details, IP address if booked online, as well as the names and personal information of emergency contacts." A PNR could also contain "detailed information on patterns of association between travelers," as well as sensitive information like "religious meal preferences and special service requests that describe details of physical and medical conditions (e.g., 'Uses wheelchair, can control bowels and bladder')."
Farivar found out that after booking a flight with Travelocity, the PNR included "a huge amount of information," like his full credit card number. Storing credit card numbers in the clear is a breach of PCI data security standards (pdf).
“Why isn’t the government complying with even the most basic cybersecurity standards?” asked Fred Cate, a law professor at Indiana University. “Storing and transmitting credit card numbers without encryption has been found by the Federal Trade Commission to be so obviously dangerous as to be ‘unfair’ to the public. Why do transportation security officials not comply with even these most basic standards?”
Cate also told Farivar:
"No wonder the government can’t find needles in the haystack—it keeps storing irrelevant hay. Even if the data were fresh and properly secured, how is collecting all of this aiding in the fight against terrorism? This is a really important issue because it exposes a basic and common fallacy in the government’s thinking: that more data equates with better security. But that wasn’t true on 9/11, and it still isn’t true today. This suggests that US transportation security officials are inefficient, incompetent, on using the data for other, undisclosed purposes. None of those are very encouraging options."
The government may not have wanted Farivar to see what his PNRs contained, as he had to appeal his FOIA request. But it's not just PNRs with sensitive information that DHS/CBP can access. An investigation by the Toronto Star found that thousands of Canadians, who were never convicted of a crime, are listed in massive police databases that are accessible to U.S. border authorities. Toronto police had also been accused of "disclosing the mental health records it logs into Canada’s national police database," and then sharing the sensitive medical records with U.S. border authorities, ultimately resulting in Canadians being blocked from entering the U.S.
CBP claims PNR data is kept for five years, but as Farivar found out after seeing nine years of his travel records, "We now live in a world where it’s increasingly difficult to prevent the authorities from capturing information on one’s movements or communications." Indeed, it's part of the "collect it all" mentality…just in case you – or someone you know or sat by during travel – might turn out to be a crook or terrorist. | <urn:uuid:879c9009-6a7c-4c1c-b688-815f01738735> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2456104/microsoft-subnet/your-pnr-tells-government-your-ip-email-credit-card-call-center-notes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00242-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968881 | 1,258 | 2.78125 | 3 |
eWEEK 30: The Linux/Apache/MySQL/PHP (LAMP) combination provided a reliable set of servers, operating systems, languages and tools that helped the Web rapidly grow into what it is today.
For much of computing history, application delivery came by way of proprietary, vertically integrated application stacks that were designed and engineered by IT vendors. In the late 1990s that changed.
The Linux open-source operating system emerged in the early 1990s just in time to serve as an important tool that would drive the nascent Internet. But an operating system alone isn't enough to define a platform. What was needed was the ability to deliver applications, especially Web applications and services, that would support the rapid growth of the Internet. That's how the LAMP stack came to be.
LAMP is an acronym that stands for Linux, Apache, MySQL and PHP, though the 'P' can also stand for Perl or Python. LAMP is one of the defining success stories for Web application development, Linux and the open-source movement that eWEEK has covered since the rise of the Internet.
Linux is the foundational bare-metal operating system on which the stack runs. The Apache web server first came on the scene in 1995 just as global Web use was starting to grow explosively, tracing its roots back to the very first NCSA HTTPd webserver. From April 1996 to the present day, the open-source Apache HTTP Server has held the enviable distinction of being the most widely deployed Web server on the planet.
Initially, Apache was only available on Linux and other Unix variants, though it has been available for Windows since 1997. Installing Apache on Linux-based servers is the most typical deployment.
A Web server alone is only enough for static Web page delivery, and that's where the other pieces of the LAMP stack come into play. The MySQL database also debuted in 1995, providing an open-source database that was able to run on top of Linux and connect with the Apache Web server.
The final piece of the LAMP puzzle initially came in the form of the open-source PHP programming language. As is the case with both Apache and MySQL, PHP's roots date to 1995, though it was with the PHP 3 release in 1997 that the language began to gain real traction. Once PHP could be integrated directly with Apache to run together on a Linux server, the LAMP stack was born as the serendipitous confluence of developer needs and technology.
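As the article notes, the 'P' can equally stand for Python, so the dynamic-page idea behind LAMP can be sketched as a small WSGI application (which Apache can host via mod_wsgi) querying a MySQL database. This is a hypothetical, minimal illustration only: the host, credentials, database and table names are placeholders, and the PyMySQL driver is assumed to be installed.

```python
# Minimal WSGI app illustrating the LAMP pattern with Python as the "P".
# Apache would typically serve this through mod_wsgi; run it standalone here.
from wsgiref.simple_server import make_server
import pymysql  # assumed installed: pip install pymysql

def application(environ, start_response):
    conn = pymysql.connect(host="localhost", user="web", password="secret",
                           database="demo")  # placeholder credentials
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT title FROM articles ORDER BY id DESC LIMIT 5")
            rows = [title for (title,) in cur.fetchall()]
    finally:
        conn.close()
    body = ("<h1>Latest articles</h1>" +
            "".join(f"<p>{title}</p>" for title in rows)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, application).serve_forever()
```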
"Just as Linus Torvalds (creator of Linux) didn’t set out to create a kernel that would run on multiple architectures and power everything from cell phones to stock exchanges, no one set out to create an open source stack that would revolutionize the software industry," Amanda McPherson, vice president of marketing and developer programs at the Linux Foundation told eWEEK
McPherson added that it just made sense for developers and companies to build infrastructure with a fully open-source stack, allowing for innovations, reduced cost and faster development.
"As more people saw how the stack worked together and used it together, it just kept on getting better and more integrated," McPherson said.
While the initial LAMP stack was a natural evolution, there were and still are commercial vendors that benefit from helping to build and extend the stack. One of those vendors is Red Hat, which markets Red Hat Linux and related products. Marty Wesley is a senior principal product marketing manager at Red Hat and was with the company during its early days, seeing first-hand the importance of LAMP to Red Hat's growth.
Wesley told eWEEK that the LAMP stack was something that really evolved naturally as developers determined what tools they needed to build their applications.
"But once it became clear that [LAMP] was a popular solution, the commercial vendors moved to standardize and enhance the stack in various ways," Wesley said. | <urn:uuid:d1663e46-ad8d-4ff6-b170-c7658cbe6051> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/eweek-at-30-the-lamp-stack-switches-on-large-scale-web-development.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00454-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966749 | 809 | 2.59375 | 3 |
Most days it seems like keeping any sort of data private is a pipe dream.
There are a variety of research efforts underway to keep private data private, but it may be too little too late, some experts say.
Despite that notion, the researchers at DARPA will next month go over a program the agency says will help develop the “technical means to protect the private and proprietary information of individuals and enterprises.”
The program is named after Louis Brandeis, an associate Supreme Court Justice who was arguably the world’s first privacy champion, having helped pen “The Right to Privacy” for the Harvard Law Review in 1890, which is still the basis for a number of privacy protections in the US.
+More on Network World: 26 of the craziest and scariest things the TSA has found on travelers+
DARPA said: “The ability to analyze large amounts of aggregated personal data can help businesses optimize online commerce, medical workers address public health issues, and governments interrupt terrorist activities. However, numerous recent incidents involving the disclosure of data have heightened society’s awareness of the vulnerability of private information within cyberspace. There is so much data that it is currently infeasible for individuals or enterprises to control it in a meaningful way with the information technologies available today. “
DARPA did not detail the technical aspects of what Brandeis will be made of, but will go over the program and determine interest in it at a Proposer’s Day event on March 12.
DARPA isn’t the only organization with privacy protection on its drawing board.
For example, IBM recently said it would offer a technology it says uses a cryptographic algorithm to encrypt the certified identity attributes of a user, protecting privacy and enhancing security. Known as Identity Mixer, the technology basically prevents third parties or those looking to steal personal information from ever accessing such data in the first place by revealing only selected data to service providers.
The White House just last month issued its Privacy Bill of Rights, which among other things requires businesses that collect personal data to describe their privacy and security practices and give consumers control over their personal information.
The IDG News Service wrote: “Even though responsible companies provide us with tools to control privacy settings and decide how our personal information is used, too many Americans still feel they have lost control over their data,” the White House said. “Fears about identity theft, discrimination, and the trade in sensitive data without permission could erode trust in the very companies and services that have made us better connected, empowered, and informed.” The White House proposal, however, has been met with criticism from advocates who say it doesn’t go far enough in some cases and lets companies off the hook too easily.
Meanwhile, the Federal Trade Commission recently rolled out a report that said that in order to best reap the benefits that myriad Internet-connected devices can offer, businesses need to better enhance security and protect consumers’ privacy.
The sheer volume of data that even a small number of devices can generate is stunning: one participant in the workshop indicated that fewer than 10,000 households using the company’s Internet of Things home-automation product can “generate 150 million discrete data points a day” or approximately one data point every six seconds for each household, the report states.
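That per-household figure follows from simple arithmetic; the quick check below treats the report's "fewer than 10,000 households" as roughly 10,000, which is an assumption for illustration.

```python
# Back-of-the-envelope check of the "one data point every six seconds" figure.
points_per_day = 150_000_000
households = 10_000  # "fewer than 10,000" in the report; treated here as roughly 10,000

points_per_household = points_per_day / households        # 15,000 points per day
seconds_per_point = 24 * 60 * 60 / points_per_household   # 86,400 s / 15,000

print(points_per_household)          # 15000.0
print(round(seconds_per_point, 2))   # 5.76 -> roughly one every six seconds
```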
“…the IoT presents a variety of potential security risks that could be exploited to harm consumers by: enabling unauthorized access and misuse of personal information; facilitating attacks on other systems; and creating risks to personal safety. Privacy risks may flow from the collection of personal information, habits, locations, and physical conditions over time….perceived risks to privacy and security, even if not realized, could undermine the consumer confidence necessary for the technologies to meet their full potential, and may result in less widespread adoption,” the FTC stated.
<urn:uuid:eb16dc9c-3bef-4152-8394-578e1e050be5> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2892447/security0/darpa-to-target-cyber-privacy-protection-tech.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00270-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944956 | 804 | 2.671875 | 3
Some stars may host multiple generations of planets, a dazzling new photo suggests. The newly released image, which was captured by the Very Large Telescope Interferometer (VLTI) in Chile, shows a dusty disk around an old double star called IRAS 08544-4431, which lies about 4,000 light-years from Earth in the southern constellation of Vela (The Sails).
This disk is very similar to the planet-forming structures commonly observed around young stars. While it's not clear whether planets actually do take shape around older stars, the new photo — the sharpest ever taken of such a disk around a mature star — hints that this is a possibility, researchers said.
"Our observations and modeling open a new window to study the physics of these disks, as well as stellar evolution in double stars," study co-author Hans Van Winckel, of the Instituut voor Sterrenkunde in Belgium, said in a statement. "For the first time, the complex interactions between close binary systems and their dusty environments can now be resolved in space and time."
The scientists used several VLTI telescopes, an associated instrument called the Precision Integrated-Optics Near-infrared Imaging ExpeRiment (PIONIER) and a new high-speed infrared detector to take the photo.
"We obtained an image of stunning sharpness — equivalent to what a telescope with a diameter of 150 meters [490 feet] would see," study team member Jacques Kluska, of Exeter University in England, said in the same statement. "The resolution is so high that, for comparison, we could determine the size and shape of a 1-euro coin seen from a distance of 2,000 kilometers [1,240 miles]."
The IRAS 08544-4431 system consists of an old red giant star, as well as a nearby, younger, "normal" star. The dust that comprises the newly imaged disk was expelled by the red giant, researchers said.
"We were also surprised to find a fainter glow that is probably coming from a small accretion disk around the companion star," said study lead author Michael Hillen, also of the Instituut voor Sterrenkunde. "We knew the star was double, but weren't expecting to see the companion directly," Hillen added. "It is really thanks to the jump in performance now provided by the new detector in PIONIER, that we are able to view the very inner regions of this distant system."
Hillen and his colleagues are publishing their results in the journal Astronomy & Astrophysics. The VLTI is located at the European Southern Observatory's Paranal Observatory in northern Chile.
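The two quoted comparisons can be roughly sanity-checked with the standard diffraction-limit formula θ ≈ 1.22 λ/D. In the sketch below, the near-infrared wavelength (typical of an instrument like PIONIER) and the 1-euro coin diameter are assumptions for illustration, not figures taken from the article.

```python
import math

wavelength = 1.65e-6      # m; assumed near-infrared (H-band) observing wavelength
baseline = 150.0          # m; "equivalent telescope diameter" quoted in the article
coin_diameter = 23.25e-3  # m; assumed diameter of a 1-euro coin
distance = 2_000e3        # m; 2,000 kilometres

theta_resolution = 1.22 * wavelength / baseline  # diffraction-limited resolution, radians
theta_coin = coin_diameter / distance            # angular size of the coin, radians

mas_per_rad = math.degrees(1) * 3600 * 1000      # milliarcseconds per radian
print(f"resolution ~ {theta_resolution * mas_per_rad:.1f} mas")  # ~2.8 mas
print(f"coin size  ~ {theta_coin * mas_per_rad:.1f} mas")        # ~2.4 mas, comparable
```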
This star is surrounded by a disc of gas and dust—such discs are called protoplanetary discs as they are the early stages in the creation of planetary systems. This particular disc is seen nearly edge-on, and its appearance in visible light pictures has led to its being nicknamed the Flying Saucer.
The astronomers used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe the glow coming from carbon monoxide molecules in the 2MASS J16281370-2431391 disc. They were able to create very sharp images and found something strange—in some cases they saw a negative signal! Normally a negative signal is physically impossible, but in this case there is an explanation, which leads to a surprising conclusion.
Lead author Stephane Guilloteau takes up the story: "This disc is not observed against a black and empty night sky. Instead it's seen in silhouette in front of the glow of the Rho Ophiuchi Nebula. This diffuse glow is too extended to be detected by ALMA, but the disc absorbs it. The resulting negative signal means that parts of the disc are colder than the background. The Earth is quite literally in the shadow of the Flying Saucer!"
The team combined the ALMA measurements of the disc with observations of the background glow made with the IRAM 30-metre telescope in Spain. They derived a disc dust grain temperature of only -266 degrees Celsius (only 7 degrees above absolute zero, or 7 Kelvin) at a distance of about 15 billion kilometres from the central star. This is the first direct measurement of the temperature of large grains (with sizes of about one millimetre) in such objects.
This temperature is much lower than the -258 to -253 degrees Celsius (15 to 20 Kelvin) that most current models predict. To resolve the discrepancy, the large dust grains must have different properties than those currently assumed, to allow them to cool down to such low temperatures. "To work out the impact of this discovery on disc structure, we have to find what plausible dust properties can result in such low temperatures. We have a few ideas—for example the temperature may depend on grain size, with the bigger grains cooler than the smaller ones. But it is too early to be sure," adds co-author Emmanuel di Folco (Laboratoire d'Astrophysique de Bordeaux).
If these low dust temperatures are found to be a normal feature of protoplanetary discs this may have many consequences for understanding how they form and evolve. For example, different dust properties will affect what happens when these particles collide, and thus their role in providing the seeds for planet formation. Whether the required change in dust properties is significant or not in this respect cannot yet be assessed.
Low dust temperatures can also have a major impact for the smaller dusty discs that are known to exist. If these discs are composed of mostly larger, but cooler, grains than is currently supposed, this would mean that these compact discs can be arbitrarily massive, so could still form giant planets comparatively close to the central star. Further observations are needed, but it seems that the cooler dust found by ALMA may have significant consequences for the understanding of protoplanetary discs.
This research was presented in a paper entitled "The shadow of the Flying Saucer: A very low temperature for large dust grains", by S. Guilloteau et al., published in Astronomy & Astrophysics Letters.
News Article | April 16, 2016
Astronomers say they've recently discovered an elusive dwarf galaxy orbiting our own Milky Way. There are about four dozen galaxies that we know of circling our own, New Scientist reported. Our newly named neighbor, Crater 2, is the fourth largest, according to a paper published in the Monthly Notices of the Royal Astronomical Society on Wednesday. (The biggest satellite galaxy of the Milky Way is the Large Magellanic Cloud, nearly 200,000 light years away.)
Crater 2 sits some 400,000 light years away, said paper co-author Dr. Vasily Belokurov, an astrophysicist at the University of Cambridge's Institute of Astronomy. "This is indeed a very rare discovery," Belokurov told The Huffington Post. "A galaxy like Crater 2 is a sort of invisible object."
Researchers at Cambridge’s Institute of Astronomy discovered the dwarf galaxy in January when they used a computer algorithm to pinpoint where there might be a significant clustering of stars in images taken of space beyond our Milky Way. They identified a never-before-seen cluster of stars -- and concluded that this was evidence of a dwarf galaxy.
Analysis of the data revealed that Crater 2 is roughly the same age as the universe, and its angular size is at least twice that of our own moon. "We have found many similar objects in the last 10 years, but never such a large beast," Belokurov said. "It is orders of magnitude less luminous compared to most objects of similar size. It is extremely diffuse. We believe it was born that fluffy. But why, we do not yet know."
Dr. Jay Pasachoff, an astronomer at Williams College in Massachusetts, who was not involved in the new discovery, said that to find such a faint and diffuse galaxy is a nice piece of research. "It is always fun to discover a nearby neighbor about which we didn't know before, and the dwarf galaxy Crater 2 falls into that category," said the co-author of The Cosmos. "It seems to be aligned with a handful of other astronomically nearby objects, which may be teaching us how our group of galaxies formed."
The same research team discovered a treasure trove of nine new dwarf galaxies orbiting our Milky Way last year. At the time, Dr. Sergey Koposov of Cambridge’s Institute of Astronomy, who led that previous study, said in a statement, "The discovery of so many satellites in such a small area of the sky was completely unexpected ... I could not believe my eyes."
Until 10 years ago, only a dozen dwarf satellite galaxies had been identified around the Milky Way. But Belokurov said that he and his colleagues have since found several tens more. "In the last two years alone, the number of known Milky Way satellite galaxies has doubled, largely thanks to the Dark Energy Camera on the Blanco 4 meter telescope in Chile," Dr. Evan Kirby, assistant professor at Caltech Department of Astronomy & Astrophysics, who was not involved in the research, told HuffPost.
"These galaxies are intense concentrations of dark matter," he added. "If there's a place in the universe where we can look to learn about dark matter, it's dwarf galaxies. How is it distributed? What is it made of? Future observations, especially spectroscopy, will help answer those questions."
Dwarf galaxies are the most numerous type of galaxy in the universe. "While we cannot say for sure this particular dwarf is the oldest in the universe, dwarf galaxies in general are," Belokurov said.
"They are the first systems to be assembled, so they contain the information about the gas densities and the efficiencies of turning that gas into stars," he added. "As we have seen with the follow-up studies of similar objects, many stars in them look like the direct descendants of the very first stars in the universe."
Figure caption: The central map shows the distribution on the sky of the BOSS Great Wall. The area subtended by this structure is the equivalent of 400 times the angular size of the Moon, and it is situated at more than 4 thousand million light years away from us. On the map, each point represents a galaxy, while the colours represent the density of the surroundings, so the red areas correspond to the regions with the maximum concentration of galaxies. In the four RGB images from the SDSS (Sloan Digital Sky Survey), each red dot is a galaxy chosen for study (surrounded by other galaxies at different distances). For comparison, the combined angular size of these four detailed images is hardly one hundredth of the angular size of the Moon, very tiny compared to the angular size of the complete map. Credit: Alina Streblyanska (IAC).
A group of researchers, among them scientists from the IAC, has discovered one of the most distant and massive "hyperclusters" of galaxies found thus far: the BOSS Great Wall (BGW). According to Heidi Lietzen, the principal investigator of this research, there is probably no other similar system so clearly isolated and with a comparable size.
As this astrophysicist explains, "superclusters of galaxies are the largest structures in the universe, formed by groups of galaxies bound together by their gravitational interactions. These huge structures, with sizes between 10 and 50 megaparsecs (30 to 150 million light years), can host thousands of galaxies.
Galaxies started to form in the early universe, in those regions where the density of matter was somewhat higher than average. Slowly, all the matter began joining and moving toward the denser zones, where the superclusters formed after a long process. They are young structures compared with other systems such as normal galaxy clusters, because it took millions of years for them to group together into a single system. In this way, the structure of the universe as a whole can be seen as the "cosmic web" predicted by Yakov Zeldovich, in which the material of the universe is organized within interconnected filaments around voids which have a much lower density.
The results of the study, published today in the journal Astronomy & Astrophysics, have shown the presence of the BGW system, with a diameter of some 900 million light years. It is formed by two superclusters and two "walls" of galaxies, probably bigger in volume and diameter than any other known hypercluster. The structure as a whole contains some 830 galaxies, which make it one of the most massive hyperclusters known. The Sloan Great Wall, the most similar known hypercluster of galaxies, which is 160 Mpc long, has about half the mass of the BGW.
"To detect the BOSS Great Wall hypercluster, measurements were made of 500,000 galaxies to reconstruct the space distribution of the luminous density. The BGW is clearly the biggest isolated structure in volume which has been studied in space," commented José Alberto Rubiño, one of the other authors of the study. The sample was taken from the Sloan Digital Sky Survey (SDSS), a project which has mapped and catalogued the universe to study it in depth.
These enormous structures give us valuable information to compare with cosmological models. They can even challenge the numerical simulations that describe the formation and evolution of structures in the universe, because these simulations ought to be able to predict structures as big as this.
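As a quick unit check on those figures (using the standard conversion of roughly 3.26 million light years per megaparsec; the rounding below is mine, not from the article):

```python
MLY_PER_MPC = 3.262  # million light years per megaparsec (standard conversion)

for mpc in (10, 50, 160):  # supercluster size range and the Sloan Great Wall length
    print(mpc, "Mpc ~", round(mpc * MLY_PER_MPC), "million light years")
# 10 Mpc ~ 33, 50 Mpc ~ 163, 160 Mpc ~ 522 million light years;
# the quoted "30 to 150 million light years" is a rounded range, and the
# ~900-million-light-year BOSS Great Wall corresponds to roughly 275 Mpc.
```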
The superclusters and hyperclusters are very useful for understanding how galaxies have evolved, because this evolution should be quicker in high-density environments. "Studying hyperclusters can give us clues about how to predict just when and how matter groups together, and offers new challenges to existing cosmological models," says Alina Streblyanska, an astrophysicist at the IAC.
More information: H. Lietzen et al., Discovery of a massive supercluster system at z ~ 0.47, Astronomy & Astrophysics (2016). DOI: 10.1051/0004-6361/201628261
News Article | April 7, 2016
What is the enigmatic planet nine like? Is it a super-Earth, or more like its cosmic neighbors, Uranus and Neptune? How big is it? What’s its temperature? The hypothetical planet is shrouded in mystery. However, that doesn’t stop inquisitive minds. The aforementioned questions have been pondered by University of Bern researchers Esther Linder and Christoph Mordasini since January, when researchers announced evidence of a ninth planet. Linder and Mordasini, in their study published in Astronomy & Astrophysics, operated under the assumption that planet nine would be a smaller version of Uranus and Neptune. With that in mind, they traced the thermodynamic evolution of such a planet since the solar system’s formation, around 4.6 billion years ago. The researchers said they believe the planet is around 700 astronomical units away. According to the University of Bern, the researchers concluded planet nine’s mass is equal to 10 Earth masses, has a radius that measure 3.7 Earth radii, and a temperature of 47 Kelvin (minus 226 Celsius). “This means that the planet’s emission is dominated by the cooling of its core, otherwise the temperature would only be 10 Kelvin,” said Linder in a statement. “Its intrinsic power is about 1,000 times bigger than its absorbed power.” The researchers’ planet nine model consisted of an iron core, which was wrapped in a silicate mantle followed by a water ice layer, and finally in a hydrogen and helium envelope. The researchers also attempted to figure out why planet nine has evaded human detection. “They calculated the brightness of smaller and bigger planets on various orbits,” according to University of Bern. “They conclude that the sky surveys performed in the past had only a small chance to detect an object with a mass of 20 Earth masses or less, especially if it is near the farthest point of its orbit around the sun.” However, a planet with a mass more than 50 Earth masses would be detectable by extant telescopes, such as NASA’s Wide-field Infrared Survey Explorer. Astronomers may be close to figuring out planet nine’s location. According to Universe Today, recent evidence shows that small perturbations in Cassini’s orbit around Saturn may be caused by the ninth planet. | <urn:uuid:cfbf3057-c2ed-4e49-8a09-489b1c430d18> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/astronomy-astrophysics-294539/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00390-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94881 | 3,466 | 3.5625 | 4 |
You’re a train conductor speeding along when you suddenly see five people tied up on the tracks in front of you. You don’t have enough time to stop, but you do have enough time to switch to an alternate track. That’s when you see there’s one person tied up on the alternate track. Do you pull the lever to make the switch, or stay the course?
Any college graduate who has ever stepped foot in an introductory philosophy course is likely to recognize this problem immediately. The question is a classic jumping off point for discussions about utilitarianism, consequentialism and fairness. Subsequent twists on the question — what if the one person standing on the other track was a child? — come with new moral dilemmas and further abstract discussions. There is no clear correct answer. In this ambiguity lies conversation.
The tech community as a collective whole is now facing a similar conundrum when it comes to programming machines. This time, though, the philosophical decisions aren’t theoretical — and nobody will be saved by the bell. With the advent of smart machines with learning capabilities powered by artificial intelligence, we need to reach a final consensus for a very practical purpose: We need to teach robots how to be moral.
Philosophical theory is now reality
The situation today is marked by groups of computer engineers sitting around discussing age-old philosophical problems. Artificial intelligence is advancing at an unprecedented rate due to affordable computational power and a concentrated focus on the field by tech giants such as Google, Facebook and IBM. Industry insiders predict that self-driving cars will edge onto the roads in five years, and drones are currently permeating everything from the industrial supply chain to farming. Questions about morality are becoming more urgent, yet remain unsolved.
Perhaps most surprising is that defining answers in regards to philosophical judgments, at least for now, is being left up to the tech community. In the 2016 policy statement concerning automated vehicles, released jointly by the Department of Transportation and the National Highway and Traffic Safety Administration, even the government organizations themselves seemed apt to admit that they simply don’t have the expertise nor the authority to create comprehensive legislation, noting that “it is becoming clear that existing NHTSA authority is likely insufficient to meet the needs of the time.” Companies like Google are practically begging for guidance and official regulations so they can move forward, but are coming up empty-handed.
A well-considered delay
Given the financial rewards of being first to market, there is certainly an urgency involved in coming to final conclusions. Yet even those who stand to benefit the most appear to be holding back. Many industry leaders are asking questions, but few are stepping forward with clear and specific proposals.
That’s a good thing. Despite newfound abilities to advance intelligent technology quickly, industry leaders should not give in to pressures to move at an unhealthy pace. Questions should come first, otherwise the industry releases poorly considered intelligence, which is a recipe for chaos.
Take, for example, an autonomous car self-driving along the road when another car comes flying through an intersection. The imminent t-bone crash has a 90 percent chance of killing the self-driving car’s passenger, as well as the other driver. If it swerves to the left, it’ll hit a child crossing the street with a ball. If it swerves to the right, it’ll hit an old woman crossing the street in a wheelchair.
Autonomous cars are sure to face this type of challenge at some point, and their creators need to decide how to program them to react to these no-win situations. Engineers need to come up with clear rules for navigating difficult situations so the robots don’t get confused and malfunction or select the wrong decision.
The easy answer would be to protect the driver at all costs. If we can assume that drivers are all selfish and would always default to the action that contains the least risk for them, wouldn’t we just replicate that in the autonomous driver model? The very fact that the decision to date has not proved easy is a good sign.
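To make the contrast between these two stances concrete, here is a deliberately oversimplified sketch of how such decision rules could be encoded. The scenario probabilities and option names are invented for illustration, a fourth self-sacrificing option (not part of the scenario above) is added to show where the policies diverge, and this is not how any actual autonomous-driving system is programmed.

```python
# Toy comparison of two candidate policies for a no-win crash scenario.
# Probabilities, options and outcomes are invented for illustration only.

options = {
    "stay_course":        {"p_occupant_dies": 0.9,  "p_others_die": 0.9},   # t-bone crash
    "swerve_left":        {"p_occupant_dies": 0.05, "p_others_die": 0.95},  # hits the child
    "swerve_right":       {"p_occupant_dies": 0.05, "p_others_die": 0.95},  # hits the woman
    "brake_into_barrier": {"p_occupant_dies": 0.7,  "p_others_die": 0.0},   # hypothetical extra option
}

def utilitarian(opts):
    """Minimise expected total fatalities; the occupant counts like everyone else."""
    return min(opts, key=lambda o: opts[o]["p_occupant_dies"] + opts[o]["p_others_die"])

def protect_occupant(opts):
    """Minimise risk to the occupant only (the 'selfish driver' model)."""
    return min(opts, key=lambda o: opts[o]["p_occupant_dies"])

print(utilitarian(options))       # brake_into_barrier (0.7 expected deaths)
print(protect_occupant(options))  # swerve_left (occupant risk 0.05)
```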
The morals of the masses
Ultimately, no matter what the experts decide, any final product and its underlying moral code must be palatable to the public at large if autonomous cars are to be a success. The MIT Media Lab, whose scientists I had the privilege to spend time with at the World Economic Forum’s annual meeting in Davos earlier this year, is struggling with how to make moral robots. One thing was very clear — there is no clear answer.
They have created a Moral Machine website tool that gives us insight as to what, exactly, the public expects and wants from autonomous cars. The website invites users to judge between two competing outcomes in an inevitable car crash, with more than a dozen different scenarios to judge.
Overall, the results showed that people strongly prefer utilitarian outcomes: the fewest total number of lives lost. These results align with other surveys where participants consistently say that a more utilitarian model for autonomous cars is a more moral one.
Herein lies the trouble: While people favor utilitarianism in the abstract, their feelings become muddied when they’re the ones who might be making the sacrifice. As reported by The Washington Post, just 21 percent of people surveyed said they were likely to buy an autonomous vehicle whose moral choices were regulated, compared to 59 percent of respondents who said they were likely to make the purchase if the vehicles were instead instructed to always save the driver’s life.
Philosophy hits the road
In an age when technology and decreased face-to-face interaction is blamed for causing people to feel dispassionate and disconnected from one another, the very fact that the discussion on robot morality is so vibrant is a clear demonstration that compassion is alive and well.
In 1942, Isaac Asimov provided one prevailing take on robot morality with the three laws of robotics, later collected in his famous book I, Robot. His outline was simple: A robot may not injure a human being or, through inaction, allow a human being to come to harm. But, as the characters discover in the stories, sometimes harm is simply unavoidable. What if the question instead mutates to what is preferable: letting the young or the old live, or sacrificing one to save many?
The most advanced technology isn’t going to be released until we as a society figure out collective answers to these puzzling questions. Governments around the world will look to the United States to set a regulatory precedent, and we need to make sure that we get things right the first time around. These are important discussions, and government leaders, tech leaders and ordinary citizens must all have a say, so that as a society, we maintain a moral system of checks and balances. There is no putting the genie back in the bottle. | <urn:uuid:7873cb17-028c-40b4-b13e-dbdc1d8f78df> | CC-MAIN-2017-04 | https://axcient.com/blog/ai-autonomous-cars-moral-dilemmas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958826 | 1,391 | 2.65625 | 3 |
The jBASE hash file systems have been found to provide a very high level of file integrity in comparison to legacy systems. This is mainly due to the jBASE hash file design. Each hash file is a separate entity, and as such can only be updated, from jBASE applications, by the relevant database driver.
Other processes or activities, such as the spooler, which is well known on some legacy systems for causing file corruption, cannot directly access the hash file systems and therefore cannot cause file corruption. Even updates to hash files cannot cause a file corruption problem on another hash file.
Unlike other systems, jBASE is not an environment or operating system in itself, and so obviates the need to recover from any kind of jBASE environment failure, as such a thing does not exist.
The only real time jBASE hash files can suffer from file corruption is when the base operating system fails completely. Even in these situations it is highly likely that actual file corruption has been avoided if the hash files were correctly sized. The reasoning is that file corruption can occur if one memory page containing a pointer to another memory page has been written to disk, and the second memory page did not make it to disk before the system failed. Or, as another example, a process was in the middle of writing to a memory-mapped file when the system failed. With correctly sized files these scenarios should be greatly reduced.
However, even if the file is not corrupted, updates performed since the last memory flush may be lost. Therefore, although the file and its integrity may have survived the system crash, the last few updates may well have been casualties.
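To make the cross-page failure mode concrete, here is a small, purely illustrative Python model: page A reaches disk holding a pointer to page B, the crash happens before B is flushed, and a later read follows the dangling pointer. This is not jBASE code and does not reflect its on-disk format.

```python
# Toy model of the corruption scenario described above. Illustrative only.

memory = {
    "A": {"record": "key1", "overflow_ptr": "B"},  # flushed in time
    "B": {"record": "key1-continued"},             # never reaches disk
}
disk = {}  # what actually reached stable storage

def flush(page_id):
    disk[page_id] = dict(memory[page_id])

flush("A")
# ... system crashes here, before flush("B") ...

def read_record(start_page):
    page = disk[start_page]
    ptr = page.get("overflow_ptr")
    if ptr is not None and ptr not in disk:
        return f"corrupt: {start_page} points at missing page {ptr}"
    return page["record"]

print(read_record("A"))  # corrupt: A points at missing page B
```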
The following commands can be used to help in the general maintenance of hash files. | <urn:uuid:b800c986-43a6-47bc-b3f4-51f726fea959> | CC-MAIN-2017-04 | http://www.jbase.com/r5/knowledgebase/manuals/3.0/30manpages/man/adv22_INTEGRITY.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00022-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964112 | 364 | 2.6875 | 3