In continuation of my earlier blog, "Test models - The way forward", let us understand how 'Model-based Testing' is currently being leveraged across the industry.

Model-based Testing (MBT) is a technique for the automatic generation of test cases using models extracted from software artifacts (i.e. system requirements specifications). This approach helps control software quality and reduce the costs of the testing process, because test cases can be generated from the software artifacts produced throughout the software development process. MBT can use models developed during any software development phase to identify and construct the set of test cases.

An MBT strategy usually addresses different levels of abstraction, a behavior model, the relationship between models and code, test case generation technology, the selection criteria for test cases, and a discussion of what can or cannot be automated when it comes to testing. MBT entails building the model, generating test cases, executing the tests, comparing actual output with expected output, and deciding upon further actions (whether to modify the model, generate more test cases, stop testing, or estimate the software reliability [quality]).

The model used is an important element in the application of MBT, as it defines the limitations of each approach based on the information it can represent about the software's structure or behavior. Sometimes a model built for a specific application domain cannot be employed in another domain. The main differences among MBT approaches are the criteria used to define the testing coverage (the subset of test cases generated from a specific model) and test generation (the steps to be accomplished). Each approach has its specific steps, with different complexity or automation levels. The steps are:

- Modeling of software behavior;
- Applying test coverage criteria to select the set of test cases;
- Generation of test cases and expected results; and
- Execution of tests and comparison of obtained results against the expected results.

Each MBT approach has specific characteristics that make it different from other approaches, and therefore a clear comparison cannot be made to define whether one approach is better than another. The behavior model must be developed carefully: its limitations and restrictions must be respected during the modeling of the software under test. Moreover, the correctness of the model is fundamental to starting the test case generation process. If the model is wrong, the generated tests will be invalid.

There is a promising future for MBT as software becomes even more ubiquitous and quality becomes the only distinguishing factor between brands. Modeling in general seems to be gaining favor, particularly in domains where quality is essential and less-than-adequate software is not an option. It is quite intuitive to use MBT for new software development, as the model evolves while the requirements are defined and elaborated. This is especially true for development where system behavior is already modeled using state diagrams. It is equally possible to leverage MBT where the software product and artifacts already exist, by building the model from existing artifacts such as use cases.

We will discuss the trends and current market scenarios in our next blog.
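To make the four steps above concrete, here is a minimal, hypothetical sketch of test generation from a state-machine model. It is not tied to any particular MBT tool; the states, events, and class names are invented for the example, and the coverage criterion is simple all-transitions coverage.

    import java.util.*;

    // A minimal behavior model: a finite state machine with named events.
    // "Every transition at least once" is the coverage criterion applied.
    public class MbtSketch {
        // Model: state -> (event -> next state); all names are invented.
        static Map<String, Map<String, String>> model = Map.of(
            "LoggedOut", Map.of("login", "LoggedIn"),
            "LoggedIn",  Map.of("logout", "LoggedOut", "browse", "Browsing"),
            "Browsing",  Map.of("home", "LoggedIn")
        );

        public static void main(String[] args) {
            // Step 1: the model above captures the software behavior.
            // Steps 2 and 3: walk the model breadth-first from the initial
            // state and emit one test case (event sequence) per transition.
            List<List<String>> testCases = new ArrayList<>();
            Deque<List<String>> work = new ArrayDeque<>();
            work.add(new ArrayList<>());              // empty path from initial state
            Set<String> covered = new HashSet<>();    // transitions already covered

            while (!work.isEmpty()) {
                List<String> path = work.poll();
                String state = run("LoggedOut", path); // state reached by this sequence
                for (Map.Entry<String, String> t : model.get(state).entrySet()) {
                    if (covered.add(state + "--" + t.getKey())) {
                        List<String> extended = new ArrayList<>(path);
                        extended.add(t.getKey());
                        testCases.add(extended);      // expected end state: t.getValue()
                        work.add(extended);
                    }
                }
            }
            // Step 4 would execute each sequence against the real system and
            // compare the observed state with the state the model predicts.
            testCases.forEach(tc -> System.out.println("test case: " + tc));
        }

        // Replays an event sequence on the model to find the resulting state.
        static String run(String state, List<String> events) {
            for (String e : events) state = model.get(state).get(e);
            return state;
        }
    }

Running this sketch produces four event sequences, one per transition in the model, which is exactly the kind of mechanical derivation of test cases and expected results that MBT automates.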
Once upon a time, most of us just used one computer. For most of us, that computer had just one operating system, and that machine held all our digital data. But those days are quickly disappearing. Fewer and fewer of us now have just one computer. Most of us have (at least) one computer that we use at home, plus one that we use at work, which might not use the same operating system as our home machine. Many of us have a computing device in our phone, which definitely has a different operating system from our desktop or laptop machines. And then there are tablets, media centers, and so on. Many of us interact with (and have our data on) 3 or 4 different machines, with several different operating systems, throughout our day. And our data may even be accessible between those devices with the help of cloud services.

There are millions of new malware samples discovered every year for Windows, so people clearly understand the need for security on that operating system. But it's not just malware that's a problem. The Internet is a dangerous place, not just because of malware. Criminals understand that valuable data does not just exist on Windows machines. And consequently, security is focused less and less on "computer security" and more on "information security." That means security products are not simply focused on finding and removing bad items, but on helping you protect your data, wherever it resides.

The other day we discussed the problems with people declaring that because security measures aren't 100% effective, you shouldn't bother with them. This week brings another example, with a report suggesting anti-virus software is "a waste of money." There are numerous problems with the test, and rather than debating this in great detail, let me sum it up in two points:

- 82 samples in a Windows environment is not statistically significant.
- VirusTotal states no less than 3 times on its About page that it is a bad idea to use VirusTotal for AV testing.

This post by Bill Brenner rebuts that test and its conclusions. It makes the point that most people have neither the expertise nor the inclination to set up security systems on their machines cobbled together from various tools and configuration changes, rather than simply using pre-packaged software.

All of us, from home users to the biggest corporations, are weighing cost versus benefit versus time and expertise. We come out on the right side of the equation if we've protected our systems just well enough not to be worth the effort for cyber-criminals, whether we do it all manually or purchase products to help us.

The way we achieve that balance is to focus on protecting our data, wherever it resides. Using anti-virus software can help, especially since modern AV software does not simply scan with signatures for known malware; it uses more advanced techniques to detect malicious behavior. And AV is a type of security technology that is available for every operating system. But it's important to use other security techniques as well: strong and unique passwords, a hardware or software firewall, plus backing up and encrypting your data, for instance. This layering of protection helps close the gaps left by any one technique.

Having AV software on just one machine misses the point of cybercrime. Your data has value, to you as well as to cyber-criminals, and it needs to be protected wherever it resides.
The world's fifth-largest country in terms of land size and South America's largest nation in terms of land and population size, Brazil is a key player in the geo-political arena and a dominant power among emerging countries. As the world's sixth largest economy, Brazil ranks third among the world's major agricultural exporters and fourth for food products. The country is the principal recipient and source of FDI (Foreign Direct Investment) in Latin America and the fifth largest recipient nation in the world. Thanks to its agricultural and oil resources, Brazil also ranks second worldwide for bio-ethanol production. Blessed with the world's largest reserves of arable but uncultivated land, Brazil has carved out its regional and international rank thanks to strong export-oriented agricultural activities, radical economic reforms and an aggressive trade policy.

Even while manufacturing and services are showing steep growth, agriculture is still a driving force of the Brazilian economy, at 5.8 percent of GDP (against 2 percent in France) and with the agribusiness share reaching 23 percent. In 2009, agriculture accounted for 19.3 percent of the labor force, or 19 million people, thus strongly contributing to poverty reduction. Agribusiness employment accounted for 2.7 percent of the labor force.

Agriculture also benefited from macroeconomic and structural reforms initiated from the 1990s onward, with the goal of better economic stability, inflation reduction and trade expansion. These reforms paved the way for competitive agricultural operations, thanks to reduced production costs (inputs) and governmental interventions to control prices for key commodities (coffee, sugar and wheat). Despite the impact of the financial crisis, agriculture's growth accelerated starting in 2003, partly thanks to a productivity model based on high mechanization, improved concentration and significant labor force reserves. Encouraged by the drive to move away from social conflicts linked to land overcrowding and toward the colonization of new production areas, investment in agricultural research boosted crop productivity by over 151 percent in 30 years.

Key factors in the growth of the fruits and vegetables market in Brazil are technological advancements, swelling population levels, strong economic growth, good availability of fruit and vegetable products, and expanding local production. Abundant arable land and a strong economy are giving a boost to the Brazilian fruits and vegetables market. Inefficient logistics and poor post-harvest management are the biggest challenges for the industry: widely distributed production facilities make it tough to transfer fruits and vegetables from one place to another, and poor rural infrastructure worsens transportation and storage.

What the report offers

The study identifies the situation of Brazil and predicts the growth of its fruits and vegetables market. The report covers fruit and vegetable production, consumption, imports and exports with prices and market trends, government regulations, growth forecasts, major companies, upcoming companies and projects, and so on. The report also discusses Brazil's economic conditions, the outlook for its current economic scenario, and the effect of recent policy changes on the economy, with the reasons for and implications on its growth. Lastly, the report is segmented by major imports and exports and by importing and exporting partners.
Video Game Development as a Degree

Video game developers have never been in such high demand, as video games become the preferred medium for a diverse array of consumers. This phenomenon is akin to the wave of film schools and film students in the '60s — although most scoffed at the time, these institutions and individuals would go on to bring film to cultural heights never thought possible. Video game developers who go to school for their craft are poised to make a similar breakthrough and prove their discipline belongs in academia. Just as Steven Spielberg and his peers were members of the first generation to go to school specifically for making movies, a new crop of talent is going to school for the sole purpose of designing video games. As with the previous generation and its new medium, there is limitless potential for video games to grow and evolve under educated leadership.

The specifics of earning such a degree still fluctuate depending on the school. Michigan State University, one of the more traditional schools to have a video game program, offers a master's degree in telecommunications, which includes a course in "serious" game design (defined as games with a purpose beyond entertainment). The degree description says these "serious" games are increasingly common in military and corporate training, which corresponds to the increased need for personnel to design such games. Students from a wide variety of academic backgrounds, from computer science to political science, are all welcome to apply for the degree.

Video game degree programs are especially popular in cities where studios have design hubs, including Los Angeles and Seattle, which ideally creates a steady stream of new employees coming into the workforce every year. An example of this phenomenon is students in the game-development degree program at the University of Southern California going to work for Electronic Arts' Los Angeles design office.

Conversely, the lack of such a pipeline can hurt an area with a thriving game industry. For example, a community such as Austin, Texas — a large game design hub — lacked any specific degree program at a nearby university. With companies acknowledging the need to hire people who graduate from specific game-related programs, the broad degrees large universities offer often aren't as attractive to employers. Austin Community College (ACC), however, is beginning to lay the groundwork for the kind of system that has worked so well elsewhere: by offering a certificate in video game design, ACC fills the need for a specific program.

The University of Central Florida, by contrast, has a specific video game program, as well as a major local work opportunity called the Florida Interactive Entertainment Academy (FIEA). In December 2006, FIEA graduated its first class of master's degree candidates, most of whom will go on to work at the nearby Electronic Arts Tiburon, Fla., studios.

The common theme with video game degree programs is that their prevalence is based on the community's need for those particular skills. It's only a matter of time, however, before the need for video game programmers reaches beyond finding someone to design the city trash cans in the next "Grand Theft Auto" entry.
By Brad Cyprus, RSPA PCI/Data Security Committee Chair

By now, everyone has heard of the credit card breach at Target, in which thieves managed to steal 40 million credit cards. Because this breach is so large, other significant breaches have garnered much less attention, such as the ones at Sally Beauty Supply, Michael's, Neiman Marcus, and many other smaller incidents. What all of these breaches have in common is malware. Hackers now rely on malware to help them steal credit cards, and as Point of Sale (POS) professionals, it is important that you understand the risks associated with modern malware attacks.

So what is malware? Malware is a generic computer term for any malicious software. This could be in the form of a computer virus (software that does damage to your computer), a Trojan (software that secretly performs undesired actions in the background), a back door (software that establishes remote connections without your consent), and many others.

The benefit of using malware, from the computer hacker's point of view, is simple. A hacker can write (or pay someone else to write) a piece of software one time and, through the pervasive use of the Internet, distribute that malware using e-mail, social media, video downloads, and much more. Basically, as people browse the Internet for information or entertainment, there is always a chance that a hacker has embedded malware in something that will be installed on a station after a site is visited.
A file is a collection of data, usually stored on disk. As a logical entity, a file enables you to divide your data into meaningful groups; for example, you can use one file to hold all of a company's product information and another to hold all of its personnel information. As a physical entity, a file should be considered in terms of its organization. The term "file organization" refers to the way in which data is stored in a file and, consequently, the method(s) by which it can be accessed. This COBOL system supports three file organizations: sequential, relative and indexed.

Sequential files

A sequential file is one in which the individual records can only be accessed sequentially, that is, in the same order as they were originally written to the file. New records are always added to the end of the file. Three types of sequential file are supported by this COBOL system: record sequential, line sequential and printer sequential.

Record sequential files

Record sequential files are nearly always referred to simply as sequential files because when you create a file and specify the organization as sequential, a record sequential file is created by default. To define a file as record sequential, specify ORGANIZATION IS RECORD SEQUENTIAL in the SELECT statement for the file in your COBOL program, for example:

    select recseq assign to "recseq.dat"
        organization is record sequential.

Because record sequential is the default for sequential files, you don't actually need to specify ORGANIZATION IS RECORD SEQUENTIAL; you could simply use ORGANIZATION IS SEQUENTIAL (as long as the Compiler directive, SEQUENTIAL, has not been set).

Line sequential files

The primary use of line sequential files (which are also known as "text files" or "ASCII files") is for display-only data. Most PC editors, for example Notepad, produce line sequential files. In a line sequential file, each record in the file is separated from the next by a record delimiter. The record delimiter, which comprises the carriage return (x"0D") and the line feed (x"0A") characters, is inserted after the last non-space character in each record. A WRITE statement removes trailing spaces from the data record and appends the record delimiter. A READ statement removes the record delimiter and, if necessary, pads the data record (with trailing spaces) to the record size defined by the program reading the data. To define a file as line sequential, specify ORGANIZATION IS LINE SEQUENTIAL in the SELECT statement for the file in your COBOL program, for example:

    select lineseq assign to "lineseq.dat"
        organization is line sequential.
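To make the line sequential behavior concrete, here is a minimal, hypothetical example program (file name and data are invented, and it is a sketch rather than part of the original documentation). Per the rules above, the WRITE removes the trailing spaces from the 80-byte record and appends the x"0D0A" delimiter:

    identification division.
    program-id. wrlseq.
    environment division.
    input-output section.
    file-control.
        select lineseq assign to "lineseq.dat"
            organization is line sequential.
    data division.
    file section.
    fd  lineseq.
    01  lineseq-record      pic x(80).
    procedure division.
        open output lineseq
        move "hello from COBOL" to lineseq-record
    *> trailing spaces are stripped and x"0D0A" appended on the write
        write lineseq-record
        close lineseq
        stop run.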
Printer sequential files

Printer sequential files are files which are destined for a printer, either directly or by spooling to a disk file. They consist of a sequence of print records with zero or more vertical positioning characters (such as line-feed) between records. A print record consists of zero or more printable characters and is terminated by a carriage return (x"0D").

With a printer sequential file, the OPEN statement causes a x"0D" to be written to the file to ensure that the printer is located at the first character position before printing the first print record. The WRITE statement causes trailing spaces to be removed from the print record before it is written to the printer with a terminating carriage return (x"0D"). The BEFORE or AFTER clause can be specified in the WRITE statement to cause one or more line-feed characters (x"0A"), a form-feed character (x"0C"), or a vertical tab character (x"0B") to be sent to the printer before or after writing the print record. Printer sequential files should not be opened for INPUT or I/O.

You can define a file as printer sequential by specifying ASSIGN TO LINE ADVANCING FILE or ASSIGN TO PRINTER in the SELECT statement, for example:

    select printseq assign to line advancing file "printseq.dat".

Relative files

A relative file is a file in which each record is identified by its ordinal position within the file (record 1, record 2 and so on). This means that records can be accessed randomly as well as sequentially. For sequential access, you simply execute a READ or WRITE statement to access the next record in the file. For random access, you must define a data-item as the relative key and then specify, in the data-item, the ordinal number of the record that you want to READ or WRITE.

Because records can be accessed randomly, access to relative files is fast, but if you need to save disk space, you should avoid them: although you can declare variable length records for a relative file, the system assumes the maximum record length for all WRITE statements to the file, and pads the unused character positions. This is done so that the COBOL file handling routines can quickly calculate the physical location of any record given its record number within the file. As relative files always contain fixed length records, no space is saved by specifying data compression. In fact, if data compression is specified for a relative file, it is ignored by the Micro Focus File Handler.

Each record in a relative file is followed by a two-byte record marker which indicates the current status of the record. The status of a record can be:

- x"0D0A" - record present
- x"0D00" - record deleted or never written

When you delete a record from a relative file, the record's record marker is updated to show that it has been deleted, but the contents of a deleted record physically remain in the file until a new record is written. If, for security reasons, you want to ensure that the actual data does not exist in the file, you must overwrite the record (for example with space characters) using REWRITE before you delete it.

To define a relative file, specify ORGANIZATION IS RELATIVE in the SELECT statement for the file in your COBOL program. If you want to be able to access records randomly, you must also:

- specify ACCESS MODE IS RANDOM (or ACCESS MODE IS DYNAMIC); and
- define a data-item as the relative key:

    select relfil assign to "relfil.dat"
        organization is relative
        access mode is random
        relative key is relfil-key.
    ...
    working-storage section.
    01  relfil-key      pic 9(8) comp-x.

The example code above defines a relative file. The access mode is random and so a relative key is defined, relfil-key. For random access, you must always supply a record number in the relative key before attempting to read a record from the file. If you specify ACCESS MODE IS DYNAMIC, you can access the file both sequentially and randomly.
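As a hypothetical illustration of random access (the record number and messages are invented; this fragment builds on the relfil definitions above and is a sketch, not part of the original documentation), reading record 42 might look like this:

    procedure division.
        open input relfil
    *> the relative key selects which record the READ returns
        move 42 to relfil-key
        read relfil
            invalid key display "record 42 absent or deleted"
        end-read
        close relfil
        stop run.

The INVALID KEY phrase fires when the requested ordinal position holds no record, which, per the record markers described above, includes records that were deleted.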
Indexed files

An indexed file is a file in which each record includes a primary key. To distinguish one record from another, the value of the primary key must be unique for each record. Records can then be accessed randomly by specifying the value of the record's primary key. Indexed file records can also be accessed sequentially.

As well as a primary key, indexed files can contain one or more additional keys known as alternate keys. The value of a record's alternate key(s) does not have to be unique.

To define a file as indexed, specify ORGANIZATION IS INDEXED in the SELECT statement for the file in your COBOL program. You must also specify a primary key using the RECORD KEY clause:

    select idxfile assign to "idx.dat"
        organization is indexed
        record key is idxfile-record-key.

Most types of indexed file actually comprise two separate files: the data file (containing the record data) and the index file (containing the index structure). Where this is the case, the name that you specify in your COBOL program is given to the data file, and the name of the associated index file is produced by adding an .idx extension to the data file name. You should avoid using the .idx extension in other contexts.

The index is built up as an inverted tree structure that grows as records are added. With indexed files, the number of disk accesses required to locate a randomly selected record depends primarily on the number of records in the file and the length of the record key. File I/O is faster when reading the file sequentially.

We strongly recommend that you take regular backups of all file types, but there are situations with indexed files (for example, media corruption) that can lead to only one of the two files becoming unusable. If the index file is lost in this way it is possible, using the Rebuild utility, to recover the index from the data file and so reduce the time lost due to a failure.

Primary and alternate keys

The primary key of an indexed file is defined using the RECORD KEY IS clause in the SELECT statement:

    select idxfile assign to "idx.dat"
        organization is indexed
        record key is idxfile-record-key.

As well as the primary key, each record can have any number of additional keys, known as alternate keys. Alternate keys are defined using the ALTERNATE RECORD KEY IS clause in the SELECT statement:

    select idxfile assign to "idx.dat"
        organization is indexed
        record key is idxfile-record-key
        alternate record key is idxfile-alt-key.

Duplicate keys

You can define keys which allow duplicate values. However, you should not allow duplicates on primary keys, as the value of a record's primary key must be unique. When you use duplicate keys, you should be aware that there is a limit on the number of times the same value can be specified for an individual key. Each time you specify the same value for a duplicate key, an increment of one is added to the key's occurrence number. The maximum number of duplicate values permitted for an individual key varies according to the type of indexed file (look up Indexed file, Types in the online help index for a full list of indexed file types and their characteristics). The occurrence number is used to ensure that duplicate key records are read in the order in which they were created, so any occurrence number whose record you have deleted cannot be reused. This means that it is possible to reach the maximum number of duplicate values, even if some of those keys have already been deleted.

Some types of indexed file contain a duplicate occurrence record in the data file (look up Indexed file, Types in the online help file for a full list of indexed file types and their characteristics). In these files, each record in the data file is followed by a system record holding, for each duplicate key in that record, the occurrence number of the key. This number is just a counter of the number of times that key value has been used during the history of the file. The presence of the duplicate occurrence record makes REWRITE and DELETE operations on a record with many duplicates much faster, but causes the data records of such files to be larger than those of a standard file.

To enable duplicate values to be specified for alternate keys, use WITH DUPLICATES in the ALTERNATE RECORD KEY clause in the SELECT statement:
    file-control.
        select idxfile assign to "idx.dat"
            organization is indexed
            record key is idxfile-record-key
            alternate record key is idxfile-alt-key
                with duplicates.

Sparse keys

A sparse key is a key for which no index entry is stored for a given value of that key. For example, if a key is defined as sparse when it contains all spaces, index entries for the key are not included when the part of the record it occupies contains only space characters. Only alternate keys can be sparse. Using this feature results in smaller index files. The larger your key(s) and the more records you have for which the alternate key has the given value, the larger your saving of disk space.

To enable sparse keys, use SUPPRESS WHEN ALL in the ALTERNATE RECORD KEY clause in the SELECT statement:

    file-control.
        select idxfile assign to "idx.dat"
            organization is indexed
            record key is idxfile-record-key
            alternate record key is idxfile-alt-key
                with duplicates
                suppress when all "A".

In this example, if a record is written for which the value of the alternate key is all A's, the actual key value is not stored in the index file.

Key access

Both the primary and alternate keys can be used to read records from an indexed file, either directly (random access) or in key sequence (sequential access). The access mode can be:

- sequential - This is the default; records are accessed in order of ascending (or descending) record key value.
- random - The value of the record key indicates the record to be accessed.
- dynamic - Your program can switch between sequential and random access by using the appropriate forms of I/O statement.

The method of accessing an indexed file is defined using the ACCESS MODE IS clause in the SELECT statement, for example:

    file-control.
        select idxfile assign to "idx.dat"
            organization is indexed
            access mode is dynamic
            record key is idxfile-record-key
            alternate record key is idxfile-alt-key.

Fixed and variable length records

A file can contain fixed length records (all the records are exactly the same length) or variable length records (the length of each record varies). Using variable length records may enable you to save disk space. For example, if your application generates many short records with occasional long ones and you use fixed length records, you need to make the fixed record length equal to the length of the longest record. This wastes a lot of disk space, so using variable length records would be a great advantage. The type of record is determined by a series of rules based on the clauses specified for the file in your program; look up the online help for the full set of rules.

File headers

A file header is a block of 128 bytes at the start of the file. Indexed files, record sequential files with variable length records, and relative files with variable length records all contain file headers. In addition, each record in these files is preceded by a 2 or 4 byte record header. Further detail on file and record headers and the structure of files with headers is available in the online help file. Look under Structure, files with headers in the help file index.

Copyright © 1998 Micro Focus Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
But some say obstacles remain for faster chips.

IBM last week became the latest major chip maker to announce a new transistor design for microprocessors that will boost performance and reduce power consumption. The disclosure came less than a week after Intel Corp. revealed a similar research effort aimed at overcoming potential obstacles to building ever-faster and more energy-efficient computer components. The two leading chip manufacturers touted their separate initiatives last week at the International Electron Devices Meeting in Washington.

IBM researchers said their "double-gate" transistor, a microscopic on/off switch that forms the heart of integrated circuits, will be smaller and twice as fast as today's conventional transistors. Shrinking the size of a transistor is one key to building faster processors, with chip makers packing tens of millions of the electronic switches onto a single die. But as transistors are shrunk to once-unimaginable sizes, already as tiny as 0.06 microns (a human hair is about 50 microns wide), chip makers are nearing the physical limits of conventional designs.

"Other than getting smaller, the basic transistor has largely gone unchanged for decades," said Bijan Davari, vice president of semiconductor development at IBM Microelectronics, in East Fishkill, N.Y. "It has now been shrunk nearly to a point where it will cease to function."

Despite the latest advances touted by IBM and Intel, one analyst said several challenges still lie ahead for the development of faster chips. "As you get to smaller geometries, there's always the danger of what's called punch-through, and that's where if the voltage is too high, you just fry the insulating material," said Tony Massimini, an analyst with Semico Research Corp., in Phoenix. In addition, by adding insulating material, Massimini said, "you've got a whole other chemistry to deal with, and you're changing the electrical parameters and characteristics of the chip," which may cause further problems.

A major obstacle in designing smaller transistors is preventing electricity from leaking out of components, a problem that grows as the parts are made smaller. Energy leakage causes processors to consume more power and generate more heat. Using present designs, processors in a few years would run so hot they would melt metal. To address that issue, IBM is using new materials, such as SOI (silicon on insulator), which it already uses in its chips, to shield components and reduce energy leakage.

In a transistor, an element called a gate controls the electrical flow through the transistor. As transistors shrink, it becomes more difficult for a single gate to effectively control switching. In IBM's double-gate transistor, the channel is surrounded by two gates, which gives it twice the control over the current and enables significantly smaller, faster and lower-power circuits.
Real-time systems and garbage collection

Real-time (RT) application development distinguishes itself from general-purpose application development by imposing time restrictions on parts of the runtime behavior. Such restrictions are typically placed on sections of the application such as an interrupt handler, where the code responding to the interrupt must complete its work in a given time period. When hard RT systems, such as heart monitors or defense systems, miss these deadlines, it's considered a catastrophic failure of the entire system. In soft RT systems, missed deadlines can have adverse effects -- such as a GUI not displaying all results of a stream it's monitoring -- but don't constitute a system failure.

In Java applications, the Java Virtual Machine (JVM) is responsible for optimizing the runtime behavior, managing the object heap, and interfacing with the operating system and hardware. Although this management layer between the language and the platform eases software development, it introduces a certain amount of overhead into programs. One such area is GC, which typically causes nondeterministic pauses in the application. Both the frequency and length of the pauses are unpredictable, making the Java language traditionally unsuitable for RT application development. Some existing solutions based on the Real-time Specification for Java (RTSJ) let developers sidestep Java technology's nondeterministic aspects but require them to change their existing programming model.

Metronome is a deterministic garbage collector that offers bounded low pause times and specified application utilization for standard Java applications. The reduced bounded pause times result from an incremental approach to collection and careful engineering decisions that include fundamental changes to the VM. Utilization is the percentage of time in a particular time window that the application is permitted to run, with the remainder being devoted to GC. Metronome lets users specify the level of utilization an application receives. Combined with the RTSJ, Metronome enables developers to build software that is both deterministic with low pause times and pause-free when timing windows are critically small. This article explains the limitations of traditional GC for RT applications, details Metronome's approach, and presents tools and guidance for developing hard RT applications with Metronome.

Traditional GC implementations use a stop-the-world (STW) approach to recovering heap memory. An application runs until the heap is exhausted of free memory, at which point the GC stops all application code, performs a garbage collect, and then lets the application continue. Figure 1 illustrates traditional STW pauses for GC activity that are typically unpredictable in both frequency and duration. Traditional GC is nondeterministic because the amount of effort required to recover memory depends on the total amount and size of objects that the application uses, the interconnections between these objects, and the level of effort required to free enough heap memory to satisfy future allocations.

Figure 1. Traditional GC pauses

Why traditional GC is nondeterministic

You can understand why GC times are unbounded and unpredictable by examining a GC's basic components. A GC pause usually consists of two distinct phases: the mark and sweep phases.
Although many implementations and approaches can combine or modify the meanings of these phases, or enhance GC through other means (such as compaction to reduce fragmentation within the heap), or make certain phases operate concurrently with the running application, these two concepts are the technical baselines for traditional GC.

The mark phase is responsible for tracing through all objects visible to the application and marking them as live to prevent them from having their storage reclaimed. This tracing starts with the root set, which consists of internal structures such as thread stacks and global references to objects. It then traverses the chain of references until all (directly or indirectly) reachable objects from the root set are marked. Objects that are unmarked at the end of the mark phase are unreachable by the application (dead) because there's no path from the root set through any series of references to find them. The mark phase's length is unpredictable because the number of live objects in an application at any particular time and the cost of traversing all references to find all live objects in the system can't be predicted. An oracle in a consistently behaving system could predict time requirements based on previous timing characteristics, but the accuracy of these predictions would be an additional source of nondeterminism.

The sweep phase is responsible for examining the heap after marking has completed and reclaiming the dead objects' storage back into the free store for the heap, making that storage available for allocation. As with the mark phase, the cost of sweeping dead objects back into the free memory pool can't be completely predicted. Although the number and size of live objects in the system can be derived from the mark phase, both their position within the heap and their suitability for the free memory pool can require an unpredictable level of effort to analyze.

Traditional GC suitability for RT applications

RT applications must be able to respond to real-world stimuli within deterministic time intervals. A traditional GC can't meet this requirement because the application must halt for the GC to reclaim any unused memory. The time taken for reclamation is unbounded and subject to fluctuations. Furthermore, the time when the GC will interrupt the application is traditionally unpredictable. The time during which the application is halted is referred to as pause time because application progress is paused for the GC to reclaim free space. Low pause times are a requirement for RT applications because they usually represent the upper timing bound for application responsiveness.

Metronome's approach is to divide the time consumed by GC cycles into a series of increments called quanta. To accomplish this, each phase is designed to accomplish its total work in a series of discrete steps, allowing the collector to:

- Preempt the application for very short deterministic periods.
- Make forward progress in the collection.
- Let the application resume.

This sequence is in contrast to the traditional model where the application is halted at unpredictable points, the GC runs to completion for some unbounded period of time, and the GC then quiesces to let the application resume.
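To make the incremental idea concrete, here is a minimal, illustrative sketch in plain Java (all names are invented; this is not Metronome's actual implementation) of a mark phase split into bounded work quanta by processing at most a fixed number of objects from its worklist before yielding back to the application:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Illustrative only: an object-graph node carrying a mark bit.
    class Node {
        boolean marked;
        List<Node> references;
        Node(List<Node> refs) { references = refs; }
    }

    class IncrementalMarker {
        private final Deque<Node> worklist = new ArrayDeque<>();

        // Seed the trace with the root set (thread stacks, globals, ...).
        IncrementalMarker(List<Node> roots) {
            for (Node root : roots) push(root);
        }

        private void push(Node n) {
            if (n != null && !n.marked) { n.marked = true; worklist.push(n); }
        }

        // One GC quantum: trace at most 'budget' objects, then yield.
        // Returns true while marking work remains for a later quantum.
        boolean markQuantum(int budget) {
            while (budget-- > 0 && !worklist.isEmpty()) {
                Node n = worklist.pop();
                for (Node ref : n.references) push(ref);
            }
            return !worklist.isEmpty();
        }
    }

A real collector bounds each quantum by time rather than object count, and it must cooperate with a write barrier (discussed later in this article) so that references changed between quanta are not missed.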
Although splitting the STW GC cycle into short bounded pauses helps reduce GC's impact, this isn't sufficient for RT applications. For RT applications to meet their deadlines, a sufficient portion of any given time period must be devoted to the application; otherwise, the requirements are violated and the application fails. For example, take a scenario where GC pauses are bounded at 1 millisecond: if the application is allowed to run for only 0.1 millisecond between every 1-millisecond GC pause, then little progress will be made, and even marginally complex RT systems will likely fail because they lack time to progress. In effect, short pause times that are sufficiently close together are no different from a full STW GC. Figure 2 illustrates a scenario where the GC runs for the majority of the time yet still preserves 1-millisecond pause times:

Figure 2. Short pause times but little application time

A different measure is required that, in addition to bounded pause times, provides a level of determinism for the percentages of time allotted to both the application and GC. We define application utilization as the percentage of time allotted to an application in a given window of time continuously sliding over the application's complete run. Metronome guarantees that a percentage of processing time is dedicated to the application. Use of the remaining time is at the GC's discretion: it can be allotted to the application or it can be used by the GC.

Short pause times allow for finer-grained utilization guarantees than a traditional collector. As the time interval used for measuring utilization approaches zero, an application's expected utilization is either 0% or 100% because the measurement is below the GC quantum size. The guarantee for utilization is made strictly on measurements the size of the sliding window. Metronome uses quanta of 500 microseconds in length over a 10-millisecond window and has a default utilization target of 70%. Figure 3 illustrates a GC cycle divided into multiple 500-microsecond time slices preserving 70% utilization over a 10-millisecond window:

Figure 3. Sliding window utilization

In Figure 3, each time slice represents a quantum that runs either the GC or the application. The bars below the time slices represent the sliding window. For any sliding window, there are at most 6 GC quanta and at least 14 application quanta. Each GC quantum is followed by at least 1 application quantum, even if the target utilization would be preserved with back-to-back GC quanta. This ensures the application pause times are limited to the length of 1 quantum. However, if target utilization is specified to be below 50%, some instances of back-to-back GC quanta will occur to allow the GC to keep up with allocation.
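The sliding-window arithmetic can be sketched as follows (illustrative Java with invented names, not a Metronome interface): with 500-microsecond quanta, a 10-millisecond window spans 20 consecutive quanta, and utilization over any window is just the fraction of those quanta given to the application.

    // Illustrative sliding-window utilization check.
    // true = application quantum, false = GC quantum; each quantum is
    // 500 microseconds, so a 10 ms window spans 20 consecutive quanta.
    public class UtilizationCheck {
        static final int WINDOW_QUANTA = 20;   // 10 ms / 500 us

        // Returns the minimum utilization observed over all full windows.
        static double minUtilization(boolean[] quanta) {
            double min = 1.0;
            int appCount = 0;
            for (int i = 0; i < quanta.length; i++) {
                if (quanta[i]) appCount++;                         // enters window
                if (i >= WINDOW_QUANTA && quanta[i - WINDOW_QUANTA])
                    appCount--;                                    // leaves window
                if (i >= WINDOW_QUANTA - 1)
                    min = Math.min(min, (double) appCount / WINDOW_QUANTA);
            }
            return min;
        }

        public static void main(String[] args) {
            boolean[] schedule = new boolean[40];
            java.util.Arrays.fill(schedule, true);
            // Interleave 6 GC quanta into the first window, each followed
            // by at least one application quantum, as in Figure 3.
            for (int i = 0; i < 12; i += 2) schedule[i] = false;
            System.out.println(minUtilization(schedule)); // 0.7 = 14/20
        }
    }

In the example schedule, every 10-millisecond window contains at least 14 application quanta, so the minimum utilization reported is exactly the 70% default target described above.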
Figures 4 and 5 illustrate a typical application-utilization scenario. In Figure 4, the region where utilization drops to 70% represents the region of an ongoing GC cycle. Note that when the GC is inactive, application utilization is 100%.

Figure 4. Overall utilization

Figure 5 shows only a GC cycle fraction of Figure 4:

Figure 5. GC cycle utilization

Section A of Figure 5 is a staircase graph where the descending portions correspond to GC quanta and the flat portions correspond to application quanta. The staircase demonstrates the GC respecting low pause times by interleaving with the application, producing a step-like descent toward the target utilization.

Section B consists of application activity only, to preserve utilization targets across all sliding windows. It's common to see a utilization pattern showing GC activity only at the beginning of the pattern. This occurs because the GC runs whenever it is allowed to (preserving pause times and utilization), and this usually means it exhausts its allotted time at the beginning of the pattern and allows the application to recover for the remainder of the time window.

Section C illustrates GC activity when utilization is near the target utilization. Ascending portions represent application quanta, and descending portions are GC quanta. The sawtooth nature of this section is again because of the interleaving of the GC and application to preserve low pause times.

Section D represents the portion after which the GC cycle has completed. This section's ascending nature illustrates the fact that the GC is no longer running and the application will regain 100% utilization. The target utilization is user-specifiable in Metronome; you can find more information in this article's Tuning Metronome section.

Running an application with Metronome

Metronome is designed to provide RT behavior to existing applications. No user code modification should be required. Desired heap size and target utilization must be tuned to the application so target utilization maintains the desired application throughput while letting the GC keep up with allocation. Users should run their applications at the heaviest load they want to sustain to ensure RT characteristics are preserved and application throughput is sufficient. This article's Tuning Metronome section explains what you can do if throughput or utilization is insufficient.

In certain situations, Metronome's short pause-time guarantees are insufficient for an application's RT characteristics. For these cases, you can use the RTSJ to avoid GC-incurred pause times.

The Real-time Specification for Java

The RTSJ is a "specification for additions to the Java platform to enable Java programs to be used for real-time applications." Metronome must be aware of certain aspects of the RTSJ -- in particular, RealtimeThreads (RT threads), NoHeapRealtimeThreads (NHRTs), and immortal memory.

RT threads are Java threads that, among other characteristics, run at a higher priority than regular Java threads. NHRTs are RT threads that can't contain references to heap objects. In other words, NHRT-accessible objects can't refer to objects subject to GC. In exchange for this compromise, the GC won't impede the scheduling of NHRTs, even during a GC cycle. This means NHRTs won't incur any pause times. Immortal memory provides a memory space that's not subject to GC; this means NHRTs are allowed to refer to immortal objects.

These are only some aspects of the RTSJ; see Resources for a link to the complete specification.
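As a brief, hypothetical sketch of how these RTSJ constructs fit together (the priority choice and thread body are invented, and a production NHRT would itself normally be created from within immortal or scoped memory; consult the RTSJ for the authoritative API), an NHRT can be given immortal memory as its allocation context so that it neither references nor allocates garbage-collected heap objects:

    import javax.realtime.ImmortalMemory;
    import javax.realtime.NoHeapRealtimeThread;
    import javax.realtime.PriorityParameters;
    import javax.realtime.PriorityScheduler;

    public class NhrtSketch {
        public static void main(String[] args) {
            // Run above other RT threads; an NHRT may preempt the GC itself,
            // so its run() must never touch heap-allocated objects.
            PriorityParameters priority = new PriorityParameters(
                    PriorityScheduler.instance().getMaxPriority());

            // Immortal memory is the allocation context: objects created in
            // run() land in immortal memory and are never collected.
            NoHeapRealtimeThread nhrt = new NoHeapRealtimeThread(
                    priority, ImmortalMemory.instance()) {
                public void run() {
                    // Work that must never observe a GC pause goes here.
                }
            };
            nhrt.start();
        }
    }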
Technical issues involved in deterministic GC

Metronome uses several key approaches within the J9 virtual machine to achieve deterministic pause times while guaranteeing GC's safety. These include arraylets, time-based scheduling of the garbage collector, processing of root structures for tracing live objects, coordinating between the J9 virtual machine and GC to ensure all live objects are found, and the mechanism used for suspending the J9 virtual machine for a GC quantum.

Although Metronome achieves deterministic pause times through breaking the collection process up into incremental units of work, allocation can cause hiccups in the GC in some situations. One area is the allocation of large objects. For most collector implementations, the allocation subsystem keeps a pool of free heap memory, consumed by the application through allocating objects and replenished by the collector through sweeping. After the first collection, free heap memory is primarily the result of objects that were once live but are now dead. Because there's no predictable pattern to how or when these objects die, the resulting free memory on the heap is a collection of fragmented chunks of varying sizes, even if coalescence of adjacent dead objects takes place. Further, each collection cycle can return a different pattern of free chunks. As a result, the allocation of a sufficiently large object can fail if no free chunk of memory is large enough to satisfy the request. Typically, these large objects are arrays; standard objects are generally no larger than a few dozen fields, often resulting in less than 2K in size for most JVMs.

To alleviate the fragmentation issue, some collectors implement a compaction, or defragmentation, phase in their collection cycle. After the sweep is complete, if an allocation request can't be met, the system tries to move existing live objects around in the heap in an effort to coalesce two or more free chunks into a single larger chunk. This phase is sometimes implemented as an on-demand feature, embedded into the collector's fabric (semispace collectors being an example), or in an incremental fashion. Each of these systems has its trade-offs, but generally the compaction phase is an expensive one in terms of time and effort. The current version of Metronome in WebSphere Real Time does not implement a compaction system.

To prevent fragmentation from being a problem, Metronome uses arraylets, which break the standard linear representation up into several discrete pieces that can be allocated independently of one another. Figure 6 shows that array objects appear as a spine -- which is the central object and the only entity that can be referenced by other objects on the heap -- and a series of arraylet leaves, which contain the actual array contents:

Figure 6. Arraylets

The arraylet leaves are not referenced by other heap objects and can be scattered throughout the heap in any position and order. The leaves are of a fixed size to allow simple calculation of element position, at the cost of an added indirection. As Figure 6 illustrates, memory-use overhead due to internal fragmentation in the spine has been optimized by including any trailing data for a leaf in the spine. Note that this format can mean that an array spine can grow to unbounded sizes, but this hasn't yet been found to be a problem in the existing system.
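The element-addressing arithmetic this layout implies can be sketched as follows (invented layout and names; Metronome's real arraylets live inside the JVM, not in Java code). With fixed-size leaves, locating element i costs one division, one modulo, and one extra indirection through the spine:

    // Illustrative arraylet addressing (not J9's internal representation).
    class Arraylet {
        static final int LEAF_SIZE = 1024;   // elements per leaf (fixed)
        final int[][] spine;                 // spine holds pointers to leaves
        final int length;

        Arraylet(int length) {
            this.length = length;
            int fullLeaves = length / LEAF_SIZE;
            int remainder = length % LEAF_SIZE;
            // The real design keeps trailing data with the spine; here it
            // is simply modeled as a shorter final leaf.
            spine = new int[fullLeaves + (remainder > 0 ? 1 : 0)][];
            for (int i = 0; i < spine.length; i++) {
                int size = (i == fullLeaves) ? remainder : LEAF_SIZE;
                spine[i] = new int[size];    // leaves may live anywhere in the heap
            }
        }

        int get(int i) {
            // One extra indirection compared with a contiguous array.
            return spine[i / LEAF_SIZE][i % LEAF_SIZE];
        }

        void set(int i, int v) {
            spine[i / LEAF_SIZE][i % LEAF_SIZE] = v;
        }
    }

Because each leaf is exactly LEAF_SIZE elements, no search is needed: the leaf index and the offset within the leaf fall out of simple integer arithmetic, which is why fixed-size leaves were chosen over variable-size ones.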
Scheduling a GC quantum

To schedule deterministic pauses for GC, Metronome uses two different threads to achieve both consistent scheduling and short, uninterrupted pause times:

- The alarm thread. To schedule a GC quantum deterministically, Metronome dedicates the alarm thread to act as the heartbeat mechanism. The alarm thread is a very high priority thread (higher than any other JVM thread in the system) that wakes up at the same rate as the GC quantum time period (500 microseconds in Metronome) and is responsible for determining whether or not a GC quantum should be scheduled. If so, the alarm thread must suspend the running JVM and wake the GC thread. The alarm thread is active for a very short period (typically under 10 microseconds) and should go unnoticed by the application.
- The GC thread. The GC thread performs the actual work during a GC quantum. The GC thread must first complete the suspension of the JVM that the alarm thread initiated. It can then perform GC work for the remainder of the quantum, scheduling itself back to sleep and resuming the JVM when the quantum end time approaches. The GC thread can also preemptively sleep if it can't complete its upcoming work item before the quantum end time. In relation to the RTSJ, this thread's priority is higher than all RT threads except NHRTs.

Cooperative suspend mechanism

Although Metronome uses a series of small incremental pauses to complete a GC cycle, it must still suspend the JVM for every quantum in a STW fashion. For each of these STW pauses, Metronome uses the cooperative suspend mechanism in the J9 virtual machine. This mechanism doesn't rely on any special native-thread capability for suspending threads. Rather, it uses an asynchronous-style messaging system to notify Java threads that they must release their access to internal JVM structures, including the heap, and sleep until they are signaled to resume processing. Java threads within the J9 virtual machine periodically check if a suspend request has been issued, and if so, they proceed as follows:

- Release any held internal JVM structures.
- Store any held object references in well-described locations.
- Signal the central JVM suspend mechanism that it has reached a safe point.
- Sleep and wait for a corresponding resume.

Upon resumption, threads reread object pointers and reacquire the JVM-related structures they previously held. The act of releasing JVM structures lets the GC thread process these structures in a safe fashion; reading and writing to partially updated structures can cause unexpected behavior and crashes. By storing and then reloading object pointers, the threads allow the GC the opportunity to update the object pointers during a GC quantum, which is necessary if the object is moved as part of any compaction-like operation.

Because the suspend mechanism cooperates with Java threads, it's important that the periodic checks in each thread be spaced apart with the shortest possible intervals. This is the responsibility of both the JVM and the Just-in-time (JIT) compiler. Although checking for suspend requests introduces an overhead, it allows structures such as stacks to be well defined in terms of the GC's needs, letting it determine accurately whether or not values in stacks are pointers to objects.

This suspend mechanism is used only for threads currently participating in JVM-related activities; non-Java threads, or Java threads that are out in Java Native Interface (JNI) code and not using the JNI API, are not subject to being suspended. If these threads participate in any JVM activities, such as attaching to the JVM or calling the JNI API, they will cooperatively suspend until the GC quantum is complete. This is important because it lets threads that are associated with the Java process continue to be scheduled. And although thread priorities will be respected, perturbing the system in any noticeable way in these other threads can affect the GC's determinism.
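The cooperative style can be sketched in ordinary Java (invented names; J9's real mechanism lives inside the VM and is more involved, in particular it handshakes with every thread before GC work begins): application threads poll a flag at well-defined points and park themselves until resumed.

    import java.util.concurrent.CountDownLatch;

    // Illustrative cooperative-suspend sketch, not J9's mechanism.
    class SafepointSketch {
        static volatile boolean suspendRequested = false;
        static volatile CountDownLatch resume = new CountDownLatch(1);

        // Called by application threads at frequent, well-defined points.
        static void safepointCheck() throws InterruptedException {
            if (suspendRequested) {
                // A real thread would first release VM structures and store
                // its object references in well-described locations.
                resume.await();           // sleep until the GC quantum ends
                // ...then reload object pointers and reacquire structures.
            }
        }

        // Called around one GC quantum by the GC/alarm machinery.
        static void runQuantum(Runnable gcWork) {
            resume = new CountDownLatch(1);
            suspendRequested = true;
            // (Sketch simplification: a real VM waits here until every
            // thread has actually reached its safe point.)
            gcWork.run();
            suspendRequested = false;
            resume.countDown();           // wake the application threads
        }
    }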
Write barriers

Full STW collectors have the benefit of being able to trace through object references and JVM internal structures without the application perturbing the links in the object graph. By splitting the GC cycle into a series of small STW phases and interleaving its execution with the application's, Metronome does introduce a potential problem in keeping track of the live objects in a system. Unexpected behavior or crashes can occur because the application, after processing an object, can modify the object's references such that unprocessed objects are hidden from the collector. Figure 7 illustrates the hidden-object problem:

Figure 7. Hidden-object problem

Assume an object graph exists in the heap as described in Figure 7 by section I. The Metronome collector is active and is scheduled to perform tracing work in this quantum. In its allotted time period, it manages to trace through the root object as well as the object that it references, before running out of time and needing to schedule the JVM back in section II. During the application run, the references between the objects are changed such that object A now points to an unprocessed object, which is no longer referred to by any other location in section III. The GC is then scheduled back in for another quantum and continues to process, missing this hidden object pointer. The result is that during the sweep phase of the GC that returns unmarked objects to the free list, a live object will be reclaimed, resulting in a dangling pointer, causing incorrect behavior or even crashes in the JVM or GC.

To prevent this type of error, the JVM and Metronome must cooperate in tracking changes to the heap and JVM structures such that the GC will keep all relevant objects alive. This is achieved through a write barrier, which tracks all writes to objects and records the creating and breaking of references between objects so that the collector can track potential hidden live objects. The type of barrier that Metronome uses is called a snapshot at the beginning (SATB) barrier. It conceptually records the heap's state at the beginning of a collection cycle and preserves all live objects at that point as well as those allocated during the current cycle. The concrete solution involves a Yuasa-type barrier (see Resources), where the overwritten value in any field store is recorded and treated as if it had a root reference associated with it. Preserving a slot's original value before overwriting enables the live object set to be preserved and processed.

This type of barrier processing is also required for internal JVM structures, including the JNI Global Reference list. Because the application can add and remove objects from this list, a barrier is applied to track both removed objects to avoid a hidden-object problem (similar to a field overwrite) and added objects to eliminate the need to rescan the structure.
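A minimal sketch of a Yuasa-style SATB barrier, reusing the invented Node type from the marking sketch earlier (again illustrative, not Metronome's code): before a reference field is overwritten during an active cycle, the old value is recorded so the collector still traces it.

    // Illustrative Yuasa-style snapshot-at-the-beginning write barrier.
    // Before a reference slot is overwritten during an active GC cycle,
    // the old value is recorded and later treated as a root.
    class WriteBarrier {
        static volatile boolean gcCycleActive = false;
        static final java.util.Deque<Node> rememberedSet =
                new java.util.concurrent.ConcurrentLinkedDeque<>();

        // All reference stores funnel through here instead of plain assignment.
        static void storeReference(java.util.List<Node> fields, int slot,
                                   Node newValue) {
            if (gcCycleActive) {
                Node oldValue = fields.get(slot);
                if (oldValue != null && !oldValue.marked) {
                    rememberedSet.add(oldValue); // preserve the snapshot value
                }
            }
            fields.set(slot, newValue);
        }
    }

Between quanta, the collector drains rememberedSet into its marking worklist, which is exactly how the hidden object in Figure 7 would be kept alive.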
Root scanning and processing

To begin tracing through live objects, garbage collectors start from a set of initial objects obtained from roots. Roots are structures within the JVM that represent hard references to objects, which the application creates either explicitly (for example, JNI Global References) or implicitly (for example, stacks). Root structures are scanned as part of the initial work of the collector's mark phase. Most roots are malleable to some degree during execution in terms of their object references. For this reason, changes to their reference set must be tracked, as we discussed in Write barriers. However, certain structures, such as the stack, can't afford the tracking of pushes and pops without incurring significant performance penalties. Because of this, certain limitations and changes to stack scanning are made in Metronome, in keeping with the Yuasa-style barrier:

- Atomic scanning of stacks. Individual thread stacks must be scanned atomically, that is, within a single quantum. The reason is that during execution a thread can pop any number of references from its stack -- references that could have been stored elsewhere during execution. Pausing mid-scan of a stack could cause such stores to be lost or missed between two partial scans, creating a dangling pointer within the heap. Application developers should be aware that stacks are scanned atomically and should avoid using very deep stacks in their RT applications.
- Fuzzy barrier. Although each stack must be scanned atomically, it would be difficult to maintain determinism if all stacks were scanned during a single quantum. The GC and JVM are therefore allowed to interleave execution while scanning Java stacks. This could result in objects being moved from one thread to another through a series of loads and stores. To avoid losing references to objects, threads that have not yet been scanned during a GC have the barrier track both the overwritten value and the value being stored. Tracking the stored object, should it be stored into an already-processed object and popped off the stack, preserves reachability through the write barrier.

It's important to understand the correlation between heap size and application utilization. Although high target utilization is desirable for optimal application throughput, the GC must be able to keep up with the application's allocation rate. If both the target utilization and the allocation rate are high, the application can run out of memory, forcing the GC to run continuously and, in most cases, dropping the utilization to 0%. This degradation introduces large pause times that are often unacceptable for RT applications. If this scenario is encountered, a choice must be made: decrease the target utilization to allow for more GC time, increase the heap size to allow for more allocations, or a combination of both. Some deployments might not have the memory required to sustain a given utilization target, in which case decreasing the target utilization at a performance cost is the only option. Figure 8 illustrates the typical trade-off between heap size and application utilization: a higher utilization percentage requires a larger heap, because the GC isn't allowed to run as much as a lower utilization would allow.

Figure 8. Heap size versus utilization

The relationship between utilization and heap size is highly application dependent, and striking an appropriate balance requires iterative experimentation with the application and VM parameters. Verbose GC is a tool that logs and outputs GC activity to a file or screen. You can use it to determine whether the parameters (heap size, target utilization, window size, and quantum time) support the running application. Listing 1 shows an example of verbose output:

Listing 1. Verbose GC sample
```xml
<?xml version="1.0" ?>

<verbosegc version="200702_15-Metronome">

<gc type="synchgc" id="1" timestamp="Tue Mar 13 15:17:18 2007" intervalms="0.000">
  <details reason="system garbage collect" />
  <duration timems="30.023" />
  <heap freebytesbefore="535265280" />
  <heap freebytesafter="535838720" />
  <immortal freebytesbefore="15591288" />
  <immortal freebytesafter="15591288" />
  <synchronousgcpriority value="11" />
</gc>

<gc type="trigger start" id="1" timestamp="Tue Mar 13 15:17:45 2007" intervalms="0.000" />

<gc type="heartbeat" id="1" timestamp="Tue Mar 13 15:17:46 2007" intervalms="1003.413">
  <summary quantumcount="477">
    <quantum minms="0.078" meanms="0.503" maxms="1.909" />
    <heap minfree="262144000" meanfree="265312260" maxfree="268386304" />
    <immortal minfree="14570208" meanfree="14570208" maxfree="14570208" />
    <gcthreadpriority max="11" min="11" />
  </summary>
</gc>

<gc type="heartbeat" id="2" timestamp="Tue Mar 13 15:17:47 2007" intervalms="677.316">
  <summary quantumcount="363">
    <quantum minms="0.024" meanms="0.474" maxms="1.473" />
    <heap minfree="261767168" meanfree="325154155" maxfree="433242112" />
    <immortal minfree="14570208" meanfree="14530069" maxfree="14570208" />
    <gcthreadpriority max="11" min="11" />
  </summary>
</gc>

<gc type="trigger end" id="1" timestamp="Tue Mar 13 15:17:47 2007" intervalms="1682.816"/>

</verbosegc>
```

Each verbose GC event is contained within a <gc></gc> tag. Various event types are available, but the most common are included in Listing 1.

A synchgc type represents a synchronous GC: a GC cycle that ran uninterrupted from beginning to end; that is, no interleaving with the application took place. These can occur for two reasons:

- System.gc() was invoked by the application.
- The heap filled up, and the application failed to allocate memory.

The reason for the synchronous GC, contained in the <details> tag, is system garbage collect in the first case and out of memory in the second. The first case has no bearing on the sustainability of the application with the specified parameters. However, invoking System.gc() from the user application causes the application utilization to drop to 0% in many cases and causes long pause times; it should therefore be avoided. But if a synchronous GC occurs because of the second case -- an out-of-memory condition -- then the GC was unable to keep up with the application's allocation rate. In that case, you should consider increasing the heap size or decreasing the target utilization to avoid further synchronous GCs.

trigger GC event types correspond to the GC cycle's start and end points. They're useful for delimiting batches of heartbeat GC events.

heartbeat GC event types roll up the information of multiple GC quanta into one summarized verbose event. (Note that this is unrelated to the alarm-thread heartbeat.) The quantumcount attribute corresponds to the number of GC quanta rolled up in the heartbeat event. The <quantum> tag reports timing information about those quanta, and the <heap> and <immortal> tags contain information about the free memory at the end of each of the rolled-up quanta. The <gcthreadpriority> tag reports the priority of the GC thread when the quanta began.

The quantum time values correspond to the pause times seen by the application. The mean quantum time should be close to 500 microseconds, and the maximum quantum times must be monitored to ensure that they fall within the acceptable pause times for the RT application.
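If you need to keep an eye on many such logs, the heartbeat events are straightforward to post-process. The short sketch below is an illustration only, not a supported tool; the log file name and the 2 ms budget are assumptions. It scans a verbose GC log and flags any heartbeat summary whose maximum quantum time exceeds the budget:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: flag heartbeat quanta whose max pause exceeds a budget.
// "verbosegc.log" and the 2.0 ms budget are illustrative assumptions.
public class QuantumCheck {
    private static final Pattern QUANTUM = Pattern.compile(
            "<quantum minms=\"([\\d.]+)\" meanms=\"([\\d.]+)\" maxms=\"([\\d.]+)\"");

    public static void main(String[] args) throws IOException {
        double budgetMs = 2.0;
        for (String line : Files.readAllLines(Paths.get("verbosegc.log"))) {
            Matcher m = QUANTUM.matcher(line);
            while (m.find()) {
                double maxMs = Double.parseDouble(m.group(3));
                if (maxMs > budgetMs) {
                    System.out.printf("outlier quantum: mean=%s ms, max=%s ms%n",
                            m.group(2), m.group(3));
                }
            }
        }
    }
}
```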
Large pause times can occur when the GC is preempted by other processes in the system, preventing it from completing its quanta and allowing the application to resume, or when certain root structures in the system are abused and allowed to grow to unmanageable sizes (see Issues to consider when using Metronome).

Immortal memory is a resource required by the RTSJ that is not subject to GC. For this reason, it's normal to see the immortal free memory in the verbose GC log drop without ever recovering. It's used for objects such as string constants and classes. You need to be aware of your program's behavior and size immortal memory appropriately.

You should also monitor heap usage to ensure the general trend remains stable. A downward trend in free heap space would indicate a potential leak caused by the application. A number of conditions can cause leaks, including ever-expanding hash tables, large resource objects being held indefinitely, and global JNI references not being cleaned up. Figures 9 and 10 illustrate stable and downward trends in free heap space. Note that local minima and maxima are normal and expected, because free space increases only during a GC cycle and correspondingly decreases while the application is active and allocating.

Figure 9. Stable free heap

Figure 10. Descending free heap

The intervalms attribute corresponds to the time elapsed since the last verbose GC event of the same type was output. In the case of the heartbeat event type, it can represent the time since the trigger start event if it's the first heartbeat of the current GC cycle.

Tuning Fork is a separate tool for tuning Metronome to better suit the user application. Tuning Fork lets the user inspect many details of GC activity, either after the fact through a trace log or at run time through a socket. Metronome was built with Tuning Fork in mind and logs many events that can be inspected from within the Tuning Fork application. For example, it can display the application utilization over time and break down the time taken by various GC phases. Figure 11 shows the GC performance summary graph generated by Tuning Fork, including target utilization, heap memory use, and application utilization:

Figure 11. Tuning Fork performance summary

Issues to consider when using Metronome

Metronome strives to deliver short, deterministic GC pauses, but some situations arise, both in application code and in the underlying platform, that can perturb these results, sometimes leading to pause-time outliers. Changes in GC behavior from what would be expected with a standard JDK collector can also occur.

The RTSJ states that GC doesn't process immortal memory. Because classes live in immortal memory, they are not subject to GC and therefore can't be unloaded. Applications expecting to use a large number of classes need to size immortal space appropriately, and applications that require class unloading need to adjust their programming model within WebSphere Real Time.

GC work in Metronome is time based, and any change to the hardware clock could cause hard-to-diagnose problems. An example is synchronizing the system time to a Network Time Protocol (NTP) server and then synchronizing the hardware clock to the system time. This would appear to the GC as a sudden jump in time and could cause a failure to maintain the utilization target, or possibly cause out-of-memory errors.

Running multiple JVMs on a single machine can also introduce interference across the JVMs, skewing the utilization figures.
The alarm thread, being a high-priority RT thread, preempts any lower-priority thread, and the GC thread also runs at an RT priority. If enough GC and alarm threads are active at any one time, a JVM without an active GC cycle might have its application threads preempted by another JVM's GC and alarm threads, while that time is still charged to the application because the GC for that JVM is inactive.

Resources

- Real-time Java series: Read the other parts in this series.
- "A real-time garbage collector with low overhead and consistent utilization" (David F. Bacon, Perry Cheng, and V.T. Rajan, Proceedings of the 30th Annual ACM SIGPLAN/SIGACT Symposium on Principles of Programming Languages, 2003): This paper presents a dynamically defragmenting collector that overcomes the limitations of applying GC to hard RT systems.
- JSR 1: Real-time Specification for Java: You'll find the RTSJ at the Java Community Process site.
- "IBM WebSphere Real Time V1.0 delivers predictable response times using Java standards": Read the product announcement for WebSphere Real Time.
- Metronome: Learn more about Metronome, the GC technology incorporated in WebSphere Real Time.
- "Real-time garbage collection on general-purpose machines" (T. Yuasa, Journal of Systems and Software, March 1990): More information on Yuasa barriers.
- "High-level Real-time Programming in Java": Read about the Staccato research prototype.
- developerWorks Java technology zone: Hundreds of articles about every aspect of Java programming.

Get products and technologies

- WebSphere Real Time: WebSphere Real Time lets applications that depend on precise response times take advantage of standard Java technology without sacrificing determinism.
- Real-time Java technology: Visit the authors' IBM alphaWorks research site to find cutting-edge technologies for RT Java.
Cloud computing is emerging as a promising paradigm capable of providing a flexible, dynamic, resilient and cost-effective infrastructure for both academic and business environments. It aims at raising the level of abstraction of physical resources toward a "user-centric" perspective, focused on the concept of service as the elementary unit for building any application. All of the cloud's resources, both physical/hardware and logical/abstract (software, data, etc.), are therefore considered "as a service", and so all of the cloud's design and implementation choices follow a "service-oriented" philosophy.

The cloud is already a real, operational and effective solution in commercial and business contexts, offering computing resources and services for rent, accessed through the Web according to a client-server paradigm regulated by specific SLAs. In fact, several commercial solutions and infrastructure providers do business on the cloud, such as Amazon EC2 and S3, IBM's Blue Cloud, Sun Network.com, the Microsoft Azure Services Platform, and so on. Recently, cloud computing has been spreading quickly and widely in open contexts such as scientific, academic and social communities, due to the increasing demand for computing resources required by their users. For example, there are several research activities and projects on the cloud, such as Nimbus, Eucalyptus, OpenNEbula, Reservoir, OpenCyrrus, OCCI, etc., aimed at implementing an open infrastructure by providing a specific middleware.

Among the reasons behind the success of the cloud, beyond the potential for lower costs, are the user-centric interface that acts as a unique, user-friendly point of access for users' needs and requirements; on-demand service provisioning; the guaranteed-QoS offer; and the autonomous system for managing hardware, software and data transparently to users. On the other hand, there are several open problems in cloud infrastructures that inhibit their use, mainly concerning information security (confidentiality and integrity), trustworthiness, interoperability, reliability, availability and other QoS requirements specified in the SLA; these are only partially addressed or sometimes still uncovered.

Besides, several organizations have made significant investments in grid and similar distributed infrastructures over the last several years: what to do with these? Discard or reuse? How to reuse? Moreover, the rise of the "techno-utility complex" and the corresponding increase in demand for computing resources, in some cases growing dramatically faster than Moore's Law, as predicted by Sun CTO Greg Papadopoulos in the red shift theory for IT, could lead in the near future to an oligarchy, a lobby or a trust of a few big companies controlling the whole computing-resources market.

To avoid such a pessimistic but plausible scenario, we suggest addressing the problem in a different way: instead of building costly private data centers, which Google CEO Eric Schmidt likes to compare to the prohibitively expensive cyclotrons, we propose a more "democratic" form of cloud computing, in which the computing resources of the single user, company, and/or community accessing the cloud can be shared with others, in order to contribute to the elaboration of complex problems. To implement such an idea, a possible source of inspiration is the volunteer computing paradigm.
Volunteer computing (also called peer-to-peer computing, global computing or public computing) uses computers volunteered by their owners as a source of computing power and storage to support distributed scientific computing. The key idea of volunteer computing is to harvest the idle time of Internet-connected computers, which may be widely distributed across the world, to run a very large and distributed application. It is behind the "@home" philosophy of sharing/donating network-connected resources to support distributed scientific computing.

Thus, the core idea of this project is to implement a volunteer cloud: an infrastructure built on resources voluntarily shared (for free or for a charge) by their owners or administrators, following a volunteer computing approach, and provided to users through a cloud interface, i.e., QoS-guaranteed, on-demand services. Since this new paradigm merges volunteer and cloud computing goals, it has been named Cloud@Home. It can be considered a generalization and a maturation of the @home philosophy, knocking down the (hardware and software) barriers of volunteer computing and also allowing more general services to be shared.

In this new paradigm, users' resources and data centers are not only passive interfaces to cloud services: they can interact (for free or for a charge) with one or more clouds, which therefore must be able to interoperate. The Cloud@Home paradigm could also be applied to commercial clouds, establishing an open computing-utility market where users can both buy and sell their services. Since computing power can be described by a "long-tailed" distribution, in which a high-amplitude population (cloud providers and commercial data centers) is followed by a low-amplitude population (small data centers and private users) that gradually "tails off" asymptotically, Cloud@Home can catch the long-tail effect, providing similar or higher computing capabilities than commercial providers' data centers by grouping small computing resources from many single contributors.

We therefore believe that the Cloud@Home paradigm is applicable also at lower scales: from the single contributing user, who shares his/her desktop, to research groups, public administrations, social communities, and small and medium enterprises, which make their distributed computing resources available to the cloud. Both free-sharing and pay-per-use models can be adopted in such scenarios. It could also be a good way to reconvert the investments made in grid computing and similar distributed infrastructures into cloud computing.

The Cloud@Home paradigm is inspired by the volunteer computing one. The latter was born to support the philosophy of open public computing, implementing an open distributed environment in which resources (not services, as in the cloud) can be shared. Volunteer computing is behind the "@home" philosophy of sharing/donating network-connected resources to support distributed scientific computing. Cloud@Home, on the other hand, can be considered the enhancement of the grid-utility vision of cloud computing. In this new paradigm, users' hosts are no longer passive interfaces to cloud services; they can interact (for free or for a charge) with other clouds. The scenario we prefigure is composed of several coexisting and interoperable clouds.
Open clouds identify groups of shared resources and services operating as free volunteer computing; commercial clouds characterize entities or companies selling their computing resources for business; hybrid clouds can both sell their services and give them away for free. Both open and hybrid clouds can interoperate with any other cloud, including commercial ones, forming federations of clouds. In this way, an open market of computing resources could be established: a private cloud that requires extra computing resources buys them from third parties; otherwise, it can sell its resources, or give them for free, to others.

Figure 1: Cloud@Home Reference Scenario

Fig. 1 above depicts the Cloud@Home reference scenario, identifying the different stakeholders characterized by their roles: consuming and/or contributing. Arrows outgoing from the cloud represent consuming resources, from which a Cloud@Home client submits its requests; arrows incoming to the cloud represent contributing resources providing their services to Cloud@Home clients. Therefore, infrastructure providers, data centers, grids, clusters and servers, down to desktops and mobile devices, can both contribute and consume. In fact, we believe that the Cloud@Home paradigm is widely applicable: from research groups, public administrations, social communities and SMEs, which make their distributed computing resources available to the cloud, down, potentially, to the single contributing user, who autonomously decides to share his/her resources. According to the Cloud@Home vision, all users can be, at the same time or at different moments, both clients and active parts of the computing and storage infrastructure.

A straightforward application of this concept to the world of mobile devices is of limited use, because of the limited computing power and storage capacity available on such nodes. Still, an active participation of mobile nodes in cloud services can be devised if we start considering as resources not only computing and storage, but also the peculiar and commonly available peripherals/sensors on mobile phones (e.g., camera, GPS, microphone, accelerometer) or on other devices such as the nodes of a sensor network. If we consider these hardware resources as a means of acquiring context-related information, an interesting and useful new class of cloud applications can be designed. In the category of context information we can also include the personal information available on the device, since it helps to characterize the situation and attributes of the application execution. In other words, Cloud@Home, besides virtualizing the computing and storage resources, aims at virtualizing the sensing infrastructure as well. Such an infrastructure, consistently with the other functionalities, has to be accessed as a service (sensor as a service, SEAAS). From this perspective, the mobile devices in Fig. 1 are considered both contributing and consuming resources, since they can provide their sensors to Cloud@Home and/or access the cloud to submit their requests as common clients.
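To make the sensor-as-a-service idea more concrete, the following is a minimal sketch of what a virtualized sensor interface might look like. The interface and method names are our illustrative assumptions, not part of any Cloud@Home specification:

```java
import java.util.List;

// Illustrative SEAAS-style interfaces; all names are hypothetical.
interface VirtualSensor {
    String type();     // e.g. "gps", "accelerometer", "camera"
    Reading read();    // one abstracted, device-independent sample
}

// A single abstracted measurement, independent of the physical device.
record Reading(String sensorType, long timestampMillis, double[] values) { }

// The cloud-side access point: discovery plus QoS-constrained acquisition.
interface SensingService {
    List<VirtualSensor> discover(String type, String region);
    Reading acquire(VirtualSensor sensor, long maxStalenessMillis);
}
```

The point of the abstraction is that a client asks for "a GPS reading from region X, no older than N milliseconds" rather than addressing a particular phone, mirroring how the execution and storage services hide the physical nodes behind them.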
The project framework will be based on a Cloud@Home software system that provides readily available functionality in the areas of directory/information services, security and management of resources. In order to implement such a form of computing, the following issues should be taken into consideration: resource management; the user interface; security, accounting and identity management; virtualization; interoperability among heterogeneous clouds; as well as business models, billing, and QoS and SLA management.

A possible rationalization of the tasks and functionalities the Cloud@Home middleware has to implement can be obtained by considering the layered view shown in Fig. 2 above. Three separate layers are identified there in order to apply a separation of concerns and therefore improve the middleware development process. These are:

- The Frontend Layer, which globally manages resources and services (coordination, discovery, enrollment), implements the user interface for accessing the cloud (ensuring security, reliability and interoperability), and provides QoS and business-model and policy-management facilities.
- The Virtual Layer, which implements a homogeneous view of the distributed cloud system, offered to the higher frontend layer (and therefore to users) in the form of two main basic services: the execution service, which allows a virtual machine to be set up, and the storage service, which implements a distributed storage cloud to store data and files as a remote disk, locally mounted or accessed via the Web. Virtual sensors provide the access points to the sensing infrastructure; this access is characterized by abstraction and independence from the actual sensing process and equipment.
- The bottom Physical Layer, which provides both the physical resources for processing the requests and the software for locally managing those resources. It is composed of a "cloud" of generic nodes and/or devices geographically distributed across the Internet.

Application Scenarios for Cloud@Home

Several possible application scenarios can be imagined for Cloud@Home:

Research Centers, Public Administrations, Communities – The volunteer computing inspiration of Cloud@Home provides the means for creating open, interoperable clouds to support scientific purposes, overcoming the portability and compatibility problems highlighted by the @home projects. Similar benefits could be experienced in public administrations and open communities (social networks, peer-to-peer, gaming, etc.). Through Cloud@Home it becomes possible to implement resource- and service-management policies with QoS requirements (characterizing the scientific project's importance) and specifications (QoS classification of the resources and services available). This is a new deal for volunteer computing, since the latter does not take QoS into consideration, following a best-effort approach instead.

Enterprise Settings – Planting a Cloud@Home computing infrastructure in business-commercial environments can bring considerable benefits, especially in small and medium enterprises but also in big ones. It becomes possible to implement one's own data center with local, existing, off-the-shelf resources: in every enterprise there usually exists a pool of stand-alone computing resources dedicated to specific tasks (office automation, monitoring, design and so on). Since such resources are only (partially) used during office hours, by connecting them together over the Internet it becomes possible to build up a Cloud@Home data center, in which users allocate shared services (web server, file server, archive, database, etc.) without any compatibility constraints or problems.
The interoperability among clouds makes it possible to buy computing resources from commercial cloud providers when needed or, conversely, to sell the local cloud's computing resources to the same or different providers. This allows business costs to be reduced and optimized according to QoS/SLA policies, improving performance and reliability. For example, this paradigm makes it possible to deal with peaks or bursts of workload: data centers could be sized for the average case, and worst cases (peaks) could be managed by buying computing resources from cloud providers. Moreover, Cloud@Home drives towards a rationalization of resources: all the business processes can be securely managed over the Web, allocating resources and services where needed. In particular, this can improve marketing and trading (e-commerce), making a wealth of customizable services available to sellers and customers. Interoperability also points to another scenario, in which private companies buy computing resources in order to resell them (subcontractors).

Ad-hoc Networks, Wireless Sensor Networks, and Home Automation – The cloud computing approach, in which both software and computing resources are owned and managed by service providers, eases programmers' efforts in facing device-heterogeneity problems. Mobile application designers should start to consider that their applications, besides being usable on a small device, will need to interact with the cloud. Service discovery, brokering, and reliability are important issues, and services are usually designed to interoperate. In order to address the consequences of mobile users accessing service-oriented grid architectures, researchers have proposed new concepts such as that of a mobile dynamic virtual organization. New distributed infrastructures have been designed to facilitate the extension of clouds to the wireless edge of the Internet. Among them, mobile service clouds enable dynamic instantiation, composition, configuration, and reconfiguration of services on an overlay network to support mobile computing. A still-open research issue is whether or not a mobile device should be considered a service provider of the cloud itself. The use of modern mobile terminals such as smartphones not just as Web service requestors, but also as mobile hosts that can themselves offer services in a true mobile peer-to-peer setting, is also discussed in the literature (see the references below). Context-aware operations involving control and monitoring, data sharing, synchronization, etc., could be implemented and exposed as Cloud@Home Web services involving wireless and Bluetooth devices, laptops, iPods, cellphones, household appliances, and so on. Cloud@Home could be a way of implementing ubiquitous and pervasive computing: many computational devices and systems can be engaged simultaneously in performing ordinary activities, and may not necessarily be aware that they are doing so.

About the Authors

Dr. Salvatore Distefano received the master's degree in computer science engineering from the University of Catania in October 2001. In 2006, he received the PhD degree in "Advanced Technologies for the Information Engineering" from the University of Messina. His research interests include performance evaluation, parallel and distributed computing, software engineering, and reliability techniques. During his research activity, he participated in the development of the WebSPN and ArgoPerformance tools. He has been involved in several national and international research projects.
At this time, he is a postdoctoral researcher at the University of Messina.

Dr. Antonio Puliafito is a full professor of computer engineering at the University of Messina, Italy. His interests include parallel and distributed systems, networking, wireless, and grid computing. He was a referee for the European Community for the projects of the Fourth, Fifth, Sixth, and Seventh Framework Programs. He has contributed to the development of the software tools WebSPN, MAP, and ArgoPerformance. He is a coauthor (with R. Sahner and K.S. Trivedi) of the text Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package (Kluwer Academic Publishers). He is the vice president of Consorzio Cometa, which is currently managing the Sicilian grid infrastructure.

Note: This work has been partially supported by MIUR through the "Programma di Ricerca Scientifica di Rilevante Interesse Nazionale 2008" (PRIN 2008) under grant no. 2008PXNBFZ, "Cloud@Home: a new enhanced computing paradigm".

References

- The Programmable Web. http://www.programmableweb.com/.
- Chris Anderson. The Long Tail: How Endless Choice Is Creating Unlimited Demand. Random House Business Books, July 2006.
- David P. Anderson and Gilles Fedak. The computational and storage potential of volunteer computing. In CCGRID '06, pages 73-80.
- Stephen Baker. Google and the Wisdom of Clouds. BusinessWeek, December 24, 2008. http://www.businessweek.com/magazine/content/07 52/b4064048925836.htm.
- G. Fedak, C. Germain, V. Neri, and F. Cappello. XtremWeb: a generic global computing system. In Cluster Computing and the Grid, 2001: Proceedings of the First IEEE/ACM International Symposium, pages 582-587, 2001.
- Richard Martin. The Red Shift Theory. InformationWeek, August 20, 2007. http://www.informationweek.com/news/hardware/showArticle.jhtml?articleID=201800873.
- F. A. Samimi, P. K. McKinley, and S. M. Sadjadi. Mobile service clouds: A self-managing infrastructure for autonomic mobile computing services. In LNCS 3996, pages 130-141. Springer-Verlag, 2006.
- Satish Narayana Srirama, Matthias Jarke, and Wolfgang Prinz. Mobile web service provisioning. In AICT-ICIW '06: Proceedings of the Advanced Int'l Conference on Telecommunications and Int'l Conference on Internet and Web Applications and Services, page 120, Washington, DC, USA, 2006. IEEE Computer Society.
- M. Waldburger and B. Stiller. "Toward the mobile grid: service provisioning in a mobile dynamic virtual organization". In IEEE International Conference on Computer Systems and Applications, pages 579-583, 2006.
- Lizhe Wang, Jie Tao, Marcel Kunze, Alvaro Canales Castellanos, David Kramer, and Wolfgang Karl. Scientific Cloud Computing: Early Definition and Experience. In HPCC '08, pages 825-830.
Endless optimism in the face of obviously difficult, complex problems has led humans to prevail through long wars, revolutions, plagues, difficult terrestrial explorations and the quest for space, and to believe that, despite their artificial immortality, neither Tang nor Twinkies would ultimately poison those who loved them.

The Naval Research Laboratory is pushing such optimism to new lengths with a project designed to create a humanoid firefighting robot that can survive toxic smoke and chemicals, murderous heat and cramped conditions to fight fires aboard Navy ships -- fires that would kill human firefighters. It's a sensible, even noble intent.

Fire, not explosions or gunfire, was the biggest fear of sailors under attack by kamikazes in World War II, and in nearly every naval war before that, too. Wooden ships burn easily, especially when you cover them with pitch and fill them with gunpowder. A ship that burned was a ship that sank, and most sailors couldn't swim. So a ship that burned and sank would kill most of the crew who survived the disaster itself. On metal ships the problem became more intense, as the number, length and inaccessibility of the spaces belowdecks expanded into a maze of long, low, cramped hallways and unexpected caches of fuel, explosives and flammable chemicals, often under pressure.

The Shipboard Autonomous Firefighting Robot (SAFFiR) was designed with the ability to move around a ship built for bipeds, with sensors that allow it to find the heart of the fire it is trying to extinguish, and with batteries capable of powering it long enough to make its firefighting significant. The design includes a visible-light camera, a gas detector and an infrared camera that let it see through smoke and find hot spots. It is also to be equipped with the ability to throw propelled extinguishing agent technology (PEAT) grenades, which are designed to explode near a blaze that is inaccessible to firefighters with hoses, spraying it with fire-retardant chemicals.

Researchers from Virginia Tech and the University of Pennsylvania are also writing algorithms designed to allow the 'bot to work in a coordinated way with human firefighters and follow the orders of human team leaders directing the firefighting effort. That interactive ability should include the capacity to respond to gestures and understand signals such as pointing or hand signals. The prototype will go through initial tests in firefighting mockups on the decommissioned destroyer USS Shadwell in September of next year.

Judging by the awkward state of the art in Japanese robots -- Honda's gawky Asimo is generally considered among the best of the bipeds -- we're still several years away from being able to build anything close to the agility, adeptness and breadth of function the NRL predicts for SAFFiR. Asimo moves itself around a demo stage fairly well, but has had trouble even walking up stairs, let alone making its way down a smoky corridor on a ship pitching in rough seas, avoiding injured sailors and obstacles along the way. This video is of an earlier prototype of the basic bipedal robot frame that will probably become part of the Navy's firefighter.
Considering that the most practical semi-autonomous industrial robots today are those able to do things like cut the grass on golf-course greens using GPS signals to locate their targets, it will be several years before even advanced engineering projects can deliver autonomous, bipedal robots capable of complex behaviors, able to maneuver in a difficult environment and to identify the best course of action in situations where there is more than one choice.

Being able to code in a working interpretation of the kind of sign language humans use during disasters is one thing; that's just pattern recognition and programming. Being able to assess its own environment, choose an appropriate course of action, propel itself there and act under dangerous, ambiguous conditions is the kind of thing the Navy gives sailors medals for achieving.

Hoping a robot can take over to keep sailors from risking themselves is optimistic and noble, as was the assumption that radio-controlled, ground-bound robots could help scout and clean up the Fukushima nuclear plants in Japan. They made almost no progress at the time, though adaptations and new designs have been more help in the months since then.

A shipboard firefighting robot would be a huge contribution to the development of effective robotics and a tremendous reducer of the mortal risks faced by sailors. It is also the kind of leading-edge research project the military is better able to fund (and more easily able to take advantage of) than civilian organizations. Figuring out how to make SAFFiR work the way it's supposed to would be among the best applications of Navy engineering research money I've heard about recently. It's very likely to succeed, eventually. It's just not going to happen any time soon.
Definition: A node in a tree without any children. See the figure at tree.

Also known as: external node, terminal node.

Aggregate parent (I am a part of or used in ...)

See also: root, internal node.

Note: Every node in a tree is either a leaf or an internal node. The root may be a leaf.
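Because the definition is purely structural, the leaf test is a one-liner. Here is a minimal sketch in Java; the Node class is our own illustration, not part of the dictionary entry:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the definition: a leaf is a node with no children.
class Node {
    final List<Node> children = new ArrayList<>();

    boolean isLeaf() {
        return children.isEmpty();   // leaf: no children at all
    }

    boolean isInternal() {
        return !isLeaf();            // every node is one or the other
    }
}
```

A freshly created Node with no children added is both the root of its (one-node) tree and a leaf, matching the note above.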
Researchers Store Data in Flash Memory Under Low Voltage Conditions

Researchers from the University of Massachusetts Amherst and Texas A&M University have figured out a way to use flash memory in gadgets that use low power or have no batteries at all. They succeeded in writing information to flash memory under low-voltage conditions, paving the way for a new generation of low-power gadgets that can store data, and presented the paper Feb. 16 at the USENIX File and Storage Technologies Conference in San Jose, Calif.

Flash memory generally requires 2.2 to 4.5 volts, which makes it unusable in devices built around low-power microprocessors, the researchers wrote. For example, the MSP430 microcontroller from Texas Instruments is intended for embedded applications and runs on as little as 1.8 volts.

A number of memory manufacturers, such as Greenliant and SanDisk, have started building low-power flash solid-state drives for use in embedded storage systems; Intel also announced last month an SSD that can be used in industrial embedded applications. Greenliant targets embedded devices in enterprise, industrial, automotive and networking applications. In general, though, designers have opted either to boost CPU voltages to meet flash memory's minimum requirements, or simply not to use flash memory at all, the researchers said; gadgets in the latter category cannot store data. Tablets and netbooks can support flash memory because they have the room for separate power rails for the CPU and the flash memory. That is not possible on small gadgets where the flash memory is integrated within a microcontroller, and whose batteries are also much smaller in size and power.

The main trick to effectively writing data to flash memory when running on less than the minimum required voltage was "persistence," according to the project's lead researcher, Mastooreh Salajegheh. The software-only coding algorithms exploit "the electrically cumulative nature" of half-written data, based on a quantum-mechanical phenomenon called tunneling: electrons travel onto the chip a little at a time and accumulate, and once enough electrons have been collected, there is enough charge to write the data at that instant. Obviously, this process does not offer good performance or efficiency, but if the goal is to conserve power, it accomplishes that readily, according to the paper.

On a sensor-monitoring application using the MSP430, the researchers used persistence to reduce overall energy consumption by 34 percent. "Our evaluation shows that tightly maintaining the digital abstraction for storage in embedded flash memory comes at a significant cost to energy consumption with minimal gain in reliability," according to the paper.

A persistence method like this may be useful for a number of small devices, such as remote-control key fobs or digital picture frames. It can also be useful in devices that currently have no batteries at all, such as RFID tags and electronic passports. With the persistence technique uncovered by Salajegheh's team, embedded application designers can fit non-volatile storage inside these small devices.

Salajegheh, Kevin Fu, and Erik Learned-Miller were the researchers from the University of Massachusetts Amherst; Yue Wang and Anxiao Jiang from Texas A&M made up the remainder of the research team.
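As a rough software-level illustration of the persistence idea described above -- repeated partial writes that accumulate charge until the value sticks -- a low-voltage write loop might look like the sketch below. This is not the authors' actual code, and the FlashDevice helpers are hypothetical stand-ins for memory-mapped flash operations:

```java
// Schematic illustration of "persistence": keep re-issuing a low-voltage
// write until the read-back value matches. All names are hypothetical.
public class PersistentWrite {
    static final int MAX_ATTEMPTS = 64;   // illustrative bound, not from the paper

    static boolean writeUntilStored(FlashDevice flash, int addr, byte value) {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            flash.write(addr, value);          // may only partially program the cell
            if (flash.read(addr) == value) {   // enough charge accumulated to stick
                return true;
            }
        }
        return false;                          // give up; the caller may retry later
    }

    interface FlashDevice {
        void write(int addr, byte value);
        byte read(int addr);
    }
}
```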
In computing, size and performance were once inextricably linked, with the most powerful computers for many years taking up huge spaces. And because most of these behemoths could only be found in universities and other academic institutions, government buildings, or the headquarters of the biggest businesses, they often had rooms built around them. Now fast-forward to the present day, and consider: you may well be carrying around a similar amount of processing power, once packed into one of those mega-machines, in your trouser pocket -- your smartphone.

Not Just Computers

A range of other devices and technologies have followed a similar route, but developments in all of them have been driven by the scaling-down of the most basic components, such as computer chips and moldings of man-made materials. At its most extreme, this trend is known as nanotechnology, a term applied to components with dimensions of one billionth of a meter or even smaller. To put this scale into context, a nanometer is one hundred-thousandth of the width of a human hair.

We and everything around us are made up of these tiny particles, and the specific way in which they are arranged is what makes human beings unique or, conversely, enables us to make goods that are consistently identical in large quantities. The most exciting aspect of substances reduced to the nanoscale is that scientists have found that many do not behave in the way the experts would expect from studying them in their classic physical form. As a result, scientists are kept busy breaking substances down to the nanoscale so that they can study the different ways in which they do behave, as well as finding ways to make them as stable as possible. The obvious problem with such research, however, is that special types of microscopes are needed to let us see these particles.

Reversing the Common Conceptions

The problem faced by developers assembling typical-sized data centers is principally how to cope with the heat generated by their operation. As the manager of one such large installation told Information Age magazine, such centers "are built for a different kind of specification -- we can't cope with miniaturization." But another, equally pressing, matter is the demand that running today's data centers places on power-supply networks.

This is why data center and infrastructure management is increasingly viewed as a specialist function; many businesses simply lack the space to accommodate the equipment needed to store and process all their data efficiently. At the same time, they are keen to minimize the risk to the continuity of their power supplies, which goes with trying to maximize the capacity and capabilities of their computer systems while limiting the amount of space needed.

A major hope for advancement lies in the possibilities of modular construction of data centers. Whereas they were once built with capacity that was seldom expected to be needed for many years, the newest modular setups mean not only that less space is needed in the initial stages, but also that setup times can be considerably reduced. Such data centers can be provided in remote locations that were once considered impractical for these applications.
A Data Center Dichotomy

An outstanding technological challenge facing IT administrators and those in charge of configuring IT systems is that although an increasing number of functions must be integrated into a single device, the silicon that houses all this capability continues to shrink. As a result, the technical minds whose job is to integrate all these tasks onto a single chip must concentrate on maximizing the efficiency with which each individual task is performed, in addition to minimizing any negative impact on the business in performing its myriad tasks.

As ComputerWeekly points out, centralization of increasing numbers of functions is no "silver bullet" for tackling this dilemma, no matter how tempting the prospect might be for those who need to pay attention to an organization's bottom line. Although this approach might be a first-stop solution when a business is looking to cut costs, the topic of outsourcing must be approached with great care so that the right data- and infrastructure-management partners are chosen.

The Biggest Challenge

The main challenge is likely to be reconciling our demands for data centers that can handle our still-growing appetite for data with the global recognition that this must be done in a way that doesn't put undue pressure on power supplies. Miniaturization now means that data centers of as little as 2,000 square feet can house up to 20 petabytes of storage. Driven by the concept of virtualization, which enables the division of a physical server into multiple virtual servers, this trend means fewer servers are used more efficiently, leading to both cost and resource savings. This challenge is an essential consideration when choosing the location, layout and design of any data center; otherwise, the setup and operating costs of such an installation could easily consume all the savings from centralizing the data-storage and data-handling processes in the first place.

Shared data centers undoubtedly represent the future of how we will take care of our infrastructure-management needs. But the fact remains that such buildings are some of the most power-hungry anywhere in proportion to their physical size. So, for the foreseeable future, we will continue to see the trend of locating data centers in less crowded locations, thanks to the continuing need to build in more capacity than is actually required and to provide a good degree of "future-proofing." The central challenge for data center managers, therefore, is to offer all this capacity, including emergency backup systems, while controlling the costs of operating and maintaining their own buildings and infrastructure. Even if nanotechnology does bring the promised reductions in physical space, miniaturization will remain a major consideration for those involved in data center management for the foreseeable future.

For more from Geist, visit http://www.geistglobal.com/.

About the Author

Steven Cox has worked as a professional writer for more than 20 years, including for regional newspapers and national specialist magazines in the U.K. He now specializes in online marketing, including all forms of web content such as guest blogs and in-depth feature articles. Outside work, he enjoys traveling, soccer, restoring old railway engines and relaxing in a sunny beer garden (with the right refreshment, of course!).
Intel will figure prominently at this week's IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, where researchers will present a raft of emerging technologies -- everything from integrated digital radio to optical interconnects. But some of the chipmaker's most interesting presentations are devoted to low-power circuitry.

In a prelude to ISSCC, Intel CTO Justin Rattner held a press briefing last week, outlining a couple of energy-focused technologies the company has been working on. Specifically, Rattner spoke about the ongoing research into near-threshold voltage circuitry designs and a new variable-precision floating point unit. While both technologies are still confined to the research labs, Intel looks to be grooming them for their commercial debut.

The idea behind near threshold voltage (NTV), said Rattner, is to design circuit logic or memory that can operate at very low voltage, thereby saving energy. As its name implies, NTV works just a notch above the transistor threshold level, that is, the point at which the device would actually shut off. The advantage is that transistors exhibit peak energy efficiency in this NTV range -- on the order of 5 to 10 times more efficient than when operating at "normal" levels. For a microprocessor, this means the voltage can be decreased significantly, enabling a standard CPU, like a Pentium, to be powered by just a few milliwatts.

The downside is that as the voltage is squeezed, the clock frequency drops, slowing throughput. But since frequency decreases only linearly with voltage, while power decreases quadratically, throughput per watt should be much better. Another advantage of NTV circuitry is that it enables a much greater dynamic voltage range, so the clock can be cranked up and down more easily, providing more control over the balance between performance and energy use. This is ideal for settings where the workload is variable, but where you might want to max out performance for at least some of the applications. Of course, since the maximum energy savings come from keeping voltages low, it makes more sense to use more (if slower) NTV processors for a given application, as long as the code can be parallelized sufficiently. Rattner said one potential application area for this technology is exascale hardware, where any loss of individual processor performance is naturally compensated for by the scale of the system.
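The throughput-per-watt claim follows from the usual first-order CMOS scaling model. As a hedged back-of-the-envelope sketch (assuming dynamic power dominates and ideal scaling, which real silicon only approximates):

```latex
% Idealized first-order CMOS model (a sketch, not Intel's measured data):
%   dynamic power:    P \propto C V^2 f
%   clock frequency:  f \propto V   (roughly, well above threshold)
% Energy per operation is power divided by throughput:
\[
\frac{P}{f} \;\propto\; \frac{C V^2 f}{f} \;=\; C V^2
\]
% so operations per joule scale as 1/V^2: halving the supply voltage yields
% roughly a 4x efficiency gain, in line with the 5-10x figure quoted for
% operation near threshold, while raw throughput falls only linearly with V.
```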
At the Intel Developer Forum last fall, the company demonstrated an NTV prototype, known as Claremont, which was essentially a Pentium chip overlaid with NTV circuitry. At ISSCC this week, Intel will show how that design can operate between 3MHz and 915MHz and achieve up to 4.7 times better energy efficiency than a standard chip. At the most conservative voltage levels, the processor runs on a mere 2 milliwatts of power. The NTV technology can also be applied to memory circuits and graphics logic, something Intel will demonstrate at ISSCC with an NTV-tweaked SIMD engine for processor graphics. In this case, because the graphics logic was designed with NTV in mind (unlike the Claremont Pentium-based prototype), the researchers were able to achieve a 9-fold increase in energy efficiency.

Intel is also wrapping low-power technology into floating point logic, one of the biggest energy hogs on a microprocessor. Part of the problem is that floating point units operate at maximum precision (or, more typically, at two levels -- single and double precision), thus wasting computational bandwidth and storage. As Rattner noted, most programmers opt for the default 64-bit floating point level, not realizing that in most cases far less precision is required to get the correct answer.

To address the problem, Intel has invented what it calls its "Variable Precision Floating Point Unit." The idea is to build smarts into the hardware so that the computation is confined to the significant digits rather than the programmer-defined width. Intel has built an FP-unit prototype that automatically right-sizes the floating point computation by using something called certainty tracking to determine the required accuracy. The prototype has three floating point gears -- 24-bit, 12-bit and 6-bit -- and uses certainty tracking to determine which bit width is appropriate. When fewer digits are warranted, there are fewer bits to shuffle, so not only is energy saved, but performance is increased as well. Rattner claimed the design can cut energy consumption by as much as 50 percent compared with a conventional FP design. According to Intel, the prototype, which is clocked at 1.45GHz, delivers between 52 and 162 gigaflops/watt. Intel estimates that if it applied NTV techniques to the variable-precision floating point design, it could realize an additional 7-fold efficiency gain. (For reference, a 20MW exaflop system needs an energy efficiency of just 50 gigaflops/watt, but that includes the entire microprocessor as well as external memory, I/O chips, the network fabric, and so on.)

Rattner said the technology is applicable to GPUs (especially for visual computing and traditional graphics) and HPC-type processor designs. In the case of the latter, the implication is that it could be used for Intel's Many Integrated Core (MIC) processors, which are essentially big floating point processors in an x86 wrapper. In both the graphics and the HPC case, the energy efficiency of the floating point hardware is critical to the value proposition of the associated products. "We have lots of plans for this technology," said Rattner, "and you can certainly expect to see it as we move out toward the middle of the decade and beyond, where these energy challenges become even more severe than they are today."
The Downadup, or Conficker, infection is a worm that predominantly spreads by exploiting the MS08-067 Windows vulnerability, but it also includes the ability to infect other computers via network shares and removable media. Not since the Sasser and MSBlaster worms have we seen as widespread an infection as the Downadup worm. In fact, according to the anti-virus vendor F-Secure, the Downadup worm has infected over 8.9 million computers. Microsoft has addressed the problem by releasing a patch to fix the Windows vulnerability, but there are still many computers that do not have this patch installed, and thus the worm has been able to propagate throughout the world.

When installed, Conficker/Downadup will copy itself to your C:\Windows\System32 folder as a randomly named DLL file. If it has problems copying itself to the System32 folder, it may instead copy itself to the %ProgramFiles%\Internet Explorer or %ProgramFiles%\Movie Maker folders. It will then create a Windows service that automatically loads this DLL via svchost.exe, which is a legitimate file, every time you turn on your computer. The infection will then change a variety of Windows settings that allow it to efficiently infect other computers over your network or the Internet. Once the infection is running, you will find that you are no longer able to access a variety of sites, such as Microsoft.com and many anti-virus vendors. It does this so that you cannot download removal tools or update your anti-virus programs. It will then perform the following actions, in no specific order:

- Stop and start System Restore in order to remove all your current System Restore points, so that you cannot roll back to a previous date when your computer was working properly.
- Check for Internet connectivity by attempting to connect to one of several websites.
- Attempt to determine the infected computer's IP address by visiting one of several IP-lookup sites.
- Download other files to be used as necessary.
- Scan the infected computer's network for vulnerable computers and try to infect them.

Some symptoms that may hint that you are infected with this malware are as follows:

- Anti-malware software stating that you are infected, using Conficker or Downadup detection names.
- Automatic Updates no longer working.
- Anti-virus software no longer able to update itself.
- Inability to access a variety of security sites, such as those of anti-virus software vendors.
- Random svchost.exe errors.

Using the following guide, we will walk you through removing this worm from your computer and securing it so that it does not get infected with Downadup again. Because this worm stops you from accessing the sites you need to download the removal tools from, you will need access to another, clean computer and the ability to copy files from that computer to the infected one. If at all possible, I suggest you copy the files using a burnable DVD or CD in order to prevent your computer's USB drives from possibly becoming infected. This guide will walk you through removing the Conficker and Downadup worms for free. If you would like to read more information about this infection, we have provided some links below.

Self Help Guide

- Print out these instructions, as we will need to close every window that is open later in the fix.
- Because Downadup and Conficker do not allow you to connect to Microsoft and a variety of security sites, you must first download the Windows patch and the removal tool on another computer and transfer the files to your infected PC. On a clean computer, download BitDefender's Anti-Downadup tool and save the file to your desktop. The current name of the file is bd_rem_tool.zip.
- Next, download the KB958644/MS08-067 security patch for your particular Windows operating system. Look through Microsoft's list and click on the link that corresponds to the version of Windows running on the infected machine. Then download the file from the page that opens and save it to your desktop.
- Now copy bd_rem_tool.zip and the Windows patch file to a floppy, CD, or USB drive so we can copy them to the infected PC.
- Once the files are stored on a removable device, copy them onto your infected PC's Windows desktop.
- Once the Windows patch and the bd_rem_tool.zip file are on your infected computer's desktop, install the Windows patch first. Simply double-click on the file that you downloaded from Microsoft's web site and follow the prompts to install the patch. This ensures your computer does not become reinfected after we clean the current infection. If the patch is already installed, the Microsoft installer will detect that and not reinstall it.
- Now we need to extract the files from bd_rem_tool.zip. You can do this by right-clicking on bd_rem_tool.zip and then selecting the Extract All... menu option as shown in the image below.
- At the next screen, keep clicking the Next button until you see a screen similar to the one below. Once the file has finished being extracted, click on the Finish button.
- A folder will open containing two files, named bd_rem_tool_console.exe and bd_rem_tool_gui.exe. Please double-click on the bd_rem_tool_gui.exe file to start the program. When you run this program, Windows may display a warning similar to the image shown below. If you receive this warning, please click on the Run button to continue starting Anti-Downadup on your computer. If you did not receive this warning, then Anti-Downadup should have started and you can proceed to step 9.
- You will now see a screen prompting you to start the scan or close the program. Please click on the Start button to have the program scan your computer and remove any Downadup and Conficker infections.
- Anti-Downadup will now start to scan your computer and determine if you are infected, as shown below. This process can take 10 minutes, so please be patient. When it is done, if your computer is clean it will tell you so and you can close the program. Otherwise, continue with the rest of the steps.
- When Anti-Downadup has finished scanning your computer, it will prompt you to reboot in order to finish the cleaning process. Press the Yes button to allow the infected computer to be rebooted. If you do not reboot your computer, you will be left with a blue screen, as Explorer was terminated during the cleaning process.
- When the computer has finished rebooting, you should no longer have the Conficker or Downadup infections on your computer. To see a log of what was deleted, you can open the C:\Win32.Worm.Downladup.Gen.log file in Notepad. Though the infection is now removed from your computer, we need to make sure you do not get infected again.
As you have already installed the Windows patch, you can no longer be infected via the MS08-067 exploit. This infection, though, spreads through network shares and removable devices as well. So please examine your computer for any network shares and disable any that are not necessary to have open.

The next step is to disable Autorun on your computer. Autorun is a feature that allows executables to run automatically when you insert removable media such as a CD/DVD, flash drive, or other USB device. Having Autorun enabled is a security risk because a virus can spread through the use of removable media. For example, if you use your flash drive on a computer infected with a removable-media worm, your flash drive will become infected. Then, when you use that infected flash drive on a computer that has Autorun enabled, the infection will automatically run and infect the new computer. As you can see, disabling Autorun is an important step in securing your computer.

Please note that if you disable this feature, removable media, including CDs and DVDs, will no longer open or start automatically. Instead you will need to open My Computer, right-click on the specific drive, and select Explore or Play in order to access the contents of the media. If you prefer security over convenience, download the provided registry fix file and save it on your desktop. Once the file is downloaded, simply double-click on it. When Windows asks if you would like to merge the data, click on the Yes button. Now that Autorun is disabled, reboot your computer to make the setting effective.

Congratulations! Your computer should now be free of the Downadup and Conficker programs and no longer vulnerable to infection from this malware.
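For readers who would rather apply the Autorun change themselves than merge a downloaded file, here is a minimal sketch of the same policy change in Python. It writes the well-known NoDriveTypeAutoRun policy value; the script is an illustration of what the registry fix does, not the exact file the guide distributes, and it must be run on Windows with administrator rights.

```python
# Sketch: disable Autorun for all drive types by setting the standard
# NoDriveTypeAutoRun policy value. Windows-only; requires admin rights.
import winreg

EXPLORER_POLICIES = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def disable_autorun():
    # 0xFF disables Autorun on every drive type (unknown, removable, fixed,
    # network, CD-ROM, RAM disk, and reserved).
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, EXPLORER_POLICIES,
                             0, winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_autorun()
    print("Autorun disabled for all drive types; reboot for the change to take effect.")
```

Either route, the registry merge or the script, ends in the same place: a reboot, after which removable media will no longer launch anything automatically.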
At home I have two computers, in two different rooms, both with internet access. My main computer is connected through a router that is connected to a cable modem; the cable modem is also connected to a switch that connects to a second router, which connects to the other computer (the other computer has a printer). Computer 1 is connected to the basement hub. The cable length is approximately 14 feet. Hub to Linksys router is about 25 feet. Linksys router to second computer is about 12 feet. The IP address of the first computer is 192.168.123.101; the IP address of the second computer is 192.168.1.100. How can I use the printer on computer 2 from the first computer, or is it possible? -- Ron Hughes

Yes, it's possible, but it'll take a bit of work. To be able to print from computer 1 to the printer attached to computer 2, you will need to put access rules on the router that fronts the computer with the printer so that it will accept traffic from your other router and be able to send traffic back. You may also need an access rule on the router for the computer that is trying to do the printing, depending on how the firmware in your routers handles incoming traffic. You will then configure the printer on computer 1 using computer 2's public IP address on the router it is attached to.

There's another option if the routers both have VPN functionality. If so, you can build a tunnel between the two routers and then print to the printer from the other computer using the local (192.168.1.100) IP address. Depending on the firmware, configuring the VPN might be a little tricky if you're not used to doing it. With either of these options, I would suggest getting a USB print server so that you can print to the printer from computer 1 without having computer 2 turned on.

But wait, there's another option, and it might be the easiest to set up: consolidate your current network configuration to a single router. Replace the cable going from the cable modem to router 2 (which now connects to computer 2) with a cable running from one of the LAN ports on router 1. This will give you a local address for both computers out of the same IP address range, reduce the number of routers/firewalls that you need to maintain, and make printing from either computer pretty straightforward. If running a new cable isn't an option, you can put a switch/hub in place of the current router 2 - with the caveat that this will add another point of failure. If you do pursue this route, however, make sure the switch/hub you install has auto-MDIX ports that automatically compensate for having to swap the transmit/receive pairs, so you don't have to rewire the RJ-45 connectors with crossover wiring.

You might also consider swapping both routers for a wireless router. This will let you connect one computer via its current wired connection and then link the other computer via a wireless connection. There is a wide variety of wireless cards that can be added to the other computer. Depending on the amount of metal and other building materials between the computer using the Wi-Fi connection and the wireless router, you might need a slightly directional antenna to get a better signal between the router and the computer.
Who uses unified storage, and why

Unified storage is ideal for organizations with general-purpose servers that use internal or direct-attached storage for shared file systems, applications, and virtualization. Unified storage replaces file servers and consolidates data for applications and virtual servers onto a single, efficient, and powerful platform.

How unified storage works

Unified storage is a platform with storage capacity connected to a network that provides file-based and block-based data storage services to other devices on the network. Unified storage uses standard file protocols such as Common Internet File System (CIFS) and Network File System (NFS) and standard block protocols like Fibre Channel (FC) and Internet Small Computer System Interface (iSCSI) to allow users and applications to access data consolidated on a single device.

Benefits of unified storage

Large numbers and various types and release levels of direct-attached or internal storage can be difficult to manage and protect, as well as costly due to very low total utilization rates. Unified storage provides the cost savings and simplicity of consolidating storage over an existing network, the efficiency of tiered storage, and the flexibility required by virtual server environments.
At NASA challenge, a 2-hour flight for $7 in electricity
- By Kevin McCaney
- Oct 06, 2011

For the first time in aviation history, full-scale electric-powered aircraft have performed in competition, taking the top two spots in the CAFE Green Flight Challenge, organized in part by NASA. The competition, created to spur development of more fuel-efficient aircraft and kick-start an electric plane industry, required aircraft to fly 200 miles in less than two hours while using less than 1 gallon of fuel per occupant, or the equivalent in electric power, NASA said.

The Pipistrel-USA.com team, based at Penn State University, took the first prize of $1.35 million - the largest prize ever awarded in aviation, NASA says. Team eGenius of Ramona, Calif., took the second prize of $120,000. Both aircraft were electric-powered, and both achieved twice the required fuel efficiency, using the electric equivalent of just over a half-gallon of fuel per passenger, NASA said.

"Two years ago, the thought of flying 200 miles at 100 mph in an electric aircraft was pure science fiction," said Jack Langelaan, a Penn State aeronautics professor and Pipistrel team leader. "Now, we are all looking forward to the future of electric aviation."

Considering the 8 cents per kilowatt-hour charged in central Pennsylvania, Langelaan said it would cost $7 to charge his Pipistrel G-4 for a two-hour flight, according to a CAFE Foundation blog entry. The Pipistrel team achieved the equivalent of 403.5 passenger miles per gallon at an average speed of 107 mph. Team eGenius reached 375 passenger miles per gallon. General aviation aircraft fly at about 60 passenger miles per gallon, George Lesieutre, Penn State's head of aerospace engineering, told Penn State Live.

The teams spent two years on design, development and testing of their aircraft, NASA said. Fourteen teams registered, and three met all requirements to fly in the competition. The Green Flight Challenge was held at the Charles M. Schulz Sonoma County Airport in Santa Rosa, Calif. It was managed by the CAFE Foundation under an agreement with NASA and sponsored by Google.

It's a long way from small, light aircraft to passenger jetliners, but the technology developed for electric and fuel-efficient flight could eventually find its way into general use, which is part of the idea behind the challenge. Airlines estimate that fuel makes up about 35 percent of their operating costs, which of course is reflected in the price of a ticket. So more efficient flight likely would make passengers happy, too.

Kevin McCaney is a former editor of Defense Systems and GCN.
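For readers who want to check the arithmetic behind the $7 charging figure, here is a quick back-of-the-envelope calculation. Both numbers come from the article; the calculation ignores charging losses, so the aircraft's actual average power draw would be somewhat lower than the grid-side figure.

```python
# Sanity check of Langelaan's $7 two-hour flight, using the article's figures.
price_per_kwh = 0.08           # dollars per kWh, central Pennsylvania rate cited
cost = 7.00                    # dollars for a two-hour flight
energy = cost / price_per_kwh  # ~87.5 kWh drawn from the grid
avg_power = energy / 2         # ~43.75 kW average over the two-hour flight
print(f"{energy:.1f} kWh total, {avg_power:.1f} kW average draw")
```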
If anything is certain about the latest ransomware trends we've been tracking, it's that no one is immune to an attack. Ransomware attackers don't discriminate, and they have been successful at extorting money from all types of people and organizations; as long as there's a way for them to find you, there's a risk. Victims range from hospitals and police stations with confidential records to grandparents who are simply online to see photos of their grandkids. Attackers know that the more people they attempt to swindle, the better their chances are of finding someone who isn't prepared and might be willing to pay a ransom for their files to be returned.

The good news is that the more we learn about these attacks, the better suited we are to identify and protect against them. We now know that ransomware is delivered primarily through email, with a few exceptions like compromised websites, file-sharing sites, or infected thumb drives. If we look at the characteristics of ransomware attacks that use email as a threat vector, we see some commonalities that everyone can use to help keep themselves and their organizations safe. Here's what we're finding about email-borne ransomware and how these modern threats are causing the security landscape to adjust.

Mailbox Protection in the Age of Advanced Threats

Email security is nothing new; however, traditional approaches based on identifying bad senders, scanning messages for keyword patterns, and doing signature-based virus detection are no longer sufficient in the face of advanced threats. In order to stop attackers who are adept at evading basic techniques, organizations should evaluate email security solutions with deep-learning capabilities, multilevel intent analysis, advanced threat detection, and real-time link protection. Simply scanning email to ensure that it's free of spam and malware just isn't enough. Let's take a deeper look at some of the security technology available for protection against today's advanced threats.

Deep Machine Learning

Over the last five years, tremendous progress has been made in the field of artificial intelligence (AI) - more progress than in the fifty years prior. The progress is driven by the availability of computing power and advanced algorithms that enable machines to beat contestants in Jeopardy, find medical cures, and drive our cars. It stands to reason that the same approach could be used to assure that we do not receive bad email. Deep learning is only as effective as the data used to train it. The more diverse the training set, the more likely it is that malicious messages are caught while the good ones still get delivered. Deep learning is responsible for assuring that some of the most nefarious messages never reach a user's inbox.

Multi-Level Intent Analysis

Sometimes the true intent of a message can only be discovered by following the links embedded in the email, and then following links on the resulting websites. The nefarious content can be buried pretty deep to avoid detection. Security engines must be capable of discovering it and making sure that the message linking to the bad external content is properly blocked.

Advanced Threat Detection

Malicious email attachments are a primary means of spreading ransomware. The classic way of detecting bad files was based on comparing the signature of a known malware file to the attachment.
This process worked very well when malware writers developed a single program and tried to distribute it to millions of computers. It was a race between the malware distributor and the security companies to discover the malware, analyze it, develop a signature, and publish it to all systems that needed protection. To detect today's threats, files should be checked against a cryptographic hash database that is constantly updated. When a file is unknown, it should be emulated in a virtual sandbox where malicious behavior can be discovered. Administrators need granular, file-type-based control, including automatic quarantine and blacklisting features, to maintain the highest level of protection.

Real-Time Link Protection

Often, at the time a message is scanned and delivered, the included links point to perfectly safe websites. Minutes, hours, or even days after sending the message, attackers modify the site to carry malicious content. To protect the user from accessing such sites, the original links in the message can be rewritten so that click requests are always redirected through a site operated by your security vendor, which makes a real-time determination of the target website's veracity. If the site has turned bad, the user receives a warning and is stopped from proceeding any further.

There's no denying that ransomware and other advanced threats have quickly become a mainstream security issue, but it's encouraging to see that the security industry is taking on the challenge with some of its own advanced security technologies. Like we've mentioned in the past, if advanced threats are a concern, you always have the option to work with your security providers for an assessment to ensure your level of protection is up to date.
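To make the link-rewriting idea concrete, here is a minimal sketch of the mechanism. The redirect endpoint (safelinks.example.com) is a hypothetical stand-in, not any vendor's real service, and production systems would parse the MIME/HTML structure rather than regex over raw text; the sketch only shows the shape of the transformation.

```python
# Sketch of real-time link protection: every URL in an outbound-to-inbox
# message is rewritten to route the eventual click through a scanning service
# that can evaluate the destination at click time, not just at delivery time.
import re
from urllib.parse import quote

REDIRECT = "https://safelinks.example.com/check?url="   # hypothetical scanner
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def rewrite_links(message_body: str) -> str:
    """Replace each URL with a redirect through the scanning service."""
    return URL_RE.sub(lambda m: REDIRECT + quote(m.group(0), safe=""), message_body)

print(rewrite_links("Invoice attached, see http://example.org/pay now"))
# -> Invoice attached, see https://safelinks.example.com/check?url=http%3A%2F%2Fexample.org%2Fpay now
```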
Computer scientists at a US university have devised an experiment to test the effectiveness of social networks. Interest in social networking has grown since 1969, when psychologists produced the "six degrees of separation" theory - the idea that everyone in the world is linked, with only six connections separating any two people. The internet - with social networking sites such as MySpace and the growing community of bloggers - has stoked interest in how these networks function.

Researchers at the University of Pennsylvania School of Engineering and Applied Science tested a number of social network theories by asking a group of students to play a colour-picking game on networked computers. Each student had to pick a colour that was different from that chosen by anyone who was immediately connected to him or her in the network. The scientists changed the connections in the network to match different theoretical models and varied the amount of information the students had about which colours were being chosen by their colleagues, to test different types of social network.

The research, published in the Science journal, found that some of the simplest social networks were the least effective, and that seeing beyond a local view of the network could hinder the functioning of more complicated social networks.
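The game the students played is, in essence, distributed graph colouring. The following small simulation captures the flavour of it: each node repeatedly picks a colour not used by any immediate neighbour, and we count how many updates the network needs to settle. The topology, update rule, and parameters are illustrative assumptions, not the Penn researchers' actual protocol.

```python
# Toy simulation of the colour-picking game: asynchronous, local-information
# updates until no node shares a colour with any of its neighbours.
import random

def colour_game(neighbours, colours, max_updates=10000):
    nodes = list(neighbours)
    choice = {n: random.choice(colours) for n in nodes}
    for updates in range(1, max_updates + 1):
        node = random.choice(nodes)                      # one player acts at a time
        taken = {choice[m] for m in neighbours[node]}    # only local information
        free = [c for c in colours if c not in taken]
        if free:
            choice[node] = random.choice(free)
        if all(choice[n] not in {choice[m] for m in neighbours[n]} for n in nodes):
            return updates                               # network has settled
    return None                                          # did not settle in time

# A six-node ring as a stand-in for one of the experimental topologies.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print("settled after", colour_game(ring, ["red", "green", "blue"]), "updates")
```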
It may surprise you to learn that more bandwidth does not necessarily mean higher effective WAN throughput. If you transfer a lot of data over a high-capacity WAN, we bet you are not getting the effective throughput you think you are - and upgrading bandwidth won't help, because technical and physical constraints stack the cards against you. We describe why this is and what you can do about it in a free report you can download here - and Peter Sevcik will discuss it in an upcoming Webinar. Here's a quick synopsis.

What Is Effective Throughput?

To figure out how effectively you use WAN capacity for large data transfers, you need to know how much available capacity you actually put to work. To understand how much of that capacity you actually use, you must first determine your effective throughput - the number of bits per second successfully delivered from source to destination for an individual data flow. The higher your effective throughput, the more efficiently you use your WAN capacity. Once you know your effective throughput, you need to analyze it relative to the total available bandwidth by calculating the effective throughput ratio (ETR). This entails dividing effective throughput by the available bandwidth between source and destination (typically the WAN access circuit). The effective throughput ratio tells you how much of the available bandwidth your single flow uses. The effective throughput ratio for a single flow is highest near the data source and decreases with distance. This is orthogonal to bandwidth utilization, which is a measure of capacity consumed by many flows. Unlike the effective throughput ratio, bandwidth utilization typically starts low and increases with the number of simultaneous users on the path.

What Makes Effective Throughput Poor?

Three factors conspire to erode effective WAN throughput: distance, TCP window size, and packet loss. These factors make effective throughput lower than the bandwidth of the slowest link along the path - usually an access link. The reason for this hinges on the Automatic Repeat Request (ARQ) mechanism within TCP. ARQ uses a sliding window to enable the sender to transmit multiple packets before waiting for an acknowledgement from the receiver. A faster circuit puts a window's worth of data in flight faster and then must wait longer before sending more, so the percentage of idle time actually increases compared to a slower circuit. The larger the circuit, the more dramatically the ACK wait time lowers effective throughput, because no data can be transmitted while both the data and the ACK are in flight. The worked example below shows how quickly this window-and-distance math erodes throughput.
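Here is the window-per-round-trip arithmetic in runnable form. The circuit size, window size, and round-trip time are illustrative numbers we chose, not figures from the report, but they show the scale of the problem: a classic 64 KB TCP window caps a single flow at a few megabits per second regardless of circuit size.

```python
# Worked example: one TCP window in flight per round trip means throughput
# can never exceed window / RTT, no matter how big the circuit is.
link_bps    = 100e6          # 100 Mbps access circuit (assumed)
window_bits = 64 * 1024 * 8  # classic 64 KB TCP window without window scaling
rtt_s       = 0.080          # 80 ms round trip, roughly coast to coast (assumed)

effective_bps = min(link_bps, window_bits / rtt_s)
etr = effective_bps / link_bps   # effective throughput ratio, as defined above

print(f"effective throughput: {effective_bps/1e6:.2f} Mbps, ETR: {etr:.1%}")
# 64 KB / 80 ms = ~6.55 Mbps, an ETR of only ~6.6% of the 100 Mbps circuit
```

Doubling the circuit to 200 Mbps leaves the 6.55 Mbps flow unchanged and simply halves the ETR, which is exactly why bandwidth upgrades alone don't help.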
What Can You Do about It?

Help is available in the form of WAN optimization solutions. The best of these solutions include TCP optimization to maximize TCP window size, forward error correction to address packet loss, and packet order correction to fix out-of-order packets, which also lowers packet loss.

TCP optimization increases TCP window size, which puts more data in flight on long-latency paths. TCP optimization can also include a variety of actions, such as sending pre-emptive data-receipt acknowledgements that maintain high throughput to speed data from the source, and ramping up the TCP transmission rate more quickly by bypassing TCP's "slow start" function. TCP optimization also uses a selective acknowledgement (SACK) feature that retransmits only the bytes lost rather than returning to the last continuously received data.

Forward error correction fixes errors in real time, avoiding the need to retransmit data when packets are lost. Although forward error correction adds overhead, the benefits make the tradeoff acceptable for underutilized high-capacity WAN links. Some WAN optimization solutions minimize overhead by dynamically matching forward error correction levels to loss levels.

MPLS and IP VPN environments routinely suffer from out-of-order packet delivery. TCP identifies more than three packets received out of order as packet loss and calls for packet retransmissions and a smaller TCP window size. This response can be particularly vexing when trying to keep many bytes in flight using a large window size. Packet order correction properly sequences out-of-order packets on the fly, thus avoiding retransmissions.

The report shows extensive analysis that models the before-and-after results achievable using TCP optimization. When you combine all the WAN optimization technologies described above, you can improve effective throughput ratios by 5x to 10x on average, with peaks as high as 50x. This makes WAN optimization solutions a very good way (in fact, the best way we know of) to improve your effective throughput. Download the Report.
The global market for Novel Drug Delivery Systems (NDDS) is expected to reach USD 320 billion by 2021. Unprecedented developments in genomics and molecular biology offer a plethora of new drug targets, and the method by which a drug is delivered to these targets can have a significant impact on its efficacy. Some drugs have an optimum concentration range within which maximum benefit is derived; concentrations outside this range can be toxic or produce no therapeutic benefit at all. This, together with the very slow progress in treating severe diseases effectively, points to a growing need for a multidisciplinary approach to delivering therapeutics to targets in tissues. It is estimated that close to 50% of newly developed drugs cannot be taken orally.

Growth in the NDDS market is expected to be driven by the need to improve the efficacy, safety and bio-recognition of drugs for disease-specific sites within the body, and by the need to improve comfort levels for the patients who use the drugs. These requirements have led to the need to control the pharmacokinetics, pharmacodynamics, non-specific toxicity, immunogenicity, bio-recognition and efficacy of drugs, in turn leading to the birth of drug delivery systems. The introduction of a novel delivery system for an existing molecule should significantly improve its safety, efficacy and patient compliance. Most innovator companies maintain a parallel research pipeline for biopharmaceuticals and also concentrate on innovative delivery platforms.

The market has been segmented by route of administration (oral, injectable, pulmonary, transdermal and others) and by geography (North America, Europe, Asia-Pacific and the Rest of the World).

Some of the key players in the market are:

What the Report Offers
Cepeda A.M., Fundacion Hospital Universitario Metropolitano of Barranquilla | Del Giacco S.R., University of Cagliari | Villalba S., Fundacion Hospital Universitario Metropolitano of Barranquilla | Tapias E., Fundacion Hospital Universitario Metropolitano of Barranquilla | and 5 more authors. Nutrients | Year: 2015

Background: Diet might influence the risk of allergic diseases. Evidence from developing countries with a high prevalence of childhood asthma is scant.

Methods: Information on wheeze, rhinitis, and eczema was collected in 2005 from 3,209 children aged 6-7 years who were taking part in the International Study of Asthma and Allergies in Childhood (ISAAC) in Colombia. Intake frequency of twelve food groups was assessed. Associations between each food group and current wheeze, rhino-conjunctivitis, and eczema were investigated with multiple logistic regressions, adjusting for potential confounders. Simes' procedure was used to account for multiple comparisons.

Results: 14.9% of children reported wheeze in the last 12 months, 16% rhino-conjunctivitis, and 22% eczema. Eczema was negatively associated with consumption of fresh fruits and pulses three or more times per week (adjusted odds ratio (aOR): 0.64; 95% confidence interval (CI): 0.49 to 0.83; p = 0.004; and aOR: 0.62; 95% CI: 0.47 to 0.80; p < 0.001, respectively). Current wheeze was negatively associated with intake of potatoes (aOR: 0.44; 95% CI: 0.31 to 0.62; p = 0.005), whilst this outcome was positively associated with consumption of fast food (aOR: 1.76; 95% CI: 1.32 to 2.35; p = 0.001). These associations remained statistically significant after controlling for multiple comparisons.

Conclusions: A traditional diet might have a protective effect against eczema and wheeze in Colombian children, whilst intake of fast foods increases this risk. © 2015 by the authors; licensee MDPI, Basel, Switzerland.
I love free open source software. There are literally thousands of FOSS projects that are unbelievably creative, beautifully engineered, and incredibly useful, but as compelling as they might be, when they become enormously popular you wind up with a potentially huge problem.

When some code in the FOSS world has that special voodoo that addresses a well-defined and mission-critical need, is easy to implement, easy to manage, and robust, it's likely to become a market standard. Great examples include Linux, Apache, Sendmail ... there's a long list of FOSS projects that have become dominant in their niche. But dominance in the software world has a parallel in the natural history of biological ecosystems. When a biological monoculture becomes dominant, it's guaranteed that a pest or disease that exceeds some level of virulence can threaten the entire biome, which is exactly what's happening with bananas, because the bananas you buy at the supermarket are overwhelmingly of one variety: Cavendish.

The Cavendish, which originally came from Vietnam, is the result of centuries of selective breeding of a mutant, a cross between Musa acuminata and Musa balbisiana, two wild South Asian species. Unfortunately, because the Cavendish is a hybrid, it is sterile (just like the mule, which is a cross between a donkey stallion and a horse mare). So, because the Cavendish is sterile, it has to be propagated by suckers, with the result that all Cavendish banana plants have the same genome. This, in turn, means that a virus or fungus that is aggressive and destructive and relies on a specific target biology has, in the case of Cavendish bananas, a huge population to exploit ... which is exactly what's happening today.

Currently there's an outbreak of a new strain of a disease called Black Sigatoka, which destroys Cavendish bananas and which originally appeared in a less virulent form in the 1970s. This has re-emerged along with another contagion, Panama disease. These two contagions are successful simply because there are so many identical hosts so close together. Experts say that it's only a matter of time before these two diseases decimate or, most likely, obliterate the single most cultivated and valuable banana variety in the world today. This, in turn, will have economic consequences on a biblical scale. But enough of bananas; let's talk about contagions in computer ecologies ...

The most commonly recognized computer contagions are computer viruses and malware, but we also have to include hackers. Like biological contagions, all of these computer contagions attack specific hosts - the targets they are adapted for or can adapt to. And when there's a large population of identical targets, these computer contagions, like their biological counterparts, have more opportunity to propagate and thereby become harder to eradicate. In the case of OpenSSL, its adoption by a huge market meant that the bug du jour, the Heartbleed bug, became the entry point for computer contagions, much as the specific biology of Cavendish bananas is exploited by Black Sigatoka and Panama disease.

What I find so interesting about this vulnerability is that government agencies such as the NSA must have known about the flaw and said nothing to the world at large. You doubt this? Just think: if you're in signals intelligence and you find a way to extract information from supposedly secure systems in a way that is undetectable, aren't you going to use it and keep quiet about it?
And if the NSA didn't know about Heartbleed, then they should all be fired for incompetence.

So, the takeaways from all of this are simple. First, if you elect to use the most popular product in the market, whether it's FOSS or proprietary, be warned: one flaw in what you've deployed can expose you to a much greater level of risk than using less well-known products. Second? Take a short position on banana futures.
IBM sets new record for storing high quantities of big data.

Researchers at IBM have set a new record of 85.9 billion bits of data per square inch in areal data density on low-cost linear magnetic particulate tape. This is a significant update to one of the computer industry's most resilient, reliable and affordable data storage technologies for big data. At this areal density, a standard LTO-size cartridge could store up to 154 trillion bytes (154 terabytes) of uncompressed data - a 62-fold improvement over an LTO6 cartridge, the latest industry-standard magnetic tape product. To put this into perspective, 154 terabytes of data is sufficient to store the text from 154 million books, which would fill a bookshelf stretching from Las Vegas to Seattle, Washington.

This new record was achieved using a new advanced prototype tape developed by FUJIFILM of Japan. This is the third time in less than 10 years that IBM scientists, in collaboration with FUJIFILM, have achieved such an accomplishment. The news is being unveiled this week at the IBM Edge conference in front of more than 5,500 attendees.

IBM scientists break big data into four dimensions - volume, variety, velocity and veracity - and by 2020 these so-called Four V's of Big Data will be responsible for 40 zettabytes (40 trillion gigabytes) of data. Much of this data is archival, such as video archives, back-up files, replicas for disaster recovery, and retention of information required for regulatory compliance. Because tape systems are energy efficient and more cost-effective than hard disks, they are the ideal technology to store, protect and access archival big data.

For example, the Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. By the end of the LHC's first three-year running period, more than 100 petabytes of physics data had been stored in the CERN mass-storage systems. Most of this data is archived on more than 52,000 tape cartridges of different types, providing scientists with permanent access to data that could someday answer fundamental questions about the universe.

Evangelos Eleftheriou, IBM fellow, said: "Big data has met its match with tape. Not only does the technology provide high capacity in a small form factor, it is also reliable for several decades, requires zero power when not in use, is secure in that cartridges cannot be erased at the push of a keystroke, and is available for the cloud - all at a cost of less than 2 cents per gigabyte and at a greatly reduced operating expense versus disk storage."

To achieve 85.9 billion bits per square inch, IBM researchers developed several new critical technologies, including:

- a new enhanced write-field head technology that enables the use of much finer barium ferrite (BaFe) particles;
- advanced servo control technologies that achieve head positioning with nano-scale fidelity and enable a 27-fold increase in track density compared to the LTO6 format; and
- innovative signal-processing algorithms for the data channel that enable reliable operation with an ultra-narrow 90 nm-wide giant magnetoresistive (GMR) reader.
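The 62-fold claim is easy to verify, assuming the standard LTO-6 native (uncompressed) capacity of 2.5 TB per cartridge, a figure from the LTO specification rather than from this article:

```python
# Sanity check on the 62x improvement claim.
new_capacity_tb = 154    # demonstrated per-cartridge capacity, uncompressed
lto6_native_tb  = 2.5    # LTO-6 native capacity (assumed from the LTO spec)
print(f"improvement: {new_capacity_tb / lto6_native_tb:.0f}x")  # -> 62x
```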
Center of U.S. population shifts again, and Census is on the trail

Path to the west and south continues into Texas County, Missouri
- By William Jackson
- May 06, 2011

Over the past 220 years, the population center of the United States has shifted westward and southward as the country has grown. When it was first computed, the center was in Kent County on Maryland's Eastern Shore, about 23 miles east of Baltimore, said David Doyle of the National Geodetic Survey. "From 1790 to 1930 there is almost a straight line going due west. Then it starts to shift, turning to the southwest." As of 2010, the geographic center of the U.S. population had shifted 23 miles, to 37 degrees 31 minutes north, 92 degrees 10 minutes west, in the northwest corner of Texas County, Missouri.

On May 9, the Census Bureau and the National Geodetic Survey will mark this shift - figuratively and literally - by placing a survey marker that will become part of the NGS National Spatial Reference System. The marker will be part of a network of 1.5 million marks making up the system used to map and chart geographic position and height in the United States. "We are trying to showcase the positioning technologies" available today, Doyle said. "The public today is vastly more spatially aware than they have ever been," because of the growing availability of Global Positioning System data in consumer devices, from driving-direction systems to smart phones.

The job of computing the center of population has become more complicated over the years as the population has grown, said Paul Donlin, a programmer with the Census Bureau's Geography Division. "The formula is not complicated," Donlin said, but the large amount of data needed to perform the calculation makes it complex. The data comes from 12 million census blocks of varying sizes, from a city block to a sizable portion of a rural county. The center of population for each block is calculated, and this data is used to find the weighted center point for the entire continental U.S. population. "We need a computer to do that," said Census geographer Ted Sickley. A sketch of the underlying weighted-average calculation appears below.
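The "not complicated" formula Donlin describes is a population-weighted average of coordinates. Here is a minimal sketch of the idea; the Census Bureau's real computation also corrects for the convergence of meridians and Earth's curvature, so this flat-earth version and its toy data are illustrative only.

```python
# Sketch of the weighted center-point calculation: each census block
# contributes its coordinates in proportion to its population.
def center_of_population(blocks):
    """blocks: iterable of (population, latitude, longitude) tuples."""
    total = sum(pop for pop, _, _ in blocks)
    lat = sum(pop * la for pop, la, _ in blocks) / total
    lon = sum(pop * lo for pop, _, lo in blocks) / total
    return lat, lon

# Three toy "blocks" standing in for the 12 million real ones.
toy = [(1200, 39.29, -76.61), (800, 41.88, -87.63), (500, 34.05, -118.24)]
print(center_of_population(toy))
```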
The job was not always done with a computer. The first such calculation was done by hand in 1880, when the center was placed about one mile south of the Ohio River, across from Cincinnati in Kentucky. Eighty years later, the calculation was done by machine, according to a 1970 Census report. "The population centers and the population counts for each of these areas were recorded on punched cards and then transferred to magnetic tape for processing through an electronic compute[r]," the report said. "The 'program' introduced into the computer controlled the mathematical processes which the computer executed." Today the data is stored in an Oracle spatial database, and the computation is done using software-as-a-service, Donlin said.

Why go to the trouble of figuring this theoretical point? Each point is a visual expression of what people were doing at that time. "It gives us a way of characterizing the population of the U.S.," Sickley said. "It becomes useful when you look at it over time." From 1850 to 1860, there was a big jump westward from West Virginia to Ohio, reflecting the westward expansion of the country as new states were added and settled. In 1870, a slight shift to the north illustrated the migration to northern cities following the Civil War. From 1890 to 1940, movement slowed and the center remained in southern Indiana, reflecting the large European immigration into eastern cities. A southwesterly arc since then reflects the growth of the sunbelt states.

The National Spatial Reference System consists of about 1.5 million passive markers installed by the NGS over the last 200 years, as well as about 1,700 Continuously Operating Reference Stations that provide streams of real-time GPS data. The system is used by surveyors to accurately measure and chart the United States. Since 1960, the NGS has marked the current population center with a commemorative survey marker. Until 1990, the mark was placed within a few centimeters of the actual point, Doyle said. "That's nice, but they're out in the middle of a forest somewhere," he said. Since 1990, the markers have been placed in the nearest incorporated community. This year, it will be next to the Post Office in Plato, Mo. (estimated population 1,430 in 2000), about three miles from the actual location of the population center.

William Jackson is a Maryland-based freelance writer.
If you've ever wondered what software-defined storage is, or have ever been lost deciphering storage terminology, you're in the right place. There are some really great resources online - such as Techopedia and the TechTarget glossaries - however, to help cut directly to some of the most important concepts in SDS, we've created a two-part glossary containing 37 of the key terms you should know to understand how the technology works. Read on for part one, and keep your eyes on Hedvig's blog for part two in the next couple weeks. We'll also include a downloadable PDF containing all 37 terms when we post the second installment. Without further ado, here is part one of Hedvig's software-defined storage glossary.

Storage Types and Form Factors

Companies typically use several types of storage to meet the various capacity, performance and availability needs of their applications. Software-defined storage (SDS) aims to consolidate all these flavors and forms into one solution that is centrally managed, reducing the time and costs required to provision and administer storage.

Block storage - A type of storage that writes data and files to disk in small chunks called blocks, each block with its own address. Block storage is typically abstracted and managed by a file system or application on a host that interacts with the disk media using SCSI (Small Computer System Interface) commands. Unlike an object (see object storage), a block of data does not contain metadata to provide context for what that block of data is. Examples of block storage include storage area networks (SANs), iSCSI (Internet Small Computer System Interface) and local disks.

Commodity / whitebox hardware - Off-the-shelf server hardware that is inexpensive, easy to maintain and replace, and generally compatible with other devices. A major benefit of SDS is that organizations reduce costs by delivering storage services via software stacks on clusters of commodity hardware. Over time, instead of doing forklift replacements of storage hardware, with software-defined storage you can renew and modernize by incorporating the latest, state-of-the-art commodity servers and removing older nodes with zero downtime. Examples of commodity hardware include x86 and ARM servers.

File storage - A type of storage presented, controlled, and managed via a file system - typically NFS (Network File System) or SMB (Server Message Block), commonly referred to as CIFS (Common Internet File System). File storage manages the layout and structure of the files and directories on the physical storage. It also facilitates file sharing, so many users and systems can access the storage resource at the same time. File-based storage is typically used with network-attached storage (NAS) systems.

Flash / solid state drive (SSD) - A storage device that stores persistent data on nonvolatile solid-state flash memory. Unlike spinning electromechanical disks (i.e., hard disk drives), SSDs have no moving parts. SSDs also typically run quietly, store and access data more quickly, have less latency, and are more reliable and durable than electromechanical disks. Since the technology is more advanced, the cost of SSDs is usually higher.

Object storage - A type of storage where data is managed as "objects" containing three parts: the data itself (i.e., an image or brochure), its metadata, and an identifier or address that allows the object to be found in a distributed system.
Object-based storage is well suited for managing large amounts of unstructured data, including email messages, word processing documents, social media and audio files. Object storage is also frequently used for data archiving systems that need to store large amounts of infrequently accessed data for an extended period of time.

Software-defined storage (SDS) - Storage technology that is installed and managed as software on commodity hardware rather than purchased and deployed as a distinct hardware storage array. By abstracting the storage hardware from the software, SDS enables organizations to allocate storage infrastructure resources dynamically and efficiently to match application needs.

Storage node - An individual compute server that is a participant in a storage cluster - sometimes referred to as a cluster node.

Virtual disk - A virtual disk or vdisk is an abstracted logical disk volume presented to a computer or application for read/write use. Some SDS solutions enable advanced policies and features, such as deduplication, to be set on a per-virtual-disk basis.

There are many ways companies deploy infrastructure in their data centers. A main objective of SDS is to let companies easily use whatever architecture meets their needs at a given time.

Brownfield - A term that describes deployment of new software or systems that leverage established or existing assets. An example of a brownfield deployment is software-defined storage that utilizes a traditional storage array as part of the system.

Distributed system - A cluster of autonomous computers networked together to create a single unified system. In a distributed system, networked computers coordinate activities and share resources to support a common workload. The goals of distributed systems are to maximize performance and scalability, ensure fault tolerance, and enable resource availability. Examples of distributed systems include Amazon Dynamo, Google MapReduce, Apache Hadoop, and the Hedvig Distributed Storage Platform.

Greenfield - A term that describes the deployment of a new storage system or assets in an environment where no previous ones exist.

Hybrid cloud - A cloud computing environment in which private cloud resources (e.g., an on-premise data center) are managed and utilized together with resources provisioned in a public cloud (e.g., Amazon Web Services).

Hyperconverged - A system architecture that combines software-defined compute and software-defined storage together on a commodity server to form a simplified, scale-out data center building block. The "hyper" in hyperconvergence comes from hypervisor - the server virtualization component of the solution.

Hyperscale - An architecture where software-defined compute and software-defined storage scale independently of each other. A hyperscale architecture is well suited for achieving elasticity because it decouples storage capacity from compute capacity. Hyperscale architectures underpin web giants including Google and Amazon and are being increasingly adopted by other enterprises as a means to efficiently scale or contract an environment over time.

Storage cluster - A group of storage nodes that form a single scale-out storage resource pool. A storage cluster leverages the aggregate capacity and horsepower of many networked commodity, whitebox servers.

Software-defined data center (SDDC) - A data center where all infrastructure elements - storage, compute, networking, etc. - are virtualized and delivered as a service.
With IT-as-a-service (ITaaS), all components can be provisioned, operated, and managed (in short, defined) on the fly as needed, typically through an application programming interface (API). The application layer is freed from the constraints of physical infrastructure, and the physical hosts or data center may change at any given time without interruption to services. By contrast, in traditional data centers, infrastructure is usually fixed and defined by hardware and devices. Software-defined data centers are implicitly hybrid and elastic, and include self-provisioning by default. The software-defined data center is increasingly considered to be the next era of IT infrastructure.

Stumped by a word or phrase that we don't have on this list? Click here to access part two for another batch of common SDS terms. In the meantime, download this Forrester whitepaper to learn more about software-defined storage trends, adoption, and use cases.
No doubt social media is a valuable source of business-related information. People routinely discuss brands, products, and services in social networks, blogs, microblogs and forums, where other people can read, comment, share, or just "Like," and in this way opinions are spread around the world by word of mouth (WOM). WOM is driven by customer satisfaction, trust and brand commitment and has far-reaching consequences (e.g., affective/emotional, cognitive, and behavioral) for both consumers and organizations. WOM can significantly change the results of advertising activities across all other media channels, and there can be synergy between WOM and other forms of marketing.

At the same time, word-of-mouth marketing means that an organization has taken active steps to encourage WOM (e.g., by offering rewards to the WOM sender), whereas in organic WOM the sender is not rewarded. Individuals are more inclined to believe WOM marketing than more formal promotion methods; the listener tends to believe that the communicator is being honest and does not have an ulterior motive. It is not always easy to separate WOM marketing from public WOM, but some document-cleansing procedures can reduce the portion of WOM marketing in social listening analytics. However, the influential effects of such marketing can still be very significant and reflected in public WOM.

How can we read WOM as a big data source? If we read just a few random posts about a subject of interest, we gain some understanding of what is being discussed and what the opinions are, but such reading can be very biased due to sample size, i.e., the volume of posts that we read personally. To analyze business impact, millions of everyday posts have to be treated as unstructured big data, and we have to use statistics, general math, and even sociophysics.

Typical characteristics of social media posts about an entity/category/product include: (1) volume (total number of posts); and (2) counts of sentiment scores on different scales (positive-neutral-negative, or on a wider range, say, from -5 to +5, where 0 is neutral). These characteristics allow us to compare volumes of discussion for different brands/products (which relate to awareness) and compare opinions (which relate to preferences). These characteristics can be connected with such business values as market share, sales, stock prices, number of complaints, etc. Understanding such connections is beyond the scope of this post, so here we will restrict ourselves to a starting point for such an understanding: the dynamics of WOM.

On the time-series graphic described below (the figure itself is not reproduced here), we demonstrate, as an example, a typical normalized daily volume of documents for a brand (blue line connecting blue dots). To make the graphic easier to read, the daily volumes are supported by a moving average (µ) over the preceding 14-21 days (red dashed line) and by boundaries (red solid lines) that frame the typical volatility of daily volumes with 95% probability (µ ± 2σ, where σ stands for standard deviation). If we observe unusual behavior (e.g., a daily volume outside the 95% probability range), then something unusual is happening in WOM, and we have to understand it and then make appropriate business decisions. Typically, an unusual volume is associated with corresponding discussion topics and opinions. We can discover these by comparing the uniqueness of a word cloud for the unusual time interval against typical (inside-the-range) word clouds. In this way, early warning is accompanied by information about the corresponding causes.
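The volatility-band logic just described is straightforward to implement. Here is a minimal sketch using pandas; the 14-day window and 2-sigma threshold follow the article, while the data and column names are our own illustrative choices rather than anything from the GMI app.

```python
# Sketch of early-warning detection: flag days whose post volume falls
# outside the mu +/- 2*sigma band computed over the preceding window only.
import pandas as pd

def flag_unusual(daily_volume: pd.Series, window: int = 14) -> pd.DataFrame:
    mu = daily_volume.rolling(window).mean().shift(1)    # preceding days only
    sigma = daily_volume.rolling(window).std().shift(1)
    out = pd.DataFrame({"volume": daily_volume, "mu": mu,
                        "lower": mu - 2 * sigma, "upper": mu + 2 * sigma})
    out["unusual"] = (out.volume > out.upper) | (out.volume < out.lower)
    return out

# Usage, given a daily series of post counts for one brand:
# volumes = pd.Series(counts, index=pd.date_range("2014-01-01", periods=90))
# print(flag_unusual(volumes).query("unusual"))
```

Shifting the rolling statistics by one day keeps each day's volume out of its own baseline, matching the article's use of the *preceding* 14-21 days.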
Many of these advanced social listening methodologies are incorporated in the Genpact Media Interactive (GMI) app.
The latest review of security issues and trends is out, and we're sorry to say, folks: the rampant use of weak passwords still presents a serious security problem to end users and companies alike. The recently published Trustwave 2012 Global Security Report details the current threats to user data and identifies the vulnerabilities that persist within organizations. The statistics were generated from Trustwave's investigation of about 300 breaches across 18 countries. The company also analyzed the usage and weakness trends of more than 2 million real-world passwords used within corporate information systems.

The verdict? After an initial foothold in a system (via malware and other threat vectors), 80% of security incidents were due to the use of weak administrative passwords. Yes, that's correct: 80 percent. From weak passwords. "The use of weak and/or default credentials continues to be one of the primary weaknesses exploited by attackers for internal propagation," the report comments. "This is true for both large and small organizations, and largely due to poor administration."

They found that writing down passwords is still prevalent in the workplace, particularly in organizations that implement complexity requirements, password expiration cycles, and password histories to prevent recycling of old passwords. While these policies are often implemented to improve password management, the reality is that increasing password complexity directly corresponds with a decrease in memorability, hence the insecure practice of writing down passwords. The report found that in 15% of the security tests performed, written passwords were found on or around user workstations.

What's even more astonishing is that rather than find a tool that can help with the password problem, users are getting creative in overriding the policies meant to enforce the use of strong passwords. They exploit loopholes such as:

- Setting usernames as the password when complexity requirements aren't enforced
- Adding simple variations to fit complexity requirements, such as capitalizing a letter and adding an exclamation point to the end
- Using dictionary words or applying simple modifications

Default and shared passwords are also a massive point of failure. Companies assign poor default passwords such as "changeme" and "welcome" but don't later enforce an update of those defaults. Applications and devices that are shipped or installed by default on company systems also use default passwords that are rarely modified, a particularly dangerous situation for applications accessible from the Internet. The result: the report found a proliferation of simple combinations such as "administrator:password", "guest:guest", and "admin:admin".

In another alarming example, the report highlights Active Directory's password complexity policy, which states that a password must have a minimum of eight characters and three of the five character types (lower case, upper case, numbers, special, Unicode). Guess what meets those requirements? "Password1", "Password2", and "Password3", the first being the most widely used across the pool of two million passwords studied in the report. The sketch below shows just how little these rules actually demand.
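Here is a minimal sketch of an AD-style complexity check (the Unicode class is folded into "special" for simplicity, so this is an approximation of the policy, not Microsoft's exact implementation). It demonstrates the perverse outcome: "Password1" sails through, while a long, far harder-to-crack passphrase fails.

```python
# Sketch: why AD-style complexity rules accept predictable passwords.
import re

# Four of the five character classes; Unicode is lumped into "special" here.
RULES = [re.compile(p) for p in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]")]

def meets_ad_policy(password: str) -> bool:
    classes = sum(1 for rule in RULES if rule.search(password))
    return len(password) >= 8 and classes >= 3

for pw in ("Password1", "Password2!", "correct horse battery staple"):
    print(f"{pw!r} -> {meets_ad_policy(pw)}")
# 'Password1'  -> True   (upper + lower + digit: passes, yet trivially guessed)
# 'Password2!' -> True
# 'correct horse battery staple' -> False (only two classes, despite its length)
```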
However, moving forward into 2012 and beyond, it's clear there are steps both end users and businesses should be taking to change their password habits, prioritizing:
- Education of employees on basic security practices
- Tracking of company data and pinning it to an individual every time
- Standardizing implementation across all platforms and devices

and, most importantly:
- The implementation of a password management tool that makes it easy to maintain high security standards.

For as long as we force people to create their passwords and remember them, we'll be stuck with bad passwords. Recognizing the prolific use of poor passwords is one thing; empowering people to act on these recommendations, in a way that doesn't inconvenience them or tax their memory, is the true source of change. Only with password management solutions like LastPass and LastPass Enterprise will we enable people to follow best security practices.

The LastPass Team
<urn:uuid:be8c22cc-b067-4704-9b34-934a8afdf880>
CC-MAIN-2017-04
https://blog.lastpass.com/2012/03/latest-review-of-security-issues-and.html/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00199-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936054
890
2.546875
3
All aboard! The U.S. National Renewable Energy Laboratory (NREL) has launched an open-source software solution called the Energy DataBus. It will enable facility managers to collect, integrate and analyze data on power usage on a large scale from a wide variety of sources. NREL currently is using the Energy DataBus to track and study energy usage on its own campus in Golden, Colorado. The system is applicable and available to other facilities of practically any scale, from a single building to a large military base or college campus, or for other energy data management needs.

"It's similar to the one that Facebook and Twitter use to collect all this data and information on people. We're using the same giant, scalable capability to collect energy data," explained Aaron Beach, Energy Informatics researcher.

Managing and minimizing energy consumption on a large campus is usually a difficult task for facility managers. There may be hundreds of energy meters spread across a campus, and the meter data are often recorded by hand. Even when data are captured electronically, there may be measurement issues or time periods that do not coincide. Making sense of this limited and often confusing data can be a challenge that makes the assessment of building performance a struggle for many facility managers.

Four Billion Data Points Annually

The Energy DataBus software was developed by NREL to address these issues on its own campus, but with an eye toward offering its software solutions to other facilities. Key features include the software's ability to store large amounts of data collected at high frequencies. NREL collects some of its energy data every second. As a result, nearly half a million data points stream into the database over the course of an hour. That adds up to more than 4 billion data points per year, and rich functionality is required to integrate this wide variety of information into a single database.

"The data gathered by NREL comes in different formats, at different rates, from a wide variety of sensors, meters, and control networks," says Keith Searight, development manager of the Energy DataBus. "The Energy DataBus software collects all this data and aligns it within one scalable database."

Existing energy data systems tend to be designed for particular applications such as reporting or control, and do not provide general data-analysis interfaces flexible enough for NREL's requirements. But rather than creating a new solution from the ground up, NREL built the Energy DataBus on existing open-source software products that are used to manage complex data problems in other industries. The Energy DataBus supports the popular Cassandra database and PlayORM, which facilitates the easy integration of widely varying data. PlayORM provides the Energy DataBus with the capabilities it needs to interact with many types of data, including time-series, textual, or numerical data.

NREL has developed tools that allow the Energy DataBus to collect data:
- From control systems using the BACnet protocol,
- From electrical meters via the Modbus protocol, and
- From weather sensors via Web requests to an on-site weather station.

The flexibility of the Energy DataBus Web API has made all of these systems easy to integrate. Additionally, the Energy DataBus software incorporates tools that help assure the highest quality of data in the database.
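The article does not describe the DataBus schema itself, but the Cassandra time-series pattern it alludes to is well established. A hypothetical sketch using the Python cassandra-driver, with an invented keyspace, table and meter name:

```python
from datetime import datetime, timezone
from cassandra.cluster import Cluster

# Hypothetical schema (not NREL's actual one): one partition per meter per
# day keeps partitions bounded while allowing fast time-range scans.
#
# CREATE TABLE energy.readings (
#     meter_id text, day date, ts timestamp, value double,
#     PRIMARY KEY ((meter_id, day), ts));

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("energy")

insert = session.prepare(
    "INSERT INTO readings (meter_id, day, ts, value) VALUES (?, ?, ?, ?)")

def record(meter_id: str, value: float) -> None:
    """Write one once-per-second meter sample."""
    now = datetime.now(timezone.utc)
    session.execute(insert, (meter_id, now.date(), now, value))

record("bldg7-main", 412.6)  # watts; meter name is made up
```

Partitioning by (meter, day) is one common way to keep per-second data from producing unbounded rows; the real system would also need batching and back-pressure to sustain half a million points per hour.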
"No matter what you do, when you start collecting data off meters, you get bad data, you get data dropouts, you get all sorts of issues," Searight explained. "So we created some tools to clean and pull out relevant information from the datasets—including such things as filling in missing data and identifying bad data in the system. Being able to mine the data and perform some analytics on it helped us pinpoint where potential issues were with some of the control strategies in our buildings." The Energy DataBus can operate at widely varying scales and can be integrated into small desktop applications and run on a laptop or virtual machine with only a few gigabytes of RAM (News - Alert). On the other hand, Energy DataBus was designed to support cloud architecture and can be scaled to hundreds—or possibly thousands—of nodes across the globe. For instance, NREL currently runs its production version of Energy DataBus with 12 database nodes on high-performance servers and four webserver nodes on virtual machines. However, for development, the team uses a version that runs in memory on a laptop. Looking to the Future "The plan is to roll this platform out to the public in open source format so that university campuses and other institutions can tap into these tools and build off of them," stated Beach. To employ the Energy DataBus, other facilities would need to connect their existing data collection systems with it and then configure it to meet their particular needs. But NREL has found that it's worth the effort. Already, a team at NREL has developed an application that uses the Energy DataBus to provide data to a set of "energy dashboards" —enabling anyone on site to monitor the energy performance of the NREL campus. Ergo, the task of minimizing energy use falls on each person, instead of just the facility managers. Another team at NREL has developed an app that allows building occupants to report their comfort levels. More apps are in the works. For software developers, the Energy DataBus software is now available for free download. For more information on how to obtain, test, and implement the Energy DataBus software, contact Keith Searight and the Energy DataBus team at DataBus@nrel.gov. Edited by Rich Steeves
<urn:uuid:416007b8-cf90-48bb-9029-08147439641c>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2013/08/15/349652-nrels-open-source-databus-tracks-power-usage-any.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934135
1,141
2.53125
3
How do you keep a robot from getting tired? It sounds like the set-up line for a joke, but scientists at the Defense Advanced Research Projects Agency (DARPA) have put out a call for technology that significantly bolsters robot energy efficiency while increasing range and endurance.

Within what it calls the Maximum Mobility and Manipulation (M3) program, DARPA said it wants to develop and demonstrate high-efficiency "actuation technology" that will let robots similar to those in the DARPA Robotics Challenge run with 20 times longer endurance. The Robotics Challenge, which starts in October, is looking to build machines that will compete in staged situations in which robots must successfully navigate a series of physical tasks drawn from real-world disaster response.

According to DARPA, the Robotics Challenge is going to test supervised autonomy in perception and decision-making, mounted and dismounted mobility, dexterity, strength and endurance in an environment designed for human use but degraded due to a disaster. "Adaptability is also essential because we don't know where the next disaster will strike. The key to successfully completing this challenge requires adaptable robots with the ability to use available human tools, from hand tools to vehicles."

With regard to the M3 Actuation program, DARPA says: "Animals operate with significantly higher energy efficiency than today's robots. For example, a horse travels with a specific resistance of 0.01 to 0.02, compared to a specific resistance of 1 to 3 for several current legged robots. Specific resistance is a dimensionless quantity that is similar to a thrust-to-weight ratio, calculated as the ratio of mechanical input power to the product of mass, gravitational acceleration, and velocity. This difference of two orders of magnitude is believed to be due in large part to differences in the efficiency of actuation."

A robot that carries hundreds of pounds of equipment over rocky or wooded terrain would increase the range warfighters can travel and the speed at which they move. But a robot that runs out of power after ten to twenty minutes of operation is limited in its utility. In fact, use of robots in defense missions is currently constrained in part by power supply issues.

DARPA says research and development will cover two tracks of work:
- Track 1 asks performer teams to develop and demonstrate high-efficiency actuation technology that will allow robots similar to the DARPA Robotics Challenge Government Furnished Equipment platform to have twenty times longer endurance than the DRC robots when running on untethered battery power (currently only 10-20 minutes). M3 Actuation performers will have to build a robot that incorporates the new actuation technology. These robots will be demonstrated at, but not compete in, the second Robotics Challenge live competition scheduled for December 2014.
- Track 2 will be tailored to performers who want to explore ways of improving the efficiency of actuators, but at scales both larger and smaller than applicable to the Robotics Challenge platform, and at technical readiness levels insufficient for incorporation into a platform during this program. Essentially, Track 2 seeks to advance the science and engineering behind actuation without the requirement to apply it at this point.
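The specific-resistance arithmetic in the DARPA definition quoted above is simple enough to sketch directly. The masses, speeds and power figures below are made-up round numbers chosen only to land near the ranges DARPA cites, not DARPA data:

```python
def specific_resistance(power_w: float, mass_kg: float,
                        velocity_ms: float, g: float = 9.81) -> float:
    """Dimensionless specific resistance per DARPA's definition:
    mechanical input power / (mass * gravitational acceleration * velocity)."""
    return power_w / (mass_kg * g * velocity_ms)

# Illustrative, invented numbers:
horse = specific_resistance(power_w=257,  mass_kg=500, velocity_ms=3.5)
robot = specific_resistance(power_w=1470, mass_kg=150, velocity_ms=0.5)
print(f"horse ~ {horse:.3f}, legged robot ~ {robot:.2f}")  # ~0.015 vs ~2.0
```

The two-orders-of-magnitude gap DARPA describes falls out directly: the robot spends vastly more mechanical power per unit of weight moved per unit of speed.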
<urn:uuid:ca0c741b-68f9-43a0-9c33-10322931dd5c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2222731/data-center/darpa-program-targets-20-fold-increase-in-robot-range--endurance.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00189-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94306
650
3.421875
3
The Internet Control Message Protocol (RFC 792) was designed to provide network connectivity information to administrators and applications. The protocol is broken up into two classifications: types and codes. The types are the overall categories, and the codes are the individual messages within the categories. Some types don't have any codes beneath them and receive by default a "no-code" number of zero (0). An example is Type 8 (a ping packet), which is often thought of as Type 8, Code 0. Notice also the request/reply pairings among the types; each pair indicates a relationship, e.g. an echo request solicits an echo reply, and a timestamp request solicits a timestamp reply.

hermes root # tcpdump -nnvXSs 1514 -c1 icmp
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1514 bytes
23:11:10.370321 IP (tos 0x20, ttl 48, id 34859, offset 0, flags [none], length: 84) 69.254.213.43 > 72.21.34.42: icmp 64: echo request seq 0
    0x0000: 4520 0054 882b 0000 3001 7cf5 45fe d52b  E..T.+..0.|.E..+
    0x0010: 4815 222a 0800 3530 272a 0000 25ff d744  H."..50'..%..D
    0x0020: ae5e 0500 0809 0a0b 0c0d 0e0f 1011 1213  .^..............
    0x0030: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223  .............!"#
    0x0040: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233  $%&'()*+,-./0123
    0x0050: 3435 3637 4567

In the ping packet above, the type and code are the first two bytes of the ICMP header: 0800 indicates Type 08 and Code 00.

The Most Common Types (for a complete list, see IANA):

- Type 0 : Echo Reply
- Type 3 : Destination Unreachable
  - 0 : Net Unreachable
  - 1 : Host Unreachable
  - 2 : Protocol Unreachable
  - 3 : Port Unreachable
  - 4 : Fragmentation Needed and Don't Fragment was Set
  - 5 : Source Route Failed
  - 6 : Destination Network Unknown
  - 7 : Destination Host Unknown
  - 8 : Source Host Isolated
  - 9 : Communication with Destination Network is Administratively Prohibited
  - 10 : Communication with Destination Host is Administratively Prohibited
  - 11 : Destination Network Unreachable for Type of Service
  - 12 : Destination Host Unreachable for Type of Service
  - 13 : Communication Administratively Prohibited
  - 14 : Host Precedence Violation
  - 15 : Precedence cutoff in effect
- Type 5 : Redirect
  - 0 : Redirect Datagram for the Network (or subnet)
  - 1 : Redirect Datagram for the Host
  - 2 : Redirect Datagram for the Type of Service and Network
  - 3 : Redirect Datagram for the Type of Service and Host
- Type 8 : Echo Request
- Type 11 : Time Exceeded
  - 0 : Time to Live Exceeded in Transit
  - 1 : Fragment Reassembly Time Exceeded
- Type 13 : Timestamp Request
- Type 14 : Timestamp Reply
- Type 17 : Address Mask Request
- Type 18 : Address Mask Reply
- Type 30 : Traceroute

Some Key Points About ICMP

- ICMP Doesn't Have Ports. You can't actually ping a port. Or, more accurately, "pinging a port" is a misnomer. When someone speaks of "pinging a port" they are actually referring to using a layer 4 protocol (such as TCP or UDP) to see if a port is open. So if someone "pings" port 80 on a box, that usually means sending a TCP SYN to that system to see if it's responding. The misnomer exists because "pinging something" is now synonymous in the IT world with checking to see if it's alive in a general sense. So if you're checking to see if a port is listening, it's natural to refer to that act as "pinging" the port. Just remember that the original, real ping uses ICMP, which doesn't use ports at all.

- ICMP Works At Layer Three (3). While ICMP sits "on top of", i.e. is embedded in, IP, ICMP is not a layer 4 protocol.
It's still considered to be at layer 3 rather than one layer higher.

- Traceroute Uses ICMP Type 11, Code 0 (TTL Exceeded) To Do Its Work. Windows (tracert) and Unix/Linux (traceroute) use different protocols by default to do traceroutes. Windows uses ICMP, while Unix/Linux uses UDP. The key point here, however, is that the embedded protocol doesn't matter. Tracerouting works because of the TTL value in the IP portion of the packet, not the ICMP, TCP, or UDP parts. This is why it doesn't matter what "upper level" protocol is used.

hermes root # tcpdump -nnvXSs 1514 -c1 icmp and dst hermes
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1514 bytes
16:07:53.016435 IP (tos 0xc0, ttl 255, id 27812, offset 0, flags [none], length: 56) 72.21.34.41 > 72.21.34.42: icmp 36: time exceeded in-transit
    0x0000: 45c0 0038 6ca4 0000 ff01 79e3 4815 2229  E..8l.....y.H.")
    0x0010: 4815 222a 0b00 f4ff 0000 0000 4500 001c  H."........E...
    0x0020: 6c53 0000 0001 ccdd 4815 222a 480e cf63  lS......H."H..c
    0x0030: 0800 10a2 e75d 0000                      .....]..

This TTL Exceeded packet shows the Type 11 (0b) and Code 0 (00) in the first two bytes of the ICMP header.

Fun with ICMP

If you're ever interviewing someone for a networking-oriented position, consider the following trick question: What port does ping work over? If they're interviewing for a position that requires they know their protocols and they answer with a port number, consider another candidate.
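Those two header bytes are easy to pull out programmatically. A small sketch (ours, not from the article) that parses the ICMP type and code out of a raw IPv4 packet, fed with the first bytes of the echo-request capture above:

```python
import struct

def parse_icmp(ip_packet: bytes) -> tuple[int, int]:
    """Return (type, code) from a raw IPv4 packet carrying ICMP.
    The IHL field in the first byte gives the IP header length,
    so this works whether or not IP options are present."""
    ihl = (ip_packet[0] & 0x0F) * 4                    # header length in bytes
    icmp_type, icmp_code = struct.unpack_from("!BB", ip_packet, ihl)
    return icmp_type, icmp_code

# First 22 bytes of the echo-request capture: 20-byte IP header + type/code.
pkt = bytes.fromhex("4520 0054 882b 0000 3001 7cf5 45fe d52b 4815 222a 0800")
print(parse_icmp(pkt))  # -> (8, 0): Type 8 (Echo Request), Code 0
```

Note that the parser never looks at a port number; there is nothing port-like in the ICMP header to look at, which is the whole point of the interview question.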
<urn:uuid:b3a30c39-014d-42cc-a105-a6ec9a2b2302>
CC-MAIN-2017-04
https://danielmiessler.com/study/icmp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00097-ip-10-171-10-70.ec2.internal.warc.gz
en
0.703507
1,502
2.90625
3
Open Source Is the Foundation for Cloud Computing

November 19, 2012

It is common knowledge that open source technology is the basis for many large-scale corporate projects, including cloud computing. The UK Register printed "The Cloud Made of Penguins: Open Source Goes 'Industrial Scale'," an article that explains how the big names in open source are being used. OpenStack, a mere child of two years, specializes in storage, networking, and many more components built on Apache-licensed platforms. It has caught the attention of many corporate giants, such as HP for its cloud and the telecom company Nippon Telegraph and Telephone Corporation. Amazon EC2 has become a favorite home for Linux servers, mostly for storage. Also do not forget that open source is used in infrastructure-as-a-service technology, even alongside offerings such as Microsoft Azure.

The article predicts that since the Linux kernel and middleware are not the attention-grabbers they used to be, cloud-computing projects on the industrial level will begin to make more headlines. Jim Zemlin of the Linux Foundation pointed out this new idea: "The difference now is they are not just obviously tinkering around with how to make a software defined network or block storage file format," Zemlin said. "These are broad-scale industrial initiatives that are financed by the largest computer companies in the world to create the components they need to make commercial products."

What is surprising is that people find this trend surprising. After a technology becomes a core part of industry, developers puzzle over how it can be manipulated for other projects. Remember, necessity is the mother of invention, and you use the tools you have to make it. Thinking back on how open source search programs were back in the day, LucidWorks saw a need for a powerful and robust, yet economically priced, search application. Using Apache Lucene, LucidWorks created LucidWorks Search and LucidWorks Big Data.

Whitney Grace, November 19, 2012
<urn:uuid:10a40b1f-e828-406f-ba53-d5871f89a982>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2012/11/19/open-source-is-the-foundation-for-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00005-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930672
410
2.875
3
The cyber-criminal is getting smarter and more organised, and their methods of attack have advanced alongside. Whereas two years ago the watchword was computer viruses, now new and infinitely more advanced attack methods exist. New and unknown "zero-day" attacks, such as worms and Trojans, seem to have the measure of traditional signature-based security technologies, and a determined hacker can bypass even the most stringently configured company firewall to access system files and the registry. Also, attackers now seek to run targeted scams that are kept purposely low-profile enough not to raise awareness with AV and spyware vendors.

Whatever the reason for a malicious cyber-attack, whether it be for financial gain, espionage or just for the sheer hell of it, companies must protect against unwarranted incursion into their systems. However, with these new methods of attack it is becoming increasingly difficult to ensure they don't penetrate computer systems, because when they do, the effects can range from the defacement of websites to the fraudulent extortion of vast sums of money. The recent attacks on Bluesquare are a case in point of how far hackers are willing to take extortion.

So what can be done to ensure a sufficient level of protection against attack from these sources? Clearly, patching is not working, because of the speed with which these attacks strike and propagate to other computers. A PC worm may be similar to a virus, in that it spreads from computer to computer, but where the worm differs is the speed of self-propagation. Worms use the basic transport mechanisms of any computer or network to spread as quickly as possible, allowing attackers to take control of systems and execute malicious behaviour. Normally this happens before a patch has had a chance to be created, let alone implemented, meaning worms can march through the globe's computer systems at an incredible rate. Sasser and Blaster are examples of how devastating they can be.

The traditional signature-based approach still favoured by many companies to detect malicious attacks relies on signature files updated at regular intervals, and is inherently reactionary, too out-of-date to stop zero-day attacks. Until a worm or Trojan is known and a signature created and distributed, the antivirus program cannot provide protection against the malicious code, as it does not recognise it as a threat. Even when the signature is known, if a worm executes itself from memory and not from the file system, many antivirus programs are not capable of protecting a system from it. A lot of this type of code also hides in less-accessed system directories which will only be processed by AV software during a full file scan. This means that in most cases AV will help clean up the worm only after it has infected the machine or network, and probably many other systems too.

So how do we protect against this new breed of attack that is seemingly marching through unprepared systems? Intrusion detection software does not rely on signatures but highlights when malicious code has accessed critical areas such as memory, the file system, the OS, the registry and applications. However much closer to the mark this is, it is still a case of closing the gate once the horse has bolted; surely prevention is better than cure. This is where Host Intrusion Prevention Software (HIPS) enters the fray. HIPS recognizes anomalies in exactly the same way that intrusion detection software does; crucially, however, it does so before these have had a chance to access critical systems.
Sitting just behind the firewall, HIPS recognizes the traits of a zero-day attack by understanding the methods used to launch such an attack and blocking them. HIPS requires no patches, signature updates or rules to work, because it identifies the characteristics of the attack behaviour and stops the action taking place. A security guard trained only to recognize the faces of wanted criminals is no good if they cannot work out for themselves that a masked man is breaking in.

Data is now the most important commodity that many companies have. This data needs to stay protected from outsiders while remaining continually available to those for whom it is intended. Malicious attacks seek to undermine both of these objectives using progressively more advanced hacking techniques. It is up to the corporate world to adapt to this and employ progressive IT security capable of addressing the problems of zero-day hacker attacks. While HIPS may not be a silver bullet, employed in line with AV software it will catch and destroy any attempt to enter your system propagated by these new attacks, providing the last man standing where signature-based security has failed.
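To make the signature-versus-behaviour distinction concrete, here is a toy Python sketch. The event names, hash and rules are invented for illustration; a real HIPS engine hooks the operating system kernel and bears no resemblance to this decision logic in implementation, only in spirit:

```python
# Toy contrast between signature matching and behaviour-based blocking.

KNOWN_SIGNATURES = {"d41d8cd98f00b204e9800998ecf8427e"}  # hashes of known malware

PROTECTED_ACTIONS = {
    ("write", "registry/run_keys"),   # persistence attempt
    ("write", "system32/"),           # OS tampering
    ("exec", "memory_only"),          # fileless, in-memory execution
}

def signature_verdict(file_hash: str) -> bool:
    """Blocks only what has been seen before; a zero-day slips through."""
    return file_hash in KNOWN_SIGNATURES

def behavior_verdict(events: list[tuple[str, str]]) -> bool:
    """Blocks on what the code *does*, before the action completes."""
    return any(event in PROTECTED_ACTIONS for event in events)

zero_day_events = [("read", "user/docs"), ("exec", "memory_only")]
print(signature_verdict("unknown-hash"))  # False: signature AV lets it run
print(behavior_verdict(zero_day_events))  # True: HIPS-style rule blocks it
```

The asymmetry is the whole argument of the article: the signature check can only ever answer "have I seen this exact thing before?", while the behavioural check answers "is this thing doing something no legitimate program should do?".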
<urn:uuid:a0e00ef8-c66f-4100-a4d4-89c3bc708b86>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2004/11/08/not-a-patch-on-the-new-breed-of-cyber-criminal/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00309-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958582
912
3.125
3
Today, the Federal Communications Commission and the Environmental Protection Agency released a two-page fact sheet advising consumers as to what they must do to continue watching television after February 17, 2009, when stations will stop broadcasting an analog signal, and what they can do with their old televisions.

Currently, many over-the-air stations are broadcasting in both analog and digital formats. However, in the middle of February 2009, TV stations will have to broadcast solely a digital signal to free up radio spectrum for public safety communications. The two agencies highlight three choices consumers have: connect their analog televisions to digital-to-analog converter boxes, buy a television that has a digital tuner already built in, or subscribe to a paid TV service (such as cable). Such services are not required to switch any of their channels to digital, according to the fact sheet.

Whether one opts to get a digital tuner to extend the life of an analog set or to buy a digital television, the EPA recommends selecting a certified ENERGY STAR product. Converter boxes that are ENERGY STAR-qualified use less energy than conventional converter boxes. "If all of the digital-to-analog converter boxes sold in the U.S. met the ENERGY STAR specification, we would save 823 million kilowatt-hours every year," the two agencies noted in the fact sheet. A list of ENERGY STAR-qualified models can be found on the ENERGY STAR website (select Digital-to-Analog Converter Boxes).

If one decides to get a new television set, the question arises: "What do I do with the old one?" The two agencies suggest recycling the old set, and note that, because of the cessation of analog broadcasts, many charities will not take analog TVs. Recycling TVs recovers valuable materials from the circuit boards, metal wiring, leaded glass, and plastics. To recycle your old television, call your local household hazardous waste collection and recycling program to find out whether it will be sponsoring an upcoming event to recycle TVs and other electronics.

A study conducted in April by the Consumer Electronics Association (CEA) concluded that the transition to digital television will not have a significant impact on the environment. According to the study, households receiving broadcast signals only over-the-air (OTA) expect to remove fewer than 15 million televisions from their homes through 2010, ninety-five percent of which will be sold, donated or recycled. In addition, the largest share of OTA-only households (48%) expect to buy a digital converter box and continue using the same TV.

"Consumers are far more likely to recycle, reuse, give away or sell analog TVs than throw them away," said CEA's Senior Director of Market Research Tim Herbert in a statement accompanying the release of the study. "While some have speculated that millions of TVs would enter the waste stream, this new study suggests that is not the case."
<urn:uuid:d39c047e-b94f-4f59-b43d-99c5eca01d20>
CC-MAIN-2017-04
http://www.govtech.com/products/Feds-Advise-Consumers-of.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00033-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9468
594
3.078125
3
If a picture is worth a thousand words, the Environmental Protection Agency (EPA) has provided the ability to talk up a storm. By making a series of geographic information system (GIS) mapping applications available over the Internet, the EPA is allowing government agencies and citizens to look at information such as air and water pollution sources in new ways. Instead of looking at pages of text, users can now map complex GIS information for any geographical area in the United States from their desktop computers.

This new mapping capability is useful for local governments that don't have the budgets to implement mapping technology, and for state regulatory staffs who may use the information to help set priorities for their permits and inspections. It also helps community and environmental justice groups as they research facilities that might be impacting watersheds and other natural resources.

The EPA has made this possible by combining its Web-enabled Envirofacts facility data warehouse -- an Oracle relational database of EPA-regulated facilities -- with an ArcInfo GIS database containing national spatial data. This combination of data systems -- called Maps on Demand (MOD) -- gives users a powerful Web-based tool to map and access air pollution levels, water-discharge permit compliance reports, Superfund clean-up decisions, and trends of toxic chemical releases and hazardous waste handlers, using site maps and text reports. MOD helps users better understand regulatory information by showing them EPA-regulated facilities in relation to surrounding geographic features. By making GIS mapping available over the Internet, users have a more accurate, less expensive way to map information on their own. In the past, EPA staff had to research records manually to fulfill a request for information, which was slow and expensive.

Envirofacts is an application within the EPA's main Web site. Its purpose is to make all EPA information subject to the Freedom of Information Act -- including regulatory, spatial and demographic data -- accessible to federal, state and local regulators, citizens, and private industry. The general public and the EPA's 17,000 employees use Envirofacts to access information such as hazardous waste, air and water emissions, and toxic releases. The site receives more than 200,000 hits a month from the Web and thousands more from those using database access software.

Through the Envirofacts database, users generate queries by entering a specific facility name, ZIP code, city, county or state into an online form. The query generates a list of facilities that match the criteria. Users select a facility to receive a detailed environmental profile with information such as the toxic chemicals released over the last year, and air emission estimates for pollutants regulated under the Clean Air Act. Regulators can use this information to monitor noncompliant companies by regularly checking permit status, and to ensure compliance with permit limits. The public can use the information to better understand how a facility is regulated and what a facility discharges in a community.

Envirofacts uses custom software to extract data from EPA's five national mainframe systems. The software pulls information into the Oracle Envirofacts database, which is updated monthly by the agency. The Oracle7-based data warehouse is currently 40GB and contains 2,400 pages of metadata describing how information in the warehouse is structured.
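The facility-query workflow described above is, at heart, a parameterized lookup against the warehouse. A toy illustration in Python with an in-memory SQLite table; the schema and the facilities are invented, and the real Envirofacts warehouse is Oracle-based and vastly larger:

```python
import sqlite3

# Invented miniature of the Envirofacts facility lookup: a user supplies a
# ZIP code (or name, city, county, state) and gets matching facilities back.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE facility (
    name TEXT, zip TEXT, program TEXT, last_toxic_release_lbs REAL)""")
con.executemany(
    "INSERT INTO facility VALUES (?, ?, ?, ?)",
    [("Acme Chemical", "30301", "Clean Air Act", 1200.0),
     ("River Plating", "30301", "Water Discharge Permit", 85.5)])

# Parameterized query, as the online form would issue it.
for row in con.execute(
        "SELECT name, program, last_toxic_release_lbs "
        "FROM facility WHERE zip = ?", ("30301",)):
    print(row)
```

The mapping applications described next simply join results like these to spatial coordinates so they can be plotted instead of listed.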
The data warehouse includes regulatory information on more than 700,000 facilities over the past five to eight years. The Envirofacts database will grow to hold over a terabyte of information as the agency adds up to three more national databases -- drinking water, hazardous waste and water treatment project information -- to the data warehouse this year, and configures its spatial data holdings into the warehouse data management system.

When Envirofacts went live in March 1995, the EPA rolled out a Web-based interface and desktop tools to its regional offices so staff could access the online data. Because employees can tap into all the information they need through an Internet connection, they no longer have to remember multiple account numbers to access different databases on the agency's mainframe, or wait several days for IS to create reports they need. Like the public, they simply log onto the Internet and point and click to access information.

Mapping In Envirofacts

The Internet provides the EPA with a strategic platform more conducive to government's shrinking budgets -- it is less resource-intensive and expensive to produce and maintain than the traditional GIS mapping methods. And more than 6,000 users download 1.8GB of information from the MOD site each month.

MOD links EPA's data warehouse to a GIS ArcInfo database that runs on a Digital Alpha platform. The data is stored in a Clariion RAID array which holds 60GB of spatial data online. Previously, anyone wanting to map GIS information could either pay for EPA-generated maps, deploy a costly GIS workstation with an ArcInfo database and hire a specialist to run it, or rely on static CD-ROMs. With MOD, non-technical users can access the agency's most complete and accurate information resources and mapping capabilities by simply pointing and clicking on a graphical interface. This mapping capability is an important step forward -- instead of struggling with tabular information, users now have well-organized, easy-to-use mapping tools that let them dig into different layers of information.

MOD creates maps that include environmental information and nationally consistent spatial data. Within MOD, a SiteInfo application enables users to create maps and text reports of EPA-regulated facilities by entering a facility's latitude and longitude. If users don't know these coordinates, the Envirofacts query form can pull them up using the facility's name and geographic information.

Another MOD application called BasinInfo allows users to map watersheds using U.S. Geological Survey hydrologic unit code data. By selecting a region and criteria, such as program system and demographic information, users can map a watershed and see the facilities in and around it. For example, if a group of water specialists wants to assess what is impacting a water basin in a certain geographical area, they can enter the location they want to look at, and the application will generate a map of the area that also shows EPA-regulated facilities within the area or boundary. They can go further into the Envirofacts data warehouse and submit queries on the facilities to see what regulatory programs these facilities have reported under, check whether they comply with permit limits, and see what chemicals they are discharging. A Chemical Reference feature obtains information on the chemicals' effects on the environment and public health.

The Facility Density Mapper gives users a big-picture view of the number of facilities in a geographic area.
While SiteInfo and BasinInfo maps plot program records, which show whether a facility is a Superfund site or water discharger, for example, the Density Mapper plots the actual number of facilities at a given location. This enables users to pick out the high-density areas of EPA-regulated facilities. Then they can go to Envirofacts to do a more in-depth analysis of the facilities.

The MOD Future

In the future, the Envirofacts team will add links to all three mapping applications. That will enable users to click on a program record or facility on the map and pull up detailed information directly from the Envirofacts data warehouse without having to manually submit a query. For example, a parent concerned about a facility next to their child's school will be able to drill down on the map to see what the facility produces, why it is being regulated by the EPA, and so on. The EPA also plans to enhance its MOD application with realtime mapping to make GIS mapping even easier and more thorough.

Envirofacts' future is wide open to additional capabilities. With the Internet as the EPA's strategic platform, the agency can continue enhancing Envirofacts to make more information -- and the tools needed to easily access and analyze it -- available to a growing user community.

Pat Garvey is Envirofacts manager, Enterprise Information Management Division for the Environmental Protection Agency.
<urn:uuid:10430350-50b0-47af-8604-e619735a14b7>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Mapping-the-Environment-on-the-Web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00456-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901757
1,632
3.046875
3
Hu Z., Chinese Academy of Agricultural Sciences | Jiang Q., Chinese Academy of Agricultural Sciences | Ni Z., Chinese Academy of Agricultural Sciences | Chen R., Central Laboratory of Tianjin Academy of Agricultural Science | And 2 more authors. Journal of Integrative Plant Biology | Year: 2013

Plant microRNAs (miRNAs) regulate gene expression mainly by guiding cleavage of target mRNAs. In this study, a degradome library constructed from different soybean (Glycine max (L.) Merr.) tissues was deep-sequenced. 428 potential targets of small interfering RNAs and 25 novel miRNA families were identified. A total of 211 potential miRNA targets, including 174 conserved miRNA targets and 37 soybean-specific miRNA targets, were identified. Among them, 121 targets were first discovered in soybean. The signature distribution of soybean primary miRNAs (pri-miRNAs) showed that most pri-miRNAs had the characteristic pattern of Dicer processing. The biogenesis of TAS3 small interfering RNAs (siRNAs) was conserved in soybean, and nine Auxin Response Factors were identified as TAS3 siRNA targets. Twenty-three miRNA targets produced secondary siRNAs in soybean. These targets were guided by five miRNAs: gma-miR393, gma-miR1508, gma-miR1510, gma-miR1514, and novel-11. Multiple targets of these secondary siRNAs were detected. These 23 miRNA targets may be putative novel TAS genes in soybean. Global identification of miRNA targets and potential novel TAS genes will contribute to research on the functions of miRNAs in soybean. © 2012 Institute of Botany, Chinese Academy of Sciences.

Li N., Central Laboratory of Tianjin Academy of Agricultural Science | Li H., Central Laboratory of Tianjin Academy of Agricultural Science | Shao H., Central Laboratory of Tianjin Academy of Agricultural Science | Liu L., Central Laboratory of Tianjin Academy of Agricultural Science | And 2 more authors. Chinese Journal of Chromatography (Se Pu) | Year: 2011

An ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was developed for the determination of fifteen sulfonylurea herbicides in ginseng. The pesticides were extracted with acetonitrile, cleaned up on an ENVI-Carb solid phase extraction cartridge, eluted with 1% formic acid in methanol-dichloromethane (20:80, v/v), separated by UPLC and detected by MS/MS in multiple reaction monitoring (MRM) mode via positive electrospray ionization (ESI+). The method was validated at three fortification levels in ginseng, with the following results. The standard calibration curves for the fifteen sulfonylurea herbicides were all linear over the range of 2-100 μg/L, with correlation coefficients between 0.996 and 0.999. The average recoveries of the fifteen sulfonylurea herbicides at the three fortification levels of 5, 25 and 50 μg/kg were between 84.9% and 104.3%, with relative standard deviations (RSDs) between 2.4% and 11.9%. The limit of quantification (LOQ) was defined as the lowest concentration that could be measured with acceptable precision and accuracy; the LOQs of all fifteen sulfonylurea herbicides in ginseng were 5 μg/kg. The results indicated that this method is easier and more sensitive, and has a better purification effect. The sensitivity, accuracy and precision of the method were all acceptable, so the method can be applied to investigating the contamination status of traditional Chinese medicine by sulfonylurea herbicides.
<urn:uuid:b74375c5-08d8-49af-a60f-161be6d2e713>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/central-laboratory-of-tianjin-academy-of-agricultural-science-2348412/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00272-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928739
829
2.65625
3
For two years, Google has quietly been developing autonomous flying vehicles that can be used to deliver packages for disaster relief or for commerce, the company revealed Thursday. The program, dubbed Project Wing, has been housed under Google X, the company's secretive facility where it created other projects like Google Glass and its self-driving cars.

"Self-flying vehicles could open up entirely new approaches to moving goods -- including options that are cheaper, faster, less wasteful and more environmentally sensitive than what's possible today," the company says in a document describing the effort.

The drones are designed to follow a pre-programmed route at the push of a button, flying at 40 to 60 meters above the ground. One goal is to have the unmanned flying robots deliver small items like medicines and batteries for disaster relief or to bring aid to isolated areas. The initial idea was to deliver defibrillators to heart attack victims. "Even just a few of these, being able to shuttle nearly continuously, could service a very large number of people in an emergency situation," Astro Teller, Google's "Captain of Moonshots" (the title reflects what Google calls its big projects), told the BBC.

Prototypes have already been built and tested delivering packages to remote farms in Queensland, Australia. The tests were conducted in Australia because that country has more open rules about drone use, the BBC said. Farmers there received candy bars, dog treats, cattle vaccines, water and radios.

Eventually, Google might use the drones for home delivery of items that people purchase online, according to a company spokesman. Google has been working to expand its Google Shopping Express service, which right now uses cars for deliveries.

Amazon kicked off the delivery-by-drone craze in December, when it said it was testing the use of unmanned aircraft for sending packages to customers, though some people didn't take the idea seriously at the time. Having unmanned vehicles buzzing around towns delivering packages seems like a radical and potentially dangerous endeavor, but Google's involvement further validates the idea.

The company stressed that these are early days for Project Wing and it might be years before testing is complete. For the next year, Google will focus on the safety system for the drones, teaching them to navigate around each other and handle events like mechanical trouble. Moreover, Google said, "we have to fly efficient delivery routes" that take into account the public's concerns about noise and privacy violations and don't threaten "the safety of those on the ground." Ultimately, the company said, "we have to be good enough to deliver to an exact spot the size of a doorstep."

Project Wing was reported earlier Thursday by the BBC and The Atlantic.
<urn:uuid:f3bf5cde-2b82-4829-933d-45e562775d82>
CC-MAIN-2017-04
http://www.computerworld.com/article/2599435/internet/googles-project-wing-building-drone-delivery-service.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968352
550
2.734375
3
IT staff may be more vulnerable than other office workers to allergies brought on by computer emissions. New research by Swedish scientists has shown that the VDU gives off chemicals which are known to cause a variety of symptoms such as blocked noses, headaches and skin problems.

The study, which is reported in Environmental Science and Technology, suggests that the cause of these illnesses is often the flame retardant chemical triphenyl phosphate, which is used in computer casings and has a documented allergy effect. The problems are caused as the computer heats up and the chemical evaporates. In tests, levels of the chemical are highest the first few times the computer is switched on but still remain high after 150 hours - the equivalent of two office years.

Professor Vyvyan Howard, a toxico-pharmacologist at Liverpool University, confirmed that IT staff were a high-risk category. "If you're working in a room with 40 computers and in a confined space then you would be more vulnerable, seeing that the company would be trying to keep its computers up to scratch by buying new models."

Howard, who has studied similar flame retardants, said that women should be particularly aware of the problems, as the chemicals are stored in the body and can be passed on to a child through breast milk. He added, "The best advice to people is to use new computers in a well-ventilated area."
<urn:uuid:5e42a131-7ae8-4ef5-b5c8-28595884c1a3>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240040937/IT-are-staff-exposed-to-workplace-emissions
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00576-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974857
307
2.921875
3
The Sad Truth About the Data Breach

Data breach prevention is much more achievable than many people realize. According to a newly published report by the Online Trust Alliance (OTA), based on a review of more than 1,000 data breaches in 2014, 90 percent of data breaches were preventable if a dozen best practices had been followed. Per the OTA statistics, 40 percent of data breaches were due to external intrusion, 29 percent were caused by employees due to a lack of internal controls, 18 percent were attributed to lost or stolen devices, and only 11 percent were due to social engineering or fraud.

Defenses against data breaches fall into three categories: those intended to minimize the chances of a data breach, those intended to minimize the damage from a successful breach, and those designed to maximize the effectiveness of a response to a successful breach.

Minimizing Opportunities for Data Breaches

It is important to use password protection policies enforced by enterprise password management software. These protections include unique, strong passwords that must be changed on a regular basis, and two-factor authentication. Limit user access by granting access only to users who require access to a given system, an approach commonly called least user access. Use device management software to limit access to only those devices that are recognized, and use the software to enforce all your security controls, including locking out devices, wiping data from devices and encrypting data on devices. Limit access inside the firewall to only devices that are inside the firewall, or use a Virtual Private Network to gain access from outside the firewall. Use real-time intrusion detection and prevention software to detect and quarantine anomalous behavior.

In the past, most data security was based on preventing access by hardening the firewall and identifying malware through the use of antivirus software. More recently, the world of data security has adopted intrusion detection and prevention software. There have been tremendous advances in this class of software over the past few years, based on improvements in the predictive analytic algorithms the software uses.

Minimizing Damage From a Data Breach

Even with the most rigorous efforts to prevent data breaches, they can still occur. The two most important things an organization can do to minimize the damage from a breach are to encrypt the data and to limit the data that is online. An organization should encrypt any data it would not want to see in public distribution, such as data with personal and private information, trade secrets and customer information. Data should be encrypted both at rest in storage and in motion when moving through networks. Protection should also be extended to devices by redacting personal information by default when it is displayed on a device or printed.

Needless to say, data backups should also be encrypted, regardless of the backup media type or backup location. Another important protection for backups is to require unique passwords for access to them.

As should be obvious, data that is online in production systems is more easily stolen than data that is offline or has been destroyed as part of a data retention program. Old applications and their data that are not in production should not be online. Rather, they should be retired, and the data destroyed if it is no longer needed. Or, if the data should still be retained, it should be archived into a secure, encrypted environment.
Given that many old applications and data stores cannot be easily encrypted, archiving may be the only way to easily secure them.

After the Breach

Some breaches are going to occur, and your organization must be prepared for them. Data breach preparedness should be a cornerstone of an organization's information governance program, and a data breach preparedness plan should be part of that program as well. The plan should be tested, perhaps with the assistance of white hat hackers, and should be continuously improved based on the test results.

Unfortunately, for the time being, data breaches are a fact of life. Fortunately, by making use of these best practices, your organization can minimize its exposure to data breach risks.

Do you have questions about data management? Read additional Knowledge Center stories on this subject, or contact Iron Mountain's Data Management team. You'll be connected with a knowledgeable product and services specialist who can address your specific challenges.
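As one concrete illustration of the encrypt-at-rest advice above, here is a minimal Python sketch using the cryptography package's Fernet recipe. Key handling is deliberately oversimplified, and the record is made up; in practice the key belongs in a secrets manager or HSM, never beside the data:

```python
from cryptography.fernet import Fernet

# Symmetric encryption at rest in a few lines. Key management (rotation,
# escrow, access control) is the hard part in production and is not shown.
key = Fernet.generate_key()        # store in a secrets manager, not on disk
fernet = Fernet(key)

record = b"customer: Jane Doe, SSN: 000-00-0000"   # made-up example data
token = fernet.encrypt(record)     # safe to write to storage or backup media
assert fernet.decrypt(token) == record
print(token[:40], b"...")
```

The same token can be written to backups unchanged, which is how the "encrypt backups regardless of media" recommendation is usually satisfied without a separate mechanism.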
<urn:uuid:7a85a959-6709-49a1-a849-3a998cf2e582>
CC-MAIN-2017-04
http://www.ironmountain.com/Knowledge-Center/Reference-Library/View-by-Document-Type/General-Articles/T/The-Sad-Truth-About-the-Data-Breach.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00144-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942713
865
2.6875
3
Paralysis in Congress has eroded the nation’s ability to predict floods, but not to the extent some forecasters originally feared. The U.S. Geological Survey (USGS) says sequestration, the across-the-board federal budget cuts that went into effect March 1, has forced the shutdown of 31 stations in its National Streamflow Information Program. The network of some 8,000 stream gauges provides real-time flood forecasts, flood plain mapping, and information used to divvy up water and construct dams and bridges. The cash-strapped agency plans to soon shut off another 10 stations because of a lack of money. Another 45 are funded only through September. Even so, the network has fared far better than officials feared this spring, when USGS predicted sequestration would claim 375 stations. According to Michael Norris, the network’s coordinator, a “tsunami of support” from other federal agencies, along with state and local governments, helped avoid deeper cuts. “The situation wasn’t as bad as it could have been,” said Norris, adding that the help was a testament to the network’s value. Experts say shutting off the stations, which each cost about $15,500 annually to maintain, could have devastating consequences as climate change increases the likelihood of extreme droughts and floods. “Data was absolutely critical” last April when the Grand River’s surge forced some 1,700 people to flee their homes in Kent County, Mich., said Mark Walton, manager of the National Weather Service Forecast Office in Grand Rapids. “Without it, we would have basically been flying blind,” Walton said. Even the smaller number of shut-offs is adding stress to a system that was already shrinking due to cutbacks at state and local governments, who partner with federal agencies to fund it. In addition to the stations closed by sequestration, nearly 600 gauges have gone idle in recent years, with Florida hit hardest. Though the cuts forced the USGS to slash its program funding by 5 percent, other federal agencies that typically chip in — the Army Corps of Engineers, for example — kept their shares steady. The same held true for local partners, some of which boosted funding to keep stations running. In fact, some 28 stations that sequestration closed have since been resurrected. The USGS also saved some threatened stations by putting off upgrades and maintenance at other stations, and canceled the installation of new gauges planned for flood forecasting. Though sequestration hasn't been as painful as expected, Norris said the program is far from out of the woods, particularly since there’s no end in sight for sequestration. “There’s still a great deal of uncertainty,” he said. This article was originally published by Stateline. Stateline is a nonpartisan, nonprofit news service of the Pew Charitable Trusts that provides daily reporting and analysis on trends in state policy.
<urn:uuid:14861b3c-1c36-4a8b-a8d0-4aec2c6652d1>
CC-MAIN-2017-04
http://www.govtech.com/federal/Flood-Forecasting-Network-Remains-Largely-Intact-Despite-Cuts.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968472
613
2.671875
3
A new type of lighting system, called "Radio Frequency" or RF lighting, is being developed. It is claimed that RF lighting devices will consume much less electrical power for an equivalent amount of light than current lighting technologies. The potential power savings of RF lighting came to prominence during the California power shortages in Summer 2001. RF lighting works by eliminating the need for a filament or electrode, or a "raw" electrical discharge, as is the case with current lighting systems. Instead, the "excitation" of light-producing elements is done with radio frequency energy rather than electrical energy. In addition to substantial energy savings, other claimed potential benefits of RF lighting are much longer lifetimes and no change in color or brightness over the lifetime of an RF lighting system. For more information on the technology of RF lighting systems, see http://www.fusionlighting.com/technology.htm.

The Interference Problem

There is a potential problem with RF lighting, especially in multi-tenant buildings, that building managers and developers should be aware of. The radio frequency energy used in RF lighting systems is emitted within the popular 2.4 GHz band used by, among many other devices, wireless Local Area Networks (WLANs, often referred to as "Wi-Fi" devices) and the newest generation of cordless phones. Tenants who are using such devices may well find them completely non-functional if building management (or other tenants) install RF lighting systems in close proximity to tenant areas.

WLANs and 2.4 GHz cordless phones are proving to be very popular among technology-savvy consumers. With multiple-computer households becoming more common, the need has arisen to share high-speed Internet connections, printers, and other devices between members of the household who each have their own computers. WLANs, while not exactly "easy" to set up, are easy enough that they're proving to be a popular solution to the "computer sharing" problem. The primary alternative for computer sharing is to drill holes and run network cabling, or to use "phone line networking," which is, of course, only usable where there is a phone jack. Similarly, sophisticated cordless phones have become very popular and relatively inexpensive. The majority of the newest, longest-range phones operate in the same 2.4 GHz spectrum as WLANs. Judging by the number of models displayed at stores such as Target, cordless phones appear to be outselling more conventional "wired" phones.

Interference with WLANs and 2.4 GHz cordless phones arises occasionally when microwave ovens are used nearby, because microwave ovens also emit radio frequency energy within the 2.4 GHz band. However, microwave ovens adhere to strict emission limits (for the most part, the RF energy is confined to the inside of the oven; if it weren't, the high-powered microwave signals could cause eye damage). Microwave ovens also operate sporadically, rarely for more than a few minutes at a time. RF lighting devices, on the other hand, will likely be turned on continuously, where their energy savings would be the highest, resulting in continuous interference.

How We Got To This Point

The genesis of the microwave oven occurred during the development of RADAR (Radio Detection And Ranging) in World War II, when it was noted that high-powered microwave emissions from RADAR systems in development caused water molecules to vibrate rapidly, resulting in heating with no external heat applied.
Further experimentation showed that microwave energy could actually cook food (and, if care was not taken, human tissue). In order for Raytheon to develop the first industrial microwave oven, it needed a swath of spectrum to be designated specifically for use by microwave ovens. It wouldn't do to have a microwave oven on an Air Force base interfering with the base's RADAR systems. For this, and other microwave energy uses such as industrial plywood dryers and medical diathermy, the FCC created the Industrial, Scientific, and Medical (ISM) bands, the most popular of which is the 2.4 GHz band.

A considerable time after the development of the microwave oven, in response to the computer industry, the FCC permitted an additional use for the ISM bands: wireless Local Area Networks. Rules were crafted so that communications devices had to "accept" interference and "could not cause interference to existing communications systems." For equipment that was built to use the ISM spectrum and adhered scrupulously to the appropriate rules, purchasers were not required to obtain a license from the FCC to operate it. The FCC's rules proved to be sufficiently flexible, and the technology was developed to ensure reasonably reliable operation of communications devices even in the presence of interference. Most importantly, there was considerable demand for wireless LANs and, later, for more sophisticated cordless telephones and a near-infinite variety of wireless systems such as wireless video cameras, wireless stereo speakers, baby monitors, etc. Though there were occasional interference issues, the "high-powered" use of 2.4 GHz and the "communications" use of 2.4 GHz proved to be mostly compatible. RF lighting, if widely deployed, threatens to upset this delicate balance.

The Big (Potential) Interference Issue Surfaces

This potential conflict of uses of 2.4 GHz first came to light in 1999, in concerns raised by vendors of wireless LANs filing position papers with the FCC. More recently, the RF lighting interference issue surfaced in an August 6, 2001 article in the Wall Street Journal titled "Energy-Saving Light-Bulb Maker Battles With Satellite-Radio Firms For Bandwidth". The article dealt with the concerns of two companies that (then) planned to offer satellite-based broadcast radio: Sirius Satellite Radio, Inc. and XM Satellite Radio (which is now in limited operation). At issue was the amount of interference that Fusion Lighting, Inc.'s proposed new RF lighting devices would cause to the satellite radio broadcasts at 2.32-2.345 GHz, which are considerably removed from the spectrum where Fusion's devices operate: the 2.4 GHz band. The satellite radio broadcasters have concluded that Fusion's devices, as proposed, will cause substantial interference to their transmissions.

Left unmentioned in the WSJ article, and only now beginning to be noted by many users of the 2.4 GHz band, is this: if the Fusion devices are capable of causing such trouble for satellite radio broadcasting, what would the effect be on communications users of the 2.4 GHz band itself? Likely devastating. RF lighting devices could become as widely deployed as light bulbs, with each one a source of interference in the 2.4 GHz band.

The potential interference issue is not just limited to individual offices or apartments. High-profile (and highly profitable) tenants may well consider the use of wireless technology essential to their use of leased space.
One example is Starbucks, which offers Internet access via Wireless LANs at many of their company-operated locations. Such "Public Wireless Access Points" are becoming ubiquitous, and some analysts project that PWAPs, deployed in public spaces such as airports, hotels, and many other locations may well displace the role envisioned for mobile telephone "3G" wireless data services. What To Do Currently, RF lighting devices are not approved for general use, so the problem is only a potential one at present. But looking ahead, it seems likely that RF lighting will be approved in some form - the potential power savings alone could justify the initially high prices for RF lighting devices. Even if building management chooses not to install RF lighting, interference could still be an issue if tenants install RF lighting which could then interfere with another tenant's WLAN or cordless phone. It's a somewhat humorous, but potentially very real possibility that building management would be called on to resolve a radio frequency interference complaint between tenants. One solution or salvation is that in the next few years, WLAN systems will likely begin to migrate higher in the spectrum to 5 GHz, where there is much more spectrum available (and WLAN speeds will be higher). Currently 5 GHz WLAN equipment is expensive, but intense competition is beginning, and 5 GHz WLAN equipment will likely be considered affordable within two years. Cordless phones are also likely migrating to 5 GHz spectrum along with WLANs. Likely the cordless phone migration will be slower due to greater cost-sensitivity by cordless phone buyers and range issues (though technology to enable greater 5 GHz range is evolving rapidly). Of necessity, building managers increasingly need to become familiar with wireless technology that will be used inside their buildings. Just as "bad cellular telephone coverage" can become an issue with tenants, WLAN and cordless phone interference could also become an issue with tenants and potential tenants. It's possible that future tenant leases will include disclaimers like "This building makes use of RF lighting systems, and may render the use of devices such as Wireless Local Area Networks and cordless phones inoperable." To many building managers, the inclusion of such language may sound laughable but it wasn't that long ago that cigarette smoking was considered an inalienable right, and any suggestion that smoking wasn't permitted in one's office or apartment would have been considered equally laughable. About the Author Steve Stroh is an Independent Technology Writer based in the Redmond, Washington area. Steve is Editor of Focus On Broadband Wireless Internet Access, and has specialized in writing about Broadband Wireless Internet Access since 1997. More information about Focus can be found at http://www.strohpub.com/focus.htm. Steve can be contacted via email at firstname.lastname@example.org with questions or comments.
A California start-up company has claimed that its direct methanol fuel cell (DMFC) membrane will make the cells, envisaged as a future power source for mobile electronics devices, smaller, cheaper and lighter.

PolyFuel's development of a DMFC membrane comes as several of the world's largest electronics companies are developing fuel cells with a view to commercialising them in the next couple of years. The membrane is a small piece of plastic, resembling cellophane wrap, which sits at the heart of the DMFC, separating a mixture of methanol and water from a catalyst. The electrical potential across the membrane is the key to power creation.

"Until now all of the manufacturers - and we've counted 35 organisations working on DMFCs - have been hampered because they have had to use a hydrogen fuel cell membrane that was developed 40 years ago. It has been the only one available for DMFC applications and they are very different technologies," said PolyFuel president and chief executive officer Jim Balcom.

The biggest problem developers have is stopping methanol crossing over the membrane, something that reduces the overall efficiency of the fuel cell because fuel is wasted, and that also results in the generation of heat. To combat this problem, researchers have kept methanol concentrations at around 10%, although a higher concentration would be better, said Balcom. PolyFuel's membrane allows for much higher concentrations - between 50% and 100% - and this should mean DMFCs can be made one-third smaller, lighter and less expensive, he claimed.

Increasing the methanol concentration has been the goal of several companies developing DMFCs for some time. NEC, which plans to commercialise a DMFC for notebook computers this year, is using a methanol concentration of around 10% in its prototype, and Toshiba, which has shown a prototype battery charger based on DMFC technology, says it uses a concentration of between 3% and 6%. Hitachi plans a DMFC for use in PDAs and said it hoped to raise the methanol concentration from around 20% to 30% by the time the product is commercialised in 2005.

PolyFuel's membrane is already in sample production, and Balcom said initial feedback from the company's potential clients had been good so far. The company's production capacity in Silicon Valley is anticipated to be enough to handle customer demand through 2005, and further expansion will be based on demand, he added.

Martyn Williams writes for IDG News Service
It may be revolutionising the way we do business - but is Big Data secure? Guillermo Lafuente offers much-needed advice and guidance.

The biggest challenge for big data from a security point of view is the protection of users' privacy. Big data frequently contains huge amounts of personally identifiable information, so the privacy of users is a huge concern. Because of the large amount of data stored, breaches affecting big data can have more devastating consequences than the data breaches we normally see in the press: a big data security breach will potentially affect a much larger number of people, with consequences not only from a reputational point of view, but with enormous legal repercussions.

When producing information for big data, organizations have to ensure that they have the right balance between utility of the data and privacy. Before the data is stored it should be adequately anonymized, removing any unique identifiers for users. This in itself can be a security challenge, as removing unique identifiers might not be enough to guarantee that the data will remain anonymous: the anonymized data could be cross-referenced with other available data using de-anonymization techniques.

When storing the data, organizations will face the problem of encryption. Data cannot be sent encrypted by the users if the cloud needs to perform operations over the data. A solution for this is to use "Fully Homomorphic Encryption" (FHE), which allows operations to be performed directly on data stored encrypted in the cloud, producing new encrypted data; when the data is decrypted, the results are the same as if the operations had been carried out over plain text. The cloud can therefore perform operations over encrypted data without knowledge of the underlying plain text.

While using big data, a significant challenge is how to establish ownership of information. If the data is stored in the cloud, a trust boundary should be established between the data owners and the data storage owners. Adequate access control mechanisms will be key in protecting the data. Access control has traditionally been provided by operating systems or applications restricting access to the information, which typically exposes all the information if the system or application is hacked. A better approach is to protect the information using encryption that only allows decryption if the entity trying to access the information is authorised by an access control policy. An additional problem is that software commonly used to store big data, such as Hadoop, doesn't always come with user authentication by default. This makes the problem of access control worse, as a default installation would leave the information open to unauthenticated users. Big data solutions often rely on traditional firewalls or implementations at the application layer to restrict access to the information.

Big data is a relatively new concept, and there is therefore not yet a list of best practices widely recognized by the security community. However, there are a number of general security recommendations that can be applied to big data. The main solution to ensuring that data remains protected is the adequate use of encryption; for example, Attribute-Based Encryption can help in providing fine-grained access control of encrypted data. Anonymizing the data is also important to ensure that privacy concerns are addressed.
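As a rough illustration of the anonymization step described above, the sketch below drops direct identifiers and replaces a quasi-identifier with a salted one-way hash. This is a minimal sketch, not taken from the article; the field names and salt handling are hypothetical, and, as the article itself warns, this alone does not guarantee anonymity against cross-referencing.

```python
import hashlib

# Hypothetical field names; real schemas will differ.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SALT = b"per-dataset-secret-salt"  # must be kept out of the published data

def pseudonymize(value: str) -> str:
    """Replace a quasi-identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the user ID.

    Note: removing identifiers is not sufficient on its own; the
    output can still be de-anonymized by cross-referencing with
    other available data sets.
    """
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in cleaned:
        cleaned["user_id"] = pseudonymize(cleaned["user_id"])
    return cleaned

print(anonymize_record(
    {"user_id": "u-1001", "name": "Ada", "email": "ada@example.org",
     "purchase": "router", "amount": 89.0}
))
```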
It should be ensured that all sensitive information is removed from the set of records collected.

Real-time security monitoring is also a key security component for a big data project. It is important that organizations monitor access to ensure that there is no unauthorised access. It is also important that threat intelligence is in place to ensure that more sophisticated attacks are detected and that organizations can react to threats accordingly.

Organizations should run a risk assessment over the data they are collecting. They should consider whether they are collecting any customer information that should be kept private, and establish adequate policies that protect the data and the right to privacy of their clients. If the data is shared with other organizations, then how this is done should be considered carefully: deliberately released data that turns out to infringe on privacy can have a huge impact on an organization from a reputational and economic point of view. Organizations should also carefully consider regional laws around handling customer data, such as the EU Data Directive.

In the past, large data sets were stored in highly structured relational databases. If you wanted to look for sensitive data such as the health records of a patient, you knew exactly where to look and how to access the data. Removing any identifiable information was also easier in relational databases. Big data makes this a more complex process, especially if the data is unstructured. Organizations will have to track down which pieces of information in their big data are sensitive, and they will need to carefully isolate this information to ensure compliance.

Another challenge in the case of big data is that you can have a wide variety of users, each needing access to a particular subset of information. This means that the encryption solution chosen to protect the data has to reflect this new reality. Access control to the data will also need to be more granular, to ensure people can only access information they are authorised to see.

The main challenge introduced by big data is how to identify sensitive pieces of information that are stored within the unstructured data set. Organizations must make sure that they isolate sensitive information, and they should be able to prove that they have adequate processes in place to achieve this. Some vendors are starting to offer compliance toolkits designed to work in a big data environment. Anyone using third-party cloud providers to store or process data will need to ensure that the providers are complying with regulations.

Security is a process, not a product. Organizations using big data will therefore need to introduce adequate processes that help them effectively manage and protect the data. Traditional information lifecycle management can be applied to big data to ensure that data is not being stored once it is no longer needed. Policies related to availability and recovery times will also still apply to big data. However, organizations have to consider the volume, velocity and complexity of big data and amend their information lifecycle management accordingly. If an adequate governance framework is not applied to big data, then the data collected could be misleading and cause unexpected costs. The main problem from a governance point of view is that big data is a relatively new concept, and few established procedures and policies exist yet.
The challenge with big data is that the unstructured nature of the information makes it difficult to categorize, model and map the data when it is captured and stored. The problem is made worse by the fact that the data normally comes from external sources, often making it complicated to confirm its accuracy. What organizations need to do is identify what information is of value to the business: if they capture all the information available, they risk wasting time and resources processing data that will add little or no value.
Pittsburgh's Mayor, Luke Ravenstahl, wrapped up Earth Week today by announcing that ten city waste haulers will go green with the pilot installation of diesel particulate filters, thanks to a partnership with the Environmental Protection Agency (EPA) and local clean air and water leaders. The advanced emission reduction technology, funded by a $127,000 EPA grant and managed in part by the Mid-Atlantic Regional Air Management Association (MARAMA), will reduce toxic particulate matter from each waste hauler by more than 85 percent.

"Through the use of bio-diesel fuel and now diesel particulate filters, we are aggressively reducing the city's carbon footprint," Mayor Ravenstahl said. "With these cutting-edge green technologies, we are improving the quality of life for the people of Pittsburgh."

Currently, bio-diesel makes up more than 30 percent of Pittsburgh's purchased diesel fuel. A coalition led by the Mayor, the EPA, MARAMA and the Allegheny County Partnership to Reduce Diesel Pollution (headed by the Group Against Smog and Pollution (GASP) and Clean Water Action (CWA)) is collaborating to make the region cleaner and to educate citizens about the harmful effects of air pollution.

"Federal emissions standards have made brand new diesel vehicles cleaner than ever before, but the reality is we still have older, dirtier vehicles on the road," said Ashley Deemer, CWA executive director. "With this technology, workers and residents will no longer be harmed by the black smoke billowing from waste hauler tailpipes."

Last May, the American Lung Association ranked Pittsburgh second only to Los Angeles as the worst region for particulate pollution. This project will assist the region in reaching the EPA's fine particulate standards.

"Dangerous particulate emissions in diesel exhaust pose a serious health threat to local residents," said Rachel Filippini, GASP executive director. "Scores of medical studies show that microscopic particles and toxins in diesel exhaust are associated with cardiovascular death, lung cancer, and the triggering of asthma attacks -- especially in children, the elderly and people who live and work near buses, trucks and other diesel equipment."
Optimizing Data Center Power Consumption Using Innovations in Memory Technology

In today's data centers, optimal usage of power is a driving factor that differentiates the leaders from the followers. Several techniques have emerged in the last decade that help data center facilities plan their operations so as to use minimal power for maximum results. One of the new innovations in this area is the advent of enhanced DRAM memory technology. DRAM technology has become even more important today, with most data centers offering virtualized environments: the server infrastructure needed for cloud and enterprise computing requires much more DRAM per server than traditional setups did.

Recent research conducted by leading industry vendors such as Samsung, Microsoft Technology Center (MTC), Fujitsu and Intel demonstrated the benefits of optimized 30nm-class DRAM technology versus traditional 50nm-class DRAM technology. The results were measured by building two setups identical in CPU, fan, disk drive storage and power supply, and comparing the throughputs of each. Notably, the 30nm-class DRAM technology reduced power consumption by almost 20% in a virtualized environment. If a data center has, say, 1,000 of these efficient server units, it can cut its CO2 emissions by around 700 tons per year. The newer systems also led to a considerable 60% saving in CRAC (Computer Room Air Conditioning) power; the exhaust air of the newer systems was around 1.5 degrees Celsius cooler than that of the older systems.

As you can see, keeping up with newer technology has its challenges in terms of procurement and replacement costs, but it also has a multitude of benefits that make it worth the upgrade. For more input on the impact of server and hardware technology on your data center performance, and before you choose which data center is best for your requirements, contact us at http://www.lifelinedatacenters.com.
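As a back-of-the-envelope check on the figures quoted above, the sketch below scales the savings to a hypothetical fleet. Only the 20% saving and the 700-tons-per-1,000-servers ratio come from the cited study; the per-server wattage is an assumed illustrative value.

```python
# Illustrative figures; only the 20% saving and the
# 700 t CO2 / 1,000 servers ratio come from the cited study.
servers = 1000
watts_per_server_50nm = 400.0      # assumed baseline draw per server
dram_power_saving = 0.20           # 30nm-class vs 50nm-class setups
co2_tons_per_1000_servers = 700.0  # annual saving quoted in the study

saved_watts = servers * watts_per_server_50nm * dram_power_saving
saved_kwh_per_year = saved_watts * 24 * 365 / 1000.0
co2_saved = servers / 1000.0 * co2_tons_per_1000_servers

print(f"Power saved: {saved_watts / 1000:.1f} kW "
      f"({saved_kwh_per_year:,.0f} kWh/year)")
print(f"Estimated CO2 avoided: {co2_saved:.0f} tons/year")
```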
Fiber optic transceivers are transmitter/receiver modules pre-packaged in a standardized form. They offer convenience and the low cost of mass production, and are widely used in backbone networks and access networks to support Internet services and enterprise applications that require broadband transmission.

In the fiber optic field, ITU-T is mainly responsible for procedures and IEC is mainly responsible for products. Although these organizations cooperate, they remain separate and were established with different objectives, so some products and procedures, such as optical fibers, must be specified in each organization; the two organizations therefore coordinate to avoid producing conflicting specifications.

A standardized fiber optic transceiver is adapted to provide optimal PCIe expansion over a fiber optic medium. Signal buffers are used to translate and fine-tune standardized PCIe traffic to a level of low-voltage differential signaling (LVDS) that is comprehensible to a wide range of fiber optic transceivers over a wide range of interface bandwidths. The intended use for such a high-speed LVDS buffer is to strengthen and enable PCIe signals over metal (copper) cable or metal printed circuit board (PCB) traces for large PCBs, such as backplanes, server motherboards, etc. By placing the PCIe buffer used for copper cable between the PCIe bus and the fiber optic transceiver, one can achieve the signal conditioning and translation required to allow PCIe signals to pass over the fiber optic medium.

The SFP transceiver is not standardized by any official standards body, but rather is specified by a multi-source agreement (MSA) between competing manufacturers. The SFP was designed after the GBIC interface, and allows greater port density (number of transceivers per centimeter along the edge of a motherboard) than the GBIC, which is why the SFP is also known as the mini-GBIC. The related Small Form Factor transceiver is similar in size to the SFP, but is soldered to the host board as a through-hole device rather than plugged into an edge-card socket. As a practical matter, however, some networking equipment manufacturers engage in vendor lock-in practices whereby they deliberately break compatibility with "generic" SFPs by adding a check in the device's firmware that will only enable the vendor's own modules (a toy sketch of such a check appears at the end of this article).

Fiber optic transceiver design issues nowadays include the following:

- Mechanical compatibility of connector interfaces;
- Cross-talk between transmitter and receiver electronics;
- Electronic noise (EMI) issues, both with respect to emissions and susceptibility;
- Ease of manufacture (i.e. manufacturability).

With the potential for providing high-speed and broadband characteristics, research and development of technologies related to fiber optic transceivers has grown. As a result of these demands, and of applications that require high-speed data transmission, new issues have arisen related to delay in the electrical connection network and physical wiring space in and between equipment. In order to deal with these issues, standardization activities for optical circuit boards and SC/APC fiber connectors for circuit board attachment have started in TC91 (Electronic assembly technology).
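The vendor check mentioned above is, mechanically, very simple: SFP modules expose identification data in an on-module EEPROM (the common MSA/SFF-8472 convention places the vendor name as ASCII in bytes 20-35 of the 0xA0 identification page), and host firmware can compare that string against a whitelist. The sketch below is a hypothetical illustration of the idea, not any vendor's actual firmware, and the whitelist name is invented.

```python
# Toy sketch of an SFP vendor lock-in check, assuming the common
# MSA/SFF-8472 layout: vendor name is ASCII in bytes 20..35 of the
# 0xA0 identification page. Not any vendor's real firmware.
APPROVED_VENDORS = {"ACME NETWORKS"}  # hypothetical whitelist

def vendor_name(eeprom_a0: bytes) -> str:
    return eeprom_a0[20:36].decode("ascii", errors="replace").strip()

def module_allowed(eeprom_a0: bytes) -> bool:
    return vendor_name(eeprom_a0) in APPROVED_VENDORS

# Fake 256-byte EEPROM image with a vendor name written in.
eeprom = bytearray(256)
eeprom[20:36] = b"ACME NETWORKS".ljust(16)
print(vendor_name(bytes(eeprom)), module_allowed(bytes(eeprom)))
```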
Now that the Curiosity rover has made an action hero's entrance onto the red planet, members of NASA's Jet Propulsion Laboratory (JPL) can breathe a quick sigh of relief. The car-sized rolling laboratory has already transmitted a small but breathtaking collection of images displaying the planet's landscape, and will continue to do so over the course of its mission. Eager earthlings who want a glimpse of what Mars looks like from Curiosity's point of view can turn to the JPL website and spend time looking at pictures captured by the rover's cameras. All this has been made possible by a collection of tools provided by Amazon, which recently published a case study about the Curiosity mission.

NASA and Amazon have an established history when it comes to working on missions to Mars. The cloud provider is handling images transmitted by the Opportunity exploratory rover, which continues to function after eight years of service. Amazon also had a role in handling Web traffic during the new rover's complex landing procedure. In preparation for Curiosity's big debut, NASA asked Amazon to help serve the estimated hundreds of thousands of visitors looking to watch the landing operation. A complex system was devised, incorporating load balancing, traffic monitoring and a method to de-provision resources after the event took place. The system was benchmarked by SOASTA, which verified the stream could handle requests on the order of hundreds of gigabits per second. There was good news all around, as the landing was a success and the stream worked without any noticeable issues.

Now that Curiosity is on the red planet, AWS will process pictures taken by the new inhabitant, making them available to JPL researchers and the public. The workflow is slightly more complicated than sharing a photo from a smartphone. Although the rover has 17 cameras in total, the panoramic pictures are assembled using images gathered by a stereoscopic camera located on its masthead. An Amazon blog entry explains the process:

"In order to produce a finished image, each pair (left and right) of images must be warped to compensate for perspective, then stereo matched to each other, stitched together, and then tiled into a larger panorama."

This process is completed using Amazon's Simple Workflow service and the AWS Flow Framework, producing the graphics available to the public. The service provider says accelerated analysis of these images will lead to better decision-making, ultimately increasing the amount of exploration the new rover will embark upon.
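The quoted steps map naturally onto a sequential task pipeline. The sketch below mimics that structure with placeholder functions; it is an illustration of the described workflow under assumed function names, not JPL's or Amazon's actual code (which runs as distributed tasks on Amazon's workflow service).

```python
# Placeholder pipeline mirroring the steps quoted above:
# warp -> stereo match -> stitch -> tile. All functions are stubs.
def warp(image):          return f"warped({image})"
def stereo_match(l, r):   return f"matched({l},{r})"
def stitch(pair):         return f"stitched({pair})"
def tile(panorama, n=4):  return [f"{panorama}[tile{i}]" for i in range(n)]

def build_panorama(left_frames, right_frames):
    stitched = [
        stitch(stereo_match(warp(l), warp(r)))
        for l, r in zip(left_frames, right_frames)
    ]
    # Tiling lets a large panorama be served piecewise on the web.
    return [tile(s) for s in stitched]

print(build_panorama(["L0", "L1"], ["R0", "R1"]))
```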
In January 2010, a new act was submitted to Congress in the United States of America which, if it becomes law, may have a profound effect on the accessibility of technology devices. The brief description of the Technology Bill of Rights for the Blind Act of 2010 is "To provide for a study and report on access by blind consumers to certain electronic devices and to provide for the establishment of minimum nonvisual access standards for such devices and for the establishment of an office within the Department of Commerce to enforce such standards, and for other purposes."

It appears that the major intent of this act is to ensure that consumer devices and office technology do not rely solely on touchscreens for their operation, the problem being that touchscreen devices by themselves are unusable by people who are blind or have severe vision impairments. At the moment there is a grave danger that the continued move of technology towards touchscreens as the major user interface will greatly reduce the independence of disabled people, in many cases putting up a barrier that did not exist in earlier generations of the technology.

However, on closer reading of the draft act, it would appear that it could be used to influence the design of any user interface. For example, it mentions the need for kiosks to be accessible, and these are normally only a specialised form of web browser. Therefore I think that this new act could be used to ensure the accessibility of any website. The only concern I have with the wording is that it only considers the problems of people with vision impairments and does not consider the accessibility problems of other users with disabilities. Maybe it should be retitled the "Bill of Rights for Accessible Technology".

If the act is passed then, in about three years' time, manufacturers and suppliers will have a legal responsibility to provide user interfaces that are accessible to people with vision impairments. Presumably this will apply to any organisation selling to the American market. The act provides the ability for enforcement via civil penalties by the state or damages claims by individuals.

In the UK we are coming up to elections and party manifestos. Could the groups lobbying for equal rights for people with disabilities persuade the major parties to include a similar provision in their manifestos, both for the benefit of our own citizens and to ensure that our manufacturers are able to sell into the US in the future?
A new programming language developed by researchers at the Massachusetts Institute of Technology (MIT) is claimed to increase the speed of big data processing by up to four times. Called Milk, the language allows application developers to manage memory more efficiently in programs that deal with scattered data points in large data sets. In tests conducted using several common algorithms, programs written in the new language were shown to be four times as fast as those written in existing languages, and the researchers behind the language believe that further work will result in even larger gains.

Milk is intended to remove one of the biggest barriers to successful implementation of big data analytics processes: how efficiently programs gather the relevant data. MIT explained that traditional memory management is based on the "principle of locality": if a program requires a certain piece of data stored in a specific location, it is likely to need the neighbouring data as well, so it fetches this at the same time. However, this assumption no longer applies in the era of big data, where programs frequently require scattered chunks of data stored arbitrarily across huge data sets. Since fetching data from the main memory banks is the major performance bottleneck in today's computer chips, having to do this more frequently can lead to major performance issues.

Vladimir Kiriansky, a PhD student in electrical engineering and computer science and first author on the new paper, explained that returning to the main memory bank for each piece of information is highly inefficient. He said: "It's as if every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge."

The new programming language aims to overcome this limitation through batch processing, by adding a few commands to OpenMP, an extension of languages such as C and Fortran that makes it easier to write code for multicore processors. When using the language, a programmer adds a few lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items; Milk's compiler then figures out how to manage memory accordingly. Using Milk, when a core needs a piece of data, instead of requesting it (and any adjacent data) from main memory, it adds the data item's address to a locally stored list of addresses. When this list is long enough, all the chip's cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. This means that each core requests only data items that it knows it needs and that can be retrieved efficiently.

Matei Zaharia, an assistant professor of computer science at Stanford University, noted that although many of today's applications are highly data-intensive, the gap in performance between memory and CPU means they are not able to fully utilise current hardware. "Milk helps to address this gap by optimising memory access in common programming constructs. The work combines detailed knowledge about the design of memory controllers with knowledge about compilers to implement good optimisations for current hardware," he added.
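The batching idea is easier to see in miniature. The sketch below contrasts a naive scattered gather with a batched one that first collects addresses (here, plain array indices), sorts them so nearby items are visited together, and only then touches the data; this is the same locality trick Milk's compiler applies per core before pooling lists across cores. It is a conceptual illustration in Python, not Milk or OpenMP code.

```python
# Conceptual illustration of Milk-style batching: defer the scattered
# accesses, sort the collected indices so neighbouring items are
# visited together, then perform the accesses in order.
import random

data = list(range(1_000_000))
wanted = [random.randrange(len(data)) for _ in range(10_000)]

def naive_gather(indices):
    # One "trip to the fridge" per item, in arbitrary order.
    return [data[i] for i in indices]

def batched_gather(indices):
    # Collect first, group nearby addresses, then access in order;
    # results are written back in the caller's original order.
    order = sorted(range(len(indices)), key=lambda k: indices[k])
    out = [None] * len(indices)
    for k in order:
        out[k] = data[indices[k]]
    return out

assert naive_gather(wanted) == batched_gather(wanted)
```

In an interpreted language both versions cost about the same; the point is the access pattern, which on real hardware turns scattered cache misses into mostly sequential memory traffic.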
Pecchioni E. (University of Florence), Santo A.P. (University of Florence), Piccini L. (University of Florence), Di Fazio L. (Botanic Garden), and 4 more authors. Sustainability (Switzerland), 2015.

The Vie Cave are a suggestive network of roads deeply entrenched in the rock, dating back to the Etruscan civilization; these ancient roads connect various settlements and necropolises existing mainly in the area of the Sovana, Sorano and Pitigliano towns (Southern Tuscany, Italy). The Vie Cave are located in a peculiar geomorphological site, characterized by the presence of extensive pyroclastic deposits, which have been incised by a parallel network of deep gorges. In this paper, the geomorphological, geological and lithological setting of the Vie Cave area, where several Etruscan archaeological sites are found, is described. The precarious stability of the Vie Cave walls and of the several archaeological structures carved into them, the high grade of decay shown by the constituent materials, together with the dense vegetation that has developed over the rocky scarps, are taken into account with the aim of providing a complete assessment of the conditions in which the site lies. Finally, we propose some targeted actions for the preservation of this territory, with its distinctive morphology, in order to protect the area from the further decay to which it would be subjected if it remained abandoned. © 2015 by the authors.

Nana E.D. (Charles University; International Research and Training Center), Sedlacek O. (Charles University), Bayly N. (SELVA Investigacion Para la Conservacion en El Neotropico), and 6 more authors. Biodiversity and Conservation, 2014.

The Cameroon volcanic line montane forests host specific avian assemblages with many endemic species. Such unique bird assemblages deserve adequate description for proper protection. For this purpose, we sampled birds in the upper montane forests of Mts Cameroon and Oku situated at ~2,250 m. We combined point counts and continuous observations to describe species composition and estimate densities of particular species. In total, we recorded 106 species; 45 only on Mt Oku, 21 only on Mt Cameroon, and 40 common to both mountains. The higher species richness on Mt Oku was due to non-forest species that invade the forest interior due to recent human disturbance. Endemic species of the Cameroon volcanic line and montane non-endemic species had higher abundances than widespread species in general. As a result, we did not find a positive abundance-range-size relationship at either location. Our findings support a previously made observation that montane species of the Cameroon volcanic line have higher densities compared to widespread species. However, we also show that the structures of avian assemblages vary between sites, as species spatial turnover was lower on Mt Cameroon than on Mt Oku and species common to both were more abundant on Mt Cameroon. This could be attributed to the more pristine forest on Mt Cameroon, with higher annual rainfall, but also to lower human impact and the existence of a continuous forest. Conservation action within the broader landscape context is thus necessary to secure diverse montane forests in West-Central Africa in the future. © 2014 Springer Science+Business Media Dordrecht.

Wojtal A.Z. (Polish Academy of Sciences), Ognjanova-Rumenova N. (Bulgarian Academy of Science), Buczko K. (Hungarian Natural History Museum), Siwek J. (Jagiellonian University), and 2 more authors. Phytotaxa, 2015.

Navicula striolata was originally described as N. digitoradiata var. striolata from modern material collected in Sweden. After examination of a sample collected from Belgium, the variety was transferred to N. reinhardtii as N. reinhardti var. gracilior. From that time, a considerable mix-up of these and related taxa has been observed in the literature. A similar species, Navicula rumaniensis, had also been established in 1934 from Neogene Romanian materials, but there has been much confusion regarding the status of these taxa, leading to a poor understanding of their distribution. In this study, type material of Navicula digitoradiata var. striolata, N. reinhardtii var. gracilior and N. rumaniensis is revised using light and scanning electron microscopy in order to clarify their identity and to investigate possible conspecificity. The results indicate that these species are not synonyms. Conspecificity of the modern N. digitoradiata var. striolata and N. reinhardti var. gracilior was confirmed, and lectotypes of both varieties have been designated, whereas N. rumaniensis proved to be a separate species. In addition, the study of Neogene material from Bulgaria revealed the presence of a new Navicula taxon, N. friedelhinziae. The morphology of these and similar taxa is discussed. © 2015 Magnolia Press.

Wetzel C.E. (Luxembourg Institute of Science and Technology), Ector L. (Luxembourg Institute of Science and Technology), Van De Vijver B. (Botanic Garden; University of Antwerp), and 3 more authors. Fottea, 2015.

The identity and nomenclatural history of several small-celled naviculoid taxa are revisited. The species discussed here are important from the ecological point of view since they are often dominant in benthic freshwater communities. The original concepts of several species that have suffered major taxonomic drift due to their entangled nomenclatural history are discussed, and forgotten epithets are resurrected. We examined the original material of Navicula aggerica E. Reichardt, Navicula atomoides Grunow, N. crassulexigua E. Reichardt, N. minima Grunow, N. minima var. typica R. Ross, N. minutissima (Kütz.) Grunow, N. saugerresii Desm., N. seminulum Grunow, N. seminulum var. intermedia Hust., N. seminulum var. radiosa Hust., N. stroemii Hust., N. subbacillum Hust., N. subseminulum Hust., N. tantula Hust., N. vasta Hust., N. ventraloides Hust., Stauroneis fonticola Hust. and Synedra minutissima Kütz. Several of these names were regarded as synonyms in many floristic works and, as such, remained forgotten or ignored. Analyses using light and scanning electron microscopy indicate conspecificity of Navicula minima (= Sellaphora seminulum sensu auct. nonnull.) with Sellaphora saugerresii (Desm.) C.E. Wetzel et D.G. Mann comb. nov., which has priority over N. minima. Synedra minutissima is lectotypified and transferred to Halamphora minutissima (Kütz.) C.E. Wetzel et Compère comb. nov. Navicula minutissima (Kütz.) Grunow 1860, nom. illeg., and Navicula minima Grunow pro parte, typo excl., designate one and the same species (valid and legitimate), currently known as Sellaphora aggerica (E. Reichardt) Falasco et Ector. We consider Sellaphora atomoides (Grunow) C.E. Wetzel et Van de Vijver comb. nov. (= Eolimna tantula sensu auct. nonnull.) and Sellaphora nigri (De Not.) C.E. Wetzel et Ector comb. nov. (= Eolimna minima sensu auct. nonnull.) to be separate species, although morphologically very similar. Sellaphora crassulexigua (E. Reichardt) C.E. Wetzel et Ector comb. nov. and Sellaphora subseminulum (Hust.) C.E. Wetzel comb. nov. are rarely encountered, but usually found in calcareous springs and aerial habitats, respectively. All species are transferred to the genus Sellaphora on the basis of their valve morphology, pending molecular studies confirming the monophyly of the group once living material of each can be located and brought into clonal culture. Additionally, 64 established taxa from Navicula s.l., Eolimna or Naviculadicta are formally transferred to Sellaphora. Navicula subminuscula Manguin is formally transferred to the genus Craticula Grunow. © Czech Phycological Society (2015).

Jager A.K. (Copenhagen University), Stafford G.I. (Botanic Garden). South African Journal of Botany, 2012.

Tulbaghia species are used in traditional medicine in southern Africa. They contain sulphur compounds, which have anti-Candida activity. The sulphur compounds are unstable, so different extraction methods were investigated. Grinding the rhizome material in liquid nitrogen and extraction with ethanol yielded the best results. Eight Tulbaghia species were tested and found to contain the same pattern of sulphur compounds on the TLC plate, though in varying concentrations, except T. simmleri, for which sulphur compounds could not be detected. This means that more species can potentially be utilised for the drug Tulbaghiae rhizoma. A simple quantitative TLC dilution method was developed, which can be used to ascertain whether the rhizome material contains a sufficient level of sulphur compounds. The effect of storage was investigated: the content of sulphur compounds in the rhizomes decreased quickly upon storage, and half of the main compound was lost four weeks after harvest. Possible adulterants for Tulbaghiae rhizoma are Allium sativum and Agapanthus campanulatus. It was not possible to detect adulteration with A. sativum, but a simple TLC test could detect adulteration with 10% A. campanulatus material. © 2012 South African Association of Botanists. Published by Elsevier B.V.
The Met Office, the UK's national weather service, relies on more than 10 million weather observations from sites around the world, a sophisticated atmospheric model and a £30 million IBM supercomputer to generate 3,000 tailored forecasts every day. Thanks to this advanced forecasting system, climate scientists were able to predict the size and path of Monday's St. Jude's Day storm four days before it formed.

The IBM machine, upgraded in 2012 for a total of 1.2 petaflops of processing power, brings a new level of accuracy to weather modeling. A Telegraph article describes how the machine was able to predict the storm based on two areas of turbulent weather over Canada and the United States, which came together over the western Atlantic to form one large low-pressure system. Such depressions are common and generally harmless, but in this situation a fast-moving jet stream carried the weather mass across the Atlantic, where it encountered a band of unusually warm air over Britain. The warm air "energized" the depression and transformed it into the storm that swept across the UK on Monday.

The Met Office supercomputer was able to forecast this storm and many other weather events using data from millions of sources across the globe. These crucial sources include weather stations, satellites, aeroplanes, boats, buoys and Argo floats, which report on water temperature, an important weather variable.

On Sunday, October 27, the Met Office warned that "a major Atlantic storm is set to move across the UK over the next 24 hours, bringing some heavy rain and very strong winds to parts of England and Wales." The office issued Severe Weather Warnings advising of the potential disruption from both the strong winds and the rainfall. While this wasn't a catastrophic weather event, the advance notice gave people the opportunity to take necessary precautions and make arrangements to delay travel.

Met Office spokesperson Dan Williams describes how the forecasting process played out: "The accuracy of [forecasting] these days is enough to pull out factors that could lead to the storm that we saw last night. There were factors that played a role in that; there were two weather systems over the Americas which coalesced in the western Atlantic, forming a fairly innocuous area of low pressure. But then, because we have a particularly strong jet stream at the moment, it rattled across the Atlantic fairly quickly, and it was at that point, just off the South West of the UK, that we could see there was an area of particularly warm air that was going to be there just as that low pressure reached the UK. Those two things combined meant that this low pressure rapidly deepened just off the coast of the UK and energised it, making it much more vigorous and giving it the power it then took across the UK."

The Met Office is slated to receive a new £100 million supercomputer in 2015, a replacement for the current £30 million machine. Scientists and government officials alike say the new equipment is necessary to increase forecast accuracy, especially for reliable seasonal predictions.
Special fiber is constructed with a non-cylindrical core or cladding layer; examples include polarization-maintaining fiber (PM fiber) and fiber designed to suppress whispering-gallery-mode propagation.

The Special Points of PM Fiber

PM fiber is a specialty optical fiber with strong built-in birefringence, preserving the properly oriented linear polarization of an input beam. Polarization-maintaining fiber is a special type of single-mode fiber: normal single-mode fibers carry randomly polarized light, whereas PM fiber is designed to propagate only one polarization of the input light. PM fibers contain a feature not seen in other fiber types: besides the fiber core, there are stress rods in the fiber. The stress rods appear as two circles in Panda PM fiber, an elliptical cladding in elliptical-clad PM fiber, and two bow-ties in Bow-Tie PM fiber.

[Figure: polarization-maintaining PANDA fiber (left) and bow-tie fiber (right).]

Why PM Fibers Are Needed

Optical fibers always exhibit some degree of birefringence, even if they have a circularly symmetric design, because in practice there is always some amount of mechanical stress or another effect which breaks the symmetry. As a consequence, the polarization of light propagating in the fiber gradually changes in an uncontrolled (and wavelength-dependent) way, which also depends on any bending of the fiber and on its temperature. This problem can be fixed by using a polarization-maintaining fiber, which is not a fiber without birefringence but, on the contrary, a specialty fiber with a strong built-in birefringence. Provided that the polarization of light launched into the fiber is aligned with one of the birefringent axes, this polarization state will be preserved even if the fiber is bent.

The physical principle behind this can be understood in terms of coherent mode coupling. The propagation constants of the two polarization modes are different due to the strong birefringence, so the relative phase of the copropagating modes rapidly drifts away. Therefore, any disturbance along the fiber can effectively couple both modes only if it has a significant spatial Fourier component with a wavenumber that matches the difference between the propagation constants of the two polarization modes. If this difference is large enough, the usual disturbances in the fiber vary too slowly to cause effective mode coupling (the beat-length relations at the end of this article quantify this).

A disadvantage of using PM fibers is that an exact alignment of the polarization direction is usually required, which makes production more cumbersome. Propagation losses are also higher than for standard fiber, and not all kinds of fibers are easily obtained in polarization-preserving form. PM fibers should not be confused with single-polarization fibers, which can guide only light with a certain linear polarization. PM fibers are also rarely used for long-distance transmission, because of their high price and higher attenuation compared with standard single-mode fiber cable.

Polarization-maintaining optical fiber (PMF) has become more and more widely applied in optical fiber communication and optical fiber sensing systems because of its strong ability to maintain the linear polarization of light and its good compatibility with ordinary single-mode optical fiber. Much work and significant development has gone into the theoretical design, fabrication technology and parameter measurement of PMF.
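The mode-coupling argument can be stated quantitatively. The relations below are standard textbook formulas for a birefringent fiber; they do not appear in the original text.

```latex
% Textbook relations for a birefringent fiber (not from the article).
% Difference of the propagation constants for modal birefringence B:
\[
\Delta\beta = \beta_x - \beta_y = \frac{2\pi}{\lambda}\,B,
\qquad B = \lvert n_{\mathrm{eff},x} - n_{\mathrm{eff},y} \rvert .
\]
% Beat length: the distance over which the relative phase slips by 2\pi:
\[
L_B = \frac{2\pi}{\Delta\beta} = \frac{\lambda}{B}.
\]
```

A strongly birefringent PM fiber has a large B and hence a short beat length L_B, typically a few millimeters. Ordinary bends and stresses vary over centimeters, so their spatial spectrum has essentially no component at the wavenumber Delta-beta, and they cannot couple the two polarization modes, which is exactly the argument made above.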
BPM 101: Introduction to BPM [Video]

This video is brought to you by BPM Basics and sponsored by Appian.

BPM stands for Business Process Management. BPM can be defined as a management approach to continuously improve processes and achieve organizational objectives through a set of methodologies.

BPM has multiple input types, including people and processes. People can be internal or external and can cross function and department boundaries. A process requires a series of actions to achieve a certain objective. BPM processes are continuous but also allow for ad hoc action. Processes can be simple or complex, based on the number of steps, the number of systems involved, and so on. They can be short- or long-running; longer processes tend to have multiple dependencies and a greater documentation requirement.

BPM has multiple outputs. These include analytics via a dashboard or reports, case information updates, and phone or email alerts. The end goal of the process is to achieve desirable outcomes and bring the process to an end. An IT Request process has been provided as an example of BPM; a minimal sketch of such a process appears below.
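To make the IT Request example concrete, here is a minimal sketch of such a process as a state machine with an alert as its output. The states, the escalation step and the notification hook are hypothetical illustrations, not taken from the video.

```python
# Minimal sketch of an "IT Request" process: a few ordered steps,
# an ad hoc action outside the normal flow, and alerts as outputs.
STEPS = ["submitted", "triaged", "assigned", "resolved", "closed"]

class ITRequest:
    def __init__(self, requester: str, summary: str):
        self.requester, self.summary = requester, summary
        self.state = STEPS[0]
        self.history = [self.state]   # analytics input: audit trail

    def advance(self):
        nxt = STEPS[STEPS.index(self.state) + 1]
        self.state = nxt
        self.history.append(nxt)
        self.alert(f"Request now '{nxt}'")   # output: phone/email alert

    def escalate(self):
        # Ad hoc action the continuous process still allows.
        self.alert("Escalated to on-call engineer")

    def alert(self, msg: str):
        print(f"[notify {self.requester}] {msg}")

req = ITRequest("alice", "VPN access")
for _ in range(4):
    req.advance()
print(req.history)
```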
AT&T Marks the 100th Anniversary of Long-Distance Calling

Today we whip out our smartphones and make long-distance calls like it is not even a big deal. We don't even have to remember phone numbers anymore, because our phones remember them for us. It certainly wasn't as easy, convenient or inexpensive back on Jan. 25, 1915, when the first transcontinental phone call was made across the United States. With that 100-year anniversary here, it's time to acknowledge the significance of that first transcontinental call and talk a bit about how much things have changed, progressed and morphed since those early days.

So just how big a deal was this event back in January 1915? It was huge, according to a Jan. 23 post on the AT&T blog, which celebrates that first transcontinental conversation as "the phone call that changed the world forever." That call connected people thousands of miles apart, showcased a promise of universal communication and spurred a century of innovation shaping the world we live in today, according to the post.

"The call revolutionized communications for years to come," the post said. "It marked the start of AT&T's transcontinental telephone system, which at the time consisted of 130,000 telephone poles and 2,500 tons of copper wire stretching nearly 3,400 miles from New York to San Francisco. Today, AT&T provides service for nearly 119 million Americans."

The call happened on Jan. 25, 1915, leading up to the opening of the Panama-Pacific International Exposition and World's Fair in San Francisco. Making the historic first transcontinental call was inventor Alexander Graham Bell from New York. Also on the call were Theodore Vail, president of the former American Telephone and Telegraph Company (now AT&T); U.S. President Woodrow Wilson from the White House; and Bell's assistant, Thomas Watson, who was in San Francisco, according to the post.

AT&T had stretched telephone lines across the nation in an amazing feat of engineering, and all of the technology that went into the call and the construction of the phone system was state of the art for 1915. It was an incredible time to be alive and to see what telephones would bring to the nation and the world.

Fast forward to today, and it's not much different. In January 2015, we have smartphones with amazing computing power for telephone communications, data sharing, data storage, text communications and so much more. A phone call to just about anywhere around the world can be made from the small smartphone in your pocket or purse. Alexander Graham Bell would love this, I'd say.

"We have come a long way since the first transcontinental call 100 years ago," wrote AT&T in its blog post. "Just think how far we will go in the next century!"

As we acknowledge the 100th anniversary of that first cross-country call, that's an intriguing thing to think about. What will smartphones of the future be like? It will be fascinating to see what's next.
Data Leakage Protection (DLP) is one of the hot new IT security technologies on the market. The point of these tools is to catch sensitive and confidential data before it can be maliciously leaked out of the company. While the tools certainly have value, they are not the be-all and end-all of IT security that vendors purport them to be.

DLP: What It Does, and How It Does It

DLP tools exist in two distinct formats, though both can be offered by the same solution, and both seek to prevent the loss of sensitive or confidential data. The most common format offers protection at the network gateway, while the less common format provides protection at the endpoint. Gateway solutions focus on monitoring and restricting the transmission of electronic communications, while endpoint solutions limit the transfer of data to devices such as USB drives and to hardcopy printouts. Naturally, endpoint solutions require the installation of an agent on the device, while gateway solutions don't.
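At its simplest, a gateway DLP check is pattern matching over outbound content. The sketch below flags strings shaped like US Social Security numbers and 16-digit card numbers; real products layer on document fingerprinting, dictionaries and context analysis, so treat this as a toy illustration only, with invented rule names.

```python
import re

# Toy gateway-style DLP patterns; real products use far richer
# detection (document fingerprints, dictionaries, ML, context).
PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def inspect(message: str):
    """Return the list of rule names an outbound message trips."""
    return [name for name, rx in PATTERNS.items() if rx.search(message)]

outbound = "Invoice attached. Card 4111 1111 1111 1111, SSN 123-45-6789."
hits = inspect(outbound)
if hits:
    print(f"BLOCKED: matched {hits}")  # a gateway would quarantine/alert
else:
    print("allowed")
```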
A student at MIT has developed a new smartphone app which shows the potential of Augmented Reality and the Internet of Things, and how mobiles and things could interact. Valentin Heun, a Ph.D. student at the Massachusetts Institute of Technology's Fluid Interfaces Lab, has released Reality Editor, a smartphone app which allows users to pair objects together to create new connections and functionalities.

Heun told The Kernel: "The Reality Editor is like a digital multitool. It allows you to learn easier about how an object works."

Using the app, you could turn a light on or off by turning a doorknob. This could be done by pointing your smartphone camera at a doorknob and using the app to drag the functionality to a light switch. The app uses the Internet to communicate between smart objects, with data communication controlled by the user and organised by the object, which sends identifying information and network IP addresses to the Reality Editor app itself, not a cloud; this creates private networks between objects.

The fact that the app doesn't use a cloud system sets it apart from much of the Internet of Things. This has its benefits, as the app won't stop working when internet failures prevent it from reaching a cloud service.

Heun called his app a "Web browser for the physical space" and said: "we found a novel method that allows you to build simple HTML web pages and augment them on to an object. This means that every visual representation that you see in the user interface is actually a webpage and can be designed by every web designer."

This open platform makes it easier to get devices online; FingerPrints stickers (similar to QR codes) make it easy for Reality Editor to add new items. The Open Hybrid platform and Reality Editor app impose fewer limitations than standard IoT products. Another limitation faced by standard IoT products is that devices made by different manufacturers often can't communicate with each other. Heun's model does not "represent a pyramid model where only one silo can be the winner. It's a model where everyone can be involved to build the products they want."
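Conceptually, the doorknob-to-light pairing is just routing one object's output event to another object's input over the local network. The sketch below imitates that idea with in-process objects; the class names and connection model are hypothetical illustrations, not Open Hybrid's actual API.

```python
# Toy model of pairing a doorknob's output to a light's input,
# imitating the drag-to-connect idea. Not the Open Hybrid API.
class SmartObject:
    def __init__(self, name):
        self.name, self.links = name, []

    def connect(self, target):
        # What the editor's drag gesture would create.
        self.links.append(target)

    def emit(self, value):
        # The object publishes its state to every paired object.
        for target in self.links:
            target.receive(self.name, value)

class Light(SmartObject):
    def receive(self, source, value):
        state = "on" if value else "off"
        print(f"{self.name}: turned {state} (signal from {source})")

doorknob, lamp = SmartObject("doorknob"), Light("hall-lamp")
doorknob.connect(lamp)   # user pairs the two in the editor
doorknob.emit(True)      # turning the knob switches the light on
```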
An ultra-long-distance phone call recently connected the depths of the ocean with the heights of Earth's orbit. On Jan. 26, 2007, astronaut Sunita Williams, orbiting more than 200 miles above the Earth in the International Space Station, chatted with Tim Shank, a marine biologist conducting research 2 miles undersea in the Alvin submersible.

Both explorers work in small, confined spaces, looking out onto vast expanses. Shank has no contact with sunlight, buried under a blanket of perpetual darkness, while Williams floats in darkness but sees the sun rise 15 times a day. Shank and Williams compared notes on life, science and exploration.

-- The Woods Hole Oceanographic Institution and NASA
<urn:uuid:6e70ff0e-1ea5-48f7-a2f9-46270091b124>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Keeping-in-Touch.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00366-ip-10-171-10-70.ec2.internal.warc.gz
en
0.825756
141
3.15625
3
Sony PlayStation 3: It's not all fun and games
By William Jackson - Dec 06, 2007

Studying gravitational radiation emitted by very massive black holes as they engulf stars takes a lot of computing power. Physics professor Gaurav Khanna at the University of Massachusetts Dartmouth has turned to Sony's popular PlayStation 3 gaming console to build his own supercomputer to help with the work. Khanna and Glen Volkema of the physics department at UMass Dartmouth have built a cluster of eight of the consoles running Linux in parallel in Khanna's laboratory. 'My cluster is running code as fast as running 200 processors on a supercomputer,' Khanna said. The total cost of the homebuilt system would be under $4,000 retail, about the cost of running one large job on a supercomputer. However, most of the hardware for this project was donated. Khanna is using the system to model the 'ripples in space and time' produced by black holes at the centers of galaxies in support of NASA's Laser Interferometer Space Antenna. LISA is being built in cooperation with the European Space Agency and will be launched in 2015 to detect gravitational waves, such as those produced by the death spirals of stars. Khanna is helping to determine what LISA will search for. Khanna was attracted to PlayStation 3 by several unique features. 'One cool thing about it is that Sony made PS3 an open platform,' he said. 'That opened the possibility of using it for things beyond gaming.' The second advantage is the Cell processor developed by Sony, IBM and Toshiba, which has 'great potential in raw computing power,' he said. Each PS3 processor is equal to about 25 processors in a traditional supercomputer. Khanna began working on the project about a year ago. 'I was lucky that my wife was able to get one for me last Christmas,' when they were in short supply, he said. When he got it, he was, well, like a kid on Christmas morning. 'I was so excited about it. As soon as I got it I started taking it apart and putting Linux on it.' This did not thrill his children, who wanted to play games on it, but 'now they have their own.' And Khanna has eight of his own, most of them donated by Sony. He has received permission to spend department grant money on additional consoles and has six more on order. 'My goal is to have 16.' Khanna's in-house system will not eliminate his need for time on more traditional supercomputers. Last year Khanna was awarded a National Science Foundation grant for 30,000 hours of time on its TeraGrid computing infrastructure for his work on black holes and gravitational waves. However, using the time on a supercomputer means putting your job in a queue and waiting until the necessary processors are available to run it. That can mean waiting days for a job that might take a few hours to run. 'It's very painful,' he said. 'The infrastructure hasn't kept up with demand,' he said. 'There is never enough computing time. So it's not that I won't be using my supercomputer time any more,' but he will not be as dependent on it. Loading Linux onto the consoles and hooking them together to work in parallel was easier than he expected, Khanna said. 'I thought it was going to take some hacking, but it was fairly straightforward,' he said. 'Sony tells you how to do it.' The tricky part was rewriting his code to take advantage of the new Cell processor. 'I had to do a lot of learning. That took several months of work.'
Cell processors are good for supercomputing, and Los Alamos National Laboratory will be using them in a new petascale supercomputer it expects to have working next year. But inexpensive clusters of PS3s will not do away with the need for true supercomputers, Khanna said. The fundamental limitation of the gaming console is its relatively limited memory, with only 512MB of RAM per console, 'and it's not expandable,' he said. 'My problems are not heavy on memory usage,' he said. 'It's just heavy on raw calculation.' But if a job requires lots of memory, the PlayStation 3 probably won't do the trick. William Jackson is a Maryland-based freelance writer.
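The style of workload the article describes splits a calculation across nodes and combines the pieces. Below is a generic message-passing sketch using Python's mpi4py; it only illustrates the pattern. Khanna's actual solver was hand-tuned code for the Cell processor, not this.

```python
# Run with, e.g.: mpiexec -n 8 python cluster_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each node numerically integrates its own slice of [0, 1]; this stands
# in for one console's share of a much heavier physics calculation.
points = 1_000_000
x = np.linspace(rank / size, (rank + 1) / size, points)
local = np.trapz(np.sin(np.pi * x), x)

# Combine the partial results on the root node.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"integral ~= {total:.6f}, exact = {2 / np.pi:.6f}")
```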
<urn:uuid:0bf6f48c-da8a-4a15-8618-77c3a4f20dad>
CC-MAIN-2017-04
https://gcn.com/articles/2007/12/06/sony-playstation-3-its-not-all-fun-and-games.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00092-ip-10-171-10-70.ec2.internal.warc.gz
en
0.976181
897
2.59375
3
An XSS Channel is an interactive communication channel between two systems which is opened by an XSS attack. At a technical level, it is a type of AJAX application which can obtain commands, send responses back, and is able to talk cross-domain. The XSS Shell is a tool that can be used to set up an XSS Channel between a victim and an attacker, so that the attacker can control the victim's browser by sending it commands. This communication is bi-directional.
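To make the channel mechanics concrete for defenders studying this traffic pattern, here is a deliberately minimal Python sketch of the server-side endpoint such a channel polls: one route hands out queued commands and another collects responses, which is what makes the channel bi-directional. It is a conceptual illustration only, not the XSS Shell implementation.

```python
# Conceptual sketch of an XSS-channel command endpoint (for studying the
# polling traffic pattern defensively; not the actual XSS Shell tool).
from http.server import BaseHTTPRequestHandler, HTTPServer

COMMANDS = ["collect page title"]   # work queued for the hijacked browser
RESULTS = []                        # whatever the injected script sends back

class Channel(BaseHTTPRequestHandler):
    def do_GET(self):               # the injected script polls for commands
        body = (COMMANDS.pop(0) if COMMANDS else "").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):              # ...and posts its responses back
        length = int(self.headers.get("Content-Length", 0))
        RESULTS.append(self.rfile.read(length).decode())
        self.send_response(204)
        self.end_headers()

HTTPServer(("127.0.0.1", 8000), Channel).serve_forever()
```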
<urn:uuid:f31f87db-5030-48cd-8933-6985f213c875>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/07/11/tunnelling-http-traffic-through-xss-channels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941625
109
2.734375
3
I was recently reminded by a couple of security researchers that SSL provides privacy, integrity and authenticity. This isn't something they just thought of. This is documented from the beginning of SSL deployment and referred to in an April 1995 IETF meeting. These characteristics are as follows:

- Privacy – Data is encrypted for an intended recipient.
- Integrity – Provisions in the protocol do not allow the contents to be intentionally or unintentionally modified.
- Authenticity – Digital certificates are used to provide a verified identity. The certificates are signed by a trusted third party (TTP), a certification authority (CA), whose signature attests that the identity has been validated. The browser verifies the identity assertion from the CA. Authenticity is provided for the server and can be provided for the client.

For me, the bottom line is SSL delivers trust. The trust SSL provides is continually being strengthened. This is accomplished by browser manufacturers, CAs, security researchers and large certificate subscribers working together to identify issues and implement solutions. Here are some items being addressed:

- CAs can provide trust to all domains. More than one CA can provide trust to the same domain. There is no registry or control, and some CAs may issue to a domain when they are unauthorized. There are two mechanisms being put into place to help control the issue. Certification Authority Authorization (CAA) will be used to allow a CA to determine whether or not they have authorization to issue for a domain. Certificate Transparency (CT) will be implemented to allow website operators to see if an unauthorized certificate has been issued to one of their domains and to provide a trust dialogue to users when a certificate has been mis-issued.
- In the past, certificate management and issuance standards have not been defined. CAs have used different methods and procedures to provide certificates at nominally the same level of security. The CA/Browser Forum has developed a standard for EV certificates and minimum baseline requirements for all publicly-trusted SSL certificates. These standards provide a level at which all CAs can base their procedures.
- Browsers provide security functions differently. Users do not get the same look and feel for security in each browser. Hopefully, the IETF Web PKI working group will document the problems so that steps can be taken to resolve them.
- Certificate subscribers make many deployment mistakes, which reduces the security that SSL can bring. Website operators can check their sites to see how they hold up to deployment standards as described by Qualys SSL Labs. They can also review the most popular SSL deployment mistakes and take action.
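As a small demonstration of the three properties in practice, the Python sketch below opens a TLS connection with the standard library's default settings, which encrypt the session (privacy), protect each record with a MAC (integrity), and verify the server's CA-signed certificate and hostname (authenticity). The hostname is just a placeholder.

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder hostname

# The default context enables certificate and hostname verification
# (authenticity); TLS itself supplies encryption (privacy) and
# per-record integrity protection.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version(), tls.cipher())
        print("server subject:", tls.getpeercert()["subject"])
```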
<urn:uuid:45d6a8f2-510f-4eab-af4f-ca8d3ac1a035>
CC-MAIN-2017-04
https://www.entrust.com/ssl-privacy-integrity-authenticity/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00238-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94884
528
2.828125
3
Grassland restoration with sowing of low-diversity seed mixtures in former sunflower and cereal fields [Gyeprekonstrukció napraforgó- és gabonatáblák helyén alacsony diverzitású magkeverék vetésével]
Valko O., DE TTK Okologiai Tanszek | Valko O., Debrecen University | Vida E., DE TTK Okologiai Tanszek | Vida E., Debrecen University | And 11 more authors.
Journal of Landscape Ecology | Year: 2010

Sowing seed mixtures is a useful technique in grassland restoration in former arable fields. We studied the early vegetation dynamics of former croplands (sunflower and cereal fields) sown with low-diversity seed mixtures (composed of 2 or 3 native grass species) in Egyek-Pusztakócs, Hortobágy, East Hungary. In 10 restored fields the percentage cover of vascular plants was recorded in 4 permanent plots per field between 2006 and 2009. Ten aboveground biomass samples per field were collected each June in every year. The target grasslands selected as the baseline vegetation reference were alkali (Achilleo setaceae-Festucetum pseudovinae) and loess grasslands (Salvio nemorosae-Festucetum rupicolae). We addressed three questions: (i) How effective is the sowing of low-diversity seed mixtures in reducing the species richness and diversity of short-lived weedy species? (ii) How fast is the establishment of perennial-grass-dominated vegetation after sowing low-diversity seed mixtures? (iii) How does the sowing of low-diversity seed mixtures influence the short-term biomass dynamics of short-lived species? Weedy species were characteristic in the first year after sowing. In the second and third year their cover and species richness decreased. From the second year onwards the cover of perennial grasses increased. The immigration of species characteristic of the reference grasslands was also detected. However, the noxious perennial weed, Cirsium arvense, was abundant in four sown fields even in the third year. The biomass of sown grasses and the litter increased significantly by the third year after sowing. The biomass of herbs decreased significantly from year to year due to the decline in cover of short-lived weedy species. Our results suggest that sowing low-diversity seed mixtures is effective in the suppression of short-lived weedy species. In the case of Cirsium arvense, further management is needed (e.g. mowing multiple times a year, or early mowing).
<urn:uuid:419ecd5c-bc3f-4475-a6ce-6503094c5435>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/de-ttk-okologiai-tanszek-645265/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00110-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894118
574
2.578125
3
A recent study conducted by the University of Michigan's Transportation Research Institute predicts that passengers in self-driving cars will be more likely to experience motion sickness than in traditional automobiles. Specifically, the study estimates that anywhere between 6% and 10% of passengers would experience some level of motion sickness "often, usually, or always," while as many as 12% would feel moderate or severe motion sickness "at some time." The researchers looked at the factors that contribute to motion sickness – "conflict between vestibular and visual inputs, inability to anticipate the direction of motion, and lack of control over the direction of motion" – and estimated how much more likely passengers would be to experience them while traveling in self-driving cars. They concluded that these factors "are elevated in self-driving vehicles." In self-driving cars, passengers lose control of the vehicle and are therefore less able to anticipate when it might be moving. This might sound no different from riding as a passenger with a friend or a cab driver, but consider that when a driver is planning to switch lanes, he or she usually glances somewhere else, grips the wheel differently, or gives some other subtle hint that the car is about to move. Whether passengers consciously register this or not, they may be preparing for some kind of motion. An increase in other activities while traveling – like reading or watching videos, which more than a third of Americans say they are likely to do, according to the study – isn't likely to help with the issue. An article at Fast Co.Exist pointed out that self-driving car manufacturers could develop solutions to this problem, and that's worth noting. The market is still very young, and cars that present these issues aren't likely to make it to market if 10% of focus-group participants end up getting sick. Either way, just add motion sickness to the long list of obstacles that the self-driving car will have to overcome in order to make it to the roads.
<urn:uuid:8f41cd2a-d306-44ae-a5ad-63f1697f4e8d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2911155/opensource-subnet/study-self-driving-cars-will-make-a-lot-of-people-sick.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974541
403
3.40625
3
Definition: A string matching algorithm that is a variant of the Boyer-Moore algorithm. It uses two consecutive text characters to compute the bad character shift. It is faster when the alphabet or pattern is small, but the skip table grows quickly, slowing the pre-processing phase. R. F. Zhu and T. Takaoka, On improving the average case of the Boyer-Moore string matching algorithm, J. Inform. Process. 10(3):173-177 (1987).
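A minimal Python sketch of the two-character bad-character rule follows. It precomputes the pair-shift table and uses it alone to drive the search; the full Zhu-Takaoka algorithm also combines this with Boyer-Moore's good-suffix shift, which is omitted here for brevity.

```python
def zt_search(text: str, pattern: str):
    """Yield match positions using only the Zhu-Takaoka pair shift."""
    m, n = len(pattern), len(text)
    if m < 2 or n < m:
        return

    # Pair (a, b) maps to the shift that aligns its rightmost occurrence
    # inside the pattern with the last two characters of the window.
    pair_shift = {}
    for i in range(1, m - 1):
        pair_shift[(pattern[i - 1], pattern[i])] = m - 1 - i

    def shift(a: str, b: str) -> int:
        if (a, b) in pair_shift:
            return pair_shift[(a, b)]
        if b == pattern[0]:       # b could still start a match
            return m - 1
        return m                  # neither character helps; skip the window

    j = 0
    while j <= n - m:
        if text[j:j + m] == pattern:
            yield j
        j += shift(text[j + m - 2], text[j + m - 1])

print(list(zt_search("GCATCGCAGAGAGTATACAGTACG", "GCAGAGAG")))  # [5]
```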
<urn:uuid:fb5467cd-44ad-40f5-8ec5-36ca512d3ebb>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/zhuTakaoka.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz
en
0.801042
222
3.625
4
IBM and Business Analytics IBM's history with analytics solutions goes back to 1890. OhMyGov's Richard Hartman recently examined IBM's longstanding focus on business analytics, performance measurement and government consulting. "IBM's history goes deep into the government, beginning with support of the U.S. Census Bureau's adoption of the Hollerith Punch Card, Tabulating Machine and Sorter in 1890 by inventor Herman Hollerith, a Census Bureau statistician," Hartman writes. "Two of IBM's software products to help implement analytics are Cognos, a performance management solution to improve visibility into and across government agencies, and SPSS, which includes data collection, text and data mining, and advanced statistical analysis and predictive solutions," Hartman notes.
<urn:uuid:f82d3b8b-5652-4024-b7a4-241069d34c16>
CC-MAIN-2017-04
http://www.enterpriseappstoday.com/business-intelligence/article.php/396828/IBM-and-Business-Analytics.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00256-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916766
156
3.125
3
The data center creates a lot of garbage and waste products. From paper to many other products, the data center has to deal with the recycling or safe disposal of trash. Paper printouts, servers and so on are all part of the data center waste management system. The waste management system gets more complex depending on the product that needs to be disposed of. Security of information remains a critical concern for most data center managers. Confidential information can be retrieved through dumpster diving, and going through the discarded material from a data center can yield plenty of information to the trained eye. Paper normally gets recycled, and excess IT equipment goes on to online auction sites or is even bought as scrap by other vendors. Shredding the paper is a good idea before giving it up for recycling. For hardware, policies and operating procedures, such as ensuring drives are zeroed out, are essential steps to take before equipment can be given to scrap dealers.
<urn:uuid:efdf7d41-ac54-4a6b-9c31-b766c7340003>
CC-MAIN-2017-04
http://www.datacenterjournal.com/data-center-waste-management-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00164-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924442
193
2.9375
3
Scientists are working on a prototype system based on integrated circuit 3D technology that they say will help create human tissues for people with congenital defects or serious internal organ damage. Draper Laboratory and the Massachusetts Institute of Technology (MIT) built the prototype using what they called an automated "layer-by-layer" assembly method usually found within the electronics packaging industry to build integrated circuits. Instead of building mobile phones, this technology has been used to stack "porous, flexible, biodegradable elastomer sheets," which the researchers have used to create 3D scaffolds on which tissues can be grown. Such scaffolds guide cells to grow in precise patterns, in the way that highly specialized tissues such as heart and skeletal muscle grow. The scaffolds are flexible enough, the researchers say, to be implanted directly into an injured part of the body in order to guide cellular growth at that site. Having developed their 3D scaffolding technique, the research team was able to grow contractile heart tissue from rat heart cells, the researchers said. Before this work, researchers intent on growing human tissues lacked the ability to precisely control the 3D pore structure of scaffolds in many types of polymers, instead relying on 2-dimensional templates, random 3D pore structures, or amorphous gelatin. While relatively simple organs like bladders can be grown using such methods, for more complex tissues like the heart or the brain a 3D structure to guide specialized cell growth patterns is necessary. "Scaffolds that guide 3-D cellular arrangements can enable the fabrication of tissues large enough to be of clinical relevance, and now we have developed a new tool to help do this," said Lisa Freed, the principal investigator for the project at Draper Laboratory. Draper researchers stated that this work is driven by "the shortage of human tissue in medicine," explaining that this technology could be implemented to facilitate the growth or regrowth of specific tissues in people with congenital defects or traumatic damage to their tissues or organs. The flexible scaffolds could be implanted at the site of the injury to guide cellular growth, afterwards dissolving harmlessly into the body. Biomedical researchers can also take advantage of these scaffolds for purposes including studying tissue development and identifying key cues that prompt a blob of heart cells to grow into a fully functional, beating heart muscle, for example, the researchers stated.
<urn:uuid:4db0c358-508c-4356-ab14-2a80d36bda27>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225006/applications/3d-integrated-circuit-technology-used-to-grow-human-tissue.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00164-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936057
509
3.40625
3
Nearly two months have passed since the public revelation of the Heartbleed bug affecting the widely used open source cryptographic library OpenSSL. The reaction of the security community, software and hardware vendors, website owners and providers of online services was almost immediate and, for the first time ever, even the wider public knew that something was very wrong. But since then, the frenzy has died down a bit, and many now believe that the danger has passed. Not so, says Luis Grangeia, partner and security services manager at SysValue. He has shown that the same technique used to exploit Heartbleed can also be used to target any device running an unpatched version of OpenSSL, and he says the attack is successful against wireless and some wired networks. He dubbed the exploit "Cupid." "Cupid is the name I gave to two source patches that can be applied to the programs 'hostapd' and 'wpa_supplicant' on Linux. These patches modify the programs' behavior to exploit the heartbleed flaw on TLS connections that happen on certain types of password protected wireless networks," he explained in a blog post on Friday. "This is basically the same attack as Heartbleed, based on a malicious heartbeat packet. Like the original attack which happens on regular TLS connections over TCP, both clients and servers can be exploited and memory can be read off processes on both ends of the connection. The difference in this scenario is that the TLS connection is being made over EAP, which is an authentication framework/mechanism used in Wireless networks. It's also used in other situations, including wired networks that use 802.1x Network Authentication and peer to peer connections." "EAP is just a framework used on several authentication mechanisms. The ones that are interesting in this context are: EAP-PEAP, EAP-TLS and EAP-TTLS, which are the ones that use TLS," he concluded. The exploit can be successfully turned against Android devices running 4.1.0 or 4.1.1, Linux systems/devices that still have older OpenSSL libraries, and most corporate managed wireless solutions, as they use EAP-based authentication mechanisms. Most home routers cannot be targeted, as they do not use those particular authentication mechanisms. He also pointed out that previous beliefs that Heartbleed can only be exploited over TCP connections and only after a TLS handshake are false. He published the exploit code and asked researchers to test it against more networks and devices.
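A quick, heuristic first step for administrators is checking which OpenSSL release their software links against. The Heartbleed-affected releases are 1.0.1 through 1.0.1f (fixed in 1.0.1g), a well-documented range; the sketch below checks the build Python itself uses. Note that distributions often backport the fix without changing the version string, so a match means "investigate", not "confirmed vulnerable".

```python
import re
import ssl

# Affected upstream releases: OpenSSL 1.0.1 through 1.0.1f.
AFFECTED = re.compile(r"OpenSSL 1\.0\.1[a-f]?(\s|$)")

print(ssl.OPENSSL_VERSION)
if AFFECTED.match(ssl.OPENSSL_VERSION):
    # Version-string matching is only a heuristic; verify vendor patches.
    print("This OpenSSL release line was affected by Heartbleed; investigate.")
else:
    print("Not in the affected 1.0.1 .. 1.0.1f range.")
```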
<urn:uuid:a78d869c-c523-4310-a02e-ad07fae449b8>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/06/02/cupid-exploits-heartbleed-bug-on-wifi-networks-and-android/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00560-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953187
530
2.578125
3
Over the past five decades, Boeing has been the country’s largest exporter and earner of foreign capital; a major innovator of high-end technologies and a major user of them; the custodian of knowledge about designing and integrating the numberless systems and parts of which a modern airliner is made. No other American company has that knowledge. Elsewhere in the world only Airbus and Russian industry have it. For now, many would say. Boeing has elected to outsource a sizable body of knowledge. The greater part of its new airplane, the 787, is being built elsewhere, with Japan’s three major aircraft companies in the lead role. Boeing’s close and productive ties with them have at times raised questions, although never to the same extent as now. Could these companies use what they have learned from Boeing to build and market their own large commercial aircraft? Has Boeing, in effect, ceased to be a maker of large commercial aircraft? And if so, why? The 787 program is stretching technology to its outer edge, starting with the wing, the smart part of any airframe. In the art of making wings for commercial aircraft, Boeing has had few peers. Now it has outsourced the wing of the 787, the first to be made partly of composite material, to the Japanese “heavies”—Mitsubishi Heavy Industries, Fuji Heavy Industries and Kawasaki Heavy Industries. They are also responsible for a section of the all-composite fuselage. And Boeing has licensed the design and manufacturing technologies involved in composite materials to the Japanese. Alenia of Italy and Vought Aircraft are producing the center and aft fuselage sections, along with the airplane’s horizontal stabilizer—in all, 26 percent of the structure. Boeing itself is supplying roughly 35 percent of it, including the vertical fin, the fixed and movable leading and trailing edges of the wing. Boeing and Airbus have always outsourced subcomponents and systems to suppliers. In the case of the 787, however, Japanese suppliers are acquiring so-called core competences, starting with wing technology and the new lightweight materials. Hence, critics say, Boeing is giving up its competitive edge by outsourcing the major parts of the 787 with the new materials. “The 787 composite wing and fuselage structure are new technologies—untried on this scale even by Boeing,” says Stan Sorscher, a Boeing engineer. “Boeing developed much of the materials, manufacturing processes, tooling, tolerances and allowances, and other design features, which are then transferred to suppliers in Japan, Italy and elsewhere. Over time, institutional learning and forgetting will put the suppliers in control of the critical body of knowledge, and Boeing will steadily lose touch with key technical expertise.” However, outsourcing, as management professes to see it, is unavoidable—the best and perhaps the only means of holding labor and structural costs within financially stable bounds. The talk is misleading, however. There is no evidence that Boeing is saving much money by outsourcing. But the outsourcing does send a message to the unions that Boeing deals with. It says: “If you mess too hard with us, we can always outsource your job to another place.” Also important to Boeing is gaining low-cost financing from the world’s second-biggest economy. Early in the new century, Boeing began reacting to the Airbus surge and its own stumbles in the marketplace by searching, especially in East Asia, for strategic partners. 
In return for aircraft sales in these countries, Boeing would allocate production and design work on its aircraft. For a company as versatile as Boeing and as endowed with exceptional resources, the alternative to making airplanes is assembling them as a systems integrator. And that is what Boeing is doing with the 787. Being a systems integrator means shifting the financial risks to suppliers, especially those who, like the Japanese and Italians, are subsidized by their governments.

Partner or Competitor?

The issue of sharing advanced technology with Japan has been around for roughly a quarter century, and for most of that time its focus lay in Washington. A 1983 Washington Post story cited transfers of military technology that "have enhanced the technical capability of an already expanding Japanese aircraft industry." The current controversy is largely an internal Boeing affair. It pits management against other sectors, especially the engineers. Not surprisingly, they take exception to the outsourcing of core components, since much of the work for which they would have been responsible has gone. That aside, many of them are convinced that the company has lost sight of its larger interests. Their worst-case scenario envisages Japan designing airplanes and outsourcing some or much of the fabrication to Korea and eventually to China. Other equally experienced people are skeptical of this view. "It's hard to become what Boeing is," says a former vice president of GE's aeroengines group. "The Navy once wanted to hang a Pratt [& Whitney] engine on the F-18, which had only GE engines. The Navy wanted to see some competition, and so we were instructed to turn over the blueprints and all relevant documents to Pratt. Pratt tried and could not produce a good enough engine. They had everything that was needed except the know-how." Pierre Chao, one of the most carefully listened to among industry analysts, takes a balanced view. He says Japan, theoretically the biggest of the potential competitive threats, has been constrained for the last two to three decades because Boeing has co-opted its industry. Various parts of the Japanese bureaucracy have wanted to develop an indigenous commercial aircraft product line, but industry has refused to go along because of their lower risk and more lucrative role as subcontractors to Boeing. Until recently, the received wisdom was that Airbus, unlike Boeing, wouldn't outsource core competences. But the overarching importance of the Asian airline market has altered a few basic assumptions. China is a huge market and demands technology in return for granting access to it. Four Chinese manufacturers are making wing components for Airbus aircraft. Six companies are making its landing gear. In June 2006, Airbus disclosed its plan to build an assembly plant for A320s in the coastal city of Tianjin. The same document reported a joint venture between Airbus and China Aviation Industry Corporation I establishing an engineering center adjacent to the offices of Airbus China. In late 2005, Airbus served notices on suppliers that it had tightened its criteria for contractors with whom it works—including a requirement that major suppliers outsource a minimum amount of work to companies in Asian countries such as China and India. The mandate was described as part of a broader global "action plan" in which Airbus is seeking partnerships in China, Russia, and elsewhere, specifically for development and production of the A350.
The Battle for China

In 1992, Boeing's share of the world market for airliners was 70 percent. But with American carriers postponing orders worth many billions of dollars, Boeing had become dependent on sales to foreign airlines, especially in East Asia, where economic growth and power were by then concentrated. In competing with Airbus, Boeing had advantages; it was the dominant supplier of commercial aircraft, and its role as a major defense contractor underlined its political importance in Washington. For China to buy from Boeing might help to promote Chinese access to the huge American market that beckoned. For both Boeing and Airbus, but especially for Boeing, political issues with China would always have to be reckoned with. In the 1990s, the issue was whether to keep extending most-favored-nation, or MFN, treatment to China in trade. Then as now, China was running a heavy trade surplus with the United States, but without MFN the surplus would have shrunk significantly. It was equally true that if China had chosen to buy the majority of its jet airliners from Airbus, the U.S.-China trade deficit would have been very much larger. The Chinese were hardly subtle in their use of aircraft deals as leverage in the campaign for MFN renewal. Late in 1992, United Airlines announced it was postponing delivery of 122 Boeing airliners worth $3.6 billion. On April 7, 1993, United disclosed that it was deferring delivery of another 49 Boeing aircraft worth $2.7 billion. But on April 9, just two days after the second blow from United, China placed an order worth $800 million for 20 Boeing airplanes, all narrow-bodies. And from Beijing there also came a strong hint of a follow-on order for a similar number of the pricier wide-bodies, both 767s and the 777s on which the company thought its future might depend. While sales by Airbus to Chinese airlines have continued to lag behind Boeing's, each of China's chief long-distance carriers is now buying aircraft from both suppliers. Two-thirds of the aircraft delivered to China in 2004 were from Airbus. But in sales of airliners with a hundred seats or more, Boeing had roughly 70 percent of the Chinese market at the end of 2005. In the next twenty years, Chinese airlines are expected to triple their fleets, adding 2,300 aircraft worth nearly $200 billion. The big Chinese airlines now have more to say about aircraft selection and procurement while continuing to need the central government's approval. Gaining official approval depends heavily on external politics. On this point, contacts between China and France are instructive. In January 2004, China's president, Hu Jintao, made a state visit to France during which he was shown a cabin mock-up of Airbus's A380. His host, French president Jacques Chirac, made comments about Taiwan to please Hu. In turn, Hu noted a purchase by China Southern of some A320 family aircraft a few days earlier. The meeting probably led to an order by the Chinese six months later for 20 A330s, Airbus's midsize aircraft. Four months later, Chirac returned the visit. China was expected to announce the purchase of several A380s while he was there. But that didn't happen, probably because France had been unable, or hadn't tried, to lift an arms embargo imposed in 1989 as punishment for the crushing of the democracy movement at Tiananmen Square. In January 2005, China did make the long-awaited purchase of five A380s.
Long-distance air travel had become a more immediate concern for the Chinese, largely because they would be hosting the Summer Olympics in July 2008. Airbus and Boeing did what they could to make the most of the moment. Airbus pushed to expand sales of its entire family, from the A320 at the low end to the superjumbo A380. Boeing competed hard against the A320 with its 737, but its larger effort was directed toward selling 60 of its new 787s. Boeing’s efforts to sell the package were waged in Washington as well as Beijing. The U.S. presidential campaign was under way, and Boeing was trying to conclude the 787 deal before the election. That didn’t happen, but later, Bush and Hu did discuss the 787 package. In January 2005, it was approved. During Bush’s visit to Beijing in November 2005, China signed an agreement to buy 70 Boeing 737s and announced that an order for another 80 would follow. In Japan, Airbus feels that Boeing continues to benefit from intervention on its behalf. “We have discovered that [Mitsubishi Heavy Industries] has on occasion pressured JAL [to buy from Boeing] when its airplane was competing against us,” an Airbus vice president says flatly. His use of the term “its airplane” reflects MHI’s strong involvement with Boeing, along with the heavy subsidies it receives from the Japanese government. Early in 2005, Airbus was holding little more than 1 percent of the Japanese market, and that share was heading toward zero. Reprinted by permission of John Newhouse. Excerpted from Boeing Versus Airbus: The Inside Story of the Greatest International Competition in Business by John Newhouse, 2007. John Newhouse covered foreign policy for The New Yorker, served as assistant director of the U.S. Arms Control and Disarmament Agency and, during the Clinton administration, was senior policy adviser for European Affairs with the U.S. State Department.
<urn:uuid:98af0b36-aab7-4d6c-99ba-219bb80b4286>
CC-MAIN-2017-04
http://www.cio.com/article/2442518/outsourcing/boeing-versus-airbus--flight-risk--outsourcing-challenges.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00312-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975405
2,558
2.796875
3
The concept of data protection has been around for many years, since the UK first implemented a Data Protection Act in 1984. The General Data Protection Regulation is a piece of legislation drawn up by the European Commission to unify data protection within the EU and to govern the export of personal data beyond the EU's boundaries. GDPR is due to come into force across the EU in May 2018 following a two-year transition period. Being a regulation rather than a directive, it doesn't require enabling laws to be passed by member states.

Why data protection matters

Businesses and public sector organisations are collecting more data about us than ever before. Our shopping and surfing habits are constantly analysed in order to target us with appropriate advertising and offers. As the Internet of Things expands, the amount of information collected will continue to grow. With all this information about us being held on computer systems there are, naturally, concerns about how it's used and how safely it is stored. Since much of the information is collected by multi-national enterprises there are worries about where it might end up too.

What will GDPR do?

Because data protection concerns stretch across national boundaries, the introduction of GDPR seeks not just to regulate data within the EU. It seeks to extend EU data protection law to any organisation holding information on EU citizens, even if that organisation is based outside the EU. It sets out a number of principles which are broadly similar to those already enshrined in the UK's Data Protection Act. These are aimed at ensuring that data is gathered for legitimate purposes, that only data needed for those purposes is held, that the data is fairly and lawfully processed, and that it isn't held for longer than necessary. In order to make it easier for overseas companies to comply with the principles, GDPR will also harmonise data protection requirements across the European Union. Penalties of up to four per cent of global turnover can be levied on businesses that fail to comply. Each EU member state will have to set up an Independent Supervisory Authority to investigate complaints and determine penalties. These will be overseen by a European Data Protection Board (EDPB). Businesses will need to be able to demonstrate that they comply with the principles. To do this they'll need to have documentation in place that shows how they're processing data, and they may also need to appoint a data protection officer. GDPR gives individuals a number of rights too. These include rights of access to and rectification of data, a right to restrict processing and a right to data portability. It also imposes a 'right to erasure' which allows data subjects to request that data relating to them is erased on various grounds, including withdrawal of consent and unlawful processing.

What it means for businesses

Under GDPR, organisations will need to set a retention period for stored data and supply contact details for a data protection officer and data controller as appropriate. If these posts don't already exist within the organisation then it may be necessary to create them, though they needn't be full-time roles; they could be carried out as part of another job such as database administrator or IT security. In some circumstances, such as for public authorities and organisations carrying out large-scale monitoring of individuals, you must have a data protection officer to comply with GDPR.
GDPR also requires that privacy settings be set at a high level by default and that data protection be built in to new business processes, so-called 'privacy by design'; the Information Commissioner's Office has guidance for this. One of the most important implications for business is that consent needs to be obtained for collecting data and for the purposes for which it's used. Individuals have the right to withdraw their consent, and data controllers need to be able to prove it's been given. This will probably mean requiring users to complete a check box when installing software or signing up for a website. When any new technology is introduced, businesses will need to carry out a Data Protection Impact Assessment (DPIA). This should set out a description of the processing, provide an assessment of risk and set out the measures in place to mitigate this. Similarly, businesses need to be geared up to cope with the right to erasure. There are some circumstances in which businesses can refuse to comply with this, such as where data is held to meet a legal obligation. There are extra requirements if holding and processing data relating to children. These surround consent and whether it's required from a parent or guardian, for example. Companies with fewer than 250 employees are required to maintain records only of activities related to higher-risk processing. This includes processing personal data that could result in a risk to the rights and freedoms of an individual, CCTV footage of public areas for example, or processing special categories of data, such as those relating to criminal convictions and offences. If businesses are already complying with domestic data protection legislation, then it's unlikely that the introduction of GDPR is going to have a major impact on them. You do need to be aware of it though, and it may be seen as a good opportunity to review not only procedures but also what data is being held, how it's used and whether it's really needed. Whilst June 2016's Brexit vote will undoubtedly have some impact on how all this works in the UK, the fact that GDPR applies to all companies holding data on EU citizens means that many UK businesses that trade with Europe will still need to comply with its rules even after we leave the EU. You can't afford to ignore it.
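As a toy illustration of enforcing a retention period in code, the Python sketch below purges records older than a hypothetical two-year limit. The retention length and record layout are invented for the example; real obligations depend on the purpose for which the data was collected.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=2 * 365)  # hypothetical two-year retention period

records = [
    {"id": 1, "collected": datetime(2015, 6, 1, tzinfo=timezone.utc)},
    {"id": 2, "collected": datetime.now(timezone.utc)},
]

cutoff = datetime.now(timezone.utc) - RETENTION
expired = [r for r in records if r["collected"] < cutoff]
records = [r for r in records if r["collected"] >= cutoff]

# Erase (and log the erasure of) anything past its retention period.
print("erase record ids:", [r["id"] for r in expired])
```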
<urn:uuid:483d318a-faa6-456f-a789-7fdb5c44527b>
CC-MAIN-2017-04
http://www.itproportal.com/features/general-data-protection-regulation-gdpr-what-businesses-need-to-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00522-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942774
1,150
3.109375
3
Interesting post today over at the Whitehouse.gov blog by Vivek Kundra, the U.S. Chief Information Officer. The post describes both the rationale for cloud computing and what it is, at least as the US Government sees it. Here are a couple of the more interesting parts. "For those of you not familiar with cloud computing, here is a brief explanation. There was a time when every household, town, or village had its own water well. Today, shared public utilities give us access to clean water by simply turning on the tap. Cloud computing works a lot like our shared public utilities. However, instead of water coming from a tap, users access computing power from a pool of shared resources. Just like the tap in your kitchen, cloud computing services can be turned on or off as needed, and, when the tap isn't on, not only can the water be used by someone else, but you aren't paying for resources that you don't use. Cloud computing is a new model for delivering computing resources – such as networks, servers, storage, or software applications." The post clearly outlines the use of Cloud Computing within the US federal government's IT strategy. "The Obama Administration is committed to leveraging the power of cloud computing to help close the technology gap and deliver for the American people. I am hopeful that the Recovery Board's move to the cloud will serve as a model for making government's use of technology smarter, better, and faster." Read the rest at the Whitehouse.gov blog.
<urn:uuid:d12c7a47-951f-489a-9bb7-3234ffdc4652>
CC-MAIN-2017-04
http://www.elasticvapor.com/2010/05/white-house-further-outlines-federal.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00459-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949991
315
2.5625
3
Status: "Ready for mass consumption," but set for regular, consistent iterations. A by-product of Big Data, Big History is the study of time going all the way back to the Big Bang. Using a scale that begins 13.7 billion years ago, ChronoZoom breaks up history by thresholds that represent moments in time. Threshold 1, the Big Bang, is detailed with documents, images, and videos, known as artifacts, as well as a bibliography so the source of that information can be traced. Customizable narrations are especially useful add-ons for those in education or academia, allowing professors to provide their own interpretation of a window in time.
<urn:uuid:8ac12774-924c-4f3e-8f44-e3402a2f35ac>
CC-MAIN-2017-04
http://www.cio.com/article/2368441/project-management/10-cool-projects-from-microsoft-s-research-arm.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942641
141
3.046875
3
Sudheesh Nair is President of Nutanix. Recently, I went to New York and had to get a taxi at LaGuardia Airport, an arguably antiquated airport. To do so, you must stand in line and wait for an attendant to fill out a ticket with the appropriate zone for your final destination. When your taxi drives up, this person hands the ticket to the taxi driver and you can then go on your way. As I waited for my taxi, I couldn't help but see this cumbersome process as a classic queuing protocol. While the process could easily be automated, that inevitable change would leave the attendant without a job. For better or for worse, this is the fate of many jobs as artificial intelligence catches up with human intelligence — and that day is not as far off as we may think. Artificial intelligence is no longer relegated to the fictional imaginings of such worlds as "Blade Runner," "The Matrix" or "Star Wars." Computers no longer take up entire rooms, take days to crunch specific queries, or sit within reach of only a select few. Now they're in our pockets, on our wrists, in our refrigerators and even walking around with us in our shoes. But even with this advancement, we are nowhere near the potential that the merger of computer and human intelligence has promised for so long. Human intelligence still supersedes computers in many ways, such as in common sense, empathy and critical thinking. However, the day is not too far off when artificial intelligence will change life as we know it. In this second machine age, technology will fit seamlessly into our lives, bringing remarkable benefits to consumers everywhere. Underlying hardware will act as an invisible infrastructure as intuitive software interfaces and automation do most of the work. Take, for example, the transportation industry. Right now it hinges on human input to drive cars and buses, conduct trains, and fly planes. We are already seeing the potential of AI in revolutionizing how we get from place to place, as driverless cars are on the cusp of coming to market. By removing human error and the cost that employment incurs, transportation will become more affordable and will even, for many people, eliminate the need to own a car — one of the most underutilized resources we currently depend on. This will reduce waste, clean up the air and give us more room for development in urban areas. These consumer benefits also extend to our health care. It doesn't seem too far off to imagine a time when a computer could make the simple diagnoses that account for a vast majority of issues presented to doctors, making medical care more affordable, more accurate and more widely available. Medical professionals would then be freed up to focus their attention on unusual or complicated cases that require their human adaptability, creativity and innovation. They would also be able to focus their attention on advancements in other areas, such as the individualized care that human genome sequencing promises. Of course, with all of the automation AI will bring us, it will eliminate many of the jobs currently undertaken by humans. Machines will be able to drive for us, diagnose us, run our routine IT support, give us financial guidance and hail us a cab at the airport. While this may seem like the end of days for employment, just remember that we have been through these shifts throughout time, even as recently as the Industrial Revolution. Advancements in technology have often improved quality of life at the expense of mundane tasks.
Look no further than the Gutenberg press. While it put many a monk out of work, it also made books available to the masses, improving the lives of millions of people through accessible knowledge — much like the Internet has done today. This massive shift we are facing leaves us with innumerable questions and complications. What kinds of jobs will people have? How will we distribute wealth? What possibilities lie in front of us? For years, people have been clamoring for more education to stay ahead of the changes. While this is important, education is not enough. Computers can access more knowledge than is even fathomable to our limited minds. What we must do is adapt to the coming tide of change. We can do this by leaning in to what makes us innately human: our ability to improvise, to innovate and to inspire. Machines cannot yet replace these qualities, and they also cannot make the genuine human connections that we all thrive and depend on. The key is to modify our approach to the way we live with technology so people like the taxi ticket attendant will be able to ride the wave and not get pulled under by the riptide. Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
<urn:uuid:a97d8d17-3d0e-4962-b393-60216496b17f>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2016/03/07/second-machine-age-future-artificial-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00303-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959201
995
2.5625
3
Ebola is Infecting Computers; How to Protect Yours

No, your computer can't catch the actual Ebola virus… it's not even airborne yet. However, we are finding that criminals are taking advantage of the hype, fear, and curiosity over Ebola to infect people's computers more easily. This is commonly being done via email. There are four prevalent types of email going around now that are meant to infect your computer:

- A fake report on the Ebola virus — when you click the link to read more, your Windows machine is infected with a virus that can collect and steal your personal information.
- A fake email from a telecommunications provider that contains an important "Ebola Presentation" for you to download and view. If you do, you install malware that can allow others to remotely control your computer, access your web cam, log what you type, etc.
- Fake emails talking about an "Ebola Cure" which contain a malware attachment and ask you to forward the news on to your friends. The malware records your keystrokes and downloads additional malware on to your computer.
- Fake emails about Ebola news and lists of "precautions".

There are many other types of attacks and attack vectors that are being and can be exploited. We will go over many of these below, along with how to protect yourself from them. You should be very wary of any email received about Ebola, even if it appears to be from a friend. You should be especially wary of opening any attachments sent through email, unless you have good confidence that they are malware-free.

Common Email Attack Vectors

Criminals capitalizing on the Ebola epidemic, and those trying to use email in general to attack the unwary, use a wide range of tactics to induce you to infect your computers. Beyond being completely paranoid and "not trusting anyone or any attachment" and "never clicking on links", there are things you can do to protect yourself so that you can use email safely and effectively. These all come back to having very effective spam and virus filtering on your inbound email. Preferably, filtering that happens server-side, automatically, before the messages ever arrive to your computer. Here are some of the attack vectors used, and the kinds of filtering that can block them. We recommend that you review your spam and virus filtering service and make sure that it protects you from all of these vectors. If it does not, you may want to consider improving your level of protection.

By far the most common vector for malware is to attach it to the email message and to somehow induce you to open it via the text of the message: making you want to open it, or making you trust that the sender is someone you know and thus that the attachment is "OK". Every virus-filtering product will scan email attachments and block ones deemed malicious. You should make sure that your system updates its "definitions" in near real time. If you only get updates to the definitions of what new "bad files" look like once a day or once a week, then you are more and more vulnerable to the latest attacks.

Zipped File Attachments

Because all virus scanners are known to scan attachments, many criminals send the malware attached as a "zip file" or other compressed file. This allows their virus-laden messages to get past scanners that cannot open and scan compressed attachments.
Make sure that your virus scanner looks inside compressed attachments.

Encrypted ZIP File Attachments

No virus scanner can scan inside an encrypted ZIP file attachment; but then, most people don't send these on a normal basis. If you don't, you should have your virus scanner automatically block them, as they can be easy vehicles for malware… ones that you can't check until the file is opened on your computer.

Common Phishing Attacks

Email messages that are forged to appear to be from a reputable company but which seek to get you to do something that will put you at risk are called "phishing" attacks. These are incredibly common and sent out in bulk like spam. They are detected early on and rules can be made to block these kinds of malicious email messages. Be sure that your spam and virus filter can block phishing email messages.

Messages that do not include attachments often try to induce you to click on a link that can result in an infection of your computer. Your spam and virus filter can (and should):

- Allow you to block links in email messages (coarse and annoying), or better:
- Scan the pages that the links go to and block ones that go to malicious pages, or best:
- Scan the pages that the links go to when you click them and block the page if it is malicious at that time.

Option #3 is best, as the most advanced criminals send the messages with links that point to normal, benign pages. Then, after the messages have been successfully scanned and delivered, they update those pages to include malware. If your AV scanner can check the page when you actually click on the link, you have the best level of protection (beyond not clicking).

DKIM and SPF

As the majority of malicious email uses forged email addresses as the senders (e.g. pretending to come from your friend or co-worker), your spam filter must support these technologies to help detect whether a message is fraudulent or not. Does your current spam and virus solution protect you on all of these fronts? Is it updated in near-real time? Does it filter messages before they arrive on your computer? If the answer is no to any of these questions, it may be time to re-evaluate your filtering solution and get a better one.
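A filter can at least flag archives it cannot fully inspect. Below is a minimal Python sketch of that idea; the file-type list and the quarantine policy are illustrative assumptions, not a complete scanning engine.

```python
import zipfile

SUSPECT_EXTENSIONS = (".exe", ".scr", ".js", ".vbs")  # illustrative list

def risky_zip(path: str) -> list[str]:
    """Return reasons a ZIP attachment deserves quarantine, if any."""
    reasons = []
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            if info.flag_bits & 0x1:  # bit 0 set = entry is encrypted
                reasons.append(f"{info.filename}: encrypted, cannot be scanned")
            elif info.filename.lower().endswith(SUSPECT_EXTENSIONS):
                reasons.append(f"{info.filename}: executable content")
            elif info.filename.lower().endswith(".zip"):
                reasons.append(f"{info.filename}: nested archive")
    return reasons

# Example policy: quarantine anything with at least one reason.
# print(risky_zip("attachment.zip"))
```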
<urn:uuid:0a45a83f-2645-4575-bca9-23f2f331cd43>
CC-MAIN-2017-04
https://luxsci.com/blog/ebola-is-infecting-computers-how-to-protect-yours.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00303-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938154
1,195
2.859375
3
[Chart: 16 states use e-voting without paper records. Source: Verified Voting Foundation/Rutgers School of Law/Common Cause Education Fund]

On Tuesday, like lots of other folks, I'll be heading to the polls to vote. I live in Massachusetts, where voting is done by paper ballot. You get a ballot on heavy stock paper, indicate your vote by filling in the appropriate ovals with a marker, and the ballot gets read and counted by an optical scanner. Every time I vote, I'm taken back to my elementary school days in the late 1970s in Pittsburgh: filling out my ballot is just like filling out a standardized test form 35 years ago. Why is it that, in a time when I can pay for my morning coffee using my phone, we still use this old-school approach to voting? Surely, using a more up-to-date technology would be a better way to go, right? Turns out, not necessarily; in fact, it's hard to beat a good old paper ballot. These days, almost all voting in the United States is done via one of two methods: paper ballots that are usually optically scanned (although some hand counting is still done) or Direct Recording Electronic voting (DRE). DRE voting machines are, basically, computers that present the ballot on a screen; voters indicate their choices by pushing a button or a touchscreen. The vote is processed and counted by the computer and there you go. DRE voting machines have some obvious problems that paper ballots don't. First of all, they're computer systems (hardware), which can break down or malfunction. Secondly, the software involved can have bugs or be hacked. Thirdly, these machines can also be the subjects of man-in-the-middle attacks, which can be used to alter vote counts. Finally, the software is usually proprietary and protected by trade-secret laws, meaning it often can't be evaluated to ensure it's working properly.
Machine learning is programmatic at heart. It typically starts with a large amount of data that has been organized by people, such as tagged photos. The program analyzes the data closely and sniffs out patterns that correlate with the human-made labels; after it does this, it tests those patterns on different data sets (a minimal sketch of this train-then-test pattern appears at the end of this section). Most people's interactions with machine learning so far involve things like speech and facial recognition, as when Facebook suggests whom you can tag in a photo. Machine learning powers a surprising amount of today's world, and it is becoming an increasingly important part of the digital economy, together with service assurance.

A Huffington Post article points out that as machine learning grows, so does reliance on the IT infrastructure. Service disruption simply can't be tolerated in work or personal activities. Businesses and their leaders must manage this digital transformation so as to reduce risk while enabling innovation. It is essential to note that algorithms and data aren't created equal when it comes to their direct impact on people's personal and work lives.

Algorithms Already Replace People in Some Sectors
We ask ourselves whether algorithms and data should replace people, but in many cases they already do. For example, Computerized Numerical Control (CNC) machines have been around a long time, with software combining with robotics to drive drills, cutters and other tools. In fact, algorithms and data have had an enormous impact on manufacturing in recent decades, with millions of jobs lost to automation over the last 30 years. At the same time, however, manufacturing output has risen steadily; in the US, it has more than doubled during that same 30-year period. Today, algorithms have the potential to automate more than just machine-based decision-making. They are being harnessed to analyze data in fields like law and medicine, and many questions have arisen about what effect they may have on the role of the human lawyer or doctor, and on the cost of services.

What if Fallibility or Bias Are Baked into Algorithms?
The natural fear, when it comes to algorithms powering legal processes or medical diagnoses, is that the people who create those algorithms may inadvertently build their own biases or fallibilities into them. The prospect can be scary in some instances. On the one hand, justice is supposed to be blind, and what is more "blind" than a program designed to analyze data and deliver an answer? The problem is that the legal minds behind the programmers creating the algorithms certainly aren't blind, and while they may strive to be unbiased, it is hard to engineer out human tendencies entirely. Visibility into automated processes is essential. Machine learning is likeliest to take over the predictable tasks within human-driven professions. In other words, it is far more likely that tasks done by legal clerks and lab technicians will be (and in some cases already have been) automated than the work done by, say, judges or physicians.

The Cost of Digital Failure
The risk of depending on algorithms and data to accomplish what used to require a brain and a pair of hands is what happens when something goes wrong. Production snags can be one consequence, but so can lost revenue and a damaged brand image. In 2015, the average cost per hour of application failure was $500,000 to $1 million. And, as Forrester Research pointed out, fixing these problems inefficiently can cost a company $11 million per year. So while algorithms and data can save massively on production costs, application and service performance disruptions can be remarkably expensive too. The IT infrastructure is growing more complex due to greater expectations placed on it (UC, IoT, cloud and security, for example) and other factors, such as deploying new software dozens to thousands of times per day.

Visibility Key to Success
Visibility is of utmost importance in an increasingly algorithm-driven world. Insights gained through machine learning impact our work and personal lives. Just as important, when it comes to the IT infrastructure, traffic data and complementary sources like synthetic transactions and NetFlow enable real-time, actionable insight to assure service delivery. If IT visibility restrictions are removed through smarter data and superior analytics, it becomes possible to accelerate digital transformation. And in this new digital world, where disruptions are very costly, businesses are mandating exceptional service performance by harnessing IP intelligence. To learn more about what enterprises are achieving with service assurance and what NETSCOUT customers are talking about, please visit www.netscout.com/voc.
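As a minimal illustration of the train-then-test pattern described at the top of this section (learn correlations from human-labeled data, then check them on data the model has not seen), here is a sketch using scikit-learn. The tagged-photo scenario is reduced to a small labeled digits dataset, and the model choice is an assumption for illustration, not anything the article prescribes.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Human-organized data: samples that people have already labeled.
X, y = load_digits(return_X_y=True)

# Hold out a separate set so learned patterns are tested on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)  # find patterns that correlate with the labels
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```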
When we talk about what networking is and try to explain, in technical terms, how a network or the Internet itself works, we soon have to mention routing protocols. Routing protocols are among the most important networking protocols when it comes to moving packets from origin to destination, because they are responsible for finding the best path across a multipath network.

A routing protocol (BGP or OSPF, for example) is simply a set of rules, messages and routing algorithms. Its key purpose is to let routers learn routes to networks they are not directly connected to. With a dynamic routing protocol, routers can discover paths to distant networks dynamically, and those networks can be added to a router's routing table automatically.

Two main categories of routing protocols are in use today. Interior Gateway Protocols (IGPs) are suited to a single autonomous system; examples include RIP, IGRP, EIGRP and OSPF. Exterior Gateway Protocols (EGPs) are used between routers in different autonomous systems; BGP is the common routing protocol in the EGP category.

Facts about RIP (Routing Information Protocol)
RIP is described in RFC 1058 (an RFC, or Request for Comments, is a description of a standard for networking protocols). A routing daemon (a program) inserts routes into the system; if there are multiple routes to a destination, the best one is chosen for forwarding. A RIP message contains a command, a version (1 or 2), an address family, a 32-bit IP address, a metric (hop count) and information about up to 25 routes. Message types include requests, replies, requests to send all or part of the routing table, and poll entries. RIP periodically broadcasts its routing table to neighboring routers. Because of certain drawbacks of RIP version 1, such as carrying no subnet addressing information and taking a long time to stabilize after a link failure, RIP version 2 was defined in RFC 1388. Its added fields include a 32-bit subnet mask, the IP address of the next hop and a routing domain identifier.

Facts about OSPF (Open Shortest Path First)
OSPF, a link-state protocol, checks the status of the link to each of its connected neighbors and then distributes the obtained information to them. OSPF can stabilize a route after a link failure much more quickly than systems based on distance-vector protocols. Its features include support for subnet masks, even distribution of traffic over equal-cost routes, the use of multicast, and running directly over IP. Administrators can set costs for specific hops, and adjacent routers exchange information instead of broadcasting to all routers.

Facts about BGP (Border Gateway Protocol)
BGP is described in RFCs 1267, 1268 and 1497, and it runs on top of TCP using port 179. Updates are triggered only when required. Other features of BGP include its path-vector mechanism (a refinement of distance-vector routing), failure detection through periodic keepalive messages, and the exchange of information about reachable networks.

Facts about EIGRP (Enhanced Interior Gateway Routing Protocol)
EIGRP is an improved edition of Cisco's Interior Gateway Routing Protocol (IGRP). It is considered an interior gateway protocol (IGP), though it has also been used for inter-domain routing. An EIGRP router stores information from its neighbors' routing tables. It offers fast convergence, support for variable-length subnet masking and partial updates. EIGRP can use bandwidth, delay, reliability and load as metrics, and its components include neighbor discovery/recovery and a reliable transport protocol.

From the Internet's point of view, packet routing is generally divided into two sets: interior and exterior routing. Routing is performed with the help of an algorithm stored in the router's memory, and there are two main categories of routing algorithm: distance vector and link state. Distance-vector routing determines the best paths without knowledge of the full network topology; a distance value is computed from one or more metrics (a toy distance-vector update is sketched after this section). IP distance-vector routing protocols such as RIP v1, RIP v2 and IGRP are still in use because they are simple and do their job well in small networks that require little management. Link-state protocols, by contrast, can use sophisticated methods that take into account link variables such as bandwidth and reliability.

Facts about IS-IS (Intermediate System to Intermediate System)
IS-IS is a routing protocol that efficiently forwards information within a group of physically connected computers. In a packet-switched network, routing consists of finding the best path for the datagrams. The Inter-Domain Routing Protocol (IDRP) is based on a path-vector routing algorithm; it provides routing for OSI-defined networking environments and is roughly analogous to BGP in TCP/IP networks.
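To make the distance-vector idea concrete, here is a toy sketch of the table-merging step that RIP-style protocols perform: each router repeatedly folds its neighbors' advertised tables into its own, keeping the lowest-cost route it has heard for every destination. The network names, costs and helper function are invented for illustration.

```python
INFINITY = 16  # RIP treats a hop count of 16 as unreachable

def merge_advertisement(table, neighbor, neighbor_table, link_cost=1):
    """Update `table` ({dest: (cost, next_hop)}) with the routes a
    neighbor advertises; return True if anything changed."""
    changed = False
    for dest, (cost, _) in neighbor_table.items():
        candidate = min(cost + link_cost, INFINITY)
        known_cost = table.get(dest, (INFINITY, None))[0]
        if candidate < known_cost:
            table[dest] = (candidate, neighbor)
            changed = True
    return changed

# Router A learns about a network two hops away via router B.
a_table = {"10.0.1.0/24": (0, "direct")}
b_table = {"10.0.1.0/24": (1, "A"), "10.0.2.0/24": (1, "C")}
merge_advertisement(a_table, "B", b_table)
print(a_table)  # {'10.0.1.0/24': (0, 'direct'), '10.0.2.0/24': (2, 'B')}
```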
FiberStore carries many kinds of transceivers, such as SFP+ transceivers, X2 transceivers, XENPAK transceivers, XFP transceivers, SFP transceiver modules (Mini GBIC), GBIC transceivers and so on. But what exactly is a fiber optic transceiver?

A fiber optic transceiver is an Ethernet transmission media conversion unit that interchanges short-distance twisted-pair electrical signals and long-distance optical signals; in many places it is also known as a media converter. These products are generally used in real network environments where Ethernet cable cannot cover the distance and fiber must be used to extend the transmission range; they are usually deployed in the broadband access layer of metropolitan area networks. They also play a large role in connecting the fiber last mile to the metro network and its outer layers.

To ensure full compatibility with network cards, repeaters, hubs, switches and other network equipment from other manufacturers, fiber optic transceiver products must strictly comply with Ethernet standards such as 10Base-T, 100Base-TX, 100Base-FX, IEEE 802.3 and IEEE 802.3u; in addition, their EMC (anti-electromagnetic-radiation) performance should meet FCC Part 15. As the major carriers push to build community networks, campus networks and enterprise networks, fiber optic transceiver products keep improving in order to better serve access network construction.

Most optical transceivers on the market today are pluggable. Here are some common pluggable fiber optic transceiver types:

1. SFP (Small Form-factor Pluggable) transceiver modules
- Gigabit, FE (Fast Ethernet), 155Mb, 622Mb and 2.5G SFP optical modules: small pluggable optical transceiver modules with LC connectors.
- Gigabit BIDI and lower-rate (Mbps) BIDI optical modules: BIDI (bidirectional transmission) optical transceiver modules with LC connectors. BIDI GEPON OLT optical modules use SC connectors.
- Gigabit CWDM optical modules: CWDM (Coarse Wavelength Division Multiplexing) optical transceiver modules with LC connectors.
- Gigabit SFP electrical interface modules: RJ-45 interface.
- Gigabit SFP cables: dedicated to interconnecting devices, hot-swappable.

2. SFP+ (10 Gigabit Small Form-factor Pluggable) transceiver modules
- SFP+ optical modules: 10 Gigabit SFP+ modules with LC interface.
- SFP+ cables: dedicated to interconnecting devices, hot-swappable.

3. GBIC (Gigabit Interface Converter) transceiver modules
- GBIC transceiver modules: hot-pluggable optical transceiver modules with SC interface.
- GBIC electrical interface modules: hot-pluggable, RJ-45 interface.
- GBIC stacking modules: dedicated to interconnecting devices, hot-swappable, HSSDC (High Speed Serial Data Connection) interface.

4. XFP (10 Gigabit Small Form-factor Pluggable) transceiver modules
- XFP modules: small pluggable 10 Gigabit Ethernet optical transceiver modules with LC interface.

5. XENPAK (10 Gigabit Ethernet Transceiver Package) modules
- Optical transponders, hot-swappable, SC interface.
Motion detection is an important element of many, if not most, surveillance systems. It plays a central role in both storage reduction and search time reduction. Storage is routinely reduced by 30% to 80% by using motion-based rather than continuous recording. Likewise, an investigator can often find a relevant event much faster by simply scanning through areas of motion rather than watching through all the video. At the same time, there are a number of challenges associated with using motion detection:
- Scene conditions: The accuracy of motion detection, and how often motion is detected, can vary depending on what is in the scene (people, cars, trees, leaves, etc.) and the time of day (night time with lots of noise; sunrise and sunset with direct sunlight into a camera; etc.).
- Performance of the detector: Motion detection is built into many surveillance products, from DVRs to VMS systems and now IP cameras. As such, how well each one works can vary significantly (a minimal detector sketch follows this summary for reference).

In this report, we share the results of a series of tests we performed to better understand motion detection performance. We tested in several different locations:
- An indoor, well-lit scene, to simulate the simplest scene possible
- An indoor dark scene (<1 lux), to examine what problems low light caused
- An outdoor parking lot, to see how a complex scene with trees, cars and people would perform
- A roadway, to see how a moderately complex scene with periodic cars would perform

Three IP cameras were used with their motion detection enabled to see differences in performance. With these tests, we answered the following questions:
- How can one estimate motion percentage accurately?
- Does motion estimation vary significantly by scene?
- How accurate was motion detection in each scene?
- Did certain cameras exhibit greater false motion detection than others? What scenes or conditions drove those problems?
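For reference, the simplest motion detectors work by differencing consecutive frames, which is also why noise and scene complexity degrade accuracy. Here is a minimal sketch in Python with OpenCV; the blur kernel, threshold and trigger level are illustrative assumptions, and shipping cameras use considerably more sophisticated logic.

```python
import cv2

def to_gray(frame):
    """Grayscale plus blur to suppress per-pixel sensor noise."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (21, 21), 0)

cap = cv2.VideoCapture(0)          # camera or video file path
ok, prev = cap.read()
prev_gray = to_gray(prev)

while True:
    ok, frame = cap.read()
    if not ok:                      # end of stream
        break
    gray = to_gray(frame)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion_pct = 100.0 * cv2.countNonZero(mask) / mask.size
    if motion_pct > 0.5:            # assumed trigger level
        print(f"motion: {motion_pct:.2f}% of pixels changed")
    prev_gray = gray

cap.release()
```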
Grid computing, the forerunner to today's more popular cloud-based approach to IT, is being used to create advances in the biomedical field. A pan-European grid computing infrastructure, known as the neuGRID project, was established in 2008 to advance new treatments for neurological diseases such as Alzheimer's. The goal was to become the "Google for Brain Imaging," i.e., to provide "a centrally-managed, easy-to-use set of image analysis tools with which scientists can answer complex neuroscientific questions."

The project ran from February 1, 2008, to January 31, 2011, and enabled the processing of thousands of brain scans in less than two weeks, instead of the five years normally required with traditional methods. The condensed discovery process means that researchers can detect early traces of Alzheimer's, which should lead to better prognoses. According to the project website:

The aim of neuGRID was to build a new, user-friendly Grid-based research e-Infrastructure based on existing e-Infrastructures by developing a set of generalised and reusable medical services in order to enable the European neuroscience community to carry out research required for the study of degenerative brain diseases.

Researchers from seven countries worked for three years to develop the infrastructure, using EUR 2.8 million in funding from the European Commission. The initial prototype system comprised five distributed nodes of 100 cores (CPUs) each, connected with grid middleware and accessible via the Internet through a user-friendly interface. Workability tests were run using datasets of images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the largest public database of MRI brain scans documenting the progression of Alzheimer's disease and mild cognitive impairment. The role of neuGRID was to connect the imaging data with facilities and services for computationally intensive data analyses.

Principal Investigator Giovanni Frisoni, a neurologist and the deputy scientific director of IRCCS Fatebenefratelli, the Italian National Centre for Alzheimer's and Mental Diseases, commented on the impetus for the project: "neuGRID was launched to address a very real need. Neurology departments in most hospitals do not have quick and easy access to sophisticated MRI analysis resources. They would have to send researchers to other labs every time they needed to process a scan. So we thought, why not bring the resources to the researchers rather than sending the researchers to the resources?"

The results were truly remarkable, as explained by Dr. Frisoni: "In neuGRID we have been able to complete the largest computational challenge ever attempted in neuroscience: we extracted 6,500 MRI scans of patients with different degrees of cognitive impairment and analysed them in two weeks; on an ordinary computer it would have taken five years!"

Going forward, neuGRID will live on in the form of a spin-off project, called neuGRID for You (N4U), which is adding high performance computing (HPC) and cloud computing resources to the original grid infrastructure. With EUR 3.5 million in European Commission funding, N4U is set to become a virtual laboratory for neuroscientists by expanding the user services, algorithm pipelines and datasets.

"In neuGRID we built the grid infrastructure, addressing technical challenges such as the interoperability of core computing resources and ensuring the scalability of the architecture. In N4U we will focus on the user-facing side of the infrastructure, particularly the services and tools available to researchers," Dr. Frisoni says. "We want to try to make using the infrastructure for research as simple and easy as possible. The learning curve should not be much more difficult than learning to use an iPhone!"

An excerpt from the final report highlights the "business case" for employing the grid/cloud model in research:

During its implementation, neuGRID has pioneered the use of distributed computing in biomedical research. The successful data challenge and success of the user training sessions have proved the validity of the neuGRID concept, justifying the effort of populating the infrastructure with services that neuroscientists need for their daily research activity. It illustrates that a new way of doing science in computational neuroscience, where data, algorithms and CPUs are de-coupled from the physical location of the neuroscience lab and externalised to the grid, is realistic and feasible. While it is quite natural to believe that if cloud computing (i.e. outsourcing data, applications, and computational resources) is working for corporate business, it might also work for research, providing empirical proof that this is the case is of course at the same time mandatory and greatly persuasive.

neuGRID's original mandate was to enable neuroscientists to quickly and efficiently analyse MRI scans of the brains of patients with Alzheimer's disease. Not only has the team been successful in that endeavor, but their work has now created a use case for grid computing that can be applied to other neurological disorders and additional areas of medicine. The architecture is "such that generic medical services can be flexibly adapted to be interfaced to others, specific to areas outside Alzheimer's and the neurosciences," the website explains.

Neelie Kroes, European Commission Vice-President for the Digital Agenda, said: "Today's e-infrastructures enable us to tackle an unprecedented amount of available data and an increasing complexity of modern experiments. The neuGRID initiative allows scientists in the smallest laboratories of the most remote areas to access data treasures and help patients suffering from dementia. It is up to the scientific community to make the most of this remarkable instrument, to cooperate and break traditional barriers, thus bringing us one decisive step closer to doing away with Alzheimer's and other brain degenerative diseases."
Byaruhanga C. (University of Pretoria; National Agricultural Research Organisation), Oosthuizen M.C. (University of Pretoria), Collins N.E. (University of Pretoria), and 2 more authors. Preventive Veterinary Medicine, 2015.

A participatory epidemiological (PE) study was conducted with livestock keepers in Moroto and Kotido districts, Karamoja Region, Uganda, between October and December 2013 to determine the management options and relative importance of tick-borne diseases (TBDs) amongst transhumant zebu cattle. Data collection involved 24 focus group discussions (each comprising 8-12 people) in 24 settlement areas (manyattas), key informant interviews (30), direct observation, a review of surveillance data, clinical examination, and laboratory confirmation of cases of TBDs. Methods used in group discussions included semi-structured interviews, simple ranking, pairwise ranking, matrix scoring, proportional piling and participatory mapping. The results of pairwise comparison showed the Ngakarimojong-named diseases, lokit (East Coast fever, ECF), lopid (anaplasmosis), loukoi (contagious bovine pleuropneumonia, CBPP), lokou (heartwater) and lokulam (babesiosis), were considered the most important cattle diseases in Moroto in that order, while ECF, anaplasmosis, trypanosomosis (ediit), CBPP and nonspecific diarrhoea (loleo) were most important in Kotido. Strong agreement between informant groups (Kendall's coefficient of concordance W = 0.568 and 0.682; p < 0.001) in pairwise ranking indicated that the diseases were a common problem in the selected districts. East Coast fever had the highest median score for incidence (18% [range: 2, 33]) in Moroto, followed by anaplasmosis (17.5% [8, 32]) and CBPP (9% [1, 21]). Most animals that suffered from ECF, anaplasmosis, heartwater and babesiosis died, as the respective median scores for case fatality rates (CFR) were 89.5% (42, 100), 82.8% (63, 100), 66.7% (20, 100) and 85.7% (0, 100). In Kotido, diseases with high incidence scores were ECF (21% [6, 32]), anaplasmosis (17% [10, 33]) and trypanosomosis (8% [2, 18]). The CFRs for ECF and anaplasmosis were 81.7% (44, 100) and 70.7% (48, 100), respectively. Matrix scoring revealed that disease indicators showed strong agreement (W = 0.382-0.659, p < 0.05 to p < 0.001) between informant groups. Inadequate knowledge, poor veterinary services and limited availability of drugs were the main constraints that hindered the control of TBDs. Hand picking of ticks was done by all pastoralists, while hand spraying with acaricides was irregular, often determined by the availability of drug supplies and money. It was concluded that TBDs, particularly ECF and anaplasmosis, were important diseases in this pastoral region. Results from this study may assist in the design of feasible control strategies. © 2015 The Authors.

Tittonell P. (Tropical Soil Biology and Fertility Institute of CIAT; Wageningen University; CIRAD - Agricultural Research for Development), Muriuki A. (Kenya Agricultural Research Institute), and 7 more authors. Agricultural Systems, 2010.

Technological interventions to address the problem of poor productivity of smallholder agricultural systems must be designed to target socially diverse and spatially heterogeneous farms and farming systems. This paper proposes a categorisation of household diversity based on a functional typology of livelihood strategies, and analyses the influence of such diversity on current soil fertility status and spatial variability on a sample of 250 randomly selected farms from six districts of Kenya and Uganda. In spite of the agro-ecological and socio-economic diversity observed across the region (e.g. 4 months year-1 of food self-sufficiency in Vihiga, Kenya vs. 10 in Tororo, Uganda), consistent patterns of variability were also observed. For example, all the households with less than 3 months year-1 of food self-sufficiency had a land:labour ratio (LLR) < 1, and all those with LLR > 1 produced enough food to cover their diet for at least 5 months. Households with LLR < 1 were also those who generated more than 50% of their total income outside the farm. Dependence on off/non-farm income was one of the main factors associated with household diversity. Based on indicators of resource endowment and income strategies, and using principal component analysis, farmers' rankings and cluster analysis, the 250 households surveyed were grouped into five farm types: (1) farms that rely mainly on permanent off-farm employment (from 10 to 28% of the farmers interviewed, according to site); (2) larger, wealthier farms growing cash crops (8-20%); (3) medium resource endowment, food self-sufficient farms (20-38%); (4) medium to low resource endowment farms relying partly on non-farm activities (18-30%); and (5) poor households with family members employed locally as agricultural labourers by wealthier farmers (13-25%). Due to differential soil management over long periods of time, and to ample diversity in resource endowments (land, livestock, labour) and access to cash, the five farm types exhibited different soil carbon and nutrient stocks (e.g. Type 2 farms had average C, N, P and K stocks that were 2-3 times larger than for Types 4 or 5). In general, soil spatial variability was larger in farms (and sites) with poorer soils and smaller in farms owning livestock. The five farm types identified may be seen as domains to target technological innovations and/or development efforts. © 2009 Elsevier Ltd. All rights reserved.
What types of data do companies use about me?

Data can be grouped into two categories: core data and derived (or modeled) data. "Core" data is gathered and purchased by Acxiom from multiple sources and represents demographic data about you and/or your household: what your household buys and how often, and what your household's interests are. These are the "core" data elements because Acxiom has not applied any analytics to them.

Derived or modeled insights differ from core data in a number of ways. Companies like Acxiom make assumptions and predictions about people based on analytical processing that uses the core data. Derived and modeled insights are used to determine a person's likelihood to perform an action, exhibit a behavior or purchase a product, and when that might occur. Examples include the likelihood to go to a professional soccer match, donate to a Public Broadcasting Service (PBS) charity or buy a new vehicle after owning one for four years.

Derived and modeled insights are an interpretation of reality based on a single company's extrapolation of multiple core data elements. This interpretation changes and evolves over time as a person's core data changes. It is used to try to reach consumers when core data doesn't directly identify their marketing interests. Derived and modeled insights are used the same way as core data in marketing campaigns: to shape offers and provide consumers with a more relevant and timely marketing experience.

Here is an example of how derived and modeled insights differ from core data. Susie orders a pair of tennis shoes for herself and her toddler daughter over the Internet, and general information about her purchase is shared with partners of the company she bought her shoes from. The core data that is shared: Susie is interested in tennis shoes; she has children present in her household; she purchases via the web; she looks at advertising via the web; and she lives in the northeast. The modeled data that may result from this purchase: Susie is the type of person who is likely to purchase fitness equipment, gym memberships and gym clothing, and she is likely to purchase products over the web.

The modeled insights predict the likelihood of a certain action or characteristic based on commonly shared data characteristics. As you can see, modeled insights represent best estimates or predictions based on the core data. Marketers may use those characteristics to identify an audience for athletic shoes, and to identify other individuals who resemble the purchaser and might also be interested in athletic shoes.
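To make the core-versus-modeled distinction concrete, here is a toy sketch of how a marketer might derive a modeled propensity score from core attributes. The field names and weights are invented for illustration and do not reflect Acxiom's actual models.

```python
# Toy propensity model: derive a "likely fitness buyer" score from
# core attributes. All names and weights are hypothetical.
core_record = {
    "bought_tennis_shoes": True,
    "children_in_household": True,
    "purchases_online": True,
    "region": "northeast",
}

WEIGHTS = {  # assumed importance of each core signal
    "bought_tennis_shoes": 0.5,
    "children_in_household": 0.1,
    "purchases_online": 0.3,
}

def fitness_propensity(record: dict) -> float:
    """Combine core signals into a 0..1 likelihood-style score."""
    score = sum(w for key, w in WEIGHTS.items() if record.get(key))
    return min(score, 1.0)

print(fitness_propensity(core_record))  # 0.9 -> plausible fitness buyer
```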
Overview of the goals of security: Confidentiality, Integrity and Availability

The CIA triad (Confidentiality, Integrity and Availability) is a security model designed to act as a guide for information security policies within an organization or company. Most organizations and companies apply the CIA criteria when they install a new application, create a database or grant access to data. For data to be completely secure, all of these security goals must be met. These policies work together, and it would be wrong to overlook any one of them.

Confidentiality refers to limiting the disclosure of, and access to, information to only the people who are authorized, and preventing those who are not authorized from accessing it. Through this, a company or organization can prevent highly sensitive and vital information from getting into the wrong hands while still keeping it accessible to the right people.

Encryption: Encryption of data involves converting the data into a form that can only be understood by authorized people. The information is converted into ciphertext, a format that is very difficult to understand. Once the data no longer needs protection in that form, it can be decrypted, that is, converted back to its original form so that it can be understood. The encryption process can involve highly sophisticated and complex computer algorithms that rearrange the data bits. If such an encryption process is used, decrypting the information requires the appropriate decryption key. Encryption should be applied to data at rest, that is, data stored on a hard drive or USB flash drive, and to data in motion, meaning any kind of data traveling across a network. (A small sketch of the encrypt/decrypt cycle follows this section.)

Access controls: Access control is another way of ensuring confidentiality. It means setting policies and standards for accessing information and other organizational resources. One can use passwords, so that an individual seeking access to some information must provide a password to gain it. In most cases, access controls work on the basis of identification and authentication: unique user identification cards can serve for identification, while verification can rely on items such as biometric readers and passwords. One can also implement physical access policies, where all employees in an organization carry work badges permitting them to access and use a facility or resource. There are three major access control models an organization can choose to implement: mandatory access control, discretionary access control and role-based access control.

Steganography: Steganography is another technique that can be used to enforce confidentiality. It is, basically, the hiding of information: the goal is to conceal data from third parties. Steganography can involve the use of microdots and invisible ink to hide data and information.
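As a small illustration of the encrypt-then-decrypt cycle described above, here is a sketch using the Fernet recipe from Python's `cryptography` package (symmetric, authenticated encryption). The library choice and the sample plaintext are assumptions for illustration, not something the text prescribes.

```python
from cryptography.fernet import Fernet

# Symmetric encryption: the same key both encrypts and decrypts, so
# confidentiality reduces to keeping the key away from the wrong people.
key = Fernet.generate_key()      # the "appropriate decryption key"
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"quarterly payroll figures")
print(ciphertext)                # unintelligible without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext)                 # b'quarterly payroll figures'
```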
Integrity is another security concept: it entails keeping data consistent, accurate and trustworthy over its entire lifetime. One has to ensure that data is not changed over a given period, and that the right procedures are in place so that unauthorized people cannot alter it.

Hashing: Hashing is a cryptographic operation that converts data in a manner that is practically impossible to invert. It is mainly applied when storing data on a storage device, so that an individual who gains access to the data cannot change it without the alteration being detectable. (A minimal hashing sketch appears at the end of this section.)

Digital signatures: A digital signature is cryptographic data attached to information so that its origin and integrity can be verified. In practice a signature can even be presented as a QR code that must be properly read and validated before the data is trusted.

Certificates: Certificates are special user credentials required to gain access to particular information; an individual without the right certificate cannot access it. Certificates grant specific permissions and rights.

Non-repudiation: In information security, non-repudiation is a cryptographic property achieved by having messages digitally signed by an individual who holds the private key for a particular digital signature, so that the signer cannot later deny having produced the message.

Availability refers to maintaining the uptime of all resources and hardware: all the hardware and resources one has should be functional all the time. It can also involve carrying out regular hardware repairs.

Redundancy: Redundancy is mainly about keeping things up and running in one's organization even when an important component fails. The idea behind redundancy is to maintain uptime: one needs to be sure that all network components and resources are working properly and that all available resources can be used, so the organization continues to function as usual. To guard against hardware failure, one can have redundant servers or power supplies; in case of a power outage, all systems continue running efficiently because another power supply is available. With such redundancies, one is sure that if one component fails, another is available and ready to take its place.

Fault tolerance: Fault tolerance is another aspect of availability. It means that the system stays up and works properly even when some of its components fail.

Safety is also a very important aspect, not only in an organization but in other environments such as the home. Optimal safety requires properly defined strategies. Safety does not only entail being away from danger; it also means having the capability to prevent unauthorized access to a particular resource or facility, and it should include proper monitoring of all the activities happening in a particular area or vicinity.

Fencing: Fencing is one way to provide safety. It involves erecting a barrier to prevent unauthorized access, normally enclosing a particular perimeter area. Fences come in various kinds: there can be a concrete wall to limit illegal access, or an electric fence surrounding the organization or premises. The point of fencing is to ensure that access to the premises is available at only a single, controlled place, so that trespassing becomes an offense for which one can be prosecuted.

Lighting: Lighting is another concept that can be used to enhance safety. It involves setting up proper lighting systems in and around a building for effective monitoring of activities. Proper lighting is essential at night so as to provide a clear view of a specific area; a whole area can be illuminated with special lamps such as spotlights. With proper lighting, it is easy to monitor activities occurring during the day and at night.

Locks: Locks are another way of enhancing safety. They are set up to prevent access to premises or to a section of a building; an individual seeking access must have the appropriate mechanism to open the lock. There are physical door locks that require a key, and biometrically controlled locks with built-in systems capable of detecting fingerprints or performing facial recognition. The latter can be very secure, since only individuals whose credentials are enrolled in the system can gain access to a building with such security systems.

CCTV: The use of CCTV is another way to enhance safety. It involves installing special surveillance cameras to monitor all activities in a particular building. CCTV cameras can be installed inside a building or outside, depending on the area to be monitored. Indoors, they should be installed at strategic positions so that they can easily capture all activity; the captured video is relayed to display screens where security personnel can view everything. CCTV cameras come in various types designed for different environments. For instance, to monitor a large area such as a car park, a camera with a wide field of view can be more economical than installing many cameras in one place. Cameras also have different lighting requirements: some need little light, while others require proper lighting to produce a good view. One should therefore be conversant with these basic requirements.

Escape plans: There should also be plans and strategies to avert damage or danger when it arises. For instance, if there is a fire outbreak, there should be laid-out plans that help people escape even in the absence of a rescue crew. Escape plans can include emergency doors that can be used to exit a building; these must be easily accessible in case of a fire. An organization might also consider setting up a fire assembly point: a specific location where all people converge in case of a fire, so that the number of people still trapped in the building can be determined. Fire assembly points should be set outside a building and should be easily accessible to all.

Drills: Safety drills are also very important when planning to avert danger. Generally, these are training rehearsals for an event; with drills, the people in one's organization are in a position to react very quickly and with confidence in an emergency situation. When designing safety drills, one should cover aspects such as fire, severe injury and terrorism, as well as details specific to the organization. The performance of drills must be monitored and overseen by a designated individual, who is responsible for ensuring that all drills are done to perfection. Drills are very important since they support effective and rapid response in an emergency, and they build a sense of self-confidence in dealing with a disaster or tragedy. Every individual in the organization must be conversant with all the drills, since averting danger or disaster is a team effort and not an individual responsibility.

Escape routes: Escape routes are also a very important aspect of safety, especially in the event of a fire where immediate evacuation is needed. It is the responsibility of the organization to have well-planned and mapped-out escape routes; a full map of the building should be posted on a notice board. With this map available, an individual can easily know the route to follow in case of a fire. The routes shown should be the shortest available and easily accessible to everyone.

Generally, data security and physical security are very important aspects of our day-to-day lives, and we should enact all measures that ensure we operate in highly secure environments. By doing this, not only is the data secured, but the business can also deliver good performance, which will show in its financial statements.
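Picking up the hashing mechanism from the integrity section above, here is a minimal sketch using Python's standard `hashlib`: if even one byte of stored data changes, the digest no longer matches, so silent alteration becomes detectable. The record format is invented for illustration.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of the data; practically impossible to invert."""
    return hashlib.sha256(data).hexdigest()

record = b"account=1042;balance=5000"
stored_digest = digest(record)            # saved alongside the record

# Later: recompute and compare to detect tampering.
tampered = b"account=1042;balance=9000"
print(digest(record) == stored_digest)    # True  -> data intact
print(digest(tampered) == stored_digest)  # False -> data was altered
```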
Thanks to the intrinsic stability of a mechanical recording medium, IBM has already achieved a Millipede storage density of 125GB per square inch: 20 times the current density of magnetic devices and double the predicted theoretical limit of magnetic media. As a mechanical system, Millipede also offers product designers a direct trade-off between data transfer rate and power consumption. IBM has said that a device based on this technology, with data capacity on the order of 40GB to 80GB and other characteristics competitive with current flash memory units, could be ready to come to market in an SD (Secure Digital)-compatible form factor by 2006 if an overall product road map can be satisfactorily defined. Enterprise-scale devices have yet to be discussed but seem to eWEEK Labs to be a logical extrapolation of the concept: high density, long lifetime and low power consumption for low-data-rate applications such as offline archival are a compelling set of characteristics.

Familiarity should not breed contempt for the venerable technologies of magnetic tape and solid-state memory. These have long held down opposite ends of the storage spectrum: tape with low cost but with low data throughput to match; solid-state memory with far superior speed but at enormous cost compared with other bulk-storage options.

Storage industry trade associations agree that, by 2006, hard disk storage will cost only 10 times as much per unit of capacity as tape, a significant narrowing of the fortyfold cost advantage that tape had in 1998. This gap will probably not close much further thereafter, however, based on current technology road maps. The hard drive road map, furthermore, will be nearing a probable dead end in terms of further density improvement, while tape manufacturers are not as dangerously close to their magnetic density limits. eWEEK Labs believes that hard drive technology should initially be used for incremental and weekly backups but that it won't eliminate the need to run full tape backups for off-site storage.

Those with tape experience will know, unfortunately, that mechanical rather than magnetic phenomena are more important to long-term tape performance. Both tape manufacturers and tape-drive builders are exploring the use of microscopic monitoring of the physical condition of tape edges, during both initial production and storage operations, to provide improved manufacturing quality and to minimize data loss by warning of mechanical deterioration during use.

As for solid-state storage devices, it's hard to argue with 250 times less latency than a hard disk, except that it comes at a cost of about 1,000 times as much per unit of capacity, making solid-state storage an alluring but generally impractical option. However, what's working quite well in bandwidth-intensive applications is the use of midsize solid-state units, typically 16GB to 64GB in size and sometimes configured in arrays, to serve as cache units where many processes access the same data or where there's a high rate of sustained random access. Sevenfold acceleration of SQL queries, to cite one typical result, can yield good returns on judiciously targeted solid-state storage investments.

No single silver bullet, but rather a well-aimed spread, is what it will take for enterprise IT builders to hit their storage targets.
Why penetration testing doesn't stop Advanced Persistent Threats (APTs)

Penetration testing is intended to simulate the actions of a real-world attacker by identifying security weaknesses that could impact the confidentiality, integrity and availability of a target organisation's assets. If executed correctly, penetration testing can be a very effective method of identifying areas within an organisation where security controls need to be enhanced in order to limit the opportunities for a successful attack.

Traditional penetration testing required no specific technical details of the target systems to be provided before a test was sanctioned. Instead, only high-level details, such as the name of the target, would be issued, and the testing team would need to identify the systems owned by the organisation and formulate a plan of attack. This allowed for a very open-ended test and provided a more realistic view of the attack paths that could be exploited in order to compromise the target organisation's assets.

Over the years, this approach to testing has been modified into finely scoped engagements that focus on specific sets of systems within isolated environments, commissioned on a per-project basis. For example, a penetration test against an organisation's public-facing infrastructure is likely to be limited to a specific set of IP addresses provided by the client and tested within a strict timeframe. Testing in this manner, while valuable in enhancing the security of the systems tested in isolation, does not always provide a realistic view of an organisation's overall security. Many organisations believe that once these isolated systems are tested and any uncovered vulnerabilities are addressed, the organisation is secure. However, attackers are fully aware of this penetration testing culture and are exploiting the "gaps" left by this approach.

Attackers are not restricted by a defined scope and will attempt to identify as many security weaknesses as possible that provide the path of least resistance into the organisation while focusing on specific goals. This will often involve chaining together multiple attack vectors that bypass security controls and provide direct access into an organisation's internal network. The actual steps taken to perform these attacks vary between different groups of attackers depending on the target organisation. However, the general approach usually consists of a number of phases, one vital phase being the exploitation of human trust. Rather than trying to break into systems directly, attackers target the users of those systems and leverage their access to compromise the organisation and achieve their end goal.

In a typical attack, an organisation will first be investigated in order to identify methods of bypassing security defences and controls and gaining an initial foothold. This includes identifying specific individuals to target and the relevant technologies in use. The attackers will then compromise at least one system, which can be used as a platform from which to perform further attacks. This access is usually achieved through a client-side attack delivered via spear phishing or watering holes. Client-side attacks exploit human trust by manipulating unsuspecting users into downloading and executing malicious files sent via email, or by directing them to a malicious website (a watering hole), resulting in malware being installed on their machine. This provides unauthorised access to the victim system and a foothold in the network.

Once the initial foothold is gained, the attackers will typically attempt to leverage their access by stealing credentials or exploiting vulnerabilities in other systems in order to move off the individual workstation and gain persistence on the network through remote access or command and control (C&C), from which to conduct the rest of the attack. Once persistence is achieved, they will attack with the goal of disrupting or destroying key information through Computer Network Attacks (CNA), and/or with the aim of gathering intelligence about competitors or adversaries through Computer Network Exploitation (CNE).

Most of the time, the approaches used to perform these attacks are neither new nor innovative; they consist of common attack techniques that have been known for over a decade. However, organisations are failing to keep up with strategies that effectively limit their exposure to these attacks. Attackers are constantly evolving their approach, so it is important that defenders evolve with them.

Penetration testing is an important part of an organisation's security strategy, but it must be utilised in a manner that is effective and gives an accurate view of the organisation's security. Organisations should incorporate a cyber-defence security programme into their existing security strategy. This provides an understanding of the threat actors likely to target the organisation, the assets they are likely to target, and how they are likely to target them. This information should then be used to formulate a long-term security strategy that takes a holistic view of the organisation, not just of isolated systems. The security strategy should be asset-centric and aim to prevent attacks using the scenarios identified as most likely to succeed. Penetration testing should be integrated into the strategy to simulate those threat actors as closely as possible, so that a realistic view of the organisation's security can be formed. The results of penetration testing should then be used to demonstrate where security deficiencies exist and that the defences subsequently put in place are effective.
The Nobel Prize
Extracted from BPMN 2.0 by Example (non-normative OMG document)

The selection of a Nobel Prize Laureate is a lengthy and carefully executed process. The processes slightly differ for each of the six prizes; the results are the same for each of the six categories. Following is the description for the Nobel Prize in Medicine.

The main actors in the processes for Nomination, Selection, and Accepting and Receiving the award are the:
– Nobel Committee for Medicine
– Specially appointed experts
– Nobel Assembly and
– Nobel Laureates

Each year in September, in the year preceding the year the Prize is awarded, around 3,000 invitations or confidential nomination forms are sent out by the Nobel Committee for Medicine to selected Nominators.
Home to some of the world's fastest supercomputers, China is looking to apply that computing power to issues that matter to its populace. On Monday, a Chinese researcher revealed a plan to use China's Tianhe-1A system in the construction of new "smart cities."

Taking an interdisciplinary approach to the challenges of urban planning, smart cities emphasize the intelligent use of resources and services, resulting in improved service delivery and better quality of life. Smart cities are often defined by their ability to use information and communication technologies to solve economic, social and/or environmental challenges.

"The Tianhe-1A can digitize the planning, designing, construction and property management of buildings in a city," Meng Xiangfei, head of the applications department of the National Supercomputer Center in Tianjin, told China Daily. The article in the Chinese newspaper went on to explain that sophisticated design software can optimize the urban planning process by modeling the costs and benefits of different materials, such as cement and steel. This kind of big data modeling can reduce the cost of a subway construction project by 10 to 20 percent, according to Meng. The researcher adds that the supercomputer has already been used in underground construction projects.

Once the world's fastest system when it debuted in 2010, Tianhe-1A is currently number fourteen on the TOP500, with a 2.56-petaflops LINPACK score and a 4.7-petaflops peak. The supercomputer combines 14,336 Xeon X5670 processors and 7,168 NVIDIA Tesla M2050 GPUs. An additional 2,048 FeiTeng 1000 SPARC-based processors are also installed in the system, but their power was not used for the LINPACK benchmark. The main technological achievement of the system is the proprietary high-speed interconnect, called Arch, which was developed by Chinese researchers and runs twice as fast as the InfiniBand standard. The supercomputer cost $88 million to build and is installed at the National Supercomputing Center in Tianjin.

China's central government has made development of smart city technology a core national policy. One goal of the policy is addressing the nation's serious air pollution problem. In July, the Chinese government signed a 10-year agreement with IBM as part of the "Green Horizon" initiative. The effort will employ sophisticated analytics and big data techniques to boost renewable energy and improve energy utilization. Program scientists plan to create real-time, street-scale maps to model the dispersion of pollutants across Beijing.

"This project will provide Beijing with a much better understanding of how pollution is produced and spread across the city, so the government can address it more effectively," Tao Wang, resident scholar in the Energy and Climate Program at Carnegie-Tsinghua Center for Global Policy, told Baseline Magazine.
The power required to increase computing performance, especially in embedded and sensor systems, has become a serious constraint and is restricting the potential of future systems. Technologists from the Defense Advanced Research Projects Agency are looking for an ambitious answer to the problem and will next month detail a new program the agency expects will develop power technologies that could bolster system power efficiency from today's 1 GFLOPS/watt to 75 GFLOPS/watt.

"Examples show we need at least 50 GFLOPS/w, and requirements of at least 75 GFLOPS/w can be confidently anticipated. Current systems provide in the order of 1 GFLOPS/w and industry trends will provide power efficiencies that are well short of required performance," DARPA stated.

The goal of the program, called Power Efficiency Revolution For Embedded Computing Technologies, or PERFECT, is to take a revolutionary approach to processing power efficiency. From DARPA: "This approach includes near threshold voltage operation, massive heterogeneous processing concurrency, and novel architectural developments combined with techniques to effectively utilize the resulting concurrency and tolerate the resulting increased rate of soft errors. The PERFECT program will leverage anticipated industry fabrication geometry advances to 7 nm. Research and development will specifically address embedded systems processing power efficiencies and performance, and are not concerned with developments that focus on exascale processing issues. No operational hardware is to be built in this program; instead a simulation capability will be developed that will measure and demonstrate progress."

In the past, computing systems could rely on increasing computing performance with each processor generation. Following Moore's Law, each generation brought with it double the number of transistors. And according to Dennard scaling, clock speed could increase 40% each generation without increasing power density. This allowed increased performance without the penalty of increased power, DARPA stated. "That expected increase in processing performance is at an end," said DARPA Director Regina Dugan in a statement. "Clock speeds are being limited by power constraints. Power efficiency has become the Achilles heel of increased computational capability."

DARPA said PERFECT system development will address five areas:

Architecture: Addresses hardware and software power efficiency innovation and development. Example areas anticipated for development include near threshold voltage operation, heterogeneous massively concurrent architectural approaches, and novel hardware architectural approaches such as new memory hierarchies, application-specialized cores, and data movement minimizing techniques. At the software level, the goal is to develop technologies and techniques that tolerate and exploit new hardware capabilities and overcome the associated limitations. This specifically includes addressing concurrency and reliability.

Concurrency: Addresses the hardware and software to support high levels of concurrency - thousands to millions of concurrent execution streams. Hardware efforts in this area may include processing cores and data stores of varying capabilities and efficiencies, and perhaps automatically synthesized processing elements that are optimized for the embedded platform's workload. Software efforts in this area may include language development or augmentation, compilers, and support software to specify and manage concurrent threads.

Resiliency: Focuses on the issue of soft errors. Such errors are expected to increase under near threshold voltage operation.

Locality: Focuses on minimizing run-time data communication by managing data location and availability. In particular, the memory hierarchy and the software to manage data are included. Languages and language annotations that enable programmer control of data allocation, as well as automatic control of data allocation, will be investigated.

Algorithms: Software techniques to minimize energy consumption. In addition, algorithmic approaches to enable the tolerance of hardware faults will be investigated, at both the kernel and the system levels.
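To put the efficiency targets in perspective, here is a back-of-the-envelope calculation of the power needed to sustain a given performance level at 1, 50, and 75 GFLOPS/W. The 1-PFLOPS workload size is an illustrative assumption for this sketch, not a DARPA figure:

# Power draw needed to sustain a workload at a given efficiency,
# comparing today's ~1 GFLOPS/W with PERFECT's 50-75 GFLOPS/W targets.
def power_watts(flops: float, gflops_per_watt: float) -> float:
    """Watts required to sustain `flops` at the given efficiency."""
    return flops / (gflops_per_watt * 1e9)

target_flops = 1e15  # a hypothetical 1-PFLOPS embedded workload

for eff in (1.0, 50.0, 75.0):
    print(f"{eff:5.0f} GFLOPS/W -> {power_watts(target_flops, eff) / 1e3:8.1f} kW")

# 1 GFLOPS/W  -> 1000.0 kW (a full megawatt: hopeless for an embedded platform)
# 50 GFLOPS/W ->   20.0 kW
# 75 GFLOPS/W ->   13.3 kW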
Table of Contents

Everyone knows what a floppy disk is, but a common question is how to clean out a floppy so that it can be used again. I have spoken to many people who have said that after they store information on a floppy they throw it out, as they do not know how to erase the files on it to get an empty, clean disk. This tutorial will cover how to reformat a floppy disk so that you can reuse it as you wish. This way you will not have to throw out your used floppies, but can instead keep using them over and over.

Note: The images in this tutorial are taken from Windows XP, but the method shown here should still apply for Win95/98/ME/2000.

Step 1: Insert the floppy you would like to erase into the floppy drive of your computer.

Step 2: Click on the Start button, then click on Run.

Step 3: The Run dialog box should now be shown. Type explorer in the Open: field and press the OK button. Alternatively, you can press down on the Windows key on your keyboard, which is generally located between your left Control and Alt keys and looks like a small flag, and while holding that key down press once on the letter E. Both methods will open Windows Explorer. After Windows Explorer is opened you should see a screen similar to Figure 1 below.

Figure 1. Windows Explorer

Step 4: Click once on My Computer as shown in Figure 1 above to highlight it.

Step 5: Click once on the floppy drive, usually the A: drive. It should now be highlighted.

Step 6: Click on the File menu and then click on the Format option as shown in Figure 2 below.

Figure 2. Select the Format Option under the File Menu

Step 7: After you click on Format as shown in Figure 2 above, you will see a screen similar to Figure 3 below.

Figure 3. Format Dialog Box

Step 8: Type a descriptive name for this floppy in the Volume Label field, or leave it blank. This is optional, so it is up to you.

Step 9: Place a checkmark in the Quick Format box designated by the red box in Figure 3 above.

Step 10: Press the Start button. You will get a confirmation box as shown in Figure 4 below.

Figure 4. Confirmation

Step 11: If you want to continue formatting this floppy press the OK button, otherwise press Cancel.

Step 12: Windows will now format your floppy. If it has problems Quick Formatting the floppy, it will tell you so; you should tell it to continue formatting the floppy. When it is done formatting, you will be presented with a screen as shown in Figure 5 below telling you the format is complete.

Figure 5. Format Complete

Step 13: Press OK and then Close. Your floppy is now ready to be used.

As always, if you have any questions please feel free to post them in our computer help forums.
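If you would rather script the format than click through the dialogs, the same quick format can be driven from the command line. A minimal sketch, assuming Windows's format utility is available and the floppy is in drive A: (format prompts before it starts and again for a volume label, so the script answers both prompts with blank lines):

import subprocess

# Quick-format the floppy in drive A: by scripting Windows's format utility.
# /Q requests a quick format, matching the checkbox used in the tutorial.
# format asks you to press Enter before it starts and later asks for a
# volume label, so we feed it two blank lines on stdin.
subprocess.run(
    ["format", "A:", "/Q"],
    input="\n\n",
    text=True,
    check=True,
)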
This chapter explains how to set up Fileshare. You should read the first two chapters in this Using with Fileshare Part before reading this chapter. For more information about Fileshare and its more advanced features, see the Fileshare Guide.

Fileshare provides MSS with VSAM data integrity, enabling you to use SYNCPOINT and SYNCPOINT ROLLBACK commands. Fileshare also enables you to access data files on other systems running on the same network as the machine running MSS. MSS can connect to up to sixteen Fileshare Servers, giving you the option of splitting your data across several machines. You can also run a Fileshare Server as a separate process on the same machine running MSS. This means you can effectively test a network application without a network. Fileshare Servers can be used for MSS data files, extrapartition transient data queues, and intrapartition transient data and temporary storage queues, as described later in this chapter.

Fileshare works in a client/server setup, with the Fileshare Server being the server and an MSS-enabled enterprise server being the Fileshare client. The two can communicate with each other over a variety of protocols. The support for these protocols is provided by Micro Focus Common Communications Interface (CCI) modules. You establish the protocols to be used by each server by specifying the appropriate CCI modules when you start the Fileshare Server. Fileshare files can still be accessed by non-MSS applications at the same time. The remainder of this chapter explains how to set up both the Fileshare Server and the client region to communicate over these protocols.

Before you try to run a Fileshare Server, you must ensure that Fileshare is available on the server machine. You must run Fileshare commands from the command prompt. To start a Fileshare Server, enter the following:

fs /s server-name

where server-name names the server and corresponds to the ID by which the client region knows the server. The /s parameter is mandatory on each Fileshare command. For a list of valid parameters, see the section Using a Fileshare Server Configuration File. This command starts a server with the default protocol enabled, and with no password-checking performed when a client tries to access the server. The default protocol is TCP/IP.

You can use the /cm switch to explicitly name the protocols to be used. For example:

fs /s fstest /cm ccitc32 /cm ccinb32

In this example, ccitc32 and ccinb32 are the CCI modules required for TCP/IP and NetBIOS, respectively. The following example starts a server with password-checking applied through the file fs.pwd:

fs /s fstest /cm ccitc32 /pf fs.pwd

The section Using a Fileshare Password File explains how password-checking works. You can avoid specifying a list of parameters whenever you start the server by creating a Fileshare Server configuration file, as the next section explains.

If you enter the fs command with no parameters, Fileshare looks for a server configuration file named fs.cfg in your current directory. You can also specify a configuration file of a different name with the /cf option, for example:

fs /cf c:\filesh.cfg

A Fileshare Server configuration file is an ASCII text file that contains one option per line. It contains the Fileshare configuration options described below. Only the /s option is required. The /pf option is strongly recommended, however, as without it you cannot enforce any Fileshare security, although MSS does provide Resource Security Level (RSL) checking.

/s server-name: Specifies that server-name is the name that the Fileshare Server registers on the network.
The name specified has to be unique to that Fileshare Server and must correspond to the Fileshare Server ID specified in the enterprise server's Resource Definition Tables. If a Fileshare Server with that name is already registered on the network, an error is returned.

Database reference file: Specifies database-reference-file as the name of the database reference file that the Fileshare Server is to use.

Transaction timeout: Sets the transaction processing timeout, in seconds. If the server does not hear from its client in this period, and another client has requested access to records locked as part of the transaction, all files involved in the transaction are rolled back to their original state and all locks are removed. The default timeout is 60 seconds. You can disable the timeout by setting this to 0. The valid values are 0 through 99999999.

Maximum record size: Specifies the maximum record size, in kilobytes, that the Fileshare Server processes. The valid values for record-size are 16 through 64. If you specify a value less than 16, the Fileshare Server uses a maximum record size of 16K. If you specify a value greater than 64, or do not include this option, the Fileshare Server uses a maximum record size of 64K. Setting a value lower than the default reduces the amount of memory that the Fileshare Server needs to run.

/cm cci-protocol: Specifies cci-protocol as one of the CCI communications protocols that the Fileshare Server can use to receive communications from a Fileshare Client. Repeat this option for every communications protocol that you want to use to contact this Fileshare Server. Valid values include CCITC32 (TCP/IP) and CCINB32 (NetBIOS), the modules named earlier in this chapter. If you do not specify any entries, the default is CCITC32.

/cf config-file: Specifies the name of the Fileshare Server configuration file. Use this option only on the command line. When using this option, you must specify the required Fileshare Server options in the configuration file. If you do not specify a name for the configuration file, it defaults to fs.cfg in the Fileshare Server's current directory.

/pf password-file: Names the password file used by this server. This option activates the Fileshare Password system. If you do not use this option, the Fileshare Server runs without security enabled.

Trace: Specifies that the Fileshare Server trace option is activated as soon as the Fileshare Server starts. The trace echoes to the screen and to a file called fsscreen.lst in the Fileshare Server's current directory. This option seriously impacts the Fileshare Server's performance. Use it only for problem investigation.

Assume the file fs.cfg contains the following lines (one option per line):

/s fsserv1
/cm ccitc32
/cm ccinb32
/pf serv1.pwd

If you then enter the command fs with no parameters, Fileshare reads fs.cfg and starts a server named FSSERV1, with the TCP/IP and NetBIOS protocols enabled. It uses the file serv1.pwd for password security.

A Fileshare Server uses its password file to verify that a client enterprise server is entitled to access its files. Although Fileshare can run without a password file, you must use one if you want to apply any security through the Fileshare Server itself. This is strongly recommended, since Fileshare has access to all the files on your system. The Fileshare Server uses its password file only when the enterprise server makes its first attempt at access (that is, when it starts up). File-level security is controlled by MSS itself, through the resource definitions you make.
The only way to maintain a password file is through this command line:

fs /pf pwd-filename options

Depending on the options you specify, Fileshare creates, modifies, or deletes pwd-filename. You can name the password file whatever you want, but it must correspond to the name you specify with the /pf option when configuring the Fileshare Server. The options are as follows:

/u username: The user name used by the enterprise server to log on to the server. (This must correspond to the value specified in the FS user name field in the enterprise server's SIT.) Can be up to 20 characters.

/pw password: The enterprise server's password. (This must correspond to what is specified in the FS password field in the enterprise server's SIT.) Can be up to 20 characters.

/e: Erases the specified user from the password file. You must specify both the /u and /pw options on the same command line when you are erasing a user. If you delete all the entries in the password file, the Fileshare Server deletes it.

Note: Both passwords and user names are case-sensitive.

Assume no password file exists, and you want to create one with two entries. The following commands:

fs /pf fssecu.pwd /u mtoserv1 /pw fspass1
fs /pf fssecu.pwd /u mtoserv2 /pw fspass2

result in the password file fssecu.pwd being created, with entries for both mtoserv1 and mtoserv2. The mtoserv1/fspass1 and mtoserv2/fspass2 combinations must correspond to the FS user name and FS password fields in the enterprise servers' SITs.

To put password checking into effect on a Fileshare Server called FSSERV (for example), you must use the /pf option when starting it:

fs /s fsserv /cm ccitc32 /pf fssecu.pwd

To disable mtoserv2's access to this Fileshare Server, use the following command:

fs /pf fssecu.pwd /e /u mtoserv2 /pw fspass2

If you now enter this command:

fs /pf fssecu.pwd /e /u mtoserv1 /pw fspass1

Fileshare removes the entry for mtoserv1 and, because the file is now empty, deletes the file fssecu.pwd.

You can monitor the activity on a Fileshare Server by pressing F2 in the same session in which you started the server. This turns on Fileshare's trace facility, which displays file requests as they occur. For each request, you see the user identifier, opcode, requested filename, and file status bytes of the reply to the user. You should use this facility only as a diagnostic aid, since it can degrade performance. Press F2 again to turn it off.

Before stopping a Fileshare Server, you must either stop all enterprise servers connected to it or make sure that all the files held on Fileshare Servers are closed. To stop a Fileshare Server locally, go to the session from which you started it and press the Esc key. You see the prompt:

FS097-I Are you sure you wish to close the Fileshare Server? (Y/N)

Enter Y to continue the shutdown; enter any other key to let the server continue running. If any database files are open when you reply Y, a second prompt appears:

FS111-I Warning - database files are still open Continue with the close (Y/N)?

If you enter Y, Fileshare closes all open files and shuts down. Enter any other key to cancel the shutdown.

If you have enabled Fileshare security, you must first give the enterprise server a Fileshare user name and password. These must be specified in the FS user name and FS password fields of the enterprise server's SIT. The enterprise server uses this user name and password combination to log on to all its Fileshare Servers. The combination must appear in the password file for each Fileshare Server the enterprise server wants to access.
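Where you have many enterprise servers to provision, the documented fs /pf commands can be scripted. A minimal sketch, assuming fs is on the PATH and using only the /pf, /u, and /pw options described above (the user names and passwords are the illustrative ones from the manual example):

import subprocess

# Create or extend a Fileshare password file by scripting the fs command.
# Each invocation adds one username/password entry to fssecu.pwd, exactly
# as in the two-command example above.
USERS = [("mtoserv1", "fspass1"), ("mtoserv2", "fspass2")]

for name, password in USERS:
    subprocess.run(
        ["fs", "/pf", "fssecu.pwd", "/u", name, "/pw", password],
        check=True,  # stop if any command fails
    )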
Next you should name the servers the enterprise server is to connect to. For data files you have a choice of mechanisms for doing this: you can name the servers in the RDF, or rely on redirection entries in the client configuration file fhredir.cfg, described below. For extrapartition transient data queues and for intrapartition transient data and temporary storage queues, you must name the servers in the RDF. If you do not explicitly name the servers in the RDF, and you have an entry in the fhredir.cfg file that simply names a server (/s servername), then all file requests that do not have an explicit Fileshare Server defined will be redirected to the named Fileshare Server. If you are running multiple enterprise servers, this will cause file contention problems.

Finally, you must specify the protocols over which the region communicates with its servers. You do this in the client configuration file fhredir.cfg. MSS consults the fhredir.cfg file whenever a region requires a resource from a server. You should not, therefore, change this file while any regions using servers defined by it are running. For full details of fhredir.cfg entries and examples, see the chapters Standard Operation and Configuration in your Fileshare Guide.

The next section explains how to specify Fileshare Servers in the RDF to enable Fileshare access. Through the Resource Definition File, you can specify that resources reside on a Fileshare Server. The FCT, DCT, and SIT contain fields for specifying the name of a Fileshare Server. If you fill one of these fields in, MSS looks for the resource on a Fileshare Server.

The FCT fields are on the FCT Details page of ESMAC. The fields are:

Fileshare Server: The Fileshare Server through which MSS accesses the file. This name must correspond to the name you give the server when you start it. This field is mandatory.

Override Filename: The filename Fileshare uses to find the file. This field is optional. It defaults to the name of the FCT entry.

File Path: The path Fileshare uses to find the file. This field is optional. It must be specified if the path defined by the Mainframe Express project does not exist on the Fileshare server machine.

File Extension: The file extension Fileshare uses to find the file. This field is optional. For indexed files, this results in a filename and its index component.

The DCT fields are on the DCT Details page of ESMAC. The fields are:

Fileshare: The Fileshare Server through which MSS accesses the file. This name must correspond to the name you give the server when you start it. This field is mandatory.

File Name: The filename Fileshare uses to find the file. This field is optional. The name defaults to the name of the DCT entry.

Path: The path Fileshare uses to find the file. This field is optional. It must be specified if the path defined by the Mainframe Express project does not exist on the Fileshare server machine.

Extension: The file extension Fileshare uses to find the file. This field is optional. For indexed files, this results in a filename and its index component.

In the SIT, there are two sets of fields through which you specify Fileshare Servers for temporary storage and intrapartition transient data queues. You do not have to specify the Fileshare Server and path for each individual queue. The SIT fields are on the SIT Details page of ESMAC. There are four sets of the following fields:

Fileshare Srvr: The Fileshare Server through which MSS accesses the file. This name must correspond to the name you give the server when you start it.
This field is mandatory.

Path: The path Fileshare uses to find the file. This field is optional. It must be specified if the path defined by the Mainframe Express project does not exist on the Fileshare server machine.

Copyright © 2008 Micro Focus (IP) Ltd. All rights reserved.
Disadvantages of Smoking - Bad Effects of Smoking

There are no good effects of smoking on a smoker's body, not even one. Smoking has only bad effects on the smoker's health and does damage to the body. The ingredients of tobacco smoke are chemically active and can start dramatic and fatal changes in the body. Tobacco smoke contains over 4,000 chemicals that can damage the smoker's body, including tar, carbon monoxide, nitrogen oxides, hydrogen cyanide, metals, ammonia, and radioactive compounds.

Disadvantages & Bad Effects of Smoking

Scientists and doctors know much more about the effects of smoking today than ever before. They know smoking causes immediate effects on the smoker's body: it constricts the airways of the lungs, increases the smoker's heart rate, and elevates the smoker's blood pressure. The carbon monoxide in tobacco smoke deprives the tissues of the smoker's body of much-needed oxygen. All of these are dangerous short-term effects. There are more serious long-term effects as well. Smoked tobacco in the forms of cigarettes, pipes, and cigars causes lung cancers, emphysema, and other respiratory diseases. In fact, smoking causes ninety percent of all lung cancer cases.

- Twenty percent of heavy smokers get the chronic lung disease called emphysema, which causes the narrowing and clogging of the airway passages in the lungs. This disease is seldom seen in nonsmokers.
- Smokers are also at least four times more likely to develop oral and laryngeal cancer than nonsmokers.
- Smoking contributes to heart disease. It increases the risk of stroke by nearly 40% among men and 60% among women.

Smoking is an addiction. Tobacco smoke contains nicotine, a drug that is addictive and can make it very hard, but not impossible, to quit. More than 400,000 deaths in the U.S. each year are from smoking-related illnesses. Smoking greatly increases your risks for lung cancer and many other cancers.

- Smoking harms not just the smoker, but also family members, coworkers and others who breathe the smoker's cigarette smoke, called secondhand smoke.
- Among infants up to 18 months of age, secondhand smoke is associated with as many as 300,000 cases of bronchitis and pneumonia each year. Secondhand smoke from a parent's cigarette increases a child's chances of middle ear problems, causes coughing and wheezing, and worsens asthma conditions.
- If both parents smoke, a teenager is more than twice as likely to smoke as a young person whose parents are both non-smokers. In households where only one parent smokes, young people are also more likely to start smoking.
- Pregnant women who smoke are more likely to deliver babies whose weights are too low for the babies' good health. If all women quit smoking during pregnancy, about 4,000 new babies would not die each year.

Why Quit Smoking?

- Quitting smoking makes a difference right away - you can taste and smell food better. Your breath smells better. Your cough goes away. This happens for men and women of all ages, even those who are older. It happens for healthy people as well as those who already have a disease or condition caused by smoking.
- Quitting smoking cuts the risk of lung cancer, many other cancers, heart disease, stroke, other lung diseases, and other respiratory illnesses. Ex-smokers have better health than current smokers: fewer days of illness, fewer health complaints, and less bronchitis and pneumonia.
- Quitting smoking saves money.
A pack-a-day smoker who pays $2 per pack can expect to save more than $700 per year. It appears that the price of cigarettes will continue to rise in coming years, as will the financial rewards of quitting.

- Quitting smoking may be hard, but it is not impossible; remember, where there is a will there is a way.

Check out the list below of the harmful effects of smoking.

Harmful Effects of Smoking

The harmful health effects of smoking cigarettes presented in the list below only begin to convey the long-term side effects of smoking. Quitting makes sense for many reasons, but simply put: smoking is bad for your health.

- Smoking kills. Every year hundreds of thousands of people around the world die from diseases caused by smoking cigarettes. One in two lifetime smokers will die from their habit. Half of these deaths will occur in middle age.
- Tobacco smoke also contributes to a number of cancers.
- The mixture of nicotine and carbon monoxide in each cigarette you smoke temporarily increases your heart rate and blood pressure, straining your heart and blood vessels. This can cause heart attacks and stroke. It slows your blood flow, cutting off oxygen to your feet and hands. Some smokers end up having their limbs amputated.
- Tar coats your lungs like soot in a chimney and causes cancer. A 20-a-day smoker breathes in up to a full cup (210 g) of tar in a year. Changing to low-tar cigarettes does not help, because smokers usually take deeper puffs and hold the smoke in for longer, dragging the tar deeper into their lungs.
- Carbon monoxide robs your muscles, brain and body tissue of oxygen, making your whole body, and especially your heart, work harder. Over time, your airways swell up and let less air into your lungs.
- Smoking causes disease and is a slow way to die. The strain smoking puts on the body often causes years of suffering. Emphysema is an illness that slowly rots your lungs. People with emphysema often get bronchitis again and again, and suffer lung and heart failure.
- Lung cancer from smoking is caused by the tar in tobacco smoke. Men who smoke are ten times more likely to die from lung cancer than non-smokers.
- Heart disease and strokes are also more common among smokers than non-smokers.
- Smoking causes fat deposits to narrow and block blood vessels, which leads to heart attack.
- Smoking causes around one in five deaths from heart disease.
- In younger people, three out of four deaths from heart disease are due to smoking.
- Cigarette smoking during pregnancy increases the risk of low birth weight, prematurity, spontaneous abortion, and perinatal mortality in humans, which has been referred to as the fetal tobacco syndrome.

How Smoking Affects the Body

There's hardly a part of the human body that's not affected by the chemicals in the cigarettes you smoke. Let's take a tour of your body to look at how smoking affects it.

As a smoker, you're at risk for cancer of the mouth. Tobacco smoke can also cause gum disease, tooth decay and bad breath. The teeth become unsightly and yellow. Smokers may experience frequent headaches. And lack of oxygen and narrowed blood vessels to the brain can lead to strokes.

Moving down to your chest, smoke passes through the bronchi, or breathing tubes. Hydrogen cyanide and other chemicals in the smoke attack the lining of the bronchi, inflaming them and causing that chronic smoker's cough. Because the bronchi are weakened, you're more likely to get bronchial infections.
Mucus secretion in your lungs is impaired, also leading to chronic coughing. Smokers are 10 times as likely to get lung cancer and emphysema as nonsmokers.

The effects of smoking on your heart are devastating. Nicotine raises blood pressure and makes the blood clot more easily. Carbon monoxide robs the blood of oxygen and leads to the development of cholesterol deposits on the artery walls. All of these effects add up to an increased risk of heart attack. In addition, the poor circulation resulting from cholesterol deposits can cause strokes, loss of circulation in fingers and toes, and impotence.

The digestive system is also affected. The tars in smoke can trigger cancer of the esophagus and throat. Smoking causes increased stomach acid secretion, leading to heartburn and ulcers. Smokers have higher rates of deadly pancreatic cancer. Many of the carcinogens from cigarettes are excreted in the urine, where their presence can cause bladder cancer, which is often fatal. High blood pressure from smoking can damage the kidneys.

The health effects of smoking have results we can measure:
- 40 percent of men who are heavy smokers will die before they reach retirement age, as compared to only 18 percent of nonsmokers.
- Women who smoke face an increased risk of cervical cancer, and pregnant women who smoke take a chance with the health of their unborn babies.

But the good news is that when you quit smoking your body begins to repair itself. Ten years after you quit, your body has repaired most of the damage smoking caused. Those who wait until cancer or emphysema has set in are not so lucky; these conditions are usually fatal. It's one more reason to take the big step and quit smoking now. Many smokers do not realize that there are actually substance abuse treatment programs designed to help them quit the bad habit of smoking.
confluently persistent data structure

Definition: A fully persistent data structure that allows meld or merge operations to combine two different versions. See also partially persistent data structure, fully persistent data structure.

James R. Driscoll, Daniel D. Sleator, and Robert E. Tarjan, "Fully Persistent Lists with Catenation", Journal of the ACM, 41(5):943-949, 1994.
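A toy sketch of the idea in Python: an immutable linked list where the meld operation (here, catenation) combines two existing versions into a new one while both inputs remain usable. This naive version copies O(n) nodes per meld; the Driscoll-Sleator-Tarjan structure cited above achieves far better bounds, so treat this only as an illustration of confluent persistence, not of their algorithm.

from typing import Optional

class Node:
    """One immutable cell of a persistent singly linked list."""
    __slots__ = ("value", "next")
    def __init__(self, value, next: "Optional[Node]" = None):
        self.value = value
        self.next = next

def cons(value, lst: "Optional[Node]") -> Node:
    # Prepending never mutates: it just wraps the old version.
    return Node(value, lst)

def meld(a: "Optional[Node]", b: "Optional[Node]") -> "Optional[Node]":
    """Catenate versions a and b into a new version; a and b survive intact."""
    if a is None:
        return b
    return Node(a.value, meld(a.next, b))

def to_list(lst: "Optional[Node]") -> list:
    out = []
    while lst is not None:
        out.append(lst.value)
        lst = lst.next
    return out

v1 = cons(1, cons(2, None))      # version 1: [1, 2]
v2 = cons(3, None)               # version 2: [3]
v3 = meld(v1, v2)                # version 3: [1, 2, 3], a confluent combination
assert to_list(v1) == [1, 2]     # all older versions remain fully usable
assert to_list(v2) == [3]
assert to_list(v3) == [1, 2, 3]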
Clark J.L. (University of Alabama), Roalson E.H. (Washington State University), Pritchard R.A. (University of Alabama), Coleman C.L. (University of Alabama), and 2 more authors. Systematic Botany, 2011.

Phinaea, in the currently accepted circumscription, is a genus in the flowering plant family Gesneriaceae with three widely disjunct species. These species are known from small populations in Mexico, northern South America, and the West Indies (Cuba and Haiti), respectively. Phinaea pulchella is one of the few members of the tribe Gloxinieae that occurs naturally in the West Indies and it is the only member of the tribe endemic to that region. It was rediscovered in Cuba in 2008, more than fifty years after it was last documented. Results from molecular data generated from the nrDNA ITS and cpDNA trnL-F regions strongly support that P. pulchella does not group with other Phinaea species and instead shares a recent common ancestor with Diastema vexans in a clade that is sister to Pearcea and Kohleria. The phylogenetic placement of P. pulchella suggests that radial floral symmetry and buzz-pollination are autapomorphic in this taxon. Our results strongly support convergence of radial symmetry and associated characters with buzz-pollination in the following taxa in the tribe Gloxinieae: Niphaea, Phinaea s. s., Phinaea pulchella, and Amalophyllon. New generic circumscriptions based on the results presented here are not suggested until more complete taxon sampling includes additional species currently recognized in Amalophyllon. © 2011 by the American Society of Plant Taxonomists.
The Indian Institute of Technology at Kanpur (IIT-K) unveiled a new supercomputer yesterday: India's fifth fastest and a premier resource for the educational institution. Ashish Dutta, head of the Computer Centre, revealed that IIT-K spent about Rs. 48 crore (US$8 million) acquiring the machine, which will be used for research, education and training purposes.

"Supercomputers are used in many areas of science and engineering research," said Dutta in reference to the new system. "HPC can simulate nuclear explosions, evolution of galaxies, effects of pacemaker redesign, genetics and genetic related diseases, medical drug discovery, aircraft flight, aerodynamics, etc." Among the projects already slated to benefit from the increased computational power are advanced aircraft design and scientific agri-forecasting.

The computer was manufactured by Hewlett-Packard and assembled and tested by a team of engineers headed by IIT-K's deputy director S.C. Shrivastava. With a LINPACK performance measured at 282 teraflops, the supercomputer earned a 130th ranking on the most recent edition of the TOP500 list, published in November 2013.

This marks the second supercomputer for IIT-Kanpur. The institution's previous HPC cluster, also an HP system, was ranked 369 on the June 2010 TOP500 list. "As load on this cluster grew, we decided to go in for a bigger machine," explained Institute dean Manindra Agarwal.
Machine learning is often viewed as a new technology, yet the concept of algorithms being applied to machines in order to learn and make predictions based on data has actually been a field of research for over 50 years. Arthur Samuel, an American pioneer in the field of computer gaming, defined machine learning in 1959 as a “field of study that gives computers the ability to learn without being explicitly programmed.” The concept of machine learning really began to flourish in the 1990s with the rise of the Internet and increasing availability of digital information. It’s no surprise that the hype around machine learning has only increased over the years, especially as data-driven algorithms are being applied more commercially. Machine learning is now viewed as a “silver bullet” for analyzing large data sets in order to predict certain outcomes. The IT sector in particular has taken a strong interest in machine learning-based technology, especially as IT environments become software-defined and are generating large volumes of data in real-time.
China is one of the few countries in the world whose population is projected to decrease, by about 39 million (from 1,343,239,923 in 2013 to 1,303,723,332), by 2050. Although this will go a long way toward reducing energy consumption across the country, China still faces considerable obstacles in putting a system in place that will reduce the high rates of pollution in major cities while meeting the demands of the world's second-largest economy.

China has been investing heavily in smart power grid construction as part of its 12th Five Year Plan, which will include rapid deployment of modern metrology, communications, information and control technologies with a $15.4 billion investment. Pike Research estimates the country will have cumulative smart grid revenue of $127 billion by 2020. The smart grid being put in place will integrate the latest technology and cover all voltage levels, so the system is able to function with optimal power, information and business flow by focusing on power generation, transmission, transformation, distribution, consumption and dispatching.

The research profiles 40 key industry players in China, classified by industry, with a comprehensive analysis of issues that are relevant to China's smart grid deployment program. This includes market drivers, business models and issues regarding technology that could hinder the country's ambitious deployment of this platform through 2020. The report answers key questions for investors and other interested parties, including:

- The business models being employed in China and the market drivers for smart grid development
- The size of the Chinese smart grid market through 2020
- Key players in the Chinese smart grid market
- The impact of regulatory policies in the country that could affect the smart grid project
- Key areas of research and development in smart grid advances in China
- Regional variations of smart grid development in China

The deployment of the smart grid project is divided into three phases: Stage 1 (2009-2010), the planning and trial phase; Stage 2 (2011-2015), the full construction phase; and Stage 3 (2016-2020), the leading and enhancing phase.

China is making great advances in every sector of the country's infrastructure, but there is a great imbalance in the country's energy supply. In order to move away from its coal dependency, which stands at 76 percent today, China has to start implementing other sources of energy to address pollution, dependency and reliability.
We are writing a program that takes one file as input. That input file contains multiple logical files; we have to do some processing and write each logical file out to a separate flat file (our output files). But we do not know in advance how many logical files one input file can contain, so how can we decide the number of output files in the program?

COBOL is a compiled language. It does not support dynamic allocation of files except in a very limited way -- you have to predefine the file to COBOL, and then at run time you can see if it exists. You cannot create a file on the fly in COBOL.
Data replication can play a key role in assuring data availability and recovery for networked storage. Replication in the form of data snapshots as well as full site-to-site mirroring can be achieved via three methods:
- Host Based. Software running on the host servers manages volume copying and replicates storage writes.
- Array Based. Replication is done by software running on a disk array controller.
- Network Based. Replication is managed on the network, either through software on a “smart” switch or via a network-attached appliance or server.
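To make the host-based approach concrete, here is a minimal, illustrative sketch. Real host-based replication hooks into the volume manager or filesystem driver rather than user-space file I/O, and the paths below are hypothetical stand-ins for block devices:

import os

# Host-based replication in miniature: the host duplicates every write to a
# local volume and to a replica before acknowledging it (synchronous
# mirroring). POSIX-flavored; the volume paths are illustrative only.
PRIMARY = "/mnt/primary/volume.img"    # hypothetical primary volume
REPLICA = "/mnt/replica/volume.img"    # hypothetical replica target

def replicated_write(offset: int, data: bytes) -> None:
    """Write `data` at `offset` on both copies; fsync before returning."""
    for path in (PRIMARY, REPLICA):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT)
        try:
            os.pwrite(fd, data, offset)
            os.fsync(fd)               # don't acknowledge until durable
        finally:
            os.close(fd)

replicated_write(0, b"block 0 contents")

Array-based and network-based replication move the same duplicate-and-acknowledge step off the host, into the array controller or a network device, which is why they offload host CPU at the cost of specialized hardware.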
When we discuss exploit prevention, we often talk about "targeted applications." This term refers to end-user applications that can be exploited by hackers for malicious purposes. A few requirements define these applications.

They receive external content: In order to deliver the exploit, the attacker must be able to provide the user with specially crafted content that contains the malicious exploit (a.k.a. weaponized content). This can be, for example, an HTML web page that contains a hidden Java applet, or an email attachment (like a Word document, Excel spreadsheet, or PDF document) that contains hidden code. This code executes when the application (i.e. the browser, Java, Word, Excel or Adobe Acrobat Reader) opens the content, and exploits vulnerabilities in these applications to download malware onto the endpoint. If an application does not receive external content, it is impossible for the attacker to deliver the weaponized content and the exploit.

They have vulnerabilities: Vulnerable applications provide the attacker an opportunity to develop an exploit. Some applications contain more vulnerabilities than others, and some vulnerabilities are easier to exploit. An application that has many exploitable vulnerabilities will be targeted more often. Zero-day vulnerabilities, which are vulnerabilities unknown to the public, are more likely to be successfully exploited because no patch is available. But zero-day vulnerabilities are not a requirement: interestingly, known application vulnerabilities are still exploited because many users do not apply security patches in a timely manner.

They are common applications: Common applications that can be found on most user endpoints are targeted more often than uncommon, specialized applications. The more common the application is, the wider the attack surface it provides.

There are exploits available: Exploit code must be developed in order to exploit the application vulnerability. If the vulnerability exists but no exploit code was developed, the risk remains theoretical.

Considering the listed requirements for targeted applications, it is not surprising that the most targeted end-user applications are the browsers, Java, Adobe Acrobat, Flash, Word, Excel, PowerPoint and Outlook. These are all common applications found on most user endpoints. They all receive external content that can be weaponized. They all contain vulnerabilities: most are known, but periodically we hear about zero-day vulnerabilities. And exploit kits that contain exploit code are widely available.

Take the RSA breach as an example. According to the blog post RSA published, the attacker used a spear-phishing campaign to deliver a weaponized attachment to employees. The spear-phishing email included a weaponized attachment - an Excel spreadsheet containing a zero-day exploit object. It exploited an Adobe Flash vulnerability (CVE-2011-0609) to silently install a customized remote access Trojan known as the Poison Ivy RAT. Both Excel and Adobe Flash are common targeted applications that can be found on most user endpoints.

Any advanced threat protection and exploit prevention technology must ensure that these targeted end-user applications are not successfully exploited. Since these applications are very different from each other, special controls may be required for each application.
For example, Java applications are vulnerable to both native exploits (which execute at the memory level) and applicative exploits (which execute in user space by breaking out of the JVM sandbox). Solutions that apply granular controls at the OS level to protect against native exploits would not be able to protect against applicative exploits.

Author: Dana Tamir, director of enterprise security at Trusteer.
Using "twisted vortex beams," American and Israeli scientists have boosted wireless transmission speeds to 2.5 terabits per second. As explained in ExtremeTech, the researchers "use orbital angular momentum (OAM) to cram much more data into s single stream." Current technology uses SAM (Spin Angular Momentum). These results came only a few months after Bo Thide at the Swedish Institute of Space Physics proved OAM is possible, and sent a signal over 442 meters (1450 feet). The 2.5 terabits per second test twisted eight 300Gbps light streams around each other, but only transmitted them for one meter (three feet). Alan Willner and others at USC, NASA's Jet Propulsion Laboratory, and Tel Aviv University believe successful implementation could boost wireless throughput by 1,000 times or more. It will bring costs down. 1000 times more bandwidth without the bidding wars for spectrum.dinkster on slashgear.com I want it now!The Divine Miss Z on slashgear.com Not so fast So yes, this is great news, just don't expect it to upgrade your home wifi network or LTE connection to said 2.5 terabits.Alexander ypema on extremetech.com until the rest of the computer hardware works that fast, it is nothing but an academic paper.SinOjos on slashgear.com I don't know much about the science, but anyone who claims their new breakthrough has an infinite capacity immediately sets off my skepticism.recursive on news.ycombinator.com Given that OAM is a modulation technique, which is a physical-layer tech, nothing above firmware will even know about it.ovi256 on news.ycombinator.com OAM has promises for broadcast as well, it needs a lot more work though.Naiem Y on extremetech.com So it'll be great for things like backhaul networks, satellite linkups etc, but it won't solve the problem of mobile wireless access (WiFi/LTE/etc).kaelleboo on news.ycombinator.com Now the US and Israel can spy on everyday citizens even faster!!Viral Videos on extremetech.com Tell us how fast your network connection was when you first got into computing: 300 baud, 10Mpbs, etc. Now read this: