With today’s high processor clock speeds and multiple cores, almost every home user has a few processor cycles to spare. The question is, what do we do with them and how do we put them to good use? Distributed volunteer computing uses software and hundreds to thousands of computers around the world to create a supercomputer via the Internet. Distributed computing, or grid computing, is certainly nothing new, with some of these projects having been around for quite a while, but both computers and the software have advanced quite a bit since you might have last thought about joining the grid to help further science. There also might be some projects listed here you didn’t know about that interest you, and if you know of a grid computing project that I didn’t list, certainly share that information in the forum. Just as the Kiva and Kickstarter article showed you some ways to put your idle money to good use, this article will show you how to put your idle computer to work for improving our world.

Folding@Home is a well-known distributed computing system that puts volunteered computing power to work to assist researchers investigating different diseases. By understanding protein folding and misfolding, the researchers may be able to understand and combat related diseases like Alzheimer’s, Huntington’s, Parkinson’s disease and some cancers. With distributed computing, protein folding simulations can be crunched in parts around the world and the results then sent back to Stanford. The coordination of downloading the data, processing it, and uploading the results is all controlled by a small agent that you download and install. Not only Windows computers and Macs but also Linux systems like Ubuntu can work with Folding@Home, and with quite some ease using Origami. Beyond computers, even the PlayStation 3 can make use of Folding@Home and contribute during its idle time.

The installation of the Folding@Home software is a breeze. For Windows computers it comes in two versions: one gives you a system tray icon, the other does not and is meant for console use. The console version is good to use if you don’t want the application to be a distraction, whereas the system tray version allows you to easily see the work units in progress and other settings. Partially for tracking and partially for motivation, Folding@Home allows you to monitor statistics about how many work units or points you have earned through spare processing power. You can see your individual statistics or team statistics. If you’d like to join Team 404 Tech Support, use team number: 180008

BOINC is also a well-known volunteer computing software package, but the software itself is probably not nearly as well known as some of the popular projects that utilize BOINC. It allows researchers to wrap their projects in the BOINC software to easily adapt them to an existing grid computing network. Unlike Folding@Home, BOINC can serve multiple projects. It will rotate through any projects you subscribe to at a custom-defined interval (default: every 60 minutes); a conceptual sketch of this round-robin behaviour appears at the end of this article. There are a wide variety of projects that you can contribute to through BOINC; some of them include:

- SETI@Home – Perhaps the most popular, SETI@Home uses grid computing to analyze radio telescope data in the Search for Extraterrestrial Intelligence.
- LHC@Home – The LHC@Home project computes data for particle accelerators like the Large Hadron Collider.
- Cosmology@Home – This project, run at the University of Illinois, seeks the model that best describes our Universe.
- World Community Grid – The World Community Grid is sponsored by IBM and uses BOINC for a number of different projects that all have humanitarian aims.

The BOINC software allows an easy view and a more advanced view (seen below) where you can configure the finer details regarding your preferences and view the progress of your computations. Since BOINC so easily works with other projects, you might be interested in Grid Republic, a system for BOINC that allows you to manage multiple BOINC projects with a single login.

If you’re looking into contributing to a grid computing network or wanting to make use of one, you should also check out The Condor Project, run by the University of Wisconsin–Madison, and TeraGrid, which is coordinated by the University of Chicago.

Chances are you know someone affected by one of the diseases being researched, or you have an interest in some of the areas of science being explored, so getting involved with grid computing makes sense. When you’re not using your computer, you might as well let some good come of it. You never know, the numbers you crunch today may be used in research tomorrow that saves your life some day. Look into the projects listed above or find others in some of the links to find a project you can proudly contribute to and know you’re making a difference in science. If you’d like to find out more about grid computing, check out Grid Cafe. If you know of other related resources that I failed to list, please share them in the forum topic.
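To picture the project rotation mentioned above, here is a purely conceptual Python sketch. It is not BOINC source code; the project names are simply the ones listed in this article, and the 60-minute slice mirrors the default interval noted earlier (the real client lets you change it).

import itertools

# Conceptual round-robin only -- not BOINC code. The real client downloads a
# work unit, computes during the slice, and uploads results before switching.
projects = ["SETI@Home", "LHC@Home", "Cosmology@Home", "World Community Grid"]
SWITCH_MINUTES = 60  # user-configurable interval in the real client

def crunch(project, minutes):
    # stand-in for fetching a work unit, crunching it, and reporting results
    print(f"crunching {project} for {minutes} minutes")

# cycle through subscribed projects, giving each one time slice per pass
for project in itertools.islice(itertools.cycle(projects), 8):
    crunch(project, SWITCH_MINUTES)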
Source: https://www.404techsupport.com/2010/02/putting-your-idle-computer-to-good-use/
Create a Customized Big Data Architecture and Implementation Plan Storyboard - Sample

Many data architects have no experience with big data and feel overwhelmed by the number of options available to them (including vendor options, storage options, etc.). They often have little to no comfort with new big data management technologies. There are a few key reasons big data architecture is different than traditional data architecture:

- Big data architecture starts with the data itself, taking a bottom-up approach. Decisions about data influence decisions about components that use data.
- Big data introduces new data sources such as social media content and streaming data.
- The enterprise data warehouse (EDW) becomes a source for big data.
- Master data management (MDM) is used as an index to content in big data about the people, places, and things the organization cares about.
- The variety of big data and unstructured data requires a new type of persistence.
- Analytics capabilities need to be expanded to handle the variety, volume, and velocity of big data.
- Big data applications leverage reporting and visualization in new ways to integrate information and generate new insights.

Before beginning to make technology decisions regarding the big data architecture, make sure a strategy is in place to document architecture principles and guidelines, the organization’s big data business pattern, and high-level functional and quality-of-service requirements.
Source: https://www.infotech.com/research/storyboard-create-a-customized-big-data-architecture-and-implementation-plan
Many of us know the danger of shoulder-surfers. Those are the people who lurk beside you, or peek over your shoulder, as you enter a password on your computer or tap the PIN code into an ATM. But did you know that the people stealing your iPhone or iPad passcode could be up to 150 feet away, and not even able to see your device's screen?

It sounds like science fiction, but researchers at the University of Massachusetts Lowell claim they can easily steal smartphone passcodes as they are typed in, even if they are well out of arm's reach. Xinwen Fu, a scientist who worked on the project, described to Wired how the research revealed that passcodes could be determined on iOS and Android devices even when the screen itself wasn't visible, by tracking and taking video of the users' finger taps.

Unsurprisingly, different hardware in the hands of the would-be hackers produced different results. Google Glass could detect a passcode with 83% accuracy, from a distance of three feet. A $72 Logitech webcam scored a more impressive 92% accuracy. Best of all was the iPhone 5's built-in camera, which accurately identified passcodes 100% of the time.

But before you smirk and admit you have to applaud Apple for the quality of their smartphone camera, here's something else to consider. A $700 high-definition Panasonic camcorder, almost 150 feet from its intended target, was able to extract the passcode from a victim's iPad with its optical zoom. Of course, despite its poorer performance, Google Glass might be the one to be most concerned about - as it can take video footage so surreptitiously. “Any camera works, but you can’t hold your iPhone over someone to do this,” says Fu. “Because Glass is on your head, it’s perfect for this kind of sneaky attack.”

How to Protect Yourself

My first recommendation is to stop using simple four-digit passcodes for your iOS devices. Even though the researchers claim that longer passwords (that aren't just limited to the numbers 0 to 9) don't appear to be dramatically harder to crack, they clearly provide a higher level of security. You can do this by going into Settings / Passcode (you may be asked for your existing passcode at this point), and toggling "Simple Passcode."

Secondly, if you're worried that someone might be snooping, obscure your keypresses as you unlock your iPhone, iPad or indeed Android device - just like you would shield the numeric pad as you enter your PIN at a cash machine.

Finally, don't let your iDevice out of your sight! Yes, it's bad if your passcode ends up in the wrong hands - but the bad guys can't actually do anything with it unless they manage to get physical access to your device.

Xinwen Fu and his fellow researchers will present a paper about their research at the Black Hat conference later this year, and release an Android app called PEK (Privacy Enhancing Keyboard) that randomizes the buttons on a lockscreen keyboard to make snooping via this method considerably more tricky.

To demonstrate a fix for that PIN privacy issue, the researchers have built an Android add-on that randomizes the layout of a phone or tablet’s lockscreen keyboard. They plan to release the software, dubbed Privacy Enhancing Keyboard or PEK, as an app in Google’s Play store and as an Android operating system update at the time of their Black Hat talk.

Will an app like PEK ever be released for iOS? It's hard to imagine it happening any time soon. Apple has tight control over many aspects of its operating system, making it impossible for third parties to mess with something as fundamental as the iPhone/iPad lock screen. Unless, of course, you've decided to jailbreak your iOS device - in which case you could have any number of other security issues to consider... 😉
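PEK itself is an Android keyboard, so its code is not shown here; as a language-neutral illustration of the idea it relies on, the following Python sketch shuffles a PIN pad so that tap positions no longer map to fixed digits. The function name is mine, not from the researchers' app.

import secrets

DIGITS = list("0123456789")

def randomized_pin_pad():
    """Return the ten keypad digits in a fresh random order.

    If the layout changes on every unlock, an observer who can film only
    where your finger lands cannot translate those positions into digits.
    """
    layout = DIGITS[:]
    secrets.SystemRandom().shuffle(layout)  # OS-backed randomness
    return layout

if __name__ == "__main__":
    for attempt in range(3):
        print(f"unlock attempt {attempt + 1}:", " ".join(randomized_pin_pad()))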
Source: https://www.intego.com/mac-security-blog/steal-iphone-passcode/
In the first part of this blog series on In Memory Databases (IMDBs) I talked about the definition of “memory” and found it surprisingly hard to pin down. There was no doubt that Dynamic Random Access Memory (DRAM), such as that found in most modern computers, fell into the category of memory, whilst disk clearly did not. The medium which caused the problem was NAND flash memory, such as that found in consumer devices like smart phones, tablets and USB sticks or enterprise storage like the flash memory arrays made by my employer Violin Memory. There is no doubt in my mind that flash memory is a type of memory – otherwise we would have to have a good think about the way it was named. My doubts are along different lines: if a database is running on flash memory, can it be described as an IMDB? After all, if the answer is yes then any database running on Violin Memory is an In Memory Database, right?

What Is An In Memory Database?

As always let’s start with the stupid questions. What does an IMDB do that a non-IMDB database does not do? If I install a regular Oracle database (for example) it will have a System Global Area (SGA) and a Program Global Area (PGA), both of which are areas set aside in volatile DRAM in order to contain cached copies of data blocks, SQL cursors and sorting or hashing areas. Surely that’s “in-memory” in anyone’s definition? So what is the difference between that and, for example, Oracle TimesTen or SAP HANA? Let’s see if the Oracle TimesTen documentation can help us:

“Oracle TimesTen In-Memory Database operates on databases that fit entirely in physical memory”

That’s a good start. So with an IMDB, the whole dataset fits entirely in physical memory. I’m going to take that sentence and call it the first of our fundamental statements about IMDBs:

IMDB Fundamental Requirement #1: In Memory Databases fit entirely in physical memory.

But if I go back to my Oracle database and ensure that all of the data fits into the buffer cache, surely that is now an In Memory Database? Maybe an IMDB is one which has no physical files? Of course that cannot be true, because memory is (or can be) volatile, so some sort of persistent layer is required if the data is to be retained in the event of a power loss. Just like a “normal” database, IMDBs still have to have datafiles and transaction logs located on persistent storage somewhere (both TimesTen and SAP HANA have checkpoint and transaction logs located on filesystems). So hold on, I’m getting dangerously close to the conclusion that an IMDB is simply a normal DB which cannot grow beyond the size of the chunk of memory it has been allocated. What’s the big deal, why would I want that over say a standard RDBMS?

Why is an In-Memory Database Fast?

Actually that question is not complete, but long questions do not make good section headers. The question really is: why is an In Memory Database faster than a standard database whose dataset is entirely located in memory? Back to our new friend the Oracle TimesTen documentation, with the perfectly-entitled section “Why is Oracle TimesTen In-Memory Database fast?”:

“Even when a disk-based RDBMS has been configured to hold all of its data in main memory, its performance is hobbled by assumptions of disk-based data residency. These assumptions cannot be easily reversed because they are hard-coded in processing logic, indexing schemes, and data access mechanisms.
TimesTen is designed with the knowledge that data resides in main memory and can take more direct routes to data, reducing the length of the code path and simplifying algorithms and structure.”

This is more like it. So an IMDB is faster than a non-IMDB because there is less code necessary to manipulate data. I can buy into that idea. Let’s call that the second fundamental statement about IMDBs:

IMDB Fundamental Requirement #2: In Memory Databases are fast because they do not have complex code paths for dealing with data located on storage.

I think this is probably a sufficient definition for an IMDB now. So next let’s have a look at the different implementations of “IMDBs” available today and the claims made by the vendors.

Is My Database An In Memory Database?

Any vendor can claim to have a database which runs in memory, but how many can claim that theirs is an In Memory Database? Let’s have a look at some candidates and subject them to analysis against our IMDB fundamentals.

1. Database Running in DRAM – e.g. SAP HANA

I have no experience of Oracle TimesTen but I have been working with SAP HANA recently, so I’m picking that as the example. In my opinion, HANA (or NewDB as it was previously known) is a very exciting database product – not especially because of the In Memory claims, but because it was written from the ground up in an effort to ignore previous assumptions of how an RDBMS should work. In contrast, alternative RDBMS such as Oracle, SQL Server and DB/2 have been around for decades and were designed with assumptions which may no longer be true – the obvious one being that storage runs at the speed of disk.

The HANA database runs entirely in DRAM on Intel x86 processors running SUSE Linux. It has a persistent layer on storage (using a filesystem) for checkpoint and transaction logs, but all data is stored in DRAM along with an additional allocation of memory for hashing, sorting and other work area stuff. There are no code paths intended to decide if a data block is in memory or on disk because all data is in memory. Does HANA meet our definition of an IMDB? Absolutely.

What are the challenges for databases running in DRAM? One of the main ones is scalability. If you impose a restriction that all data must be located in DRAM then the amount of DRAM available is clearly going to be important. Adding more DRAM to a server is far more intrusive than adding more storage, plus servers only have a limited number of locations on the system bus where additional memory can be attached. Price is important, because DRAM is far more expensive than storage media such as disk or flash. High Availability is also a key consideration, because data stored in memory will be lost when the power goes off. Since DRAM cannot be shared amongst servers in the same way as networked storage, any multiple-node high availability solution has to have some sort of cache coherence software in place, which increases the complexity and moves the IMDB away from the goal of IMDB Fundamental #2.

Going back to HANA, SAP have implemented the ability to scale up (adding more DRAM – despite Larry’s claims to the contrary, you can already buy a 100TB HANA database system from IBM) as well as to scale out by adding multiple nodes to form a cluster. It is going to be fascinating to see how the Oracle vs SAP HANA battle unfolds. At the moment 70% of SAP customers are running on Oracle – I would expect this number to fall significantly over the next few years.

2. Database Running on Flash Memory – e.g. on Violin Memory

Now this could be any database, from Oracle through SQL Server to PostgreSQL. It doesn’t have to be Violin Memory flash either, but this is my blog so I get to choose. The point is that we are talking about a database product which keeps data on storage as well as in memory, therefore requiring more complex code paths to locate and manage that data. The use of flash memory means that storage access times are many orders of magnitude faster than disk, resulting in exceptional performance. Take a look at recent server benchmark results and you will see that Cisco, Oracle, IBM, HP and VMware have all been using Violin Memory flash memory arrays to set new records. This is fast stuff. But does a (normal) database running on flash memory meet our fundamental requirements to make it an IMDB?

First there is the idea of whether it is “memory”. As we saw before this is not such a simple question to answer. Some of us (I’m looking at you Kevin) would argue that if you cannot use memory functions to access and manipulate it then it is not memory. Others might argue that flash is a type of memory accessed using storage protocols in order to gain the advantages that come with shared storage, such as redundancy, resilience and high availability. Luckily the whole question is irrelevant because of our second fundamental requirement, which is that the database software does not have complex code paths for dealing with blocks located on storage. Bingo.

So running an Oracle database on flash memory does not make it an In Memory Database, it just makes it a database which runs at the speed of flash memory. That’s no bad thing – the main idea behind the creation of IMDBs was to remove the bottlenecks created by disk, so running at the speed of flash is a massive enhancement (hence those benchmarks). But using our definitions above, Oracle on flash does not equal IMDB. On the other hand, running HANA or some other IMDB on flash memory clearly has some extra benefits because the checkpoint and transaction logs will be less of a bottleneck if they write data to flash than if they were writing to disk. So in summary, the use of flash is not the key issue, it’s the way the database software is written that makes the difference.

3. Database Accessing Remote DRAM and Flash Memory: Oracle Exadata X3

Why am I talking about Oracle Exadata now? Because at the recent Oracle OpenWorld a new version of Exadata was announced, with a new name: the Oracle Exadata X3 Database In-Memory Machine. Regular readers of my blog will know that I like to keep track of Oracle’s rebranding schemes to monitor how the Exadata product is being marketed, and this is yet another significant renaming of the product. According to the press release, “[Exadata] can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives”. Now that’s a brave claim, although to be fair Oracle is at least acknowledging that this is “Flash and RAM memory”. On the other hand, what’s this about “hundreds of Terabytes of compressed user data”? Here’s the slide from the announcement, with the important bit helpfully highlighted in red (by Oracle not me):

Also note the “26 Terabytes of DRAM and Flash in one Rack” line. Where is that DRAM and Flash? After all, each database server in an Exadata X3-2 has only 128GB DRAM (upgradeable to 256GB) and zero flash.
The answer is that it’s on the storage grid, with each storage cell (there are 14 in a full rack) containing 1.6TB flash and 64GB DRAM. But the database servers cannot directly address this as physical memory or block storage. It is remote memory, accessed over Infiniband with all the overhead of IPC, iDB, RDS and Infiniband ZDP. Does this make Exadata X3 an In Memory Database? I don’t see how it can.

The first of our fundamental requirements was that the database should fit entirely in memory. Exadata X3 does not meet this requirement, because data is still stored on disk. The DRAM and Flash in the storage cells are only levels of cache – at no point will you have your entire dataset contained only in the DRAM and Flash*, otherwise it would be pretty pointless paying for the 168 disks in a full rack – even more so because Oracle Exadata Storage Licenses are required on a per disk basis, so if you weren’t using those disks you’d feel pretty hard done by. [*see comments section below for corrections to this statement]

But let’s forget about that for a minute and turn our attention to the second fundamental requirement, which is that the database is fast because it does not have complex code paths designed to manage data located both in memory or on disk. The press release for Exadata X3 says:

“The Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks”

This is more complexity… more code paths to handle data, not less. Exadata is managing data based on its usage rate, moving it around in multiple different levels of memory and storage (local DRAM, remote DRAM, remote flash and remote disks). Most of this memory and storage is remote to the database processes representing the end users and thus it incurs network and communication overheads. What’s more, to compound that story, the slide up above is talking about compressed data, so that now has to be uncompressed before being made available to the end user, navigating additional code paths and incurring further overhead. If you then add the even more complicated code associated with Oracle RAC (my feelings on which can be found here) the result is a multi-layered nest of software complexity which stores data in many different places. Draw your own conclusions, but in my opinion Exadata X3 does not meet either of our requirements to be defined as an In Memory Database.

“In Memory” is a buzzword which can be used to describe a multitude of technologies, some of which fit the description better than others. Flash memory is a type of memory, but it is also still storage – whereas DRAM is memory accessed directly by the CPU. I’m perfectly happy calling flash memory a type of “memory”, even referring to it performing “at the speed of memory” as opposed to the speed of disk, but I cannot stretch to describing databases running on flash as “In Memory Databases”, because I believe that the only In Memory Databases are the ones which have been designed and written to be IMDBs from the ground up. Anything else is just marketing…
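To make Fundamental Requirement #2 concrete, here is a purely conceptual Python sketch of the two code paths being contrasted; it is not TimesTen, HANA, Oracle or Exadata code, and the dictionaries simply stand in for a buffer cache, storage blocks and an in-memory table.

# Disk-era engine: every read must ask "is the block cached?" and be ready to
# perform a physical read plus cache maintenance -- the extra code paths.
buffer_cache = {}      # block_id -> block (bounded, subject to eviction)
storage = {}           # simulated persistent blocks on disk or flash
in_memory_table = {}   # key -> row; in an IMDB the whole dataset lives here

def get_row_disk_based(key, block_id):
    block = buffer_cache.get(block_id)
    if block is None:                   # cache miss
        block = storage[block_id]       # physical read
        buffer_cache[block_id] = block  # cache bookkeeping / eviction logic
    return block[key]

def get_row_in_memory(key):
    # IMDB path: residency is guaranteed, so access is a direct lookup
    return in_memory_table[key]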
Source: https://flashdba.com/category/database/oracle-exadata/
A privileged account is a login ID on some system which has elevated security rights -- i.e., it is able to perform more tasks and/or access more data than a regular user can. Privileged accounts are also often shared accounts -- i.e., they do not belong to just one user, but rather are shared by multiple users, who are usually system administrators, database administrators, network managers and the like.

Privileged accounts, like their name suggests, are accounts designed to provide elevated access to systems and data. They are an integral part of every IT infrastructure and play a key role in a large variety of day-to-day operations, from the management of operating systems and application servers by administrators, to providing appropriate security contexts for running services, or securing communication between interdependent business applications. Because they exist in one form or another in virtually every server, workstation or appliance in the enterprise, the larger the environment, the more challenging it becomes to maintain an accurate repository of information related to these types of accounts. At the same time, due to their privileged nature, they are a prized target for attackers and one of the first items IT auditors focus on when assessing the security posture of an organization. It is therefore crucial for enterprises of any size to implement processes -- be they manual or automated -- for discovering and managing most if not all of their privileged accounts.

From a high-level perspective, privileged accounts fall into one of the following three categories.

Administrative accounts: These are accounts used to establish interactive login sessions to systems and applications. Often shared by multiple IT people, they provide the administrative access permissions required to install applications, apply patches, change configuration, manage users, retrieve log files, etc. Administrative accounts can be further divided based on their access scope. Local administrative accounts have a more limited scope, since they only provide administrative access to the local host or application on which they reside. Examples of local administrative accounts include members of the local administrators group on a Windows workstation, such as Administrator, the root account on Unix/Linux servers, the sa account on MSSQL Servers or SYSTEM on Oracle databases.

Application accounts: These accounts are used by one application to connect, identify and authenticate to another. Common examples include accounts used by a web application to connect to a database server or accounts used by a batch script to connect to a web application's API service. Because of their intended purpose, credentials for this type of account often lack adequate protection, making them a prime target for attackers.

Service accounts: These are non-personal privileged accounts, configured with either local or domain level access, whose purpose is to provide a security context in which to run unattended processes, such as scheduled tasks, services or "daemons."

Hitachi ID Privileged Access Manager secures privileged accounts across the IT landscape and at large scale.
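As an illustration of what an automated discovery process might cover, here is a minimal Python sketch that handles one narrow slice of the problem: listing local accounts with UID 0 (root-equivalent) on a Linux/UNIX host. It is an example of my own, not Hitachi ID code, and a real inventory would also have to cover sudoers entries, Windows local Administrators groups, database logins, and service or application accounts.

def find_uid0_accounts(passwd_path="/etc/passwd"):
    """Return the names of local accounts whose UID is 0."""
    privileged = []
    with open(passwd_path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            name, _pw, uid, *_rest = line.split(":")
            if uid == "0":
                privileged.append(name)
    return privileged

if __name__ == "__main__":
    print("UID 0 (root-equivalent) accounts:", find_uid0_accounts())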
Source: http://hitachi-id.com/resource/concepts/privileged-account.html
Touchless sensing is a technology that facilitates interaction with a device without the need for physical contact or touch, with the use of infrared sensors. Sensors that are capable of emitting and receiving waves in the form of heat are known as infrared sensors. The thermal radiation emitted by objects in the infrared spectrum is invisible to the human eye. These sensors are able to detect infrared radiation. There are two types of infrared sensors: proximity sensors and motion sensors.

The two major markets of touchless sensing are touchless sanitary equipment and touchless biometrics. The touchless sanitary equipment market consists of major products such as touchless faucets, soap dispensers, trash cans, hand dryers, paper towel dispensers, and flushes. The touchless biometric market consists mainly of face, iris, voice, and touchless fingerprint biometrics. Orbis (U.S.), EMX Inc. (U.S.), and Wilcoxon Research (U.S.) are the major players in the global IR-based touchless sensing market.

Along with market data, you can also customize MMM assessments that meet your company’s specific requirements. Customize this report on the global IR-based touchless sensing market to get comprehensive industry standards and a deep-dive analysis of the following parameters:

- Usage pattern of infrared sensors in major applications such as sanitary and biometric equipment
- Product matrix which gives a detailed comparison of the portfolio of the aforementioned applications of each company mapped by infrared sensor technology
- End-user adoption rate analysis of the application by segment
- Product-specific and region-specific market share analysis of top players
- Key developments of each top player that affect market dynamics
- Consumption pattern of infrared sensors in applications such as sanitary and biometric equipment based on major geographies
- Revenue and shipment data of infrared sensors segmented on the basis of applications and geographies

Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirement.
Source: http://www.micromarketmonitor.com/market-report/ir-based-touchless-sensing-reports-9010612726.html
Storage Growth in the Enterprise

Between 5 and 7 exabytes (10 to the 18th power bytes) of new information are created every year. If this somewhat conservative estimate is correct, then that means on average, more than 800 megabytes of information per year are generated for every person living on earth. This makes William Shakespeare, whose “Complete Works” amount to approximately 5 megabytes, seem like a slacker. Of course, these numbers have been twisted like so much political polling data (Shakespeare was a very prolific writer—as someone who’s slept through several of his plays, I should know) to illustrate a point: Information is exploding.

Just ask Rick Bauer, technology director of the Storage Networking Industry Association (SNIA). “Data is just exploding, and there’s no diminution of that growth pattern in sight,” he said. “I think we’ve had a real roller-coaster ride in terms of growth in the past decade. We’ve seen Moore’s Law in spades with both tape capacity and storage capacity on discs growing by logarithmic metrics. During the pre-Internet-bubble days, there was an almost breathless speculation about storage approaching infinity, bandwidth costs approaching zero and profitability approaching billions weekly, or something like that.”

In the mid- to late-1990s, information storage really came into its own in organizational IT with upsurges in Fibre-channel-based storage, storage area networking (SAN) and network-attached storage (NAS) deployments. “There was a real exuberance as network storage became more and more ready for primetime in the data center,” Bauer said. “In terms of adoption in the enterprise, it really began there: solving problems that the data center was having with the amount of storage proliferating and not being able to track that with direct attached (storage).”

Although IT has slowed down—even after an economic recovery of sorts—storage keeps on keeping on. Two areas are particularly strong in the storage space, Bauer said. The first is Internet Small Computer System Interface (iSCSI), a method of connecting storage facilities that is increasingly used by both big corporations and small- to medium-sized businesses. The other, storage virtualization, has helped organizations condense heterogeneous data into clear and easy-to-understand implements. Virtualization engines allow storage professionals to view data pools granularly, while managing and backing up aggregate information.

However impressive these new technologies are, though, they have to serve enterprises and their objectives. “Storage has got to be aligned to the business units, and the business units of the enterprise are the biggest drivers for the growth of storage,” Bauer said. “They’re the ones who are deploying the large databases, whether it be e-commerce, analytics or any of the other things fueling that growth. While I don’t have the exact figures, I would pretty much bet the mortgage payment that the growth of database or just the size of the databases themselves are really major pulls into the storage side of things. That’s how the storage gets justified: When you’re purchasing these large systems and arrays and, concomitantly, the kind of security and data protection systems you have to have as well, all of that is being driven by some business driver. It’s usually the manager of customer, business or critical infrastructure data.”

A key driver of growth in the storage industry has been regulations like Sarbanes-Oxley and the Health Insurance Portability and Accountability Act (HIPAA). Projected spending in 2005 on regulatory compliance in the United States by Fortune 500 companies is more than $16 billion, Bauer said. “As storage has become more centralized, it’s a bigger and bigger target, not just for recreational hacking, but from people who are really part of criminal enterprises and attacking that data for economic advantage. We’re also seeing that in the security space, with some of the more publicized mishandling and exposure of private customer data. That seems to be a real problem right now: tapes being lost, tapes being exposed, things not being encrypted.

“On the part of the government, in some ways the United States is playing catch-up here to very stringent and customer-focused regulations in Japan and the European Union,” he added. “I think we’re seeing a sea change on the part of Congress to take some of that legislation. Once corporate America has a financial penalty for accidents or really caring that much about the data, then I think we’ll have a lot more board members taking steps to make sure that data is secure.”

Yet storage professionals shouldn’t wait for the government to get its act together to secure data. They ought to be working together and promoting best practices, Bauer said. “People will ask the question, ‘How do we do it?’ Hopefully, certified storage professionals are going to be able to give a variety of different ways, from physical to digital. This is where storage professionals who are trained will help get professionals who are not as aware of all the ways to secure data in flight.”

Storage growth in enterprises will likely continue unabated in the future, due in large part to a couple of focal factors. “I think ubiquitous data—secure and whenever you want it—will be a driver for the next big things in our industry,” Bauer said. “We’re surrounded by mountains of data. Tools for finding what we need are becoming more significant as we multiply the data out, from the ability to intelligently sift through everything from the data warehouse to the individual desktop search for that CD I burned for my brother-in-law last weekend.”

Another significant source of storage growth will be data management on an increasingly global basis, he said. “By managing, I mean securing, mining, protecting and keeping data legal. You’re going to find companies really looking for good solutions to manage a global information store. In the next five years, there are going to be some exciting things around being able to move data easily in an optimized and secure fashion from point to point.”

Additionally, some of the most interesting developments in the storage sector within the next few years will take place not in the workplace, but rather the homes of individual consumers, Bauer predicted. “The resurgence of Apple is due to the ability to manage data in musical form. With video and music on-demand, figuring out how to manage security and digital rights, and yet be able to push data from the living room to the car, is going to mean software, hardware and storage companies working together to put out exciting products. I think we’re also going to see storage professionals managing those.”

–Brian Summerfield
Source: http://certmag.com/storage-growth-in-the-enterprise/
DataStage jobs are usually run to process data in batches, which are scheduled to run at specific intervals. When there is no specific schedule to follow, the DataStage operator can start the job manually through the DataStage and QualityStage Director client, or at the command line. If the job is run at the command line, you would most likely do it as follows.

dsjob -run -param input_file=/path/to/in_file -param output_file=/path/to/out_file dstage1 job1

A diagram representing this command is shown in Figure 1.

Figure 1. Invoking a DataStage job

In normal circumstances, the in_file and out_file are stored in a file system on the machine where DataStage runs. But, in Linux or UNIX, input and output can be piped in a series of commands. For example, when a program requires sorting, you can do the following.

command | sort | uniq > /path/to/out_file

In this case, Figure 2 shows the flow of data, where the output of one command becomes the input of the next, and the final output is landed in the file system.

Figure 2. UNIX typical pipe usage

Assuming the intermediate processes produce many millions of lines, you are potentially avoiding landing the intermediate files, thus saving space in the file system and the time to write those files. DataStage jobs do not take standard input through a pipe, like many programs or commands executed in UNIX. This article will describe a method and show the script to make that happen, as well as the practical uses of it. If the job should accept standard input and produce standard output like a regular UNIX command, then it would have to be called through a wrapper script. For example, you might send the output to a file as follows.

command1 | piped_ds_job.sh > /path/to/out_file

The diagram in Figure 3 shows you how the script should be structured.

Figure 3. Wrapper script for a DataStage job

The script will have to convert standard input into a named pipe, and also convert the output file of the DataStage job into standard output. In the next sections, you will learn how to accomplish this.

Developing the DataStage job

The DataStage job does not require any special treatment. For this example, you will create a job to sort a file which, if run normally, would take at least two parameters: input file and output file. However, the job could have more parameters if its function required it, but for this exercise it is better to keep it simple. The job is shown in Figure 4.

Figure 4. Simple sort DataStage job

The DSX for this job is available in the downloads section of this article. The job simply takes a text file, treats the full line as a single column, sorts it, and writes to the output file. Additionally, the job will have to allow multiple instance execution. It should take the input line with no separator and no quotes, and the output file will have the same characteristics.

Writing and using the wrapper script

The wrapper script will contain the code required to create temporary files for the named pipes, and create the command line for invoking the DataStage job (dsjob). Specifically, the script will have to perform the following.

- Direct the standard input (this is the output of the command which is piping to it) to a named pipe.
- Make the output of the job be written to another named pipe that will then be streamed to the standard output of the process, so the next command can read the output in a pipe as well.
- Invoke the DataStage job specifying the input file and output file parameters using the file names of the named pipes created earlier.
- Clean up the temporary files created for the named pipes.

Now begin the writing of the wrapper script. The first group of commands will prepare the environment, sourcing the dsenv file from the installation directory and setting some variables. You can use the process ID (pid) as the identifier to create a temporary file in a temporary directory, as shown in Listing 1.

Listing 1. Preparing the DataStage environment

#!/bin/bash
dshome=`cat /.dshome`
. $dshome/dsenv
export PATH=$PATH:$DSHOME/bin
pid=$$
fifodir=/data/datastage/tmp
infname=$fifodir/infname.$pid
outfname=$fifodir/outfname.$pid

You can proceed to do the FIFO creation and the dsjob execution. At this point, the job will wait until the pipe starts receiving input. The code warns you if the DataStage job execution has thrown an error, as shown in Listing 2.

Listing 2. Creating the named pipes and invoking the job

mkfifo $infname
mkfifo $outfname
dsjob -run -param inputFile=$infname \
      -param outputFile=$outfname dstage1 ds_sort.$pid 2> /dev/null &
if [ $? -ne 0 ]; then
   echo "error calling DataStage job."
   rm $infname
   rm $outfname
   exit 1
fi

At the end of the dsjob command, you see an ampersand, which is necessary since the job is waiting for the input named pipe to send data, but the data will be streamed a few lines ahead. The following code prepares the output to be sent to standard output via a simple cat command. As you can see, the cat command and the rm command are within parentheses, meaning that those two commands are invoked in a sub-shell that is sent to the background (specified by the ampersand at the end of the line), as shown in Listing 3.

Listing 3. Handling the input and output named pipes

(cat $outfname;rm $outfname)&
if [ -z $1 ]; then
   cat > $infname
else
   cat $1 > $infname
fi
rm $infname

The latter is necessary so that when the job is finished writing the output, the temporary named pipe file name is removed. The code that follows tests whether the script was called with a parameter as a file, or whether you are receiving the data from a pipe. After the input stream (file or pipe) is sent to the input named pipe, you finish and remove the file. You can name the script piped_ds_job.sh and execute it as command1|piped_ds_job.sh > /path/to/out_file. The fact that the script can receive the input via an anonymous pipe allows the uses shown in Listing 4.

Listing 4. Wrapper script uses

command1|piped_ds_job.sh|command2
zcat compressedfile.gz |piped_ds_job.sh > /path/to/out_file
zcat compressedfile.gz |ssh -l email@example.com piped_ds_job.sh| command2

The last sample, where you use SSH, assumes that you are executing from another machine, and therefore the DataStage job is somehow used as a service. This also would be a representative usage of how you can bypass the file transmission (and decompression in this case).

The mechanism described in this article allows for a more flexible DataStage job invocation at the command line and in shell scripting. The explained wrapper script can easily be customized to make it more general and flexible. The technique is a simple one that can be quickly implemented for current jobs and can convert them into services through remote execution via SSH. The benefits in avoiding landing data in a regular file are most notable when file sizes are in the order of dozens of millions of rows, but even if your data is not that large, the integration use case is very valuable.
Download: job_and_script.zip (10KB) – Sample DataStage job and wrapper script
Source: http://www.ibm.com/developerworks/data/library/techarticle/dm-1304datastagecommandline/index.html
Cell phones and PDAs have fused. Take the Nokia N810 as an example: it has a full keyboard, a high-resolution (800 x 480 pixel, 64K colors) screen, and a 400-MHz processor running Linux. They include applications for e-mail, calendar, music, Web browsing, maps, and image-handling. Their networking capabilities include IEEE 802.11b/g, Bluetooth, and USB connectivity. According to PC World, researchers at the Georgia Tech Information Security Center warned in October 2008 that “As Internet telephony and mobile computing handle more and more data, they will become more frequent targets of cyber crime.”

Computer scientists Wayne Jansen and Karen Scarfone of the Computer Security Division of the Information Technology Laboratory at the National Institute of Standards and Technology (NIST) have written a new (October 2008) Special Publication entitled “Guidelines on Cell Phone and PDA Security” (NIST SP800-124), which summarizes the security issues and provides recommendations for protecting sensitive information carried on these devices.

The Executive Summary presents a succinct overview, including a list of vulnerabilities leading to risks for corporate security from cell phones and PDAs:

• The devices are easily lost or stolen and few have effective access controls or encryption;
• They’re susceptible to infection by malware;
• They can receive spam;
• Wireless communications can be intercepted, remote activation of microphones can eavesdrop on meetings, and spyware can channel confidential information out of the organization;
• Location-tracking systems allow for inference;
• E-mail kept on servers as a convenience for cell-phone/PDA users may be vulnerable to server vulnerabilities.

The key recommendations, which are discussed at length in this 51-page document, include the following (quoting from the list on page ES-2 through ES-4):

1. Organizations should plan and address the security aspects of organization-issued cell phones and PDAs.

2. Organizations should employ appropriate security management practices and controls over handheld devices.
a. Organization-wide security policy for mobile handheld devices
b. Risk assessment and management
c. Security awareness and training
d. Configuration control and management
e. Certification and accreditation.

3. Organizations should ensure that handheld devices are deployed, configured, and managed to meet the organizations’ security requirements and objectives.
a. Apply available critical patches and upgrades to the operating system
b. Eliminate or disable unnecessary services and applications
c. Install and configure additional applications that are needed
d. Configure user authentication and access controls
e. Configure resource controls
f. Install and configure additional security controls that are required, including content encryption, remote content erasure, firewall, antivirus, intrusion detection, antispam, and virtual private network (VPN) software
g. Perform security testing.

4. Organizations should ensure an ongoing process of maintaining the security of handheld devices throughout their lifecycle.
a. Instruct users about procedures to follow and precautions to take, including the following items:
• Maintaining physical control of the device
• Reducing exposure of sensitive data
• Backing up data frequently
• Employing user authentication, content encryption, and other available security facilities
• Enabling non-cellular wireless interfaces only when needed
• Recognizing and avoiding actions that are questionable
• Reporting and deactivating compromised devices
• Minimizing functionality
• Employing additional software to prevent and detect attacks.
b. Enable, obtain, and analyze device log files for compliance
c. Establish and follow procedures for recovering from compromise
d. Test and apply critical patches and updates in a timely manner
e. Evaluate device security periodically.

After reading this document, it is clear to me that organizations should consider the benefits of issuing centrally selected and centrally controlled devices to their employees rather than allowing employees to download potentially sensitive information to a wide variety of uncontrolled mobile targets for industrial espionage. NIST SP800-124 will provide a useful framework for discussions and planning of reasonable security programs to prevent serious losses from unsecured cell phones and PDAs.
Source: http://www.networkworld.com/article/2263206/lan-wan/cell-phone-security.html
When you configure the Identity Server to use SSL (the HTTPS protocol), the browser must be configured to trust the CA that created the certificate for the Identity Server. If you use a well-known CA, the browser is usually already configured to trust certificates from the CA. If you use a less-known CA or the Access Manager CA to create the certificate, you need to import the public key of the trusted root certificate into the browsers to establish the trust. For the Access Manager CA, this certificate is called configCA. For instructions on how to export the public key of a trusted root certificate, see Viewing Trusted Root Details. To import a public key into the browser, access the certificate options, then follow the prompts: For Internet Explorer, click> > > > > . For Firefox, click> > > > > > .
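Before importing the exported trusted root, it can help to confirm that the file really is the CA you expect. Below is a small Python sketch using the third-party cryptography package; it assumes the trusted root was exported in DER form to a file named configCA.der (the filename is illustrative), and you would use load_pem_x509_certificate instead for a Base64/PEM export.

from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("configCA.der", "rb") as f:
    cert = x509.load_der_x509_certificate(f.read())

# Print the fields you would check before trusting the root in a browser.
print("Subject: ", cert.subject.rfc4514_string())
print("Issuer:  ", cert.issuer.rfc4514_string())
print("Expires: ", cert.not_valid_after)
print("SHA-256: ", cert.fingerprint(hashes.SHA256()).hex())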
Source: https://www.netiq.com/documentation/novellaccessmanager31/adminconsolehelp/data/bddd3q6.html
Over the years, the Big Data concept has evolved into an all-knowing, all-encompassing technological behemoth, devouring everything that lies before it. For the uninitiated, Big Data is simply a notion to express the very large amounts of data being generated in our age of the Internet in the form of text files, images, videos, web pages, mobile data and so on. The amount of data in Big Data is not defined, but is rather an abstract value that we regard as so huge that traditional data storage technologies are not considered efficient enough in collecting, storing, and analyzing it. Thus, it requires a specialized set of tools, software platforms, and databases that we collectively call ‘Big Data Technologies’. These technologies are often used by corporations and institutions of all shapes, sizes and forms, from super-market chains to space organizations, who try their hand at finding whatever hidden meaning they can dig out from the immense amounts of data generated by their businesses. One such opportunity in Big Data which we will be looking into is the automobile industry, and more specifically – Telematics.

Telematics comes from the French word “télématique” (which is a combination of two other French words, “telecommunications” and “informatique”), meaning – transfer of information over telecommunications. Gartner, a leading IT analyst, defines telematics as such – “the use of wireless devices and “black box” technologies to transmit data in real time back to an organization, typically used in the context of automobiles, whereby installed or after-factory boxes collect and transmit data on vehicle use, maintenance requirements or automotive servicing.”

In simpler terms, telematics is gathering and analysing vehicle data, i.e., data generated by devices installed in an automobile. The data is then wirelessly streamed, usually via existing cellular technologies, to a backend server, where data-mining and data-crunching takes place. The telematics device records all sorts of data from the vehicle, ranging from GPS coordinates to various vehicle metrics, such as engine performance, mileage, water temperature, speed, steering angles, acceleration, and braking frequencies, and also data from the vehicle's on-board entertainment console. With such complex and varied data in hand, one could get insight into a vehicle’s performance, driver behaviour, logistical patterns and so on and so forth. Such insights over potentially hundreds of thousands of vehicles over time will no doubt prove beneficial to a wide array of companies in insurance, logistics, car entertainment, policy making, safety auditing and of course, vehicle manufacturing.

Devices also vary between manufacturers. Most modern cars come with one built-in device. For others, a 3rd party device is fitted somewhere beneath the dashboard (usually under the steering column). A more recent advancement in this field is the introduction of Google’s Android Auto and Apple’s CarPlay, where a user is simply required to plug his/her smartphone into the car’s USB port, after which the phone doubles up as a telematics device. The obvious advantage of this is that it does away with the necessity of having to install another device, and also, a driver will now be able to use his/her phone using inbuilt car controls. A company considering jumping onto the telematics bandwagon may now only be required to develop an app for the concerned platform and have the user run it when behind the wheel.
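To make the preceding description more tangible before turning to the data volumes involved, here is a purely hypothetical example of what a single streamed reading could look like; the field names and values are illustrative, not any manufacturer's actual schema.

import json
import time

reading = {
    "vehicle_id": "KA-01-AB-1234",          # illustrative identifier
    "timestamp": int(time.time()),
    "gps": {"lat": 12.9716, "lon": 77.5946},
    "speed_kmh": 62.5,
    "engine": {"rpm": 2100, "coolant_temp_c": 88.0, "fuel_level_pct": 41.2},
    "steering_angle_deg": -4.5,
    "accel_g": 0.12,                         # longitudinal acceleration
    "brake_applied": False,
    "infotainment": {"source": "radio", "volume": 11},
    "odometer_km": 45210.7,
}

# A device would typically serialize readings like this (JSON, protobuf, etc.)
# and stream them over the cellular network to a backend collector.
print(json.dumps(reading))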
Telematics and Big Data

As one can imagine, the amount of vehicle data expected to be gathered for mining, crunching and analysis in telematics is huge, considering the vast array of metrics that will be streamed from thousands or even millions of vehicles over time. An IBM whitepaper reveals that the volume of data collected from 26 million connected cars in 2013 was more than 480 TB, and this number is expected to jump to 11.1 PB by 2020. It also states that some hybrid vehicles can generate up to 20 GB of data in just one hour. Further, InsuranceTech observed that within a year of enrolling 1,000 average drivers on a UBI (usage-based insurance) program, an insurance carrier must accommodate the transmission and storage of over 190 million data points. That is a staggering amount of data generated and streamed to a data centre at an astonishing rate, sometimes even on a per-second basis.

Getting the data into a database is one thing; analysing it is another matter altogether. Traditional data warehousing technologies are ill equipped to handle such large amounts of data at these frequencies: their architectures simply cannot cope with the volume, velocity, and variety of the data. Companies are therefore looking at Big Data alternatives in popular distributed computation frameworks such as Apache Hadoop and Apache Spark. These platforms, along with others in the Big Data ecosystem, are enabling developers and IT companies to process unprecedented amounts of data and uncover previously unseen insights, patterns, and hidden opportunities in telematics and similar enterprises.

Untapped Telematics Opportunity in India

The automobile industry in India is getting bigger. Although India stood at a lowly 160th position in a list of countries by the number of road motor vehicles per 1,000 inhabitants, with only 18 vehicles per 1,000 persons (2011), the situation seems to be improving. According to SIAM (Society of Indian Automobile Manufacturers), the industry produced a total of 23,366,246 vehicles, including passenger vehicles, commercial vehicles, three-wheelers and two-wheelers, in April-March 2015, as against 21,500,165 in April-March 2014, registering growth of 8.68 percent over the same period the previous year. The key takeaways from SIAM's domestic sales data are:
- Sales of Passenger Vehicles grew by 3.90 percent in April-March 2015 over the same period the previous year. Within the Passenger Vehicles segment, Passenger Cars and Utility Vehicles grew by 4.99 percent and 5.30 percent respectively.
- The overall Commercial Vehicles segment registered de-growth of (-) 2.83 percent in April-March 2015 compared to the same period the previous year, although Medium & Heavy Commercial Vehicles (M&HCVs) grew by 16.02 percent.

It goes without saying that the number of passenger and commercial vehicles, especially the medium and heavy commercial variants, is steadily increasing, and with no widely known telematics-related endeavours taking shape, this particular brand of technology is ripe for exploration in our country. The prospect is especially enticing in a new and vibrant market such as India, where a growing economy coupled with varied cultural and geographical landscapes presents those who dare to undertake the venture with a colourful challenge and an incredible bounty.

Telematics Use Cases

Let's take a look at a few examples of how this technology is used in various industries.
Insurance companies and their customers are perhaps the greatest beneficiaries of this technology. Currently, automobile insurance premiums are set on the basis of the type and cost of a vehicle, on the assumption that a certain class of vehicle will draw a certain premium based on the claims history of that category. The driver is not taken into consideration at all. With a telematics device on board, the insurance company can constantly and accurately monitor a driver's habits, rewarding safe driving with lower premiums and charging more for those who are aggressive in handling their vehicles. Another advantage crops up when dealing with insurance fraud: since the status of the automobile is monitored at all times, the company has all the details it needs to judge whether a claim is genuine. Some nifty insurers even provide their customers a near real-time interface to monitor their own driving habits, allowing them to adjust their driving patterns on the go.

Popular snack food manufacturer Frito-Lay made public its success story of using telematics to manage its army of delivery trucks. The company claims that a supervisor can now monitor a fleet of 500 vehicles at a time where previously he could monitor only 50, and reports reducing idling time by 50% as well as cutting insurance claims by anywhere between $1,000 and $2,000 per vehicle. DHL and many other delivery services are using the technology to plan transport routes, manage fuel economy and gather many more insights that let them run their fleets more efficiently.

Car entertainment presents a unique and interesting set of opportunities in telematics. Every modern vehicle now comes equipped with a standard music system and other media peripherals. By gathering user preferences over time, one can learn a behaviour pattern and subsequently suggest customized media for the driver's consumption. Using GPS data, a mechanism similar to an in-flight suggestion system could be developed for cars and vans, whereby the driver is alerted to popular tourist attractions, fuel depots, restaurants, ATMs and so on. Although such systems already exist, with telematics the suggestions can be personalized from the vehicle owner's perspective. Imagine an intelligent system that dynamically changes the music depending on the style of driving or the surrounding traffic: it could automatically play soulful or ambient music when it detects that the car is cruising on a highway, or switch to heavy death metal when it learns that the car has been stuck in the middle of a 4 km long traffic jam for the past few hours (thereby fueling the fury of the driver).

Similar to the insurance-fraud example above, investigators and safety auditors can now accurately study the data relating to the state of a vehicle that has met with an accident. They can learn the circumstances that led to a mishap and devise countermeasures to ensure it does not happen again, and they could advocate better safety features in a vehicle after studying the data at hand. By studying the data generated by their vehicles, manufacturers can also be better prepared to develop better variants and upgrades.
With a host of data varying across metrics, they can look to improve anything from fuel efficiency and engine performance to driver comfort, passenger ergonomics and environmentally friendly design. The data is also useful when a customer brings a vehicle in for scheduled service, as the service centre is now better positioned to run maintenance after studying how each component in the car has performed thus far.

Government agencies stand to gain knowledge of how the public drives on its roads and of which roads are the most congested. This can help them plan maintenance budgets appropriately or make informed decisions on alternate traffic routes. Further, they could increase the frequency of public transport along the routes that see the most traffic, encouraging its use. In emergencies, ambulances and related services can use the data to determine the quickest route to their destination with the fewest blockades.

These are just some of the more obvious use cases in telematics. The information gained from this technology is also being used in developing autonomous self-driving cars and in enabling manufacturers to bring fine-tuned, fuel-efficient, and environmentally friendly vehicles to market. Here at HCL, we have already begun helping our clients set up the backend infrastructure to regulate, monitor, clean, and understand the data provided by vehicles. As time progresses, we stand to understand more of what we have and to help our customers, and ourselves, drive further.
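To get a feel for the volumes quoted above (some hybrid vehicles generating up to 20 GB of data per hour, and roughly 190 million data points per year for 1,000 UBI drivers), the short script below converts those figures into rates that are easier to reason about when sizing an ingest pipeline. The fleet size and duty cycle are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope telematics data volumes, based on the figures cited above.

def fleet_volume_per_day(vehicles, hours_per_day, gb_per_hour=20):
    """Daily raw data volume (GB) for a fleet of heavily instrumented vehicles."""
    return vehicles * hours_per_day * gb_per_hour

def ubi_points_per_second(points_per_year=190_000_000):
    """Average ingest rate implied by the UBI figure for 1,000 enrolled drivers."""
    seconds_per_year = 365 * 24 * 3600
    return points_per_year / seconds_per_year

print(f"1,000 trucks, 8 h/day : {fleet_volume_per_day(1_000, 8):,} GB/day")
print(f"UBI ingest rate       : {ubi_points_per_second():.1f} data points/sec")
```

Even these crude estimates make it clear why a single-node warehouse struggles and why distributed frameworks are the usual starting point.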
<urn:uuid:26400c38-d210-4127-8ae5-779b7dbbb5c3>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/telematics-%E2%80%93-vehicle-driven-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944716
2,366
2.78125
3
IBM last week unveiled a prototype technology that researchers say could deliver massive amounts of bandwidth in an energy efficient way to devices ranging from supercomputers to cell phones. Just how massive? Think 8 terabits per second of information - using only 100 watts of power. The new technology puts optical chips and optical data buses in a single package with standard components. It’s aimed at applications that require sharing huge files, such as healthcare sites that need to exchange high-definition medical images, or people who want to swap large video files. Specifically, IBM's new optical network technology consists of optically-enabled circuit boards, or "Optocards," that use polymer optical waveguides to conduct light between transmitters and receivers. Each waveguide channel is smaller than a human hair, IBM says, and the Optocards are densely packed on the databus to create an integrated optical module, or "Optochip." IBM says its 10 Gigabit/channel databus is the first ever demonstration of an integrated module-to-module, 32-channel optical datalink on a printed circuit board. In addition to the optical data bus, IBM also developed a parallel optical transceiver module with 24 transmitters and 24 receivers that each operate at 12.5 Gigabit/sec. On the energy-efficiency front, the new optical technology could save significant amounts of power in supercomputers, IBM says. For a typical 100-meter long link, the optical technology consumes 100 times less power than today's electrical interconnects, and saves 10 times the power of current commercial optical modules, according to Big Blue. The prototype “green optical link” builds on related work unveiled by the same research team in 2007. “Last year we unveiled an optical transceiver chip-set that could transmit a high-definition movie in under a second using highly customized optical components and processes,” said IBM Researcher Clint Schow, part of the team that built the prototype, in a statement. “Just a year later, we've now connected those high speed chips through printed circuit boards with dense integrated optical ‘wiring.’ Now we have built an even faster transceiver and have moved the optical components away from custom devices to more standard parts procured from a volume manufacturer, taking an important step toward commercializing the technology.” Here are some of the potential uses IBM sees: * High-definition video sharing: “Web-serving sites that host videos could use the technology to access libraries with millions of high-definition movies and video clips in seconds, speeding up access for users,” IBM suggests. Vendors could also incorporate an optical data port in laptops, video recorders and handheld devices to store and display HD video content. * Patient care: Medical personnel could exchange big images, such as MRIs and heart scans, for analysis in real time. * Consumer electronics: Scaled-down versions of the optical interconnect technology could be used in consumer products, such as cell phones, to allow device displays to be moved freely without being impeded by electrical wires. * Supercomputing: Greater bandwidth for data interconnects will enhance massively parallel supercomputers used for applications such as molecular dynamics calculations, drug discovery and climate modeling. For more information, check out IBM’s Web site.
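To put the headline figure in perspective, the small script below estimates how long a few of the transfers mentioned above would take on a link running at the prototype's quoted 8 terabits per second. The file sizes are rough assumptions, and a real link would deliver less than its raw signalling rate.

```python
# How long would various transfers take on an 8 Tb/s optical link?

LINK_TBPS = 8                       # terabits per second (quoted prototype figure)
LINK_BYTES_PER_SEC = LINK_TBPS * 10**12 / 8

files = {
    "HD movie (25 GB)":          25 * 10**9,
    "MRI study (2 GB)":           2 * 10**9,
    "Video clip library (1 PB)": 10**15,
}

for name, size_bytes in files.items():
    seconds = size_bytes / LINK_BYTES_PER_SEC
    print(f"{name:28s} ~{seconds:.4g} s")
```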
<urn:uuid:c2db87ae-8653-447f-b673-568946fbee30>
CC-MAIN-2017-04
http://www.networkworld.com/article/2284009/lan-wan/prototype-file-sharing-technology-combines-fiber-optics--green-computing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906223
687
3.453125
3
This was posted by intelliGEORGE on Aug 14, 2007

1. Use solid core co-axial cable only, not stranded cable. The solid core must have a copper core with a copper shield.
2. Avoid high voltage cables. A good rule to follow: for every 100 volts there should be a separation of 1 ft between the video cable and the power cable.
3. While cabling, avoid areas such as electrical equipment or transmitter rooms, where EMI interference is expected. This can create all types of interference in the video picture; co-axial cable is very prone to EMI.
4. Minimize cable breaks - every extra connection in the cable can degrade the quality of the video signal. If a break is unavoidable, make sure the insulation is good; otherwise, over time the exposed cable can touch ground and cause ground loop currents. Such problems may be difficult or expensive to fix later.
5. Avoid sharp bends, which affect the cable impedance and cause picture reflection and distortion. This is especially true when routing all the cable into the CCTV monitor rack.
6. Poor BNC connections are a major cause of poor picture quality. BNC connectors should also be replaced every couple of years as part of the system maintenance program.
7. Use metal conduits for high security applications.
8. Use heavy-duty cable for outdoor applications, providing better protection against the elements.

Every device connected to the network that uses the TCP/IP protocol has a unique IP address (IP = Internet Protocol). In the current version, 4.0, the IP address is made up of four sets of numbers separated by dots. Example: 220.127.116.11. Each number set is one byte, or 8 bits, long; in other words, the IP address is 4 bytes or 32 bits long. Since each number set is 8 bits long, it covers a range of 0 to 255, so the maximum IP address is 255.255.255.255.

Parts of an IP address
The IP address has two parts: one part is the network address, while the second part gives the device address within the network. The IP address can be compared to a mailing address: the network address is like the zip code, and the device address is like the street or PO box address. How the network and device portions are split depends upon the class of the network.

Class A: The first number set specifies the network address, while the remaining three number sets specify the device. Address range: 001.xxx.xxx.xxx to 127.xxx.xxx.xxx. All the numbers in this class are already assigned, mostly to government or large commercial organizations.

Class B: The first two number sets indicate the network address, while the remaining two indicate the device. Address range: 128.001.xxx.xxx to 191.254.xxx.xxx. This class is assigned to universities, commercial organizations and Internet Service Providers (ISPs).

Class C: The first three number sets specify the network address, while the remaining number set indicates the device address. Address range: 192.000.001.xxx to 223.255.254.xxx. The maximum number of devices that can be attached to a single network address is 254, so this class is suitable for smaller networks.

Shortage of IP addresses
The numbers of networks and devices have exploded in the recent past, which means the supply of IP addresses is getting exhausted.

Temporary IP addresses: One solution to the IP address shortage is to assign a temporary address to a device as and when it connects to the Internet.
After the device disconnects, the same address can be given to another device; this is how ISPs operate.

Reducing the need for IP addresses: The router, which is the starting point of the network, has a fixed public IP address, and all the devices connected to the network share it. The router keeps a list of the addresses of the devices' network cards (NICs) and uses these addresses to communicate within the network.

IP version 6: To overcome the IP address shortage, a new version, IPv6, has been proposed. An IPv6 address is 128 bits long and is written as eight groups of hexadecimal digits separated by colons, rather than four dot-separated numbers. IPv6 also does away with the classful A, B, C, D and E addressing scheme used in IPv4.

Resolution is a key specification of any CCTV equipment. It describes the definition and clarity of a picture and is measured in number of lines for an analog signal and number of pixels for a digital signal. More lines or pixels = higher resolution = better picture quality. Camera resolution depends upon the number of pixels in the CCD chip: if a camera manufacturer can put more pixels into the same size CCD chip, that camera will have better resolution. In other words, resolution is directly proportional to the number of pixels in the CCD chip.

Any CCTV device has two types of resolution, vertical and horizontal.

Vertical resolution = number of horizontal lines or pixels. The vertical resolution cannot be greater than the number of TV scanning lines, which is 625 lines for PAL and 525 lines for NTSC. Because some lines are lost in the interlacing of fields, the maximum vertical resolution possible, as per the Kell factor, is 0.75 of the number of horizontal scanning lines:
For PAL: 625 x 0.75 ≈ 470 lines
For NTSC: 525 x 0.75 ≈ 394 lines
Vertical resolution is not a critical issue, as most camera manufacturers achieve this figure.

Horizontal resolution = number of vertical lines. Theoretically, horizontal resolution can be increased indefinitely, but two factors limit this:
• It may not be technologically possible to increase the number of pixels in a chip.
• As the number of pixels in the chip increases, the pixel size becomes smaller, which lowers sensitivity. There is a trade-off between resolution and sensitivity.
If only one resolution is shown in the data sheet, it is usually the horizontal resolution.

There are different methods to measure resolution:
1. Resolution chart: The camera is focused on a resolution chart, and the vertical and horizontal lines are counted on the monitor. The resolution is the point where the lines start merging and can no longer be separated. The merging point can be subjective, as different people perceive it differently, and the resolution of the monitor must be higher than that of the camera. This is not a problem with black-and-white monitors, but it is with many color monitors, which usually have a lower resolution than a color camera.
2. Bandwidth method: This is a more scientific method. The bandwidth of the video signal from the CCTV equipment is measured on an oscilloscope, and multiplying this bandwidth by 80 gives the resolution of the camera. Example: if the bandwidth is 5 MHz, the camera resolution will be 5 x 80 = 400 lines.

Human Eye and CCTV Technology
CCTV and video technology has been designed around the characteristics of the human eye. Starting with the camera, the human eye is the final recipient of the video signal.
This information will explain how some of the properties of the human eye have made an impact on CCTV and video technology.

Eye and Persistence of Image
The human eye and a camera are quite similar: both have a lens, an iris, and a light-sensitive imaging area. In a camera it is the CCD chip, while in the eye it is the retina. It is important to understand the persistence of the image in the human eye. Any image formed by the eye is retained on the retina for only about 40 ms (0.04 sec), after which it disappears; this is known as the persistence of the human eye. For continuity it is necessary that the next frame or image is formed within 40 ms; if not, the viewer sees discrete frames with no continuity. Converting this to frames per second, the human eye requires roughly 25 frames per second for a picture to look continuous.

This basic concept was used when the PAL and NTSC TV transmission standards were set up. NTSC has 30 frames per second and is used in the USA and Japan; PAL has 25 frames per second and is popular in Europe and Asia. On the surface, both standards meet the minimum requirement, but they have an underlying problem. In both PAL and NTSC systems, a certain amount of time passes between the end of one frame and the start of the next, and during this time a blanking pulse is added. Since the PAL and NTSC systems are only just above the minimum requirement, the human eye is able to perceive the blanking pulse between the frames, which is seen as screen flicker. To overcome this problem, each frame is divided into two fields, odd and even. This way the blanking pulse appears 50 times (PAL) or 60 times (NTSC) every second; at this frequency the viewer cannot perceive the blanking pulse, and screen flicker is avoided. This is not an issue with computer monitors, because their refresh rates are much higher (on the order of 100 times per second) and they do not use the PAL or NTSC standards. A point of interest: have you noticed the moving lines when a computer monitor is shown on television? This is because of the different refresh rates of a computer and a TV.

We discussed the concept of the persistence of the human eye and why we require at least 25 frames per second for moving images to look continuous. In part 2, we will deal with the sensitivity of the human eye, which in many ways determines the bandwidth of the digital signal and also the video compression techniques used.

It is known that the three basic colors of light are Red, Green and Blue (RGB); these colors are mixed and matched to form all the different colors. An analysis of the spectral response of the human eye reveals that it is most sensitive to green light, while the response to red and blue is limited. Based on this finding, the brightness of a picture (Y) can be defined by the following equation:
Y = 0.3R (Red) + 0.59G (Green) + 0.11B (Blue)
A composite video signal contains the brightness Y along with the color information for the basic colors RGB. When converting this analog signal into a digital signal, sampling the green signal is not necessary: only the brightness and the blue and red components are carried in the digital signal. This is also called the YUV signal (brightness plus two color components). Green is reconstructed using the above equation:
G = (Y - 0.3R - 0.11B) / 0.59
This helps reduce the size or bandwidth of the digital signal, as only three components are used instead of four.

The human eye has 120 million rods and 8 million cones; these are like pixels in the CCD chip.
A CCD chip has only about 350,000 pixels, which means a much lower picture quality compared to the human eye. Rods are sensitive to the brightness of an image, while cones handle color. Since the number of available cones is limited, the sensitivity of the human eye to colors in a moving picture is not very high. Because of this, it is possible to reduce the image bandwidth by sampling the color components at a lower rate than Y.

4:4:4 sampling: Each pixel in the chip is sampled for brightness (Y), color component 1 (U) and color component 2 (V). For a digital signal with 640 x 480 pixels, the bandwidth would be 307 KB (Y) + 307 KB (U) + 307 KB (V) = 921 KB.

4:2:2 sampling: Each pixel is sampled for Y (640 x 480), but only every alternate horizontal pixel is sampled (320 x 480) for the color components. The bandwidth in this case will be 307 KB (Y) + 154 KB (U) + 154 KB (V) = 615 KB. This color sampling process is used in JPEG and MPEG compression.

4:2:0 sampling: Each pixel is sampled for Y (640 x 480), but only every alternate horizontal and vertical pixel is sampled (320 x 240) for color. The bandwidth in this case will be 307 KB (Y) + 77 KB (U) + 77 KB (V) = 461 KB.

To further reduce the image size, different compression techniques such as JPEG, MPEG and Wavelet are used.

Lens Construction and Chromatic Aberration
To understand the construction of a lens, it is important to understand the behaviour of light. The speed of light travelling through air is roughly 299,460 km per second. When light passes at an angle from air into a denser medium, like glass or water, it slows down according to the index of refraction of the medium. The following table gives a comparison for various media:

Medium         Index of Refraction   Speed of Light
Air / Vacuum   1.0                   299,460 km/sec
Water          1.33                  225,158 km/sec
Glass          1.5                   199,640 km/sec
Diamond        2.42                  123,744 km/sec

Because the wave of propagation is still continuous, this slowing down bends the light beam when it enters the new medium, much as a bicycle changes direction when it rides from road onto sand. This basic principle is used in the construction of a lens. Convex and concave lenses are the basic lens types, making the light beam converge and diverge respectively; these basic types are mixed and matched to give a wide variety of lenses.

Chromatic Aberration of Light
When light is refracted through glass, a lens error called chromatic aberration occurs. Visible light is made up of different colors, and each color has a different frequency. These colors bend differently from one another when they pass through a single convex lens, resulting in a scattered focal point, meaning the picture will not be focused properly. To overcome this error, several different lens elements are grouped together, which can make the lens construction complex and therefore more expensive. Lenses are available that do not resolve the chromatic error accurately; they are not suitable for use with color cameras, as they will not give a sharp focus for all the colors in the picture. The same reasoning applies to the infrared frequency range: in many cases, when an infrared illuminator is used with a monochrome camera, the picture is not properly focused.

Lens Construction and Quality
Different glass groups in a lens: Many people are under the impression that a lens is made up of a single piece of glass. This is not true.
Besides the glass pieces required for correcting chromatic aberration, additional glass is also required:
• To focus the lens on objects at different distances. When the lens focus moves from one object to another at a different distance, or when it follows a moving object, the lens elements reposition, i.e. the focal point changes, so the picture always remains clear. This is not a problem for the human eye, which simply varies the thickness of its lens; camera optics have a long way to go to catch up with that technology!
• To achieve different focal lengths in a zoom lens. The glass pieces move in relation to each other to achieve different magnifications of the object, resulting in different focal lengths.

Factors affecting lens quality
During construction, the following factors determine the quality of the lens:
1. Number of glass pieces used: More glass pieces combined in a lens may help in reducing chromatic error, improving focusing and so on, but they increase light absorption, leaving less light for the camera. There is a trade-off between accuracy and absorption.
2. Absorption factor of the glass: Poor quality glass absorbs more light, again leaving less light for the camera. Glass with a lower absorption factor obviously costs more.
3. Coating and polishing: The quality of coating and polishing of the glass can improve lens quality. The precision and reliability of the mechanism that moves the glass pieces within the lens is also important; poor quality mechanisms can lead to inaccurate settings that may not be consistent.

Different Elements of a Zoom Lens
A zoom lens is a lens whose focal length can be changed continuously without losing focus. Magnification of a scene can be changed with a single lens, but every time the position shifts, the lens must be refocused. If two lenses are combined, it is possible to change the magnification without disturbing the focus. A zoom lens is made up of the following groups:
1. Focusing lens group: brings an object into focus. It moves irrespective of the zoom ratio or current focal length.
2. Variator lens group: changes the size or magnification of the image.
3. Compensator lens group: when moved in relation to the variator group, corrects the shift in focus.
Lens groups 1 to 3 are the core of the zoom lens and are called the zoom unit.
4. Relay lens: since the zoom unit does not converge light, the relay lens group is placed behind it to focus the object onto the CCD chip.
Zoom lens design requires extensive optical path tracing and a continuous, self-correcting performance evaluation effort. It also involves the use of powerful computers and specialist software.

Camera Sensitivity / Minimum Scene Illumination
Sensitivity, measured in lux, indicates the minimum light level required to get an acceptable video picture. There is a great deal of confusion in the CCTV industry over this specification, which has two definitions: "sensitivity at faceplate" and "minimum scene illumination".
• Sensitivity at faceplate indicates the minimum light required at the CCD chip to get an acceptable video picture. This looks good on paper but in reality gives no indication of the light required at the scene.
• Minimum scene illumination indicates the minimum light required at the scene to get an acceptable video picture. Though this is the correct way to show the specification, it depends upon a number of variables.
Usually the variables used in the data sheet are never the same as those in the field and therefore do not give a correct indication of the actual light required. For example, take a camera whose data sheet indicates a minimum scene illumination of 0.1 lux. Moonlight provides this light level, yet when the camera is installed under moonlight, the picture quality is poor or there is no picture at all. Why does this happen? Because the field variables are not the same as those used in the data sheet.

How does it work? Light falls on the subject; a certain percentage is absorbed and the rest is reflected towards the lens of the camera. Depending upon the iris opening of the lens, a certain portion of this light falls on the CCD chip, where it generates a charge that is converted into a voltage. The following variables should be shown in the data sheet when indicating the minimum scene illumination:
• F stop
• Usable video
• Shutter speed

Light from a light source falls on the subject and, depending upon the surface reflectivity, a certain portion is reflected back towards the camera. A few examples of surface reflectivity:
• snow = 90%
• grass = 40%
• brick = 25%
• black = 5%
Most camera manufacturers use an 89% or 75% (white surface) reflectance to define the minimum scene illumination. If the actual scene has the same reflectance as in the data sheet, there is no problem, but in most cases this is not true. If you are watching a black car, only 5% of the light is reflected, so at least 15 times more light is required at the scene to give the same amount of reflected light. To compensate for the mismatch, use the modification factor below.
Modification factor F1 = Rd / Ra
Rd = reflectance used in the data sheet
Ra = reflectance of the actual scene

The reflected light then travels towards the camera. The first device it meets is the lens, which has a certain iris opening. When specifying the minimum scene illumination, the data sheet usually assumes an F stop of F1.4 or F1.2. The F stop gives an indication of the iris opening of the lens: the larger the F stop value, the smaller the iris opening, and vice versa. If the lens used at the scene does not have the same iris opening, then the light required at the scene must be compensated for the mismatch.
Modification factor F2 = Fa² / Fd²
Fa = F-stop of the actual lens
Fd = F-stop of the lens used in the data sheet

After passing through the lens, the light reaches the CCD chip and generates a charge proportional to the light falling on each pixel. This charge is read out and converted into a video signal. Usable video is the minimum video signal specified in the camera data sheet to generate an acceptable picture on the monitor, usually measured as a percentage of full video. Example: 30% usable video = 30% of 0.7 volts (full video, or maximum video amplitude) ≈ 0.2 volts. The question is: is this acceptable? Unfortunately there is no standard definition of usable video in the industry, and most manufacturers do not state the definition they used when measuring the minimum scene illumination. It is recommended to find out the usable video percentage used by the manufacturer for the data-sheet figure. The minimum scene illumination should be modified if the usable video used in the data sheet is not acceptable.
Modification factor F3 = Ua / Ud
Ua = actual video required at the site as a % of full video
Ud = usable video % used by the manufacturer

AGC stands for Automatic Gain Control. As the light level drops, the AGC switches on and the video signal gets a boost; unfortunately, the noise present gets a boost as well. When light levels are high, the AGC switches off automatically, because the boost could overload the pixels, causing vertical streaking and similar artifacts. The data sheet should indicate whether the AGC is "On".
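Putting the three modification factors above together, the sketch below shows one way the data-sheet figure might be adjusted for a real site. The individual formulas follow the text (F1 = Rd/Ra, F2 = Fa²/Fd², F3 = Ua/Ud); multiplying them together, and the example numbers themselves, are assumptions made purely for illustration.

```python
# Adjust a data-sheet "minimum scene illumination" figure for site conditions.

def required_illumination(datasheet_lux,
                          datasheet_reflectance, actual_reflectance,    # F1 = Rd / Ra
                          datasheet_fstop, actual_fstop,                # F2 = Fa^2 / Fd^2
                          datasheet_usable_video, actual_usable_video): # F3 = Ua / Ud
    f1 = datasheet_reflectance / actual_reflectance
    f2 = (actual_fstop ** 2) / (datasheet_fstop ** 2)
    f3 = actual_usable_video / datasheet_usable_video
    return datasheet_lux * f1 * f2 * f3

# Data sheet: 0.1 lux at 75% reflectance, F1.2 lens, 30% usable video.
# Site: black car (5% reflectance), F2.0 lens, 50% usable video wanted.
lux_needed = required_illumination(0.1, 0.75, 0.05, 1.2, 2.0, 0.30, 0.50)
print(f"Light actually required at the scene: {lux_needed:.2f} lux")
```

With these made-up numbers the site needs roughly 7 lux, some seventy times the data-sheet figure, which is consistent with the warning above that the headline specification rarely survives contact with a real scene.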
<urn:uuid:7ac21b69-708a-4691-88dd-fbe62631b18e>
CC-MAIN-2017-04
http://www.cctvforum.com/viewtopic.php?f=12&t=12156&start=15
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906573
4,652
2.546875
3
Database Architect: Getting to the Top IT is an exciting and challenging field. Within IT, there are a number of disciplines that are fields in their own right, and database technology is one such discipline. Database architects hold the top jobs. But what is a database architect, and how does an IT professional become one? Database architects have overall responsibility for the successful implementation and operation of database systems that support critical corporate systems. This usually includes the company’s ERP systems, including order-processing, payroll, finance, inventory, etc. The performance, reliability and availability of these critical systems are directly linked to the characteristics of the underlying database management system (DBMS), its architecture, implementation and operation. The DBMS design trade-offs and technology decisions will have an impact over the entire life cycle of a company’s information systems. At a large corporation, these decisions can save or cost millions and affect a company’s operation for more than a decade. Because of the high stakes involved, a database architect is typically a highly skilled individual with years of experience and in-depth expertise in database technologies, programming, and system analysis and design. Database architects, however, need to be more than talented technologists — they need to have superior communications skills and a keen sense of technology decisions’ impact on the company’s bottom line. They’ll be challenged to communicate proposals to a spectrum of audiences from engineers to senior management. The Role of the Database Architect Database technology is at the core of business systems. The database architect must be versed in the technologies, architectures and implementation issues at the database layer. It is also critical to understand how this layer supports the functions that rely on it, and how the hardware layer below can affect it. Database architects are called on to set the strategy for their enterprise’s database systems and set standards for operations, programming and security. They are also likely to be involved with the design and specification, or at least the review, of the hardware and storage architectures supporting the database platform. Setting or evolving a company’s database strategy is a key component of this role — it involves analyzing the database requirements of the supported business systems and designing a database infrastructure to support the business objectives. Typically, database architects look at the core business systems and categorize them as Online Transaction Processing (OLTP) or Online Analytical Processing (OLAP) applications. The separation typically is done to prevent reporting programs from overloading the database system and slowing down the response of transaction-processing applications. In OLTP systems, you will need to consider scalability, uptime, transaction volume, response times and how to do maintenance upgrades and backups. You will need to have strong analytical skills and some experience in developing models that support your recommendations. On the OLAP side, you need to consider an architecture for reporting and data warehousing: How current does the data have to be? At what point can it be summarized? A database architect needs to consider how to clean the data. Often reporting systems combine data from multiple systems that might have different keys, similar attributes with different meanings, conflicting data, missing data, etc. 
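To make that cleaning problem concrete, here is a minimal sketch of reconciling records that identify the same customer under different keys in two source systems. The system names, fields, surrogate keys, and the "first non-null value wins" precedence rule are all illustrative assumptions rather than a prescribed design.

```python
# Fold records from several source systems into one row per warehouse key.

orders_system = {"C-1001": {"name": "Acme Corp", "region": "West"}}
billing_system = {"ACME01": {"name": "ACME Corporation", "region": None}}

# Cross-reference table mapping each (system, source key) to a surrogate warehouse key.
key_map = {("orders", "C-1001"): 1, ("billing", "ACME01"): 1}

def merge(sources):
    warehouse = {}
    for system, records in sources.items():
        for src_key, attrs in records.items():
            wh_key = key_map[(system, src_key)]
            row = warehouse.setdefault(wh_key, {})
            for field, value in attrs.items():
                # Simple precedence rule: the first non-null value wins.
                if value is not None and field not in row:
                    row[field] = value
    return warehouse

print(merge({"orders": orders_system, "billing": billing_system}))
```

In practice this logic usually lives in an ETL tool or a warehouse staging layer rather than in hand-written scripts, but the architect still has to decide the matching keys and precedence rules it encodes.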
Database architects play a key role in setting the standards for physical and logical database schema/table design and programming. They need to understand each of these disciplines to review and make decisions regarding the exceptions that inevitably arise. There will be myriad architectural issues to review or specify: Should clustering or replication be employed? How should database instances be allocated on specific machines? What impact does the architecture have on maintenance, upgrade and disaster-recovery issues? How will application development and testing be performed? Will you want to implement separate test and quality assurance environments? Different production, development, test and quality-assurance environments likely will have different architectures or hardware. You need to be able to extrapolate results in one environment to predict performance in the other, and part of the job is figuring out a systematic way for handling these issues. Many factors can affect the performance of database systems — programming standards employed for transactional and reporting systems can play a big part in the success or failure of systems. It is easy for poorly written programs and/or poorly optimized queries to significantly degrade the performance of an otherwise properly designed system. A programming background can help here, but even when lacking one, a database architect needs to understand the interaction of programming models and query optimization on production database systems. The physical design of database tables can affect performance and access to data. A company’s ERP system dictates most of the physical table design for OLTP systems, but there are always ancillary systems and reporting and data warehouse environments that require physical designs. Each database platform usually has its own Structured Query Language (SQL) extensions and/or programming languages and facilities such as Oracle’s PL/SQL and Triggers. The extent to which extensions, facilities and proprietary languages are used needs to be considered — this can have an impact on the ability to switch DBMS vendors in the future. Information security is a growing concern in all IT areas, and DBMSs are no exception: There are many security issues. Some of these, such as access and auditing, can be handled by the ERP system. You need to develop standards, however, to ensure the same access and audit requirements are implemented in custom-developed applications and modules. Probably the most problematic area is in securing reporting and data warehousing systems. These systems are built to provide people with access to data for analysis and reporting, but this can present a security problem, if the data are not secured to the same level as the production ERP system. Whether implementing an environment or managing one, you need to devise a strategy for ensuring the architecture is meeting the needs of the company. System performance and capacity trends need to be monitored to anticipate needed changes before they affect business system performance. The database architect likely will be called upon to make these recommendations and to lead troubleshooting efforts if odd system behaviors or slowdowns occur. Skills and Experience To be a strong candidate, you are going to need a combination of credentials, skills and experience that complement one another. 
A bachelor’s degree will likely be required, and a master’s degree definitely is going to be a plus, particularly if it has a component related to management information systems (MIS) or computer science (CS). If the latter, coursework or a thesis focused on database technology would stand out, and many master’s programs offer such options. Most companies try to standardize on a platform such as Oracle, IBM DB2 or Microsoft SQL Server. This is obviously going to affect the certifications pursued while working at a given job. Database certifications are vital to complementing your college work and demonstrating a commitment to continued learning and a desire to delve deep into the technology. Database architect is a senior position, and it’s assumed that a candidate has eight or more years of experience and has probably worked at several companies. It will help you if you have experience on more than one platform at more than one company. There are many approaches to database and syste
<urn:uuid:751c5e71-8673-449a-b636-a1c3ca070a56>
CC-MAIN-2017-04
http://certmag.com/database-architect-getting-to-the-top/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00089-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935195
1,462
2.796875
3
Here are some of the tools and sites with information about the Japanese language:
- Jim Breen's site has a wealth of info on kanji, including his dictionaries.
- Joyo 96 -- Understanding Written Japanese
- Nihongo Corner - easy classification of Japanese grammar elements.
- Grammar practice at Nagoya U.
- Human Japanese forum.
- More lessons
- Formal Japanese grammar elements from York U, Canada.
<urn:uuid:9d72c144-8534-4aea-8d65-b12d9db9a3ca>
CC-MAIN-2017-04
http://backerstreet.com/japan/jtools.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00483-ip-10-171-10-70.ec2.internal.warc.gz
en
0.817914
103
2.546875
3
Architecting for the Era of Tera

With television, the Internet, phone calls, and print media, our world is flooded with data. The quantity of data doubles every 24 months, the data equivalent of Moore's Law, and the amount of worldwide data has grown over 30% per year for the past several years. So much so that it is now measured by the exabyte: 10^18 bytes, or a billion gigabytes. As a result, we are faced with a new challenge: what should we do with all of this data? By itself, data is unusable. Distilling data into meaningful information requires finding and manipulating patterns and groupings within the data, and powerful computer systems are required to find these patterns in an ever-increasing pool. To ensure that today's computers are able to handle future applications, they will need to increase their processing capabilities at a rate faster than the growth of data.

Defining Tera-Era Workloads

To develop processor architectures capable of delivering tera-level computing, Intel classifies these processing capabilities, or workloads, into three fundamental types: recognition, mining, and synthesis, or RMS. The RMS model is a good metric for matching processor capabilities with a specific class of applications.

Recognition is the matching of patterns and models of interest to specific application requirements. Large data sets have thousands, even millions, of patterns, many of which are not relevant to a specific application. To extract significant patterns from the data, a rapid, intelligent pattern recognizer is essential. Mining, the second processing capability, is the use of intelligent methods to distill useful information and relationships from large amounts of data. This is most relevant when predicting behaviors based upon a collection of well-defined data models. Recognition and mining are closely dependent on and complementary to each other. Synthesis is the creation of large data sets or virtual worlds based upon the patterns or models of interest; it also refers to the creation of a summary or conclusion about the analyzed data. Synthesis is often performed in conjunction with recognition and mining.

The RMS model requires enormous algorithmic processing power as well as high I/O bandwidth to move massive quantities of data. Processor architects use different approaches to maximize performance for each workload in the RMS model, balancing and trading off factors including the number of transistors on the die, power requirements, and heat dissipation. These choices result in architectures optimized for specific classes of workload. The RMS workloads in tera-level computing require several similar, application-independent capabilities:
- Teraflops of processing power.
- High I/O bandwidth.
- Efficient execution and/or adaptation to a specific type of workload.
With tera-levels of performance, it becomes possible to bring these workloads together on one architectural platform, using common kernels. Tera-level computing platforms will use a single optimal architecture for all RMS workloads.

Enabling the Era of Tera

The computing architecture required for tera-level applications must deliver 10-to-100 times the capabilities of today's platforms. While the rate of frequency scaling is slowing, other techniques are increasing the rate of overall performance. Moving forward, performance will be derived from new architectural capabilities such as multi- and many-core architectures as well as frequency scaling.
We can expect the rate of performance improvement to rise faster than we have seen historically with frequency scaling alone. Recognizing the need to increase today's platform capabilities, Intel is developing a billion-transistor processor. Yet processor improvements in clock speed and transistor count alone will not meet the requirements of tera-level computing over the next 25 years. A number of the less-friendly laws of physics are more limiting than Moore's Law. As clock frequencies increase and transistor size decreases, obstacles are developing in key areas:
- Power: Power density is increasing so quickly that tens of thousands of watts per square centimeter (W/cm²) would be needed to keep scaling the performance of the Pentium processor architecture over the next several years; such a power density would be hotter than the surface of the sun.
- Memory Latency: Memory speeds have not increased as quickly as logic speeds. A memory access on i486 processors required 6-to-8 clocks; today's Pentium processors require 224 clocks, roughly a 28x increase. These wasted clock cycles can negate the benefits of processor frequency increases.
- RC Delay: Resistance-capacitance (RC) delays on chips have become increasingly challenging. As feature size decreases, the delay due to RC increases. At 65 nm (nanometer) and smaller nodes, the RC delay over a one-millimeter wire is actually greater than a clock cycle. Intel chips are typically in the 10-to-12 millimeter range, so some signals require 15 clock cycles to travel from one corner of the die to the other, again negating many of the benefits of frequency gains.
- Scalar Performance: Experiments with frequency increases on various architectures such as superscalar, CISC (complex instruction set computing), and RISC (reduced instruction set computing) are not encouraging. As frequency increases, instructions per clock actually trend down, illustrating the limitations of concurrency at the instruction level.
Performance improvements must come primarily from architectural innovations, as monolithic architectures have reached their practical limits.

The New Architectural Paradigm

In the past, mini- and mainframe computers provided many of the architectural ideas used in personal computers today. Now, we are examining other architectures for ways to meet tera-level challenges. High-performance computers (HPC) deliver teraflop performance at great cost and for very limited niche markets; the industry challenge is to make this level of processing available on platforms as accessible as today's PC. The key concept from high-performance computing is to use multiple levels of concurrency and execution units. Instead of a single execution unit, four, eight, 64, or in some cases hundreds of execution units in a multi-core architecture is the only way to achieve tera-level computing capabilities. Multi-core architectures localize the implementations in each core and manage relationships with the Nth level, that is, the second and third levels of cache. This creates enormous challenges in platform design. Multiple cores and multiple levels of cache scale processor performance dramatically, but memory latency, RC interconnect delay, and power issues still remain, so platform-level innovations are needed. This architecture will include changes from the circuit through the microprocessor(s), platform, and entire software stack. The SPECint experiments show that microprocessor-level concurrency alone is not sufficient.
A massively multi-core architecture with multiple threads of execution on each core with minimal memory latency, RC interconnect delay, and controlled thermal activity is needed to deliver teraflop performance. The three attributes that will define this new architecture are scalability, adaptability, and programmability. Scalability is the ability to exploit multiple levels of concurrency based on the resources available and to increase platform performance to meet increasing demands of the RMS workloads. There are two ways to scale performance. Historically, the industry has “scaled up,” by increasing the capabilities and speed of single processing cores. An example of "scaling up" can be found in the helper thread technology. Helper threads implement a form of user-level switch-on-event multithreading on a conventional processor without requiring explicit OS or hardware support. Helper threads improve single thread performance by performing judicious data prefetching when the main thread waits for service of a cache miss. Another method of scaling performance is “Scaling out;” adding multiple cores and threads of execution to increase performance. The best-known examples of “scaling out” architectures are today’s high performance computers which have hundreds, if not thousands, of cores. In today's platforms, processors are often idle. For server workloads, processors can spend almost half of their total execution time waiting for memory accesses. Therefore the challenge and opportunity is to use this waiting time in an effective way. Experiments in Intel's labs showed that helper threads can eliminate up to 30% of cache misses and improve performance of memory intensive workloads on the order of 10%-to-15%. Adaptability is also an attribute of this new architectural paradigm. An adaptable platform proactively adjusts to workload and application requirements. The platform must be adaptable to any type of RMS workload. Multi-core architectures not only provide scalability but also the foundation for adaptability. The following adaptability example uses special purpose processing cores called processing elements to adapt to 802.11a/b/g, Bluetooth, and GPRS. Each multiple processing element in the graphic is considered to be a processing core. These processing elements can each be assigned a specific radio algorithm function, such as programmable logic array (PLA) circuits, Viterbi decoders, memory space, and other appropriate functions. Each processing element may be a digital signal processor (DSP) or an application-specific integrated circuit (ASIC). The platform can be dynamically configured to operate for a workload like 802.11b by meshing a few processing elements. In another configuration, the platform can be reconfigured to support GPRS or 802.11g or Bluetooth by interconnecting different sets of processing elements. This type of architecture can support multiple workloads like 802.11a, GPRS, and Bluetooth simultaneously. This is the power of the multi-core micro architecture. The challenge of bringing high performance computing to the desktop has been in defining parallelizable applications and creating software development environments that understand the underlying architecture. A programmable system will communicate workload characteristics to the hardware while architectural characteristics will be communicated back up to the applications. Intel has started down this path with compilers such as those developed for Itanium processors. 
Much more must be done to take advantage of the new architectural features in these computing platforms. We are on the cusp of another leap in computing capabilities that will dramatically impact virtually everything in our lives. With the immense amount of data generated by corporate networks, it is necessary to scale computing to match the increasing level. The solution to the challenge of tera will herald changes perhaps as dramatic as those brought about by the printing press, the automobile, and the Internet. R.M. Ramanathan has been a technology evangelist and a Marketing Manager in Intel. In his 10 years with Intel he has held various positions, from engineering to marketing and management. Before coming to Intel, Ramanathan was director of engineering for a multinational company in India. Francis Bruening has been with Intel for eight years, and has bachelors in computer science from Cleveland State University. He has been a SW developer and manger, and is currently a technology marketing manager, promoting and developing the ecosystems necessary for industry adoption of new technologies.
<urn:uuid:2d58e739-f44e-4a84-b204-425f3c610e94>
CC-MAIN-2017-04
http://www.cioupdate.com/print/reports/article.php/3517306/Architecting-for-the-Era-of-Tera.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00209-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910642
2,303
2.75
3
Volcanic Eruption Could Dash Iceland's Data Center Ambitions

An eruption from the Eyjafjallajokull volcano could ground Iceland's data center ambitions. Unfortunately, a different sort of heat has Iceland in the news this week. The country's abundant geothermal power comes from the fact that it sits atop a volcanic rift, and an eruption from the Eyjafjallajokull volcano has grounded air traffic across much of Europe. Data Center Knowledge reports that while such eruptions and earthquakes are common and usually mild, they rarely occur in areas where data centers are built. One has to wonder whether this event gives pause to companies considering Iceland as a data center location, and whether growing interest in the region raises the odds that facilities end up being built closer to areas where a similar event could do major damage.
<urn:uuid:e81d7894-4b0c-43b8-abf9-5354d1b78547>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/datacenter/datacenter-blog/volcanic-eruption-could-dash-icelands-data-center-ambitions
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950543
170
2.8125
3
Fiber optics is a hot trend in today's world of communication networks: a technology that uses glass (or plastic) threads (fibers) to transmit large amounts of data. In recent years it has become apparent that fiber optics is replacing copper wire as the preferred medium for communication signal transmission. Fiber spans long distances more easily and provides the backbones for many communication networks. To understand why fiber is gradually replacing copper networks, we should first know the pros and cons of copper.

Pros and cons of copper
Telephone companies have long used copper lines, while the cable television companies have relied on coaxial cable for TV, Internet, and VoIP (Voice over Internet Protocol) telephone service. Both industries now make increased use of fiber, hybrid fiber-copper, or hybrid fiber-coaxial lines. The benefit of the old copper service is that, unlike fiber and hybrid-fiber lines, it carries not only the voice and data signals but also the power to operate a standard, non-cordless telephone. The phone company itself provides that power, which often keeps the phones working even when a problem at the power company knocks out electric service. But traditional copper telephone lines can't handle the large amount of data required for television and high-speed Internet services, especially over long distances. Advanced techniques can enhance copper's capabilities, but most companies are installing fiber or hybrid fiber lines, in some cases alongside the copper ones.

We've found that telephone and cable company terms and conditions typically warn customers that these systems can't maintain phone service indefinitely during a power failure, if at all. The problem is greatest with cable company VoIP services and with systems that use fiber lines all the way to the home. It can be less of a concern with hybrid copper-fiber systems, in which copper lines carry the signal the last mile or so to the home; in those systems, carriers can maintain phone power by installing batteries and generators at the point where the fiber meets the copper.

Why Use Fiber Optics? Is Copper Really Cheaper Than Fiber?
Telcos use fiber to connect all their central offices and long-distance switches because it has thousands of times the bandwidth of copper wire and can carry signals hundreds of times further before needing a repeater. CATV companies use fiber because it gives them greater reliability and the opportunity to offer new services, such as phone service and Internet connections. Both telcos and CATV operators use fiber for economic reasons, but their cost justification requires adopting new network architectures to take advantage of fiber's strengths.

When it comes to cost, fiber optics is always assumed to be more expensive than copper cabling. Whatever you look at – cable, fiber termination kits or networking electronics – fiber costs more. So isn't it obvious that a fiber optic network is more expensive than copper? Maybe not! Looking only at cabling component costs is not a good way to analyze total network costs; a properly designed premises cabling network built on fiber can actually be less expensive.
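As a toy illustration of why component prices alone can mislead, the sketch below compares two hypothetical premises networks: a conventional copper design with several telecom rooms and a centralized fiber design with one. Every price in it is a made-up placeholder, not a market figure; the point is only that the total, not the per-part cost, decides which design is cheaper.

```python
# Toy total-cost model: per-drop component costs plus equipment-room costs.

def network_cost(drops, cable_per_drop, termination_per_drop,
                 switch_port_cost, telecom_rooms, room_cost):
    return (drops * (cable_per_drop + termination_per_drop + switch_port_cost)
            + telecom_rooms * room_cost)

copper = network_cost(drops=200, cable_per_drop=55, termination_per_drop=10,
                      switch_port_cost=40, telecom_rooms=4, room_cost=12_000)
fiber = network_cost(drops=200, cable_per_drop=80, termination_per_drop=25,
                     switch_port_cost=70, telecom_rooms=1, room_cost=12_000)

print(f"Copper total: ${copper:,}")
print(f"Fiber total : ${fiber:,}")
```

In this contrived example the fiber design wins despite costing more per drop, because its longer reach eliminates most of the intermediate telecom rooms; with different assumptions the result could easily flip, which is exactly why the analysis has to be done per project.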
DELL EMC Glossary

Public clouds have the private cloud capabilities of automation and self-service, but run in multi-tenant off-premises environments.

Why consider a public cloud?

Small to large corporate and government entities use services provided in the public cloud to address a variety of application needs such as CRM, email, and collaboration. Because transparency and control are low, organizations often limit use of the public cloud to non-mission-critical applications and non-sensitive information. Public cloud services are also used for servers, storage, and backup infrastructure, as well as application development. Leveraging the advantages of cloud computing, the public cloud enables organizations to access applications quickly, offload the cost of supporting infrastructure, and free limited IT staff for more valuable activities. It also enables IT departments to rapidly deploy applications and scale application environments quickly during periods of peak demand. The result is greater business agility and efficiency. Similarly, consumers use public cloud services to simplify software use; store, share, and protect content; and enable access from any web-connected device.

How does a public cloud work?

Public cloud infrastructure service providers, including ISVs and various types of third-party infrastructure and platform providers, use cloud computing. Built on a foundation of virtualization, IT resources are owned and managed by the service provider, pooled and shared across customers, and accessed via the Internet or a dedicated network connection. A variation is the community cloud, a multi-company, members-only version of the public cloud centered on a common interest. Resources are made available to customers on demand through a self-service online catalogue of pre-defined configurations. Resource usage is tracked and billed based on a service arrangement, such as by consumption or subscription.

What are the benefits of a public cloud?

- Provisioning of infrastructure and services is simple.
- You can convert capital expenditures to operating expenditures.
- Public cloud users pay only for what is used, though this is now a common feature in private cloud deployments; a minimal consumption-billing sketch follows below.
- No administration of the underlying physical infrastructure is required.
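To make the consumption-based billing model concrete, here is a minimal sketch in Python. The hourly rates and usage records are invented for illustration; real public cloud providers each have their own pricing dimensions and billing APIs.

```python
# Minimal sketch of consumption-based billing: each record is (resource, units consumed).
# The rates below are hypothetical examples, not any provider's real pricing.
HOURLY_RATES = {
    "vm.small": 0.05,      # dollars per instance-hour
    "vm.large": 0.20,      # dollars per instance-hour
    "storage.gb": 0.0001,  # dollars per GB-hour
}

def monthly_bill(usage_records):
    """Sum the cost across usage records of the form (resource, units_consumed)."""
    total = 0.0
    for resource, units in usage_records:
        total += HOURLY_RATES[resource] * units
    return round(total, 2)

# One small VM all month, a large VM for 40 hours, and 500 GB stored all month.
usage = [("vm.small", 720), ("vm.large", 40), ("storage.gb", 500 * 720)]
print(monthly_bill(usage))  # the customer pays only for what was actually consumed
```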
Providing effective feedback is one of the critical skills that a leader must consistently demonstrate to engage others towards achieving a common goal. Feedback provides team members with information about what they are doing well and how to do it even better. It provides the momentum that people need to keep working toward achieving their goals and to contribute to the organization's goals. While it's important for effective leaders to give feedback, it's also important for a leader to seek out feedback and be open to receiving it when it comes. In this course, you will learn how to determine the best approach to providing feedback to team members based on their experience with the work. You will learn to receive feedback openly and to provide it in a structured way to ensure it has the desired impact. Whether providing or receiving feedback, you will learn to demonstrate the five key leadership characteristics of positivity, authenticity, accountability, curiosity, and trust.

Benefits for the Individual
- Enhanced ability to provide feedback to gain momentum by:
- Articulating the challenges leaders face when providing constructive or positive feedback
- Identifying the challenges of giving and receiving feedback
- Creating the conditions that promote trust and respect when providing feedback
- Providing positive and constructive feedback using an effective framework
- Encouraging reflection through guided self-discovery

Benefits for the Organization
- Increased likelihood of achieving the organization's strategic goals
- Enhanced employee motivation and engagement
- Improved cultural climate promoting trust and respect
- Improved capacity to align behaviours to desired outcomes
Primer: Wireless Sensor Networks
By Baselinemag | Posted 2005-10-01

What are they?
Groups of devices that send data from sensors, like temperature gauges, to a central application using wireless protocols. They can also send commands to devices based on current conditions; for example, if a hotel room is unoccupied, the air conditioning unit could be instructed to shut off, to reduce electricity costs.

Haven't electronic sensors been around for decades?
Yes. What's new are relatively low-cost, low-power wireless radios that make it possible to collect information from sensors without having to string power or data wiring to each sensor. Wireless sensor nodes can operate on ordinary batteries and be configured to send data intermittently (say, every 5 minutes) to conserve power. Companies selling such products claim a node with two AA batteries can last up to five years. In addition, wireless sensor nodes can relay data to each other, creating a mesh network that eliminates the need for multiple transmitters, which higher-speed wireless networks typically require.

Who are the vendors?
They fall into three categories. First are startups such as Crossbow Technology, Dust Networks, Millennial Net and Sensicast Systems, which provide the devices and software to build and manage wireless sensor networks. Next are companies that manufacture low-powered wireless networking chips for those systems, including Chipcon, Ember and Freescale Semiconductor (formerly part of Motorola). Finally, companies like General Electric and Honeywell International deliver wireless sensor networks that work with their industrial and commercial equipment.

Where are they being used?
Initially, where wired sensors have been used: in building automation (i.e., to turn on lights when someone walks into a room) and industrial automation (i.e., to shut off a machine if it starts to overheat). But there are other applications. For example, grocery chain Supervalu is using devices from Dust Networks in one of its Minneapolis stores to measure the energy consumption of its equipment to see where it could conserve power.

Are there standards?
A few. The Institute of Electrical and Electronics Engineers (IEEE) has defined a standard for low-power, low-data-rate wireless data transmission called 802.15.4. It provides a way to send up to 250 kilobits of data per second, which is only 2% of the bandwidth provided by a typical Wi-Fi wireless network but is ample for sending something like a temperature reading every few minutes. Meanwhile, the ZigBee Alliance (www.zigbee.org), a consortium of technology companies, has created a specification for sending control and status information over low-data-rate wireless networks.

What's not standardized?
Vendors including Dust Networks, Millennial Net and Sensicast have created their own proprietary protocols for routing data among wireless sensor networking nodes and minimizing the power a node consumes. In fact, for now it's one of their key points of differentiation.

So how much does this cost?
It's not exactly dirt cheap. Individual wireless nodes can sell for $30 to $50 apiece. However, the more sophisticated your setup, the pricier it becomes: Deploying a wireless network with industrial sensors (such as flowmeters, which measure the volume of liquid or gas delivered) in a manufacturing facility can run about $300 to $500 per sensor, according to Sensicast.
Suppliers expect costs to come down eventually, as the technology becomes more widely adopted and unit volumes increase.
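The five-year battery claim comes down to duty cycling: the radio sleeps almost all the time and wakes briefly to transmit. Here is a rough back-of-the-envelope sketch in Python; the current-draw and capacity figures are assumptions chosen for illustration, not measurements from any particular vendor's node.

```python
# Rough duty-cycle battery estimate for a wireless sensor node.
# All electrical figures below are illustrative assumptions, not vendor specs.
AA_PAIR_CAPACITY_MAH = 2500   # usable capacity of two AA cells (mAh)
SLEEP_CURRENT_MA = 0.05       # microcontroller and radio asleep
ACTIVE_CURRENT_MA = 20.0      # radio transmitting a reading
ACTIVE_SECONDS = 0.05         # time awake per transmission
INTERVAL_SECONDS = 5 * 60     # one reading every 5 minutes

# Average current is the duty-cycle-weighted mix of active and sleep current.
duty = ACTIVE_SECONDS / INTERVAL_SECONDS
avg_current_ma = duty * ACTIVE_CURRENT_MA + (1 - duty) * SLEEP_CURRENT_MA

hours = AA_PAIR_CAPACITY_MAH / avg_current_ma
print(f"Average current: {avg_current_ma:.4f} mA")
print(f"Estimated lifetime: {hours / 24 / 365:.1f} years")
```

With these made-up figures the estimate lands near the five-year mark; in practice, battery self-discharge and retransmissions across the mesh eat into that.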
This week's 'minutephysics' video tackles the issue of trying to identify "the fourth dimension", and why dimensions exist but why we can't say which one is the first, second, third or fourth. It's always assumed that the fourth dimension is time, but it's more that we live in a three-dimensional world with a fourth "time dimension". Things get funky and fuzzy from there. However, we do know that there's a "third" dimension, because that's where Homer Simpson went in this famous Simpsons clip (forgive the Spanish dubbing; Fox still won't let people post Simpsons clips on YouTube). And finally, I do know that there's a "fifth dimension", as witnessed here. Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
Bullying in schools has been around since schools were created and is a phenomenon of human behavior that is sadly familiar even as administrators, teachers and parents strive to eliminate it. The advent of social media has brought with it a new form of bullying, commonly referred to as cyberbullying. The Centers for Disease Control and Prevention (CDC) calls it "electronic aggression". Whatever cyberbullying is called, it is proving to be insidious and damaging, and it can result in irrevocable tragedies, which is why schools are scrambling to find solutions.

Cyberbullying statistics
Incidents of cyberbullying have increased because the use of social networking sites by children and teens is virtually universal. Pew Research Center has done an extensive national study that includes cyberbullying statistics. It covered 2007-2010 and shows how popular social media is among 12-17 year-olds, and also how widespread cyberbullying has become.

Cyberbullying Fact #1: Cyberbullying causes deep and lasting damage
When incidents of bullying are online, the exposure of victims is limitless and thus far more damaging. According to psychologists, the damage inflicted by cyberbullying can last into adulthood, causing lifelong issues with low self-esteem, risk for addiction and other problems.
Solution: iPrism Social Media Security gives you visibility into the online interactions of students and the power to mitigate cyberbullying problems before they escalate.

Cyberbullying Fact #2: Cyberbullying has resulted in teen suicide
Stories of teen suicide are becoming all too common. Many of these incidents are the result of unchecked cyberbullying attacks on vulnerable students.
Solution: iPrism Social Media Security enables schools to monitor online interactions and detect risks for self-harm among students, so an intervention can occur.

Cyberbullying Fact #3: Cyberbullying can result in lawsuits for schools
Since every state except Montana has laws against bullying on the books, some cases of cyberbullying have resulted in costly lawsuits against schools and school districts.
Solution: iPrism Social Media Security helps your school enforce cyberbullying legislation and reduces the risk of lawsuits.

"We are very concerned about social media use at our school and our need to comply with the new CIPA rules. Virtually all of our students are using social networking in one way or another and it's something we would like to be able to manage rather than just block completely."
Director of Technology, Warren County Public Schools
Intrusion Detection and Prevention

Security is all about the deployment of multiple layers of defense. Firewall systems are the first layer of defense and are typically deployed at the perimeter of the organization. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are the next layers of defense. These are systems that are always on, with the objective of "detecting" and "preventing" threats to the enterprise. Security practitioners need to review security policies to determine the role IDS/IPS can play in strengthening their defenses.

Intrusion Detection Systems (IDS)

Intrusion detection is about monitoring and identifying attempts at unauthorized access to an organization's infrastructure. IDS are designed to detect "threats" and take appropriate action. These threats, referred to as "events," are typically logged, and an alert is generated to enable a response. There are two types of IDS: host-based IDS and network-based IDS.

Host-based IDS are installed on the host systems that they are intended to monitor. This system may be a server, workstation or other device such as a router. The product typically runs as a process or a service, and has the capability to sniff network traffic that is intended for the host system. These IDS systems check the host against hundreds of "threat signatures" to make sure the system is safe from previously identified threats. Vendors include Cisco, Tripwire, Internet Security Systems (ISS) and Microsoft.

Network-based IDS capture and analyze packets on the wire. While host-based IDS are designed to protect a single system, network-based IDS are built to protect systems on the network. For an IDS to effectively monitor a network, there must be at least one IDS device per network segment. This device may be a fully operational IDS, or it may just be a sensor or a tap. These systems capture packets and pass them on to the IDS console for inspection. Taps and sensors typically do not have an IP address and are thus invisible to intruders. Network-based IDS solutions are typically deployed at a choke point on the perimeter of the network as well as on critical network segments where servers are located. Vendors with solutions for network-based IDS include Internet Security Systems, Symantec and Cisco.

Snort is another tool that is available. It is an open-source network-based IDS. It is very popular and deployed in numerous environments today, though it is not the easiest product to learn, install and configure. On the positive side, it supports hundreds of signature-detection rules covering exploits in many areas, including Windows, Linux, port scans and back doors.

Intrusion Prevention Systems (IPS)

A new generation of intrusion detection systems is being positioned as intrusion prevention systems (IPS). IPS have the capability to either stop the attack or interact with an external system to eliminate the threat. Intrusion prevention controls involve real-time countermeasures taken against specific, active threats. Examples include activities such as sending scripted commands to a firewall system to deny all inbound traffic from a specific suspected attacker's IP address. Another example would be to communicate with a virus scanner to clean an infected file. An IPS solution is not a passive device that detects evidence of intrusion, but one that is active and can perform actions to protect against attacks when they are detected.
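At its core, the signature-based detection that Snort popularized is pattern matching against packet payloads. The Python sketch below is a deliberately simplified illustration of that idea, not Snort itself or its rule language; the signatures and the alert messages are invented for the example.

```python
# Simplified illustration of signature-based detection: match payload bytes
# against known-bad patterns and raise an alert. Real IDS engines add protocol
# decoding, stream reassembly and far richer rule semantics on top of this.
SIGNATURES = {
    b"/etc/passwd": "Possible directory traversal attempt",
    b"cmd.exe":     "Possible Windows command execution attempt",
    b"' OR '1'='1": "Possible SQL injection attempt",
}

def inspect_packet(src_ip: str, payload: bytes) -> list:
    """Return alert messages for any signatures found in the payload."""
    alerts = []
    for pattern, description in SIGNATURES.items():
        if pattern in payload:
            alerts.append(f"ALERT from {src_ip}: {description}")
    return alerts

# Example: a suspicious HTTP request captured off the wire.
print(inspect_packet("203.0.113.7", b"GET /../../etc/passwd HTTP/1.1"))
```

An inline IPS takes the same matching one step further and drops the packet or resets the connection instead of only alerting, which leads directly to the next point.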
A term one comes across with IPS is "inline." Inline IDS/IPS systems have the capability to filter traffic in real time. This allows for actions such as dropping packets, resetting connections or routing suspicious traffic to quarantined areas for analysis. IDS solutions can produce false alarms that may result in inaccurate information being distributed. This might be due either to poor configuration choices or to limited capabilities. Further, it is typical of products in these areas to require expertise as well as time and effort for management and maintenance. For example, sensors will need to be kept updated. Also, inline systems can potentially impact network performance, since each packet is checked against thousands of pattern comparisons. You also have the challenge of knowing what to do if the device fails. These are all challenges that security practitioners will need to review as they establish their intrusion detection and prevention requirements.

Getting Started: Developing an Incident Response Policy

I recommend that the organization develop an incident response policy to establish priorities and process recommendations for threats, or incidents, that are detected. For example, the following guidance may be included in an organization's information security policy:

The organization will maintain procedures for identifying security incidents. Incidents will be classified as "serious" or "non-serious."

Non-serious incidents generally have the following characteristics:
- It is determined that there was no malicious intent, or the attack was not directed specifically at the organization associated with the incident.
- It is determined that no sensitive information was used, disclosed or damaged in an unauthorized manner.

Serious incidents generally have the following characteristics:
- It is determined that there was malicious intent and/or an attack was directed specifically at the organization.
- It is determined that sensitive information may have been used, disclosed or damaged in an unauthorized manner.

All workforce members of the organization will report any potential security incident that they become aware of or suspect to the security officer. A security incident is any breach of security policy or any activity that could potentially put sensitive information at risk of unauthorized use, disclosure or modification. The organization will maintain procedures for responding to serious and non-serious security incidents in order to prevent the escalation of the incident and to prevent future incidents of a similar nature. Incidents characterized as serious by the security officer will be responded to immediately and reported to all upper-level management. The organization will attempt to mitigate any harmful effects, when possible, where a security incident affects customer information.

Case Study: The NetScreen Intrusion Detection and Prevention Product

The NetScreen Intrusion Detection and Prevention product (NetScreen-IDP) is an example of technology that provides inline attack protection against worms, viruses and Trojans and stops attacks on the network. The product delivers a detailed, on-demand view of both network- and application-level data to learn about network activities. This data is translated into comprehensive network security policies using a rule-based management GUI. The product also includes built-in tools to correlate data during any phase of an attack.
The NetScreen-IDP also identifies rogue servers and applications that may have been added to the network without authorization. The product supports attack reporting and forensics capabilities to capture all critical information for incident investigation. NetScreen was acquired by Juniper Networks in April 2004. Visit www.juniper.net/products/intrusion for more information.

I cannot envision an organization that does not deploy IDS/IDP solutions. Just like a firewall system, IDS/IDP solutions are vital for defending today's organizations. These systems give you more insight into the types of attacks that are launched against your business, and they give you real-time capabilities to protect sensitive information.
Big Data Programming Languages

There are several well-known open-source programming languages that are often used for Big Data analytics.

The R language is widely used among statisticians and data miners for developing statistical software and Big Data solutions. R is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme. R is also called GNU S, and it is a strongly functional language and environment for statistically exploring data sets and producing a wide range of graphical displays from custom data sets.

Julia is a growing alternative to R, as it addresses the slow-interpreter problem. It is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library.
Primer: 64-Bit Processing
By David F. Carr | Posted 2003-09-10

What changed?
In 2001, Intel released its Itanium 64-bit processor, prompting Microsoft to get serious about producing a 64-bit version of Windows. Other significant new 64-bit processors include AMD's Opteron and IBM's Power4.

How does it compare with 32-bit?
By definition, a 64-bit processor can process twice as many bits (1s and 0s) of data as a 32-bit chip can in the same number of compute cycles. Because it's the number of combinations of bits that really matters, this translates to much more than twice the processing power. Thirty-two bits factors out to 4,294,967,296 possible combinations, or 4 GB of addressable data. Sixty-four, on the other hand, raises the roof to 18,446,744,073,709,551,616. That's 16 exabytes, millions of times more than a terabyte.

Why would my company need it?
As businesses grow more dependent on complex data analysis for standard operations, fast processors become increasingly necessary. Take a supply-chain-optimization application. Such a program might require sorting through all combinations of inventory item and location. When thousands of items and locations are involved, memory requirements can make data impossibly slow to analyze. And while 64-bit is overkill for most desktop users, it may be necessary for graphic artists and animators, something Apple probably had in mind with the G5 processor, a Power4 variant.

What are the issues?
Having a 64-bit chip doesn't automatically make a computer faster. The theoretical limit may be enormous, but the ability of subsystems such as the bus to move data into and out of memory can produce a practical limit. For example, Apple says the 64-bit G5 should eventually support 4 TB of data, but can only handle 8 GB today. The faster chip also requires that the operating system and applications that run on top of it be recompiled for 64-bit processing. The Itanium does allow 32-bit applications to run unchanged, but with a hit on performance. In fact, 32-bit apps may actually run slower on Itanium than they would on a 32-bit Intel Xeon. AMD took a more evolutionary approach with its Opteron processor, which extends a 32-bit design to handle 64-bit operations. This means that unmodified 32-bit applications don't suffer the same performance penalty as on the Itanium.

How much 64-bit software is available?
Unix operating systems like Sun Solaris, HP-UX and IBM AIX have been running on 64-bit processors for years, so Unix software vendors supply these products as a matter of course. Windows Server 2003 is the first Microsoft operating system to fully support 64-bit computing (on Itanium). Linux implementations for Itanium also are beginning to appear. Commercial software applications are following slowly. For example, i2 Technologies' Supply Chain Planner for 64-bit Windows on Itanium 2 is currently supported for pilot projects only.

How does Itanium compare with the Xeon?
The 32-bit Xeon will almost certainly remain the default Windows server processor for years to come because of its support for existing applications and the fact that many applications don't really need 64-bit power.
However, Itanium allows Windows to break into territory that used to be exclusive to Unix servers, such as large-scale graphics or simulation applications that require each processor to work with more than 4 GB of memory at a time. Most Xeon-based servers are limited to eight processors; Itanium can support up to 32.
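The jump in addressable memory is easy to check with a quick calculation. The short Python snippet below simply evaluates the two address-space sizes quoted above.

```python
# Address space sizes for 32-bit and 64-bit pointers.
addressable_32 = 2 ** 32
addressable_64 = 2 ** 64

print(addressable_32)               # 4294967296 bytes, i.e. 4 GB
print(addressable_64)               # 18446744073709551616 bytes, i.e. 16 exabytes
print(addressable_64 // (2 ** 40))  # roughly 16.8 million terabytes
```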
Near Field Communication

Near Field Communication (NFC) is a very short range radio technology and is used for contactless communication between NFC-enabled devices and tags or cards. It has a range of typically no more than 4 cm and operates at 13.56 MHz. With transfer speeds of 106 Kbps, 212 Kbps, and 424 Kbps, NFC allows the exchange of small amounts of content between two NFC-enabled devices. When two NFC-enabled devices touch or are placed close enough to each other, the devices can read content from, and write content to, each other. The devices can also share files using Bluetooth or Wi-Fi connection handover. NFC is also used to emulate smart cards to perform transactions such as credit card payment.

NFC has many real-world applications, such as:
- Smart posters: Smart posters are posters that are embedded with small electronic tags that store data such as URLs. An NFC-enabled device can read smart posters and act on the data.
- Data exchange: NFC-enabled smartphones can exchange data such as electronic business cards simply by tapping each other.
- Contactless payment: NFC-enabled smartphones can be used as credit cards.
- Ticketing: NFC-enabled devices can be used to gain access to events, transit systems, etc.

NFC is a standards-based technology that is governed by the NFC Forum. A number of ISO standards are used in the NFC architecture, most notably the ISO 14443 and ISO 7816 specifications. For more information on NFC and its technical specifications, see the NFC Forum.

Last modified: 2015-05-07
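Those bit rates explain why NFC itself only carries small payloads and hands larger transfers off to Bluetooth or Wi-Fi. As a rough illustration, the Python snippet below estimates how long a few typical payloads would take at the three NFC data rates; the payload sizes are assumptions and protocol overhead is ignored.

```python
# Rough transfer-time estimates at NFC's three data rates (protocol overhead ignored).
RATES_KBPS = [106, 212, 424]
PAYLOADS = {
    "URL in a smart poster": 100,       # bytes (assumed)
    "electronic business card": 2_000,  # bytes (assumed)
    "small photo": 500_000,             # bytes - better handed off to Bluetooth/Wi-Fi
}

for name, size_bytes in PAYLOADS.items():
    for rate in RATES_KBPS:
        seconds = (size_bytes * 8) / (rate * 1000)
        print(f"{name}: {seconds:.2f} s at {rate} Kbps")
```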
The business of testing rockets isn't a cheap one, and Russian scientists are looking for less expensive, quicker ways to analyze new designs as they race toward space exploration goals. Modeling and simulation, which is used to model everything from car crashes to more streamlined beer cans, is on the agenda as Russia looks to speed rocket development. Roscosmos, the Russian state space organization, has published a tender for development of "manufacturing technology of a cluster compute system with hybrid architecture for imitational modeling of rocket and launchers' real flight conditions," reports CNews. According to the proposal, Russia is prepared to set aside around $1.74 million for the rocket testing cluster. Russian space officials say they require a system capable of providing peak performance of up to 10 teraflops, holding 20 GB of RAM and offering 4,000 GB of disk space. The tender goes on to note that the agency is looking for a contractor that can not only deliver this "manufacturing technology" but can also provide a sample of such a compute system (with CPU and GPU architecture), which will be installed at other sites in the space agency's network of research and development centers. As the CNews report stated: "It stands to mention that Roscosmos by now has had some practice of computer modeling. This year the agency got a personal supercomputer with performance of around 1 teraflop and some applied software developed by specialists of the Federal Nuclear Center of the Russian city of Sarov (part of Russia's biggest state nuclear corporation, Rosatom). A source close to Rosatom said to CNews that the software had been used to model some elements of the new RD-0146 spacecraft engine and parts of the Rus-M launcher." The software powering these simulations must be able to simulate combustion dynamics and analyze heat transfer and aerogas dynamics of rockets at transonic speeds. According to CNews, Russia already possesses the world's largest gas dynamics chambers at the Central Science Research Institute for Machine Engineering in Moscow. This has been the site of heat transfer research for other Russian space modules in the past.
Boeing and Samsung collaborate on mobile technology for space station ferry.

Samsung is collaborating with aerospace firm Boeing to incorporate mobile apps into the CST-100 spaceship. The spacecraft is being developed by Boeing as part of NASA's commercial crew program, with an aim to use it as a transport for astronauts heading to the International Space Station. Boeing said that it wants to employ mobile technology to help astronauts share their experiences, and Boeing researchers have already chosen six apps that could be used during space flight, as reported by Space.com. The app software, some of which is already employed on the space station, could potentially be used on the Android operating system. An app called World Map is already used to let astronauts know when the space station flies over a particular part of the world. Former astronaut Chris Ferguson, director of crew and mission operations for the Boeing Commercial Crew Program, said: "Those geopolitical lines aren't necessarily carved on the Earth." Speaking at the 2014 National Space Symposium, Ferguson said: "You have to rely sometimes on a tool to tell you where you are, and there's a great application on the space station called World Map that we would like to bring over to an Android platform. That's an example of how a NASA astronaut would use it." The app research also holds future uses for space tourism, such as photography apps and a Wi-Fi tool that can send photos back to Earth. Boeing is going head to head with competing spaceflight firms such as SpaceX and Sierra Nevada Corp to win a contract to ferry astronauts to the International Space Station. NASA hopes to have transport operating by 2017, following the end of the space shuttle program in 2011.
For the past eight years, Emergency Management has worked to cover a wide range of issues, while seeking to highlight best practices, lessons learned and topics important to those working in the profession. 2013 will be remembered for numerous natural and man-made disasters, from the Boston Marathon bombings to devastating tornadoes and wildfires. Here is a look at the most popular articles from the year.

Sandy Created a Black Hole of Communication
Communication is a fundamental of emergency management and yet an inherent struggle during disasters. Superstorm Sandy was no exception as complaints about a lack of information were common. This came from communities in pockets of the East Coast where information was desperately needed but scarce.

Attack at the Boston Marathon and the Value of Emergency Planning
More than a terrorist incident, this attack was one of many mass casualty incidents that have occurred this year. Today's special event contingency planning requires emergency planners to work extensively with local, state and federal emergency professionals to plan for mass casualty contingencies.

Are Emergency Management Graduates Finding Jobs?
Emergency management degree programs have been popping up at universities throughout the U.S. over the last decade. But are the degrees actually helping students get jobs? The answer is still unclear, but signs point to academic expertise having a more significant impact in the emergency management workplace moving forward.

Sandy Marked a Shift for Social Media Use in Disasters
Social media today is not about the tools, but the technology and behavior — virtual collaboration, information sharing and grass-roots engagement — that transforms monologues into dialogues. Social media empowers individuals, providing them a platform from which to share opinions, experiences and information from anywhere at any time.

Elected Officials are Rarely Educated About Emergencies
The response of elected officials makes a difference in disasters. When they're strong and competent, they can lead recoveries and inspire devastated, discouraged and displaced people to struggle on and begin recovering. When they fail, response is hindered, recovery delayed, and the pain of a disaster is prolonged even further.

Twitter Launches an Alert System for Emergencies
Twitter's new feature will allow users to get emergency information directly from vetted, credible organizations. The system, called Twitter Alerts, will deliver tweets marked as an alert by approved organizations through the traditional timeline feed and via SMS to a user's cellphone.

Is Emergency Management a Profession? (Opinion)
It is evidently believed by the appointing authorities that emergency management is not a profession. There is no significant body of knowledge to be understood and exercised in the performance of one's duties as an emergency manager.

Emergency Sing Off: Preparedness Song Makes the Message Stick
Emergency managers in Virginia are getting vocal — by turning popular songs into catchy messages about what to include in an emergency kit and other useful information.

Can Google Glass Help First Responders?
Robocop may not be real, but his efficiency is something worth aspiring to. Through the use of Google Glass, communications vendor Mutualink may soon give public safety and military personnel a chance to capture some of the half-robot, half-man's technological capabilities.
Scale, Velocity, Ambiguity: What's Different About a Type 1 Event
Hurricane Sandy illustrated three key distinguishing aspects of a Type 1 disaster: scope and scale, velocity and ambiguity of information.

Using Social Media to Enhance Situational Awareness
As these active uses of social media come into their own, newer passive uses are evolving. Rather than shout, government agencies listen: They harvest the chatter, sifting for relevant mentions that might help them to better respond to crises and emergencies.

Beyond Debriefing: How to Address Responders' Emotional Health
The emergency management community has taken some steps to address the emotional needs of those who rush to a disaster scene. But experts say there's much more that could (and should) be done.

Mobile App Tracks Emergency Volunteers
Community emergency response teams have a new mobile app at their disposal to help track the locations of fellow volunteers and key points of interest during a deployment.
We get plenty of data recovery cases for solid state drives every year. They are fascinating and fairly complex pieces of technology. With price points dropping and technology improving every year, more and more people are buying SSDs. In fact, plenty of people buy solid state drives simply so they can have faster boot times by putting their operating system on one (actually, I did that). With growing consumer interest in solid state drives, we thought it would be beneficial to make a video explaining the various components inside a solid state drive. It can seem overwhelming at first, but the truth is, there are really only five or six components to touch on. Without further ado, here’s our video on what’s inside a solid state drive. Hi, I’m Jesse Moryn with Gillware Data Recovery. In recent years, we’ve seen some great advances in solid state technology- leading to better, more affordable solid state drives. A lot of consumers have been taking advantage of this by putting solid states in their computers. However, they don’t really know much about them. So today, we’re going to take apart this Intel 320 Series SSD and take a look at just what’s inside. So here we have a closer look at our drive. You can see the name and the size- 160 Gigabytes. We’re going to go ahead and take this drive apart through the miracle of a bad jump cut. There we go, we’re good to go. Now you can see the inside of the drive, uh- the NAND flash chips here, the processor or controller right here, and the SATA over here. Those are the big three that you’re going to find in pretty much every solid state. There’s a few other components we can talk about that are not necessarily present in, uh- some solid state drives, but we can talk about them anyway because they’re interesting. So NAND flash memory is the first component we’ll discuss- and it’s the storage for our drive, also known as non-volatile memory, since it doesn’t need a power source to keep the data. This can be compared to the spinning platters or disks of a hard drive, but instead of storing the information on a magnetic substrate, it’s stored in the stationary NAND flash memory chips. The lack of any moving parts is actually why we call them solid state drives, they’re in a solid state, there’s no need for spinning disks or rapidly moving read/write heads. The reason this memory in particular is called NAND flash memory, is because the memory uses Not-And logic gates, or NAND for short. Now, the way NAND flash memory stores data is through the use of floating gate transistors. Floating gates are electrically isolated, meaning they have no electrical contacts because they’re insulated by oxide layers. Because of this, electrons are able to stay within the floating gates for years. There are multiple ways we can get electrons into the floating gates, but the basis is by utilizing quantum tunneling to get them through the oxide layer from the channel. Because those electrons aren’t leaving the floating gate anytime soon, we use the presence or absence of those electrons to represent the 0’s and 1’s of our data. This is called SLC or single-level cell NAND since there’s only 2 possible charge states. We can also use different amounts of electrons to represent different charges and thus increase our data storage density. This is how MLC or multi-level cell NAND works. All consumer grade drives use MLC NAND, including this one. Of course the full explanation on solid state storage is much longer than that, but that’s the basics of it. 
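A rough way to picture the SLC-versus-MLC distinction in code: the short Python sketch below maps a cell's sensed charge level to bit values, one bit per cell for SLC and two bits per cell for MLC. The charge levels and mappings are made-up numbers purely for illustration, not real drive parameters.

```python
# Illustrative only: map a cell's sensed charge level to stored bits.
# SLC distinguishes 2 levels (1 bit per cell); MLC distinguishes 4 (2 bits per cell).

def slc_read(level: int) -> str:
    """Two charge states -> one bit per cell (erased cells conventionally read as 1)."""
    return "1" if level == 0 else "0"

def mlc_read(level: int) -> str:
    """Four charge states -> two bits per cell."""
    return {0: "11", 1: "10", 2: "01", 3: "00"}[level]

slc_cells = [0, 1, 0, 0]   # each cell holds one of 2 levels
mlc_cells = [0, 3, 1, 2]   # each cell holds one of 4 levels
print("SLC:", "".join(slc_read(c) for c in slc_cells))  # 4 cells -> 4 bits
print("MLC:", "".join(mlc_read(c) for c in mlc_cells))  # 4 cells -> 8 bits
```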
Next up we have our processor, also known as the controller. This little guy is the fastest and debatably most important part of the drive since it’s responsible for interfacing between the other components, and is essentially the brains for the drive. When I say it’s the fastest part of the drive, I mean it executes operations on the order of nanoseconds, a nanosecond being one billionth of a second. It performs operations that are written into the firmware such as all the reading, writing, erasing, encryption, garbage-collection, wear-leveling, and over-provisioning. Wow. I apologize if you don’t know what some of those operations are, but just know they’re important. Without the controller to perform all these functions, your solid state drive would pretty much just be an elaborate hunk of scrap metal. So yeah, it’s pretty important. Third, we’ll look at the SATA interface which is this part here on the end with the gold pin connectors. This is the part that connects up with the computer and is connected through SATA power and data cables. Power is the big one, data is the small one. There’s not much to explain about the SATA interface other than it’s the bridge that allows the transfer of data between your SSD and your computer, as well as brings power to the SSD. Finally we’ve gotten to the components that aren’t in some solid state drives, but can certainly be useful for the ones that do have them. That’s not to say that all SSD’s even need these components, since some of them work slightly differently. But anyway, these components are the SDRAM and capacitors. The SDRAM, or synchronous dynamic random access memory, is the volatile memory for your SSD. I mentioned earlier that NAND flash memory is non-volatile, meaning it doesn’t need a power source to store data. Well you’ve probably figured out that because it’s volatile memory, SDRAM does require a power source to store data. If there’s no power going to your SSD, like when your computer is turned off, the SDRAM flushes the data out and forgets everything. So what’s the point of having it if it’s seemingly less functional than NAND? Well, it isn’t less functional than NAND, it’s actually faster and allows the controller to run programs quickly. The SDRAM only serves as working memory for your controller, and it only puts data here that’s essential for whatever the task at hand requires. You can imagine that volatile memory requiring power to store data can cause some problems, and you’re right. If there’s a sudden loss of power, whatever is stored in the RAM can be lost forever, which is similar to the reason you’re not supposed to just pull a USB out of your computer before ejecting it. This can corrupt files and cause data loss if the drive had data in the RAM when you cut the power. This problem leads me to the final components which are the capacitors. In the event of a sudden loss of power, these capacitors serve as an emergency power source to flush any data in the RAM back into the NAND flash memory so you don’t lose it forever. They’re a cool failsafe to prevent data loss. Of course one component I didn’t mention is the PCB, or printed circuit board, and that’s this green thing underneath all the other pieces. This is the component that allows all the other parts to work together and communicate with each other so your drive works exactly how it’s supposed to. That concludes our video on what’s inside a solid state drive. 
Here at Gillware, we want to thank you for watching and hope you enjoyed it, but most importantly, we hope you learned something.
Baidu is focusing strongly on safety with its self-driving car.

Chinese search engine giant Baidu has revealed a plan to develop self-driving cars, similar to Google's autonomous cars. The car's intelligent assistant will collect information from traffic conditions, and the vehicle can operate without a driver in certain situations. Google previously demonstrated its self-driving car, which did not have a steering wheel, brake pedal or accelerator pedal, but according to Baidu's deputy director, Kai Yu, his company's car will have a steering wheel, allowing the driver to take control of the car at any given point. Yu did not reveal much about the car's technology but announced that the first prototype will be developed by 2015. Yu told The Next Web: "Philosophically we have a fundamental difference to look at this type of things. I think in the future, a car should not totally replace the driver but should really give the driver freedom. "Freedom means the car is intelligent enough to operate by itself, like a horse, and make decisions under different road situations. "Whenever the driver wants to resume control, you can do that. It's like riding on a horse, rather than just sitting in a car where you only have a button."
Today is Computer Security Day. From smartphones to the ever-expanding world of IoT, so much of daily life now depends upon or is directly impacted by a multitude of computers. As a result, it's in everyone's best interest to protect all of our connected devices, not just laptop or desktop computers. In the spirit of World Computer Security Day, we've pulled together a few pieces of information and suggestions to help improve your computer security, and the world's too.
- We shared our own, but the FBI also published some tips to consider during National Cyber Security Awareness Month. The US Computer Emergency Readiness Team (US-CERT) also published a collection of simple and useful tips during October: Week 1, Week 2, Week 3, Week 4.
- October 21, 2016 might seem like ancient history as we enter the holiday period, but something very big happened on that day. While the impact was general worldwide frustration with accessing Twitter, Spotify, or hundreds of other applications, here's an explanation of what happened and what all of us can do to help make repeats more difficult. Hint: not changing default passwords or patching your connected devices is bad for everyone.
- Last month NIST released a study warning of widespread security fatigue as end users report feeling defenseless against malicious attacks. Here's a 3-step antidote to security fatigue. Several large brands have recently forced password resets on their users, either in response to a data breach or as a precautionary measure. While helpful, password resets might not really be the answer.
Researchers from Michigan State University and Imperial College London have just received $1.87 million in funding to conduct a treasure hunt. It will take them from Germany to Hawaii in the US and elsewhere, in search of the smallest needle—a particular type of bacteria—in a haystack the size of the globe. If it pays off, it could contribute to lowering the world’s reliance on toxic—and expensive—fertilizer, replacing it with bacteria. Making fertilizer is dangerous as April’s explosion at a factory in West, Texas demonstrated. And its use is catastrophic for the environment for reasons ranging from increased methane emissions to run-off contaminating water supplies. The bacteria project is one of three being funded by the US National Science Foundation and Britain’s Biotechnology and Biological Sciences Research Council. With a total of $8.86 million of funding, the groups are hoping the three projects will boost crop yields while reducing the need for fertilizers. At the root of all three projects is the process of “fixing” nitrogen, or converting it to ammonia, a compound that helps plants grow. There is plenty of nitrogen in the atmosphere but it doesn’t convert into ammonia in an oxygen-rich environment like ours.
Forensics: When is Data Truly Lost?
Rob Lee of SANS Institute on the Difficulty of Destroying Data

Before embarking on the tragic Newtown, Conn., shootings, Adam Lanza reportedly destroyed his computer. But is the machine's data also destroyed? Rob Lee, forensics expert and educator from SANS Institute, points out how difficult it is to truly destroy computer data, particularly in an age when people live their virtual lives in so many forums and through so many mobile devices. "Data is incredibly difficult to get rid of," Lee says in an interview with Information Security Media Group [edited transcript appears below]. "Individuals that have files stored on a single drive might have connections into the cloud, might have e-mail stored on Yahoo or Hotmail, and could have data posted to Twitter accounts, and it's just - everything is spread everywhere."

For someone intent on destroying data, Lee says, there are three basic tactics:
- Delete the file - which is largely ineffective because of the proliferation of backups that exist on the machine or on servers.
- Wipe the hard drive - which can be effective, even with just a single-pass electronic wipe.
- Destroy the hard drive - which is harder to accomplish than it sounds, Lee says.

"These things are designed to endure a lot of wear and tear from being dropped, being immersed in water and so forth," Lee says. Even if a hard drive's head and motors are damaged, the actual platter that contains the data may still be intact. "So, in order to recover the data all [an investigator] would do is what is called a platter swap and move it into a hard drive with a motor that does work." If the platter itself is destroyed - say, by drilling directly through the platter - then the data most likely is lost. In the Lanza case, the FBI is investigating the remains of the shooter's computer to determine whether the data is retrievable. "If, as is suspected here, the shooter hammered against his hard drive," Lee says, "the likelihood is that the platters that are inside of it simply need to be replaced into a working drive with a working motor in order to be able to get the data off the drive."

Lee is an entrepreneur and consultant in the Washington, D.C., area, specializing in information security, incident response and digital forensics. He is the curriculum lead and author for digital forensic and incident response training at the SANS Institute, in addition to owning his own firm. He has more than 15 years of experience in computer forensics, vulnerability and exploit discovery, intrusion detection/prevention and incident response. Following is an edited transcript of this interview with Lee.

When is Data Lost?

TOM FIELD: Rob, we spoke just about a month ago and you took us inside a forensics investigation. Since then we've had some high-profile cases, the most recent being the Connecticut shooter, who apparently destroyed his computer so that at least he could cover up some of his electronic tracks. My question for you: In a forensics investigation, when is data difficult or impossible to retrieve?

ROB LEE: Data is incredibly difficult to get rid of. Computers are less stovepiped today, meaning that they are less seen as a single system than they've ever been in the past. Individuals that have files stored on a single drive might have connections into the cloud, might have e-mail stored on Yahoo or Hotmail, and could have data posted to Twitter accounts, and it's just - everything is spread everywhere.
Even if you take a look at how many devices you have attached to an individual, most people have three or four, from a tablet to a laptop to a desktop system, a work PC, maybe one smartphone, your iPod and your e-reader. You could easily share data across all these devices. So, data is fairly resilient at this point and in many forms. One of the things that is particularly interesting in this [Connecticut] case is that the individual obviously had a concern that somebody would be looking at his laptop. So, it was definitely premeditated that he destroyed it.

Now, you asked specifically the question about "How hard is it to destroy data on the laptop?" There are several ways to do this: first, if someone deletes an individual file; second, if they wipe an entire hard drive; third, if they try to physically damage the hard drive.

Let's assume someone wipes an individual file on a system. Is it possible to recover that? Typically, yes, because that file is usually never sitting at the same location on the hard drive. It may have been moved. It might have been defragmented. It might have multiple copies or backup copies on the drive, so even what you think is being deleted or wiped at a single instant may still reside in other locations, either in backup form or just because the system stored it at another location.

So, this leads us to our second possibility, when someone says, "Well, I'll wipe the entire contents of my hard drive." That is probably one of the more destructive means of destroying data. It really does require only a single-pass wipe. Even though some government agencies require multiple passes, NIST came out with standards in 2006, also recognized by multiple forensic experts, that if someone has done a single-pass wipe on their hard drive, the data is gone. You will not be able to recover it off the drive.

Third is the physical damage that we see here in Connecticut. Now a hard drive's chassis is actually built with really good engineering specs. These things are designed to endure a lot of wear and tear from being dropped, being immersed in water and so forth. Even if these things occur and damage the chassis, the data on the drive is almost 100 percent recoverable. Why? The main reason is that usually the head and the motors of the hard drive are damaged, not the actual platter, which is where the actual data is stored. So, in order to recover the data all they would do is what is called a platter swap and move it into a hard drive with a motor that does work.

A situation that makes it very difficult to recover data is if someone actually physically damages the platter of a hard drive. To do this, they would potentially drill through the hard drive and actually cause the platter to shatter in some form or mechanism. That is a lot more difficult to reconstruct. There is no way I know of where someone has even made an attempt to re-create the data off of a shattered platter. If, as is suspected here, the shooter hammered against his hard drive, the likelihood is that the platters that are inside of it simply need to be replaced into a working drive with a working motor in order to be able to get the data off the drive.

FIELD: Rob, let's take this back to what security leaders can do. How can they preemptively attempt to preserve data on devices that they feel they are going to have to investigate forensically?
LEE: Well, there are a lot of things already built in to Windows to help you accomplish this, but a lot of enterprises actually disable them. For example, enterprises that are currently upgraded to Windows 7 are turning off the volume shadow copy. The volume shadow copy actually keeps entire backups of the hard drive from earlier points in time, going back probably about a week or two. So, if something is wiped out today, you could recover [the data] from yesterday. So, my recommendation is: Don't turn off some of the default capability already built into your workstation, thinking you're going to see a little more performance out of it. You're not.

The second thing that you can do to really help out is make sure your servers are regularly backed up. E-mails are a good example here. If someone is trying to actively delete e-mails or wipe them, then it may be a week or two that goes by [before you notice], but if you have backups, you'll be able to re-create the e-mails in their entirety. And as I said earlier, e-mails are also stored at other locations in your network, from mobile devices to laptops. So the ability to still recover e-mail or similar types of artifacts is fairly good because of the sheer number of replication mechanisms that are already in play.
Once seen as the bane of classroom attention spans, Internet of Things connected devices are now becoming standard operational and teaching tools in schools, where they enhance how students interact with information, make it easier for schools to monitor and address student safety, and reduce many stresses for students, teachers, and parents.

Education in the Cloud

The cloud's ability to store, organize, and search information is saving teachers time and saving schools money. Teachers can now make class work and lessons available to students on the cloud, so they no longer need to print material for every student in a given class, which also saves the school money on ink and paper. Multiple-choice quizzes can be administered online and graded automatically, giving teachers more time to plan their classes instead of tediously hand-grading tests and quizzes. The cloud also teaches students to collaborate more effectively while acclimating them to digital tools that are widely used in the real world. For example, the collaborative editing features of Google's G Suite programs such as Docs and Slides make it simple for students to work together on documents and presentations in real time. And what if a student forgot their homework assignment? Teachers can place assignments in a Google Drive or other online storage folder for access from any cloud-connected device, so forgetting course materials will no longer be a concern. Notes, resource materials, and assignments stored in the cloud are easily searchable, which teaches students valuable organization skills while ensuring they don't miss an assignment.

Using IoT for Roll Call

With admission rates for schools continuously on the rise, monitoring attendance and safety has become a challenge for school districts. Attendance benchmarks need to be reached to ensure state funding requirements are met. Sometimes students aren't in the class where attendance is being taken; they could be in the nurse's office or detention. School ID badges embedded with RFID chips and specialized sensors placed throughout campus let schools automatically account for all students on campus and see exactly where they are. This has raised privacy concerns among parents, but RFID accountability has the potential to save student lives in an emergency. For example, if there's a fire, faculty can receive text alerts to quickly see whether students are still in the building after the evacuation process.

GPS Tracking for School Buses

Getting ready in the morning is often the most Herculean task accomplished in the day: making breakfast, packing lunch, getting the kids ready, and sending them out to catch the bus is enough stress for any parent without wondering if the bus will be on time on a given day. Wouldn't it be a relief to know exactly when the bus was coming, to ensure your child isn't in the street any longer than necessary? Wouldn't it also be nice to know your child arrived home safely if you can't be there to pick them up? Schools are now using the same GPS technology used to monitor commercial shipping fleets to monitor buses. Families with multiple children can receive alerts to let them know which child needs to head out at what time, eliminating the stress caused by juggling multiple bus schedules. The Austin Statesman reported that all school buses in the Austin Independent School District can now be tracked from a smartphone app.
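To make the roll-call idea concrete, here is a small, purely illustrative sketch of reconciling badge-scan events against a class roster during an evacuation. The badge IDs and names are hypothetical, and no particular vendor's API is implied.

# Hypothetical example: flag students whose RFID badges have not been seen at the muster point.
roster = {"1001": "A. Garcia", "1002": "B. Chen", "1003": "C. Patel"}   # badge ID -> student
scans_at_muster_point = ["1001", "1003"]                                # badge reads after the alarm

unaccounted = {bid: name for bid, name in roster.items() if bid not in scans_at_muster_point}
for badge_id, name in unaccounted.items():
    print(f"ALERT: {name} (badge {badge_id}) not yet accounted for")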
As more educational institutions tap into the power of Internet of Things technology, teachers and students will undoubtedly become more connected through IoT. If you’re developing education-related IoT apps or devices, Aeris connectivity is available to get you up and running to help schools move into the future. Find out about the many options for IoT technology, connectivity, and devices in our whitepaper.
With each new school year comes a barrage of emails and letters sent out to teachers, students, and parents. Newsletters, permission slips, and teacher notes form piles around the house. One of the many forms of communications sent out that must be returned with a signature of understanding and approval is the acceptable use policy (AUP), or technology use policy. Because of the enormous amount of compliance paperwork shuffled across the population of a school district, it's likely that this all-important piece of information is merely skimmed over. And when the technological landscape of your school is constantly changing, the acceptable use of technology is paramount for parents, teachers, and pupils to understand. It's up to the school's IT department to make sure that acceptable use of technology is communicated, understood, and accepted. The key to achieving these goals is to keep communicating with all parties involved, even after the note has been sent home. It's the equivalent of hanging up your kid's report card on the fridge to keep his good grade goals in mind. Making the message of the AUP clear doesn't have to be homework, though. Here are some helpful tips to communicate the policy to the many types of school stakeholders:

Communication with faculty

We know — the beginning of the school year is crazy for teachers. Another mandatory teacher meeting or training about technology would be met with a collective eye roll. Still, it is of great importance to give teachers the information they need about any changes to the technology AUP so they are prepared to educate students and answer parent questions. Ultimately, group training on AUP changes and updates would be ideal. And it could be just as effective and efficient to hold an online internal webinar. Better yet, you could even create a video for teachers to watch on their own time. There would be no shortage of information or examples to fit into this video! What to cover? Include anything in the AUP that is different from previous years. Use examples of policy guidelines on actual computers. Take screenshots, or record live use of websites and other resources that convey the information covered in the AUP. Arm teachers with the language they need to explain the AUP to parents and students. You know the parents will have questions, so you should prep your staff with accurate answers. Create an FAQ just for teacher-parent conversations. But don't just spoon-feed information to teachers. As an IT department leader, you should show them how to find resources to back their answers. To cover all of your bases, though, discuss who is next in command after educators for in-depth explanations about the AUP.

Communication with parents

As previously mentioned, it's no secret that the beginning of the school year evokes a bombardment of paperwork for parents. There are health care forms, field trip permission slips, syllabi reviews, and conference schedules to read over and sign off on. For parents with busy schedules and multiple children, it's common for even the most engaged parent to simply skim over permission slips and sign the dotted line. Communicating AUP changes or updates to parents involves more than sending a note home. Parents need to be alerted in a more significant way. The AUP is crucial for both child and parent to understand, but another flyer home isn't the best way to inform them. Create an interactive booth at the school's open house where students can see Internet use and technology in action.
Illustrate violations so parents can experience what their children see. This will (hopefully) spare school faculty and IT teams angry parent phone calls. It is likely that these parents are on social media as often as their kids. So this should be a no-brainer: take to social media. Utilize the school's Facebook and Twitter accounts to post links to the updated policy on the school's website. You can go old school, too, with a mass voice message via the school's phone tree service. Give parents a phone number to call with any questions, and give them a ring when the policy changes.

Communication with students

Who is the most important person involved in the AUP? The student is, of course. Students are the reason the policy is created. AUPs protect the student from doing harm or being harmed. If you have to educate just one person on the AUP, it should be the student. If students have accurate information about the policy, then they can be key conveyors of its importance to their parents. Ideas for communicating AUP guidelines with students are similar to those for parents and teachers. It's all about showing and telling. Students understand best when they are given examples. If you give them hands-on experience and allow them time to understand the concepts behind the school's AUP, then they will walk away feeling confident about the AUP guidelines. When teachers simply read through policies, students tend to tune out. Don't give them that option in the first place. Again, demonstrations and hands-on experience are where it's at. To drive home the importance of the AUP, teachers can schedule activities and messages for students that can be incorporated into lessons. One great way to help pupils understand this is to provide tours of the school through the IT department's eyes. Show students the network, the monitoring system, and the tools used to keep track of what they are doing online. Don't keep it a secret. Part of being transparent with students about the AUP is also showing them what happens when they violate it. How do teachers and IT administrators know if students have been on inappropriate websites? Show them. Additionally, it's helpful to instruct students on communicating with adults when someone else is violating technology rules. Teach them about appropriate online behavior, including interacting with other individuals on social networking websites and in chat rooms. Touch on cyberbullying awareness and response as well, as it is required by CIPA for E-Rate program compliance. If you give students the tools and info they need to understand the AUP, then they will work within it instead of fearing it.

Communication with community

The community at large may not seem like an immediate priority for conveying the contents of a school's technology AUP. But if the community isn't informed, then headaches are sure to follow due to miscommunication. It is best to prevent confusion rather than fight fires caused by it. Consider different ways to be transparent about expectations to the stakeholders in your school district. In some cases, a public forum that allows community members to ask questions and voice concerns will do the trick. Invite the local newspaper to write an article on the school technology policies. Get the school principal involved, too, by asking him to post about new policies on a school blog. Provide ways for community members to get involved, such as volunteering time to help students learn technology.
For acceptable use policies to be successful, all school stakeholders must understand and have access to transparent information. To have transparency, communication must be made a priority. To get buy-in, all parties must open up lines of communication — from IT staff to teachers, students, parents, and community members. Illustrate changes instead of just sending written information and hoping people will read it. Provide means of two-way communication and avenues for answering questions. And finally, focus on how current and future technologies will enhance learning, as that is the mission of a good AUP.

Resources for communicating your acceptable use policy:
1-to-1 Essentials – Acceptable Use Policies – Common Sense Media
Bringing Acceptable-Use Policies into the 21st Century – Education World

Need a network management software solution that provides infinite scalability and integrates all your digital devices? Impero Software can help. For a full list of the features and benefits of Impero Education Pro network, classroom, and device management software, go here, download the trial software, sign up for a webinar, email email@example.com, or call 877.883.4370 today.
Google employs many security measures to thwart would-be Google account hijackers, and not all of them are as highly visible as the two-factor authentication option introduced in 2010 and 2011 (for corporate and non-paying customers, respectively). Mike Hearn, Google Security Engineer, says that implementing all these measures has allowed Google to reduce the number of compromised accounts by 99.7 percent since 2011, when hijacking attempts were at their peak. Still, the hijackers keep trying. "We've seen a single attacker using stolen passwords to attempt to break into a million different Google accounts every single day, for weeks at a time. A different gang attempted sign-ins at a rate of more than 100 accounts per second," he shared in a blog post. But every sign-in attempt is evaluated before being approved. Google's system performs a complex risk analysis to determine how likely it is that the sign-in really comes from the account owner. "In fact, there are more than 120 variables that can factor into how a decision is made," says Hearn. "If a sign-in is deemed suspicious or risky for some reason—maybe it's coming from a country oceans away from your last sign-in—we ask some simple questions about your account." Hijacking legitimate email accounts and using them to send malicious emails to the owners' contacts has become the preferred way for cyber criminals to target potential victims. This trend is a testament to the effectiveness of modern-day spam filters, says Hearn, as they are good at blocking random spam but occasionally let through emails that purport to come from an existing contact.
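Google has not published the details of its risk engine, but the general idea of weighing sign-in signals can be illustrated with a deliberately simplified sketch. The signals, weights, and threshold below are invented for illustration and are not Google's.

# Toy illustration of signal-based sign-in risk scoring (weights and threshold are made up).
def sign_in_risk(attempt, profile):
    score = 0.0
    if attempt["country"] != profile["usual_country"]:
        score += 0.5                       # sign-in from an unexpected country
    if attempt["device_id"] not in profile["known_devices"]:
        score += 0.3                       # never-before-seen device
    if attempt["failed_attempts_last_hour"] > 3:
        score += 0.4                       # recent burst of failed password guesses
    return score

profile = {"usual_country": "US", "known_devices": {"laptop-1", "phone-7"}}
attempt = {"country": "RO", "device_id": "unknown-pc", "failed_attempts_last_hour": 5}

if sign_in_risk(attempt, profile) >= 0.7:
    print("Suspicious sign-in: ask the account owner a challenge question")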
What is a digital signature?

A digital signature is an encrypted hash of a message that can be decrypted by anyone who has a copy of your public key. Digital signatures improve the security and ease of document sharing; they have also increased the productivity of numerous businesses worldwide.

What is a digital certificate?

Digital certificates are used for various purposes. SSL certificates, one type of digital certificate, are used to secure domains. However, digital certificates can also secure documents, software, emails, and more. Ask a DigiCert representative about what kinds of digital certificates could simplify your life and provide you added security.

How does a digital certificate work?

A digital certificate carries identifying information much like an ID card, except that it is digital and so can be transferred quickly. Digital certificates include identifying information about the certificate holder (an individual or organization) to ensure the identity is accurate. Certificate Authorities, like DigiCert, validate the authenticity of those who apply for a digital certificate before they issue the certificate to you. Digital certificates utilize public key cryptography, meaning a public and a private key are used to ensure security and privacy. For secure email, the sender uses the recipient's public key to encrypt the message; the recipient then uses their own private key to decrypt it. For software and larger files, the content is also encrypted, but first it is passed through a hashing algorithm to produce a message digest. The message digest is encrypted with the sender's private key, producing a digital signature that is then attached to the file.

What is Two-Factor Authentication?

Two-factor authentication is authentication taken to the next level. In order to sign documents, you must present your two methods of authentication. In various movies you have likely seen people entering secure locations with a retina scan, fingerprint scan, voice authentication or even facial recognition. These are methods of authentication. The two-factor authentication that DigiCert utilizes includes a password and a USB token, which is more cost-effective for you but still very secure. This protects your digital certificates and provides you peace of mind that no one else can sign documents with your certificate.

What is a CDS certificate?

CDS is an abbreviation for Adobe's Certified Document Services, which has been around since 2005. This document service is being phased out. The program automatically trusts new digital IDs if their roots chain to the Adobe Root Certificate. The new document signing that DigiCert offers is one of Adobe's latest document offerings.

What is the difference between a digital signature and an e-signature?

A digital signature is like showing your ID, whereas an e-signature is like a scribble on paper. An e-signature could be typed by anyone, while a digital signature offers high assurance and password protection. If your digitally signed document is modified by anyone else, the PDF will display a warning.

3 Levels of E-signatures:
- Image of your signature: easily forged.
- Typed signature: easily forged.
- Digital signature: near impossible to forge!

What is saved on my token?

Only your certificate(s) are saved on your token—not your signed documents. Save your documents to your computer or in another safe location.

How does Adobe validate my digital signature?
Adobe checks the certificate’s validity including expiration and revocation. Next, Adobe checks to see if the document has been altered during transmission. Lastly, Adobe checks the certificate’s root, or in other words, if the certificate is from a trusted and approved provider. DigiCert is one of the carefully selected providers on the Adobe Approved Trust List.
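The hash-then-sign flow described in this FAQ can be sketched with the Python cryptography package. This is a generic RSA example for illustration only; it is not DigiCert's or Adobe's actual implementation, and production document signing uses CA-issued keys rather than a freshly generated key pair.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"Quarterly report, final version"

# The signer hashes the message and encrypts the digest with their private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can verify the signature; verify() raises
# InvalidSignature if the document or the signature has been altered.
public_key = private_key.public_key()
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("Signature verified: document unchanged since signing")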
Rey A.R. (CONICET), Putz K. (Antarctic Research Trust), Simeone A. (Andres Bello University), Hiriart-Bertrand L. (Andres Bello University), and 5 more authors. Emu, 2013.

How closely related marine organisms mitigate competition for resources while foraging at sea is not well understood, particularly the relative importance of interspecific and intraspecific mitigation strategies. Using location and time-depth data, we investigated species-specific and sex-specific foraging areas and diving behaviour of the closely related Humboldt (Spheniscus humboldti) and Magellanic (S. magellanicus) Penguins breeding in sympatry at Islotes Puñihuil in southern Chile during the chick-rearing period. The average duration of foraging trips was <20 h and did not differ significantly between species or between sexes of each species. Magellanic Penguins made significantly deeper and longer dives than Humboldt Penguins. Males of both species made significantly longer dives than females. Total distance travelled per foraging trip was significantly greater for males than for females, and females made more direct trips (less sinuous) than males. Foraging effort was concentrated in waters up to 15 km to the west and south-west of the colony. The overlap in density contours was lower between species than between sexes within a species. In general, dive characteristics and foraging areas differed more between Magellanic and Humboldt Penguins than between the sexes of each species. In contrast to the findings of studies of flying seabirds, the foraging behaviour of these penguins differs more between species than between sexes. © 2013 Bird Life Australia.
However, much larger drives, faster networks and SSD storage have now combined to create a fork in the road, and alternatives are needed. The first crack in the edifice was the realization that if a drive failed in an array of multi-terabyte drives, the rebuild time was so long that the possibility of a second, terminal failure was too high. This led to a much more complex RAID 6, creating two parity records for each stripe. RAID 6 has a major drawback, however: it requires a lot of compute power to generate parity. An alternative, RAID 50, uses single parity but replicates the data on another set of disks, which uses too much space.

The advent of solid-state disk made both of these options untenable. The issue is that SSDs are somewhere between fast and light-speed compared to hard drives, and those parity calculations became very hard to keep up with. In addition, the cost of SSD was so high that a configuration often wouldn't have the minimum of six drives required to make RAID 5 feasible. Many just needed one or two drives to act as caches and tier 0 storage for critical files. The result was that SSDs are often replicated or mirrored (RAID 1). These two approaches are very similar, with a second copy of the data on another drive, but replication goes a bit further and stores the data on a separate storage appliance, removing single points of failure.

The other "big event" in storage affecting RAID is the emergence of cloud services. The need to scale out put enormous pressure on storage approaches, and the idea of hard disk drives (HDDs) using replication made economic sense. The trade-off is that cloud service providers can buy HDDs at the lowest OEM prices, making it cheaper to add drives than high-speed RAID heads to protect data. The CSPs also addressed a pressing need for data dispersion for disaster recovery by having a third replica geographically distant from the other two. The CSP model makes sense with HDDs costing around $60 for a 2 TB drive. The cost of a typical (proprietary) RAID head node pays for a lot of drives! Replication also has the benefit of not slowing down when a drive is lost, since data doesn't need to be re-created from parity, and it also maintains integrity if a second drive fails, since there are three copies.

[Read about a new standard that ramps up SSD performance with a radical new approach to storage I/O handling in "NVMe Poised To Revolutionize Solid-State Storage."]

Historically, replication has been tied to an object storage model, somewhat like a file server on steroids. This model uses its own access protocol, REST, to get to data across the network. Still, block I/O operations to update data are possible, and this need has even created universal storage appliances that can manage file, block and object access to the same object store. An example that's rapidly gaining popularity is the open source Linux storage application, Ceph.

Replication's major drawback is the need for three or more full copies of data. Cleversafe has pioneered an extension of the RAID concept called erasure coding. This involves adding redundant information, somewhat like parity, to the data and then distributing it over multiple appliances. Typically 10 data blocks become 16 total blocks (10+6 coding), and the rule is that any 10 of these 16 blocks are sufficient to reconstruct the data. However, erasure code calculation is compute-intensive, slowing both writes and reads, especially when blocks are missing. The number of drives involved tends to be high.
This makes it useful for scale-out archival data, but problematic for SSDs in Tier 0 or 1. Likely this will remain an issue unless hardware assist logic becomes available. With SSDs straining performance limits and cloud storage using very inexpensive drives to protect data, it looks like replication will take the lead from RAID, if it has not already done so. RAID arrays won’t disappear overnight, but faster object stores, open source enterprise-grade software and cheap drives all mean that the playing field is tilted towards universal storage boxes and the replication approach.
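To make the single-parity idea discussed above concrete, here is a minimal sketch of RAID-5-style XOR parity over byte strings. Real arrays work on fixed-size stripes, and the erasure-coded case uses more elaborate codes such as Reed-Solomon rather than plain XOR, so treat this only as an illustration of why one lost block can be rebuilt.

# Single parity: XOR all data blocks together; any one lost block can be rebuilt
# by XOR-ing the parity with the surviving blocks.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]          # three equal-size data blocks
parity = xor_blocks(data)                   # stored on a fourth drive

# The drive holding data[1] fails; rebuild it from parity plus the survivors.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
print("Rebuilt lost block:", rebuilt)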
Walk into any public-sector data center today, and you'll likely see the same thing: rows upon rows of racks that hold servers, servers and more servers (though these days, there are far fewer racks and servers thanks to virtualization). And to help maximize energy efficiency and keep the room cooler, governments often utilize the hot aisle/cold aisle layout design, in which racks are lined up so that cold air intakes all face one way while hot air exhausts face the other. The rows composed of rack fronts are the cold aisles and typically face air conditioner output ducts, and the rows the heated exhausts pour into are called hot aisles, which typically face air conditioner return ducts.

In September 2012, The New York Times reported that a yearlong examination revealed that "most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner." While the public sector aims to control these costs by using such methods as the hot aisle/cold aisle layout design, cooling this equipment is still a concern -- and there's now a new solution for doing so: housing that equipment in vats of mineral oil.

By submerging system components in oil, heat can be dispersed far more efficiently than through air, says Andy Price, director of business development for Green Revolution Cooling, an Austin, Texas-based company that's dedicated to changing the way data centers are cooled. And oil cooling, he said, is particularly effective when it comes to high-density data centers, which is why the company's technology is gaining interest from both private and public industry. Environmental problems such as dust and extreme temperatures can be solved with oil cooling, while power consumption can be reduced by 40 to 45 percent, Price said. And a server room on a forward-operating base run by the military offers extreme conditions that illustrate the benefits oil cooling has to offer.

Though the military offers a good example of where the technology is useful, Price notes that everyone can benefit from oil cooling, particularly in an era of budget constraints. Reduced energy consumption means lower operating costs, but the initial investment to build a data center can be reduced, too. "They have to manage their costs," he said. "If they're tasked with building a new data center, or even retrofitting, they have to look at the equipment. And our solution, from a capital standpoint, is less expensive than building out a traditional air-cooled data center." Cooling computer equipment with air requires air flow management systems, specialized rooms, raised floors, as well as additional generators and uninterruptible power supplies (UPS) to support the air cooling systems. But when using oil cooling, Price said, "those things can typically be cut in half, and generators and UPS's are a significant expense."

By making simple modifications to traditional computing equipment, old servers and equipment can be used in an oil-cooled system. Cooling fans are removed, and thermal paste is replaced with indium foil. And there are several solutions for managing storage devices. Hard disk drives (HDDs), for instance, used to have to be sealed, Price said, but now there are drives sold that come pre-sealed, like the helium-filled drives sold by Hitachi -- or solid-state drives (SSDs) can be used. Alternatively, HDDs can be mounted outside of the fluid, attached to heat sinks that are submerged in oil.
As with most new technologies, oil cooling has its detractors: some are understandably hesitant to believe that submerging computer parts in liquid is a good idea. But Price said the technology is now beyond the testing period. "The technology works," he said. "We're beyond the point where we have to demonstrate that servers can survive in a dielectric fluid and that they're actually more reliable." The oil isn't just safe for components; it's safe for people too, Price said. "It's not a harmful solution, it's very, very safe for humans to be exposed to," he said. "It's baby oil without the fragrance. It's safe for human exposure, even safe for human consumption." At 104 degrees Fahrenheit, it might even be good for the skin if someone were to, say, take a bath with the servers, he said.

At the end of 2012, Intel completed a yearlong test to measure the benefits of Green Revolution's oil cooling system -- and the semiconductor giant endorsed the technology. "We can reduce cooling energy use by 90 to 95 percent while also reducing server power by 10 to 20 percent," Intel reported upon completion of the pilot. (And the company is reportedly continuing to evaluate the long-term viability of the technology to see how data center costs might be reduced.) While the technology is best suited for such places as research facilities, national labs, military bases, weather modeling and national weapons research labs, Price said scale is not the main factor driving cost savings. The savings, he says, come from power density – the denser a data center, the more benefit oil cooling confers.

Though no public-sector entities are known to have deployed this system of cooling just yet, some are going to keep their eyes on it. Officials at the city of Sacramento, for instance, said they recently learned about the technology, and though they're not eager to become an early adopter, CIO Gary Cook said it looks like it has promise ... though there are some potential perceived drawbacks. "I'd hate to work on the machine after you pull it out of the mineral oil," he said. "It's going to be a mess." Despite that, he admitted that even a 5 percent savings on cooling overhead could make the technology an attractive investment. The city manages an ever-shrinking server room of about 40 to 50 server racks. The equipment footprint has been shrinking thanks to server virtualization. "We're about 65 percent virtualized right now," said Darin Arcolino, IT manager of technical infrastructure. While virtualization offers many of the same benefits as oil cooling, such as savings on power and a decreased equipment footprint, the technology could someday become the norm, Cook estimated. "Five or 10 years ago, people weren't virtualizing servers and now it's the norm," he said. "So this could be the next generation of the norm for cooling systems."
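The Intel figures quoted above can be turned into rough facility-level arithmetic. The baseline PUE and IT load below are assumptions chosen only to illustrate the calculation, and the sketch treats all non-IT overhead as cooling, which is a simplification.

# Rough savings estimate from the quoted ranges (illustrative numbers only).
it_load_kw = 100.0          # assumed IT load of a small server room
baseline_pue = 1.8          # assumed air-cooled facility: 0.8 kW of overhead per kW of IT load

cooling_kw = it_load_kw * (baseline_pue - 1.0)
new_cooling_kw = cooling_kw * (1 - 0.90)        # low end of the 90-95% cooling reduction
new_it_kw = it_load_kw * (1 - 0.10)             # low end of the 10-20% server power reduction

before = it_load_kw + cooling_kw
after = new_it_kw + new_cooling_kw
print(f"Facility draw: {before:.0f} kW before, {after:.0f} kW after "
      f"({(1 - after / before) * 100:.0f}% lower)")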
We introduced the topic of Intrinsic Safety yesterday in our August 3 post. In that post, I promised to cover the Intrinsically Safe definitions and standards today.

Non-incendive Devices, Circuits, and Components: incapable of generating thermal or electrical energy sufficient to ignite a volatile atmosphere under normal operating conditions, although sufficient energy for ignition could be generated under fault conditions. A non-incendive device is designed for use in environments where the specified hazard may be present but is not likely to exist under normal operating conditions.

Class I is part of the National Electrical Code definitions of hazardous location classifications and protection techniques. The Class I classification is a segment of the basic designation, which is listed by "class" and "division." Class I locations are areas where flammable gases may be present in sufficient quantities to produce explosive or flammable mixtures. Class II locations can be described as hazardous because of the presence of combustible dust. Class III locations contain easily ignitable fibers and flyings.

Division 1 designates an environment where flammable gases, vapors, liquids, combustible dusts or ignitable fibers and flyings are likely to exist under normal operating conditions. On the other hand, Division 2 is an environment where flammable gases, vapors, liquids, combustible dusts or ignitable fibers and flyings are not likely to exist under normal operating conditions. Hazardous atmospheres are further defined by "groups."

Intrinsically Safe (I-Safe) Devices, Circuits, and Components: incapable of generating thermal or electrical energy sufficient to ignite a volatile atmosphere under either normal or abnormal operating conditions. Consequently, intrinsically safe systems have much wider application than their non-incendive counterparts. Non-incendive systems are generally less costly and easier to maintain than either explosion-proof or intrinsically safe systems.

Class 1, Division 2 Safe Device: a device which is safe to operate in locations (1) in which volatile flammable liquids or gases are handled, processed or used but are normally confined in enclosed containers or systems, (2) in which ignitable concentrations of gases or vapors are normally prevented by ventilation, or (3) which are adjacent to Class I, Division 1 locations and not separated by a vapor-tight barrier. An intrinsically safe device is approved for use in the specified class and division and will not produce any spark or thermal effects that will ignite a specified gas mixture.

ATEX: derived from the French "ATmosphere EXplosible" (explosive atmosphere). Refers to ATEX Directive 94/9/EC, the European regulation governing equipment and protective systems intended for use in potentially explosive atmospheres.

Contact DecisionPoint Systems, Inc. to learn more about non-incendive and Intrinsically Safe (I-Safe) mobile devices and what you need to protect your employees.
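For quick reference, the vocabulary above can be captured in a small lookup structure. This sketch simply encodes the definitions given in this post; it is not a substitute for the NEC text or for a qualified hazardous-location assessment.

# Hazardous-location vocabulary from the definitions above (informational only).
NEC_CLASSES = {
    "Class I": "flammable gases or vapors may be present in ignitable quantities",
    "Class II": "combustible dust may be present",
    "Class III": "easily ignitable fibers and flyings may be present",
}
NEC_DIVISIONS = {
    "Division 1": "the hazard is likely to exist under normal operating conditions",
    "Division 2": "the hazard is not likely to exist under normal operating conditions",
}

def describe(location):
    cls, div = location            # e.g. ("Class I", "Division 2")
    return f"{cls}, {div}: {NEC_CLASSES[cls]}; {NEC_DIVISIONS[div]}"

print(describe(("Class I", "Division 2")))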
The tiny robots were an offshoot of research into new types of materials by DOE physicists Alexey Snezhko and Igor Aronson. The goal of the scientists’ work is to develop structures that behave like biological systems, the two said in an interview. Snezhko and Aronson have developed self-assembling structures one millimeter wide that can be controlled by magnetic fields, according to a posting on Argonne's website. Consisting of microscopic particles of nickel or iron, the structures are suspended in the interface between a solution of water and silicon oil. When an alternating magnetic field is activated, the particles assemble into flower-like shapes the scientists call “asters.” Applying a second, small magnetic field allows the scientists to move the asters and to open and close the structures like tiny jaws. This allows the tiny robots to pick up, move and release objects such as glass or plastic beads. The researchers also discovered two types of asters, which rotate or swim in opposite directions. By grouping several of these counter-rotating asters together in a circle, Snezhko and Aronson were able to use them to draw small, free-floating particles into the center of the circle and keep them there. Manipulating very small objects is a challenge for robotics. The asters are in a size category between mechanical micromanipulators and laser-powered manipulation. Gripping and manipulating very small objects without damaging them has always been a problem with mechanical systems, Aronson said. The asters exert more force than lasers, but are more delicate than mechanical grips, which opens the potential for a variety of applications in micro-scale engineering and research. But to get there, more must be understood about how objects assemble and function at this scale. The Argonne research has been ongoing for four years. It began with the scientists studying the interaction of molecules and surfaces and interfaces. Snezhko and Aronson’s first discovery was that in an oil and water medium, nickel and iron particles will self-assemble into long strands, or “snakes.” They then sought to scale down the size of the snakes, which led to the development of the asters. The research is multi-purposed. Besides working with the asters, the scientists have developed algorithms and computer models to predict what happens in the interface. “We have a pretty good idea of what’s going on now,” Aronson said. The robots represent an important stage between the mesoscale (multi-molecular) and the molecular level. The niche for microscopic robots is very diverse, Aronson said. Among the applications that the Argonne researchers have studied are activities that take place in the interface area of oily substructures, such as the zone between oil and water. In these areas, microrobots would be useful by being able to apply and remove substances without destroying the interface. Another advantage is that the microrobots can be manufactured and controlled autonomously. It is possible to develop a computer program to control the magnetic coils that manipulate the robots. The scientists are also studying how to provide individual particles in the asters with more functionality. One consideration is to use microscopic rod structures because parts of the rods can be assigned different functions, such as being able to recognize certain particles. The micro robots currently cannot distinguish between a glass bead and a plastic one. 
One of the scientists’ goals is to make the robots able to distinguish this difference. Snezhko and Aronson are now trying to modify and control the functionality of particles when their structures change. The goal of the current work is to understand what happens and what works in these combinations. “We’re trying to find the basic principles of how to make them and what functionality they will have,” Aronson said. Fully understanding how the micro robots operate in two-dimensional systems will allow researchers to move on to more complex, three-dimensional applications. This would open the possibility of technologies capable of building new types of electronics or other structures at the molecular level. “But in order to move there, we need to understand what happens in two dimensional systems,” Snezhko said.
In my article Data Warehouse Construction: Behavior Pattern Analysis, we observed the usual constructional behaviors with a lab task of emptying all 1000 tables in a database "my_db." There, we showed that the employed approach was not effective and analyzed the reasons for this. In my article Data Warehouse Construction: The Real Life, we showed that the insights obtained there are relevant to real-life data warehouse construction today. In this article, we go back to the laboratory to consider some fundamentals of another type of constructional approach, the metadata-driven generic one.

Generator: Another Solution for the Lab Task

Assume that you know some basics of SQL (structured query language). Then, you should know what the following SQL statement does if it is executed in an SQL execution environment:

SELECT 'DELETE FROM my_db.' || tablename || ';' FROM my_catalog.tables WHERE databasename = 'my_db';

Actually, this statement perfectly generates, as its execution result, all DELETE statements listed in the lab example discussed in Data Warehouse Construction: Behavior Pattern Analysis, in a fraction of one second! They are namely:

DELETE FROM my_db.tab_adfhkfha;
DELETE FROM my_db.tab_lkeasr;
DELETE FROM my_db.tab_dajpgh;

Here is a little technical detail for a better understanding of the story.

- When executed, the above SQL statement looks up the table "tables" located in the catalog database "my_catalog," fetches all records in it about the tables that are hosted in the database "my_db" and takes the names of these tables (tablename) in hand. For each of these table names, it constructs a character string by concatenating three string portions: "DELETE FROM my_db.", the current value of the variable "tablename" from the 1000 table names just taken in hand, and the character ";" for a syntactically correct end of every SQL statement.
- Before we do this, we need to put information about the related tables into the table catalog, i.e., the table "tables" located in the catalog database "my_catalog." Each of these tables has a corresponding record in this catalog table with at least two columns, "databasename" and "tablename," denoting the hosting database and the table in consideration, respectively.

It is easy to verify that each such character string corresponds to one of the DELETE statements desired. In this sense, the SQL statement in consideration is a statement generator, although a simple one. What you still need to do is simply execute these generated statements. Note that there is no cold-blooded "copy," no powerful "paste," no clever "search," no elegant "replace," no noxious "adjust" and no skeptical "verify" anymore. In short, there is no operations repetition (discussed in Data Warehouse Construction: Behavior Pattern Analysis) at all!

Now, let us have a close look at the components of this statement generator. The first portion 'DELETE FROM my_db.' and the third one ';' of the complete character string to be constructed by the generator are constant. If the traditional approach is applied, they are the repeated contents; in particular, they are the major motivation for applying the operations pair "copy & paste" 999 times. In the terms introduced in the article mentioned above, they are carriers of certain domain-generic knowledge, regarding the SQL syntax and the programming style conventions in our case. The second portion is variable. Its concrete value, i.e., the table name, is specific for each individual table.
With the traditional approach, this portion is where the operations chain "search-replace-adjust-verify" is applied, again 999 times. In the terms introduced in the article mentioned above, this portion represents the carriers of certain object-specific knowledge, i.e., the table name in our case.

What are the essential differences between the two approaches regarding constructional behaviors? With the new approach, the carriers of domain-generic knowledge are touched/edited only once, instead of the additional 999 times of manual "copy & paste" with the traditional approach. In other words, with the new approach the domain-generic knowledge is centralized and, thus, not distributed. Although the 1000 generated DELETE statements contain these carriers, they are in principle not to be read, understood or changed, just as with a piece of binary code. This way, the carriers of generic knowledge remain just that, and no noxious "adjust" can change this state.

In Data Warehouse Construction: Behavior Pattern Analysis, we introduced domain-generic knowledge in a very thrifty form and used it to analyze the behaviors in data warehouse construction. In general, by generic knowledge we mean knowledge that shows general validity and applicability in a given domain of interest. Therefore, we call it domain-generic knowledge. A domain in our context is a well-defined system area such as a source application, the data warehouse, and so forth. Representatives of the domain-generic knowledge are the data warehouse architecture guidelines, the programming style conventions and, in particular, the algorithms employed. In fact, the latter generally represents the major portion of the domain-generic knowledge involved in data warehouse construction. As an important matter of fact, the architect team produces nothing but domain-generic knowledge.

With the new approach, we know exactly where the object-specific knowledge is to be placed. Thus, there is no need for the operations pair "search & replace." Moreover, we know exactly which object-specific knowledge is to be used, and we always get the correct one. Hence, there is no need for the operations pair "adjust & verify." In short, there is no need for the expensive and time-consuming operations repetition analyzed in Data Warehouse Construction: Behavior Pattern Analysis. Therefore, the new approach is substantially more effective than the traditional one.

But how do we obtain such object-specific knowledge? With the second point of the above explanation, we may have left the impression that we would manually put information about the related tables into the catalog table. If that were the case, our life as data warehouse constructors would not be improved substantially by the new approach, because manual acquisitions are generally error-prone as well, and we would have to "adjust & verify" their results individually. Actually, we obtain this information perfectly for free! When a table is created in a given host database such as "my_db" using any database management system available today, information like the table name and the name of the database hosting this table is stored by the system automatically as a record in the so-called system catalog table "tables" or the like. If the table is dropped later, this record is removed from the system catalog table automatically. In short, a table exists in a database if and only if there is a corresponding record in the system catalog table "tables."
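As an illustrative aside, the same catalog-driven generation pattern can be demonstrated end to end with SQLite, whose system catalog is the sqlite_master table. The table names and the in-memory database below are made up for the demonstration, and the catalog columns differ from the databasename/tablename pair used in this article.

import sqlite3

# Re-create a tiny version of the lab setup in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
for name in ("tab_adfhkfha", "tab_lkeasr", "tab_dajpgh"):
    con.execute(f"CREATE TABLE {name} (id INTEGER)")
    con.execute(f"INSERT INTO {name} VALUES (1)")

# Generator step: read the system catalog and emit one DELETE statement per table.
rows = con.execute("SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
statements = [f"DELETE FROM {name};" for (name,) in rows]

# Execution step: run the generated statements.
for stmt in statements:
    print(stmt)
    con.execute(stmt)
con.commit()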
This information exists in the system catalog without being perceived by us, and many people are indeed not aware of its existence. To make the story perfect, therefore, the above statement generator should look like:

SELECT 'DELETE FROM my_db.' || tablename || ';' FROM system_catalog.tables WHERE databasename = 'my_db';

that is, "my_catalog" is replaced by "system_catalog." Actually, the so-called "information" mentioned above, regardless of whether it is stored in "my_catalog" or in "system_catalog," is nothing but our object-specific knowledge. As a matter of fact, the object-specific knowledge, in turn, is nothing but the so-called operative metadata.

Operative Metadata

Operative metadata defines operative/system objects and the relationships among them in the system, and therewith determines the system behavior or state. It is, thus, indispensable for the well-functioning of the system in consideration. Examples of operative metadata are the column list of a table, or the column mappings from a source table to a target table. The former is usually stored in the system catalog, e.g., in "system_catalog" above, and maintained by the system automatically, whereas the latter is stored in the user/tool-defined catalog, e.g., in "my_catalog" above, and maintained by the system constructors manually. For business users, operative metadata is data that is stored in the systems somewhere. To them, it is data they do not always understand and are, therefore, not interested in.

It is noteworthy that operative metadata is object-specific by definition. As a matter of fact, the activities in constructing data warehouses that are not repetitive are related to operative metadata. It has to be treated individually and specifically for each of the concrete objects, e.g., tables or mappings. This is the reason why we said previously that the object-specific knowledge is nothing but the operative metadata.

In the terms explained above, the new constructional approach type mentioned at the beginning of this article, i.e., the metadata-driven generic one, could now be called an operative-metadata-driven, domain-generic-knowledge-centralized approach type. This designation is a little long but complete. In this article, we observed actually only one of the possibilities of this approach type. In the upcoming articles, we will investigate another possibility, an even more effective one. In fact, this approach type is nothing but a new constructional paradigm, in the sense of Thomas Samuel Kuhn (1922–1996), as will be discussed in my next article.

References
- "Domain-generic knowledge" is a concept introduced in Constructing Data Warehouses with Metadata-Driven Generic Operators, and More (B. Jiang, DBJ Publishing, 2011). In this book, you can find a huge amount of domain-generic knowledge in the form of, for instance, a sophisticated, detailed reference architecture for enterprise data warehouses and more than two dozen generally applicable algorithms frequently found in diverse data warehouse realizations.
- "Operative metadata," a concept also introduced in Constructing Data Warehouses with Metadata-Driven Generic Operators, and More (B. Jiang, DBJ Publishing, 2011), is discussed in more detail in Metathink: An Enterprise-Wide Single Version of the Truth, and Beyond on BeyeNETWORK.com.

SOURCE: Data Warehouse Construction: Generator, Generic Knowledge and Operative Metadata
Fiber optic cable protects the fibers from the environment encountered in an installation. Outdoor fiber cable is designed to be strong enough to protect the fibers and operate safely in harsh outdoor environments: it can be buried directly, pulled in conduit, strung aerially or even placed underwater. Indoor cables don't have to be that strong. Outdoor fiber optic cable is composed of many fibers enclosed in protective coverings and strength members. Common features for fiber optic cable include polarization maintaining, graded index, and metalization. Most outdoor fiber cables are of loose-buffer design, with the strength member in the middle of the cable and the loose tubes surrounding it. The loose tubes are filled with waterproof gel, and the materials and gels used between the different cable components help make the whole cable resistant to water. Typical outdoor fiber optic cable types are used for aerial, direct buried and duct applications.

Loose Tube Cables

Loose tube cables are the most widely used cables for outside plant trunks, as they can be made with the loose tubes filled with gel or water-absorbent powder to prevent harm to the fibers from water. Loose tube fiber optic cables are composed of several fibers together inside a small plastic tube, which are in turn wound around a central strength member and jacketed, providing a small, high-fiber-count cable. They can be installed in ducts, direct buried, or in aerial/lashed installations for trunk and fiber-to-the-premises applications. Loose tube cables with singlemode fibers are generally terminated by splicing pigtails onto the fibers and protecting them in a splice closure. Multimode loose tube cables can be terminated directly by installing a breakout kit, also called a furcation or fan-out kit, which sleeves each fiber for protection.

Ribbon cable is preferred where high fiber counts and small-diameter cables are needed. This cable has the highest packing density, since all the fibers are laid out in rows in ribbons, typically of 12 fibers, and the ribbons are laid on top of each other. Not only is this the smallest cable for the greatest number of fibers, it's usually the lowest cost. A typical 144-fiber ribbon stack has a cross-section of only about 1/4 inch or 6 mm, and the jacket is only 13 mm or 1/2 inch in diameter! Some cable designs use a "slotted core" with up to 6 of these 144-fiber ribbon assemblies for 864 fibers in one cable! Since it's outside plant cable, it's gel-filled for water blocking or dry water-blocked. These cables are common in LAN backbones and data centers.

Armored Fiber Optic Cable

Armored cable is used in direct buried outside plant applications where a rugged cable is needed and/or for rodent resistance. Armored cable withstands crush loads well, for example in rocky soil, often necessary for direct burial applications. Cable installed by direct burial in areas where rodents are a problem usually has metal armoring between two jackets to prevent rodent penetration. Another application for armored fiber optic cable is in data centers, where cables are installed under the floor and one worries about the fiber cable being crushed. The armoring means the cable is conductive, so it must be grounded properly.

Aerial Fiber Optic Cable

Aerial cables are for outside installation on poles. They can be lashed to a messenger or another cable (common in CATV) or have metal or aramid strength members to make them self-supporting.
A widely used aerial cable is optical power ground wire (OPGW), a high-voltage distribution cable with fiber in the center. The fiber is not affected by the electrical fields, and the utility installing it gets fibers for grid management and communications. This cable is usually installed on the top of high-voltage towers but brought to ground level for splicing or termination.

Fiber optic indoor/outdoor cables are designed to meet both the stringent environmental requirements typical of outside plant cable and the flammability requirements of premises applications. They are ideal for applications that span indoor and outdoor environments. By eliminating the need for an outside-to-inside cross-connection, overall system reliability is improved and installation costs are lower.

Underwater and Submarine Cables

It is often necessary to install fibers under water, such as when crossing a river or lake where a bridge or other above-water route is not possible. For simple applications a rugged direct burial cable may be adequate. For true undersea applications, cables are extremely rugged, with fibers in the middle of the cable inside stainless steel tubes and the outside coated with many layers of steel strength members and conductors for powering repeaters. Submarine cables are completed on shore, then loaded on ships and laid from the ship, often while operational to ensure proper operation.

FiberStore offers a comprehensive range of multimode and single-mode fiber optic cables in indoor, outdoor, armored, tight-buffered and loose tube constructions, covering all possible applications.
Protecting Consumer Privacy

In the near term, barcode-type RFID tags are unlikely to reach the hands of consumers on a regular basis—but they will eventually. As a result, consumer privacy is becoming a major issue around RFID in the media, and a critical concern for would-be deployers of RFID. Industry approaches to consumer privacy vary. Some enterprises are proposing policy guidelines for use of RFID information. EPCGlobal proposes, for instance, to enforce clear labeling of RFID-tagged products, among other measures. Policy-based approaches to RFID privacy will help. It is the position of RSA Laboratories, however, that policy guidelines are in and of themselves insufficient to guarantee consumer privacy. After all, RFID-tag reading is not a visible process. Consumers have no easy way of knowing when RFID policies are adhered to or breached. In fact, RFID tags can be so small and so easily embedded in products that consumers may not even know when they are carrying them! So what technologies can help protect consumer privacy? Here are a few approaches proposed by scientists:

- Kill codes: Perhaps the most straightforward approach to protecting consumer privacy is to ensure that consumers do not carry live RFID tags in the goods they purchase. With this aim, EPCGlobal standards support kill codes on RFID tags. These are PIN-protected commands that cause RFID tags to disable themselves permanently, so that they are no longer readable. There have been some difficulties in making this approach workable in field tests. However, once these difficulties are overcome, kill codes are likely to be an important mechanism for protecting consumer privacy.
- RSA® Blocker Tag: Kill codes have a drawback. If tags do not function in the hands of consumers, then consumers can't benefit from them. Many envisioned benefits -- like "receipt-less" item returns, "smart" RFID-enabled appliances, and so forth -- would be unworkable. RSA Laboratories' proposal, the RSA Blocker Tag, aims to provide consumers with the best of both worlds: privacy and usable RFID tags.
- One may think of the RSA Blocker Tag as "spamming" any reader that attempts to scan tags without the right authorization (the RSA Blocker Tag is designed to manipulate the reading protocol to make the reader think that RFID tags representing all possible serial numbers are present). When a Blocker is in proximity to ordinary RFID tags, they benefit from its shielding behavior; when the Blocker tag is removed, the ordinary RFID tags may be used normally.
- Thanks to their selective nature, blockers do not interfere with the normal operation of RFID systems in retail environments. They prevent unwanted scanning of purchased items, but do not affect the scanning of shop inventories. Thus RSA Blocker Tags cannot be used, for example, to circumvent theft-control systems or mount denial-of-service attacks. They can only be used to protect the privacy of law-abiding consumers.
- Distance measurement: In initial experiments, scientists at Intel® have noted that RFID tags might be able to employ the signal-to-noise ratio of the transmissions they receive from a reader to estimate the distance of that reader from the tag. As distance implies trust in many circumstances, this might serve as a privacy-enhancing feature in RFID tags.

Protecting RFID Infrastructure

It is not only tags and readers, but all parts of an RFID infrastructure that present important security challenges -- particularly with the rich business intelligence that RFID data carry.
Thankfully, many well-established data-security tools for device authentication, end-to-end communication encryption, and database security can be applied to RFID systems. Given its special characteristics, however, RFID does present some unusual challenges. These will unfold as enterprises deploy RFID and learn their security needs, and they will present an important challenge to data-security specialists.
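Returning to the RSA Blocker Tag idea described above: readers typically single out tags with a binary tree-walking protocol, querying ID prefixes bit by bit, and a blocker that answers every prefix query inside a protected zone makes the reader believe every possible ID is present. The sketch below is a simplified, hypothetical simulation of that idea in Python; the 8-bit IDs, the privacy zone and the protocol details are invented for illustration and are not RSA's actual implementation or the EPC air interface.

```python
# Simplified simulation of a privacy "blocker tag" defeating binary tree-walking.
# All details (8-bit IDs, a privacy zone of IDs starting with a 1 bit) are assumptions.

ID_BITS = 8
PRIVACY_ZONE = "1"   # the blocker shields every ID whose first bit is 1

def anyone_answers(prefix, tags, blocker_present):
    """Does any real tag, or the blocker, respond to this prefix query?"""
    blocker = blocker_present and (PRIVACY_ZONE.startswith(prefix) or prefix.startswith(PRIVACY_ZONE))
    return blocker or any(t.startswith(prefix) for t in tags)

def tree_walk(tags, blocker_present, prefix=""):
    """Binary tree-walking: the reader only descends into subtrees that answer."""
    if not anyone_answers(prefix, tags, blocker_present):
        return []
    if len(prefix) == ID_BITS:
        return [prefix]                      # a (real or phantom) tag has been singulated
    return (tree_walk(tags, blocker_present, prefix + "0") +
            tree_walk(tags, blocker_present, prefix + "1"))

purchased_items = ["10110010", "11001100"]   # tags the consumer is carrying
print(len(tree_walk(purchased_items, blocker_present=False)))  # -> 2: both items read
print(len(tree_walk(purchased_items, blocker_present=True)))   # -> 128: real IDs lost among phantoms
```

Only the shielded zone is flooded with phantom responses, which mirrors the selectivity point made earlier: inventory scans outside the zone proceed normally.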
<urn:uuid:09099e5b-26dd-4a78-b389-a16ea4a299a2>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/research-areas/protecting-consumer-privacy.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00350-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907271
808
2.515625
3
So what's on the near-term horizon for consumer electronic technology? We all know by now about smart grid technologies and how they're going to help streamline and revolutionize the way utilities deliver electricity and the way homes and businesses control their use of power, but what about other complementary technologies? A new survey commissioned by IEEE and conducted by Zpryme goes a long way toward understanding what's in the nation's near-term future when it comes to new energy solutions. The topics that came up in the survey – energy storage, microgrids and distributed generation technologies like wind, solar and onsite power – help clarify the direction the world is heading in clean energy.

Zpryme surveyed 460 energy industry executives from around the world to arrive at the results of the study. Collectively, the study found, energy storage, distributed generation and microgrids will drive the evolution of energy markets over the next five years. These technologies are expected to increase the adoption of the smart grid, and spur new markets for software and systems that integrate these technologies into modern and future energy systems.

Microgrids. A microgrid is a localized grouping of electricity generation (from a variety of different sources), energy storage methods, and loads that can operate either independently or connected to the larger grid (macrogrid), either pulling energy from it or feeding energy into it. Microgrid technology, however, is badly in need of standards.

Distributed generation. Distributed energy generation involves generating electricity from many small energy sources instead of one large central energy source. Many countries are embracing distributed energy generation as a way of taking advantage of different energy sources and avoiding the costs – both monetary and environmental – of transporting power over long distances.

Energy storage. Much of the world today is pursuing newer and cleaner ways to store energy once it is generated. Particularly with energy from renewable sources (you can't keep wind or sun in a box), storage is a critical element of energy independence. Technologies like batteries, flywheels, compressed air or hydraulics, solar ponds, liquid hydrogen and many other methods (mechanical, chemical, thermal or electrical) are being experimented with. The major barrier to energy storage remains high costs.

Want to learn more about the latest in communications and technology? Then be sure to attend ITEXPO Miami 2013, Jan. 29 – Feb. 1 in Miami, Florida. Stay in touch with everything happening at ITEXPO. Follow us on Twitter.

Edited by Amanda Ciccatelli
<urn:uuid:04d3ff2e-d7f6-484a-a826-0f452b66a6d7>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/12/05/318518-new-report-determines-biggest-near-term-issues-smart.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00468-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916643
539
2.875
3
If you like your privacy when you are using your cellphone or surfing cyberspace, then you might find it disturbing how easily you can be personally identified while doing either. Here's a look at two different deanonymizing processes by which your privacy can be obliterated.

Location-based services sometimes offer to protect user privacy and to anonymize or obfuscate personally identifiable information (PII) in the location data. There has been research in the past showing ways to defeat the anonymization, but new research shows that "these methods can be effectively defeated: a set of location traces can be deanonymized given an easily obtained social network graph." This week at the Association for Computing Machinery's Computer and Communications Security (ACM CCS) conference in Raleigh, NC, researchers Mudhakar Srivatsa and Mike Hicks are to present "Deanonymizing mobility traces: using social networks as a side-channel" [PDF]. It's interesting how the mobility traces were matched to a contact graph and then social networks were exploited to find friendships via Facebook data and business relationships via LinkedIn.

Matching a user's mobility trace to their identity "can provide information about habits, interests and activities—or anomalies to them—which in turn may be exploited for illicit gain via theft, blackmail, or even physical violence," stated the research. It's worth a read to see how the researchers used Wi-Fi hotspots on a university campus, captured chats via instant messengers, as well as Bluetooth connectivity to show inter-user correlations. In these social network side-channel attacks, they were able to strip out privacy and deanonymize users via their mobility traces with an accuracy of 80%. And this flyer claimed that the "proposed algorithms to quantify information released in location traces, using social networks as a side-channel, are within 90% of the optimal." The research paper authors concluded [PDF]:

This paper studied the use of interuser correlation models to address this problem. In particular, we exploited structural similarities between two sources of inter-user correlations (the contact graph and the social network) and developed techniques to leverage such structural similarities to deduce mapping between nodes in the contact graph with that in the social network, thereby de-anonymizing the contact graph (and thus the underlying mobility trace). We validated our hypothesis using three real world datasets and showed that the proposed approach achieves over 80% accuracy, while incurring no more than a few minutes of computational cost in de-anonymizing these mobility traces.
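The heart of the mobility-trace attack is structural matching between two graphs: a contact graph inferred from location traces and a public social graph. The paper's algorithms are far more sophisticated; the toy sketch below only matches nodes greedily by neighborhood size, and every name and edge in it is made up purely for illustration.

```python
# Toy illustration: map anonymous contact-graph nodes onto named social-graph nodes
# by comparing degree (neighborhood size). Real attacks use much richer structural
# features, but the principle is the same: structure leaks identity.

contact_graph = {            # built from anonymized mobility traces (invented data)
    "t1": {"t2", "t3", "t4"},
    "t2": {"t1", "t3"},
    "t3": {"t1", "t2"},
    "t4": {"t1"},
}
social_graph = {             # scraped from a public social network (invented data)
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice"},
}

def greedy_degree_match(anon, named):
    """Greedily pair anonymous and named nodes with the most similar degrees."""
    anon_by_deg  = sorted(anon,  key=lambda n: len(anon[n]),  reverse=True)
    named_by_deg = sorted(named, key=lambda n: len(named[n]), reverse=True)
    return dict(zip(anon_by_deg, named_by_deg))

print(greedy_degree_match(contact_graph, social_graph))
# e.g. {'t1': 'alice', 't2': 'bob', 't3': 'carol', 't4': 'dave'}
```

Even this naive pairing recovers plausible identities in the toy data; with richer features such as co-location timing and community structure, accuracy climbs toward the figures reported in the paper.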
Then Jeremiah Grossman, founder of WhiteHat Security, has a different deanonymizing approach in his "I Know . . ." series. He builds on what he has previously demonstrated about attack techniques and how a user may do nothing more than visit the "wrong" site for that website to "learn what websites you've visited, how they can steal a browser's auto-complete data, what sites you are logged in to, surreptitiously activate a computer's video camera and microphone, list out what Firefox Add-Ons are installed, what you've previously watched on YouTube, who is listed in your Gmail contact list, etc." If you think you are relatively anonymous, then you'll be disappointed.

Grossman warned that unless a user takes "very particular precautions," nearly every website can quickly glean your personal information, such as "I Know: ...A LOT About Your Web Browser and Computer, ...The Country, Town, and City You Are Connecting From (IP Geolocation), ...What Websites You Are Logged-In To (Login-Detection via CSRF), ...I Know Your Name, and Probably a Whole Lot More (Deanonymization via Likejacking, Followjacking, etc.), ...Who You Work For, ...Your [Corporate] Email Address, and more...."

Grossman's entire "I Know" series is excellent, but in keeping with deanonymization via social networks, let's home in on "I know your name, and probably a whole lot more." Clickjacking techniques involve an invisible object that chases your mouse around the page, waiting for you to click on something, anything, while you are there. Clickjacking can be used by hackers to covertly turn on your computer's camera and microphone, but many people are unaware of that, as was highlighted in the study that found 1 in 2 Americans are 'clueless' about webcam hacking. Since we've also previously looked at cookiejacking, let's home in specifically on Followjacking via Twitter and Likejacking via Facebook. You should read Grossman's article, but here's his shorter explanation and demonstration in a video. Of the clickjacking, Grossman wrote:

By now it should be clear that this style of attack can be extended to LinkedIn, Google+, and other online services providing similar functionality. That list is quite long. I would like to reiterate a key lesson and highlight a new one.

- If a browser is logged-in to a social network or similar identity storage website, as many are persistently, a single-mouse click is all it takes for any website to reveal a visitor's real name and other personal information.
- If the browser happens to have the popular Tor proxy installed, it does not provide any protection against deanonymization via Likejacking and Followjacking.

Can we actually call this clickjacking --> deanonymization issue a "vulnerability"? If so, who is responsible for dealing with it? The browser vendors? The logged-in visitor? The social networking website(s)? The Web standards bodies?

Both examples of deanonymization, where your identity can be revealed via supposedly anonymized location data from a mobile device, or where a single click lets a website find out pretty much everything about you, actually create more questions than answers. Both are disturbing from a privacy/security perspective.

Follow me on Twitter @PrivacyFanatic
<urn:uuid:fdd3b8cf-6803-48d1-a926-f89d364f6fa6>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223338/microsoft-subnet/deanonymizing-you--i-know-who-you-are-after-1-click-online-or-a-mobile-call.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00010-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929018
1,420
2.53125
3
Is Open-Source More Secure?

Open-source software has faced both criticism and praise in the 15 or so years since it broke into the IT scene. Misconceptions, or myths, about open-source software have plagued the scene from the very beginning.

Closed-Source Programmers Are Liable for Their Software

One of the largest complaints about open-source software in the corporate environment is that there isn't anyone to point a finger at when problems arise. Simply put, the support isn't there for open-source software. This complaint, however, isn't accurate. All of the support information you need is provided by the nature of open-source software. The source code can be downloaded for free and is often heavily documented. In addition, the vast majority of open-source software developers offer cost-based support options for large companies. Moving further with this, most closed-source vendors will not allow you to view their source code, let alone make changes to it. Many open-source software vendors will continue to support a modified version of their software.

Open-Source Software Developers Create Backdoors in Their Software

Another misconception that simply doesn't hold water is that open-source software developers plant bugs, Trojans and backdoors in their software when building it. Although this probably occurs on occasion, it is not common. Interestingly enough, it really doesn't buy a malicious developer much, because the software's source code is readily available. Again, the nature of open-source software provides the safety of seeing exactly how it works. If you think there might be a problem, simply take that portion of the code out. The most interesting part of this myth is that closed-source programmers most likely build backdoors into their programs as well. The problem is you have no idea what a closed-source program runs on or how many Trojans, bugs or backdoors exist. It would be the equivalent of buying a car with its hood welded shut: you would just have to take the word of the salesman that it runs great and has a new motor. Open-source software would be the same car, but with a hood that opens and the entire engine blueprinted and documented, including all maintenance since the car rolled off the assembly line. Chances are that the latter car would also be free of charge. Which car would you buy?

Closed-Source Slows Attacks

Because open-source software is by nature all-revealing, one might think that it would make creating exploits for the software easier. It should be intuitive that closed-source software would circumvent this problem, right? Wrong. Crackers have programs called decompilers or disassemblers that will reverse engineer most closed-source software. This essentially provides an attacker with a blueprint of the closed-source software. Now we are at a major disadvantage, because only two groups know about an existing vulnerability: the crackers and the vendor. Two scenarios will most likely follow:

A) The hackers release a zero-day exploit, wreaking havoc on all users of the software, and the vendor patches as soon as it can. This could take weeks or months, and all the while you are struggling to stay secure.

B) The vendor finds the vulnerability and either releases a patch as soon as it can or chooses to leave it be, accepting the risk that an exploit might be developed.

Let's take the same scenario, this time using open-source software. A cracker reviews the source code for a popular open-source software suite.
After finding a major vulnerability, the cracker releases a vicious virus. Two scenarios can follow here as well:

A) The thousands of developers who took part in building the software come together and quickly create a patch. This could take a comparable amount of time to a closed-source vendor's response.

B) You decide to patch it yourself. After studying the vulnerability, you simply make a few changes to the source and recompile the software.

Because the developers of open-source software are often users themselves, they will usually work hard to ensure vulnerabilities are patched quickly. Even if they don't, remember, this is open-source software, so you can do it yourself. Often, other users of the software will develop a fix and post it online for others to use, furthering the benefits of open-source software.

Open-source software provides the flexibility that allows an application to be secure. There are many variables that play into the security of an operating system or application. Being open-source simply gives an application the potential to be more secure; it never guarantees it. Always remember that simply because software is classified as open-source, it should not automatically be considered secure. Is open-source software without a doubt more secure than closed-source software? Not necessarily, but most people wouldn't buy a car with the hood welded shut.

Brad Causey is a security consultant and owns Zero Day Consulting, an incident response and penetration testing company in Alabama. Brad can be reached at firstname.lastname@example.org.
<urn:uuid:b87b6280-91de-44c3-8b75-d5582be63a79>
CC-MAIN-2017-04
http://certmag.com/is-open-source-more-secure/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00248-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945443
1,044
2.9375
3
November 16, 2012

You might be tempted to think that In-Memory technologies and flash are concepts which have no common ground. After all, if you can run everything in memory, why worry about the performance of your storage? However, the truth is very different: In-Memory needs flash to reach its true potential. Here I will discuss why, and look at how flash memory systems can both enable In-Memory technologies and alleviate some of the need for them.

Note: This is an article I wrote for a different publication recently. The brief was to discuss at a high level the concepts of In-Memory Computing. It doesn't delve into the level of technical detail I would usually use here – and the article is more Violin marketing-orientated than those I would usually publish on my personal blog, so consider yourself warned… but In-Memory is an interesting subject so I believe the concepts are worth posting about.

In-Memory Computing (IMC) is a high-level term used to describe a number of techniques where data is processed in computer memory in order to achieve better performance. Examples of IMC include In-Memory Databases (which I've written about previously here and here), In-Memory Analytics and In-Memory Application Servers, all of which have been named by Gartner as technologies that are being increasingly adopted throughout the enterprise.

To understand why these trends are so significant, consider the volume of data being consumed by enterprises today: in addition to traditional application data, companies have an increasing exposure to – and demand for – data from Gartner's "Nexus of Forces": mobile, social, cloud and big data. As more and more data becomes available, competitive advantages can be won or lost through the ability to serve customers, process metrics, analyze trends and compute results. The time taken to convert source data into business-valuable output is the single most important differentiator, with the ultimate (and in my view unattainable – but that's the subject for another blog post) goal being output that is delivered in real time. But with data volumes increasing exponentially, that performance must be delivered by a solution which is also highly scalable. The control of costs is equally important – a competitive advantage can only be gained if the solution adds more value than it subtracts through its total cost of ownership.

How does In-Memory Computing Deliver Faster Performance?

The basic premise of In-Memory Computing is that data processed in memory is faster than data processed using storage. To understand what this means, first consider the basic elements in any computer system: CPU (Central Processing Unit), Memory, Storage and Networking. The CPU is responsible for carrying out instructions, whilst memory and storage are locations where data can be stored and retrieved. Along similar lines, networking devices allow for data to be sent to or received from remote destinations.

Memory is used as a volatile location for storing data, meaning that the data only remains in this location while power is supplied to the memory module. Storage, in contrast, is used as a persistent location for storing data, i.e. once written, data will remain even if power is interrupted. The question of why these two differing locations are used together in a computer system is the single most important factor to understand about In-Memory Computing: memory is used to drive up processor utilization. Modern CPUs can perform many billions of instructions per second.
However, if data must be stored or retrieved from traditional (i.e. disk) storage, this results in a delay known as a "wait". A modern disk storage system performs an input/output (I/O) operation in a time measured in milliseconds. While this may not initially seem long, when considered from the perspective of the CPU clock cycle, where operations are measured in nanoseconds or less, it is clear that time spent waiting on storage has a significant negative impact on the total time required to complete a task. In effect, the CPU is unable to continue working on the task at hand until the storage system completes the I/O, potentially resulting in periods of inactivity for the CPU. If the CPU is forced to spend time waiting rather than working, its efficiency is effectively reduced.

Unlike disk storage, which is based on mechanical rotating magnetic disks, memory consists of semiconductor electronics with no moving parts – and for this reason access times are orders of magnitude faster. Modern computer systems use Dynamic Random Access Memory (DRAM) to store volatile copies of data in a location where they can be accessed with wait times of approximately 100 nanoseconds. The simple conclusion is therefore that memory allows CPUs to spend less time waiting and more time working, which can be considered an increase in CPU efficiency.

In-Memory Computing techniques seek to extract the maximum advantage from this conclusion by increasing the efficiency of the CPU to its limit. By removing waits for storage where possible, the CPU can execute instructions and complete tasks with the minimum of time spent waiting on I/O. While IMC technologies can offer significant performance gains through this efficient use of the CPU, the obvious drawback is that data is entirely contained in volatile memory, leading to the potential for data loss in the event of an interruption to power. Two solutions exist to this problem: accepting that all data can be lost, or adding a "persistence layer" where all data changes must be recorded so that data may be reconstructed in the event of an outage. Since only the latter option guarantees business continuity, the reality of most IMC systems is that data must still be written to storage, limiting the potential gains and introducing additional complexity as high availability and disaster recovery solutions are added.
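A back-of-the-envelope calculation makes the waiting argument concrete. All of the numbers below are assumptions chosen for illustration (about 3 billion instructions per second, one data access per 100,000 instructions, 10 ms for a disk I/O, 100 ns for a DRAM access); real workloads vary enormously.

```python
# Rough model of effective CPU utilisation when some operations must wait on I/O.
# Every figure here is an assumption for illustration, not a measurement.

CPU_OPS_PER_SEC = 3e9          # ~3 billion instructions per second
INSTR_PER_IO    = 100_000      # assume one data access per 100,000 instructions

def effective_utilisation(io_latency_sec):
    compute_time = INSTR_PER_IO / CPU_OPS_PER_SEC      # time spent doing useful work
    total_time   = compute_time + io_latency_sec       # useful work plus waiting
    return compute_time / total_time

for name, latency in [("spinning disk (10 ms)", 10e-3),
                      ("flash array (100 us)",  100e-6),
                      ("DRAM (100 ns)",         100e-9)]:
    print(f"{name:22s} -> CPU busy {effective_utilisation(latency):7.2%} of the time")
```

Under these assumptions the CPU is busy well under 1% of the time when it waits on millisecond-class disk, around a quarter of the time with microsecond-class flash, and essentially all of the time with DRAM, which is exactly the inefficiency that In-Memory Computing sets out to remove.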
What are the Barriers to Success with In-Memory Computing?

The main barriers to success in IMC are the maturity of IMC technologies, the cost of adoption and the performance impact associated with adding a persistence layer on storage. Gartner reports that IMC-enabling application infrastructure is still relatively expensive, while additional factors such as the complexity of design and implementation, as well as the new challenges associated with high availability and disaster recovery, are limiting adoption. Another significant challenge is the misperception among users that data stored using an In-Memory technology is not safe due to the volatility of DRAM. It must also be considered that, as many IMC products are new to the market, many popular BI and data-manipulation tools are yet to add support for their use.

However, as IMC products mature and the demand for performance and scalability increases, Gartner expects the continuing success of the NAND flash industry to be a significant factor in the adoption of IMC as a mainstream solution, with flash memory allowing customers to build IMC systems that are more affordable and have a greater impact.

NAND Flash Allows for New Possibilities

The introduction of NAND flash memory as a storage medium has caused a revolution in the storage industry and is now allowing new opportunities to be considered in realms such as databases and analytics. NAND flash is a persistent form of semiconductor memory which combines the speed of memory with the persistence capabilities of traditional storage. By offering speeds which are orders of magnitude faster than traditional disk systems, Violin Memory flash memory arrays allow for new possibilities. Here are just two examples:

First of all, In-Memory Computing technologies such as In-Memory Databases no longer need to be held back by the performance of the persistence layer. By providing sustained ultra-low latency storage, Violin Memory is able to help customers achieve previously unattainable levels of CPU efficiency when using In-Memory Computing.

Secondly, for customers who are reluctant to adopt In-Memory Computing technologies for their business-critical applications, the opportunity now exists to remove the storage bottleneck which initiated the original drive to adopt In-Memory techniques. If IMC is the concept of storing entire sets of data in memory to achieve higher processor utilization, it can be considered equally beneficial to retain the data on the storage layer if that storage can now perform at the speed of flash memory. Violin Memory flash memory arrays are able to harness the full potential of NAND flash memory and allow users of existing non-IMC technologies to experience the same performance benefits without the cost, risk and disruption of adopting an entirely new approach.
<urn:uuid:f0f16136-f0fa-42f8-97e6-2b4ce366d111>
CC-MAIN-2017-04
https://flashdba.com/category/database/in-memory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00064-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942943
1,711
2.515625
3
Imagine, for a moment, the sum total of all information stored on every computer -- every desktop, mini and mainframe -- in the entire world at this exact point in time. Consider the trillions of gigabytes of information existing as electronic impulses stored in millions of hard drives planetwide. Now, imagine this universe of data doubled! According to The Essential Client/Server Survival Guide, 2nd ed., the total quantity of data on computers worldwide doubles every five years. With the widespread use of client/server technologies, including the Internet, expectations are that this doubling may soon occur yearly.

The sheer quantity of data now being stored digitally is almost unimaginable. The size and scope of a database containing complete information from a single state motor vehicle department, for example, is staggering. The task of enabling users even basic access to such large repositories is challenging, to say the least. However, growing requirements for storing, evaluating and analyzing massive data stores have brought about a new technology field -- data mining and data warehousing.

Data Grows With Population

In some instances, increases in state agency databases have been triggered by nontechnical factors. For example, the state of Florida, in general, and Palm Beach County, in particular, have experienced explosive population growth in the last 20 years, according to Roger T. Presas, certified public accountant and business process consultant to the Clerk of the Circuit Court in Palm Beach County. Following this population uptrend, the volume of information being stored by state agencies has rapidly expanded along with the population served. "The need to serve the fast-increasing population coupled with the requirement to improve the cost-effectiveness of governmental services have caused us as public officials to search for new solutions," Presas said.

The search led to the examination of new ways to store and analyze digital data to improve the accuracy, availability and relevance of related information. Initially, the Palm Beach County Circuit Court stored and retrieved information using mainframe technology. As the volume of data and the demand for retrieving it increased, county officials decided that a more flexible solution was needed. A decision was made to search for better tools.

The county targeted the processing of child support information as a specific function requiring better data tools. The Child Support Enforcement (CSE) system, an Informix-based data-mart application, was created to replace a 15-year-old mainframe application developed in-house. The resulting data-warehousing application solved significant problems, such as the need to shorten the turnaround time between receiving and disbursing child support payments. A second, equally important, requirement was the need to store and retrieve a greater quantity of child support case information required by both the courts responsible for processing child support cases and state and federal agencies. "In addition to meeting this need," commented Presas, "the data-warehouse solution enabled us to achieve more strict compliance with ever-changing legal mandates, reduce costs and increase employee productivity."

Choosing A New Solution

Arriving at such a solution is never an easy task. In this case, the decision-making process was simplified when the clerk of the court initially recognized the need to improve the state's child support data-management operation.
This preliminary decision was further supported when the state of Florida mandated development of a new child support enforcement application. The new application was made available to all court clerks in the state. "Clerks, early on, had concerns regarding the use of a database-based application. Many clerks lacked the technical personnel to undertake a project of this significance and were not familiar with the possible benefits of the solution," Presas said. Consequently, initial acceptance of the new tool was slow. This was not the case, however, with the clerk of Palm Beach County, where Presas works. The clerk decided to move ahead with the data-warehousing solution. When deployed in June 1997, Palm Beach County's approach proved to be the most successful implementation of the CSE application in the state.

Architecture, Deployment and Benefits

The CSE application maintains information required to process over 35,000 active child support cases in Palm Beach County. Data is stored and managed using Informix OnLine 7.23 running on an IBM RS/6000 SP 2 UNIX (AIX) server. All programs are written in the Informix "4GL" language. Hundreds of users access the application over a wide area network running TCP/IP. Users include the clerk's staff directly responsible for operating child support, judges hearing support cases in courtrooms, the public, and state and federal agencies. Interested parties can access payment information by calling an integrated voice response server running Edify software that connects directly to the database.

Data conversion and migration from the mainframe-based application to the CSE system required significant effort. The structure of the mainframe information was completely different from the design of the new data warehouse. Additionally, data elements -- like codes, indicators and similar items -- had to be converted to the new format. The critical nature of the information made these tasks more challenging. Data-conversion programs were developed and tested for extended periods until everyone involved was assured of the accuracy of the results. At the same time, the application was ported from the original Unisys 6000 platform to operate on an IBM server. This effort was undertaken to meet requirements for effective response to hundreds of concurrent user inquiries.

Once deployed, the increased efficiency of the CSE application allowed clerk management staff to reallocate human resources to other functions. This has resulted in estimated annual savings of over $200,000. Hardware and software maintenance costs have also been reduced but have not been specifically quantified. "Other benefits of the new system," stated Presas, "include increased payment turnaround, improved information flow to the courts, state and federal agencies and improved enforcement due to the availability of more timely, detailed information."

Mining For Fraud

For another daunting dataset, consider the information stored by major insurance companies. As insurance company information stores grow to unwieldy proportions, data-mining techniques are becoming an increasingly important weapon for companies trying to fight all sorts of fraud. According to a report from the Newsbytes wire service, medical insurance fraud recently made headlines due to the use of data-mining technology, which helped insurance investigators ferret out a scheme in which fictitious companies used the names of real doctors and patients to bill for services that were never provided.
Joyce Hansen, vice president of Integrity Plus Services in Minneapolis, told Newsbytes that Integrity Plus, an insurance fraud detection company, has been using IBM's Fraud and Abuse Management System to catch many forms of fraud. Integrity Plus has caught bills for services supposedly provided on Sundays and holidays, for clinics claiming to serve patients who live far away, and so on, Hansen said. While it is difficult to say exactly how much the system saves, Hansen said that in the first year of its use, the claims savings from catching fraudulent billing increased 20 percent.

According to an IBM technician, the system, designed in consultation with several customers in the insurance industry, looks at about 100 different claim characteristics to spot abnormal patterns that might suggest fraud. It might identify, for instance, the fact that a particular ambulance operator consistently claims longer runs than others in the same area. When the system spots a suspicious trend such as this, investigators can take a closer look.

Ben Barnes, general manager of global business intelligence solutions at IBM, admitted that the system usually cannot work fast enough to pre-screen claims, so when fraud is caught, the insurer may have to take legal action to recover money already paid. However, the service provider who has been caught once will be watched more closely in the future. The fact that technology exists to analyze claims looking for fraud should deter some would-be fraudsters. Others will try to outsmart the system, and Hansen said that is already happening as the perpetrators of fraud change their behavior in attempts to avoid detection. "They're learning those controls, and so they can bypass them," she said. However, she added that detection technology will continue improving to stay ahead of the fraud attempts.
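The pattern-spotting described above comes down to flagging claims whose characteristics sit far from the norm for comparable providers. The sketch below is a deliberately simple, hypothetical version of that idea, scoring a single feature (average claimed ambulance run length) with a z-score; production systems such as the one described examine on the order of a hundred characteristics over far more data, and all of the figures here are invented.

```python
# Toy anomaly check: flag providers whose average claimed ambulance run length
# sits far above the regional norm. All figures are invented for illustration.
from statistics import mean, stdev

avg_run_miles = {                 # average claimed run length per provider
    "provider_a": 6.2, "provider_b": 5.8, "provider_c": 6.5,
    "provider_d": 5.9, "provider_e": 14.7, "provider_f": 6.1,
}

values = list(avg_run_miles.values())
mu, sigma = mean(values), stdev(values)

for provider, miles in avg_run_miles.items():
    z = (miles - mu) / sigma
    if z > 2.0:                   # more than two standard deviations above the norm
        print(f"{provider}: {miles} miles (z = {z:.1f}) -> refer to investigators")
```

As the article notes, a flag like this is only a lead: investigators still take a closer look before anyone acts on it.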
Planning For The Future

The only thing one can say for certain is that information stores will continue to grow. Impossibly huge databases and the demands of an increasingly sophisticated user base will continue to challenge data managers. Fortunately, experience to date suggests that the industry will follow that growth trend and endeavor to provide more robust tools to facilitate the management and examination of these digital mountains. A new "sub-industry" of data warehousing and data mining has sprung up almost overnight to meet the demand. Government agencies will increasingly turn to these solutions to meet demand from both internal and public users. Technology advances in data storage and transfer (for data warehousing) and artificial intelligence (for data mining) will make the job easier moving forward.

A note of caution: data managers would be wise to carefully examine all the elements of a proposed solution to ensure it will be compatible with existing infrastructures and those of related agencies and organizations. One's ability to extend and evolve the application down the road is also important. As with any new technology arena, different vendors will promote different proprietary approaches to the problem. To the extent possible, try to implement a solution that will evolve with future demands. Make the investment in this new technology an investment for the future and not a "one-shot" solution likely to be rendered obsolete with the next wave of technological change.

While these areas are still developing, a working set of definitions is necessary to understand what data mining and data warehousing are and what they are trying to do. The following have been compiled from a variety of sources and seem to be currently agreed-upon descriptions. (Note: like any new technology, these definitions are subject to change as things develop.)

Data Warehousing -- A collection of data designed to support management decision-making. Data warehouses contain a wide variety of data that presents a coherent picture of business conditions at a single point in time. Development of a data warehouse includes development of systems to extract data from operating systems plus installation of a warehouse database system that provides managers flexible access to the data. (Courtesy of ZDNet's Webop
<urn:uuid:27c4fbdb-7804-4873-9e3a-2d3318cc580d>
CC-MAIN-2017-04
http://www.govtech.com/featured/Information-Explosion-Yields-Data-Nightmare.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946738
2,075
2.5625
3
BPEL - The Business Process Execution Language

What is BPEL?

BPEL is an XML-based grammar for describing the logic to coordinate and control web services in a business process. The roots of BPEL go back to December 2000, when Microsoft published XLANG, followed in March 2001 by IBM's publication of the Web Services Flow Language, or WSFL. BPEL 1.0, first released in July 2002, merged the flat-graph process-definition approach and the structured-constructs approach of those earlier languages. In May 2003, BPEL v1.1 was released with a set of revisions, and that was the version of the specification submitted to the OASIS organization. OASIS standardized the language in April 2007, and it is now formally known as WS-BPEL 2.0.

Why do we need BPEL?

So, why was the new language created? Why do we even need BPEL? BPEL provides:

- Support for web services relationships and interactions that are engaged in both short- and long-term business transactions. BPEL provides the foundation for automating business processes.
- Message-exchange correlation for long-running message exchanges, not just over a minute or two, but over days, weeks or months. BPEL provides industry-standard support for processes that require very long time periods to complete.
- Parallel processing of activities, which permits non-dependent actions to execute concurrently to improve process performance.
- Mapping of data between partner interactions, so it is possible, for example, to take the result from one web service and use it to invoke another web service.
- Consistent exception and recovery handling for deployed business processes.

BPEL Benefits

Some of the benefits of using BPEL include:

- BPEL is SOA (Service Oriented Architecture) compliant, meaning that it is based on web services, which are the set of protocols by which such services can be published, discovered and used in a technology-neutral, standard form.
- BPEL allows us to leverage existing standards and skill sets, all in a common language.
- Orchestrations deployed with BPEL are web services themselves, and therefore fit naturally into the existing web services stack.
- BPEL is expressed entirely in XML, uses and extends the WSDL 1.1 definitions, and uses XML Schema 1.0 for the data model.
- BPEL is platform and vendor agnostic, so a BPEL process will run on any engine that is BPEL-compliant.
- BPEL processes are interoperable between and among existing, running web services because they are themselves web services.
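Because BPEL is written as XML, the snippet below is not BPEL syntax. It is a rough Python analogue of what a tiny orchestration does: invoke two partner services concurrently (what BPEL expresses with a flow), map the result of those calls into the next step (an assign), and route failures through a recovery path (fault handlers). The service names, data shapes and return values are hypothetical placeholders.

```python
# Conceptual analogue of a small orchestration: parallel invokes, data mapping,
# and fault handling. Service names and return values are made up for illustration.
import concurrent.futures

def check_credit(customer_id):            # stand-in for one partner web service
    return {"customer": customer_id, "credit_ok": True}

def check_inventory(sku):                  # stand-in for another partner web service
    return {"sku": sku, "in_stock": 3}

def place_order(customer_id, sku):
    try:
        # Like a BPEL flow: non-dependent invokes run concurrently.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            credit_future = pool.submit(check_credit, customer_id)
            stock_future  = pool.submit(check_inventory, sku)
            credit, stock = credit_future.result(), stock_future.result()

        # Like an assign: map results of earlier invokes into the next message.
        if credit["credit_ok"] and stock["in_stock"] > 0:
            return {"status": "confirmed", "customer": credit["customer"], "sku": stock["sku"]}
        return {"status": "rejected"}
    except Exception as fault:             # Like fault handlers: a consistent recovery path.
        return {"status": "error", "detail": str(fault)}

print(place_order("C-1001", "SKU-42"))
```

A real BPEL engine expresses all of this declaratively and adds the long-running correlation, compensation and persistence machinery listed above, which ad-hoc scripting does not give you.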
<urn:uuid:6652d5d4-b14e-4fee-8766-616a820d926b>
CC-MAIN-2017-04
http://www.activevos.com/learn/bpel
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00148-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944017
587
2.609375
3
Ahmed Shakib - Technician – 2M CCTV

Cameras these days range from security surveillance units to your average point-and-shoot cameras. If you strip away the hardware that makes up the camera and the lens that lets you see the world, you will find an array of small light-sensitive photodiodes that make up a device called an image sensor. Depending on the specs of your security camera, you will notice that it has one of the following two image sensors inside: CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor).

CCD and CMOS sensors are both arrays of photodiodes, similar in principle to the cells integrated into solar panels, and essentially they perform very similar tasks to one another. They convert light into electrons, which are then read out and transformed into an image that can be displayed. Note that both CCD and CMOS sensors use the typical dimensions to describe their size: 1/3", 1/4", and so on.

Since a CCD moves its charged cells across the chip, it is less prone to distortion, but it consumes more power. This has traditionally led CCD image sensors to produce higher-quality images, have higher sensitivity to light, and produce far less noise than CMOS devices. The main reason CMOS devices have had lower video quality and higher image noise is that each pixel has dedicated transistors which sit very close to the photodiode; some of the light captured by the sensor falls on these transistors rather than the photodiode, degrading image quality. A CCD works differently, shifting its charged cells across the array and making use of external devices to process its signal. This external device could be a digital converter, such as a DVR, or a dedicated signal processor, and it converts the analog charge from the cells into a digital signal by matching each pixel to a digital value. Because it yields higher-quality images with much lower noise, the CCD image sensor has been very popular within the CCTV and surveillance industry.

The technologies behind CMOS have gone through extensive improvements, which have lowered prices for average cameras drastically. The few "cons" I have mentioned have also improved along the way. The quality of images taken with a CMOS sensor (after recent technological advancements) cannot be distinguished by the naked eye from those taken with a CCD device. Meanwhile, the advantages of having a CMOS device often outweigh the benefits of the CCD device. Unlike a CCD, the CMOS image sensor has transistors at each pixel that amplify the charge and move it across the X-Y wires, allowing it to consume less power than CCD devices (about 100 times less power). In addition, the CMOS image sensor is more flexible than CCD, because each pixel is read individually.

The production of CMOS is also very cost-effective. CMOS image sensors can be produced and shipped in less time than CCD image sensors, and because a CMOS chip is a standard semiconductor part, roughly 90 percent of the manufacturers that produce computer chips or other semiconductor chips could fabricate it. CCD sensors rely on special machines to manufacture the sensor chip and at times require many different facilities to complete one production run, making them more costly to produce.

Image Scanning Technologies (Progressive and Interlaced)

CCD and CMOS sensors found inside security cameras come with advanced built-in technologies that enhance their basic functionality. Interlaced and progressive scanning are two of these technologies. They process the images within the frames in order to hide flaws that would otherwise have been noticeable.
Interlaced scanning is exclusive to CCD devices. When interlaced images are captured, odd and even lines are created, which are then alternated at 25 (PAL) or 30 (NTSC) frames per second. The switching between odd and even lines is done so fast that it cannot be detected by the naked eye. Progressive scanning is common to both CCD and CMOS, but is mainly used by CMOS security cameras. Progressive scanning allows the camera to obtain values from each pixel in the sensor and scan them sequentially in order to produce a complete picture. This is very important in a security camera, because these scanning technologies allow the DVR to record more fluid footage and avoid the distortion created by a moving car or the leaves of a tree.

Since their introduction, CMOS devices have come a long way and have become a driving force for the overall camera industry. CCD devices will still be used in the next generation of analog security cameras because of their ability to achieve higher resolutions. CMOS devices are capable of processing higher-quality images and are used primarily in megapixel cameras because of their higher shutter speed and lower power usage. Whether you buy a camera that has a CCD image sensor or a CMOS image sensor, please note that looking at the live video on the security monitor will not allow you to distinguish between the two.
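To make the interlaced/progressive distinction concrete, here is a purely illustrative sketch: a "frame" is just a list of numbered rows, an interlaced capture delivers the odd and even rows as two separate fields, and a progressive capture delivers every row in one pass. No camera or DVR API is involved; the structures are invented for illustration.

```python
# Illustrative only: model a frame as numbered rows and compare how interlaced
# and progressive scanning deliver it.

ROWS = 8
frame = [f"row {r}" for r in range(ROWS)]

def interlaced_fields(frame):
    """One interlaced capture yields two fields: even rows, then odd rows."""
    even = frame[0::2]
    odd  = frame[1::2]
    return even, odd            # the fields are displayed alternately, 25/30 times a second

def progressive_scan(frame):
    """One progressive capture yields every row in order."""
    return list(frame)

even, odd = interlaced_fields(frame)
print("field 1 (even rows):", even)
print("field 2 (odd rows): ", odd)
print("progressive frame:  ", progressive_scan(frame))
# If the scene moves between the two field captures, weaving the fields back into
# one frame produces the comb-like tearing that progressive scanning avoids.
```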
<urn:uuid:5ee22407-dfda-4b17-9fb3-f0f0b9ad188c>
CC-MAIN-2017-04
http://www.2mcctv.com/blog/2012_07_25-ccd-vs-cmos-image-sensor-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947474
1,015
2.828125
3
Manufacturing Breakthrough Blog
Tuesday March 10, 2015

Over the course of time, every organization has experienced problems. Sometimes the problems are solved with simple solutions, while other times the problem requires an in-depth analysis before it can be solved. One thing is certain: in order to solve any problem, simple or complex, we must know the root cause of the problem. Unfortunately, many organizations and people tend to treat the symptoms of problems rather than the true root cause. And when this happens, the problem never really goes away.

A Causal Chain Example

While Six Sigma offers a variety of tools, one of my favorites is called the Causal Chain. I like this tool because, as you will see below, on a single sheet of paper we are able to see the cause-and-effect links of the various symptoms leading to the ultimate root cause. A causal chain is the path of influence running directly from the apparent problem to the ultimate root cause, along with the symptoms we see in between the two. In other words, a causal chain is an ordered sequence of events that links the causes of a problem with the effects of the problem. Each new link in the causal chain is created by repeatedly answering the question "Why?" At the end of the connected links lies the root cause of the problem.

The causal chain begins with the identification of the problem. The chain is constructed by placing the object/entity with the problem above the line, with the state that it's in being placed directly beneath the object/entity. For example, if you were to find that an extruded product is too wide, you would write "extruded product" on top and directly beneath it you would write "too wide." You would then ask the question "why?" and record the response accordingly. If there are multiple potential answers to the question, then record all of them in a structured way as follows:

In this example, we are looking at an extruder that is extruding a product to a specific width. As you can see, the identified problem is that the product width is too wide. You ask the question: why is the product width too wide? In this causal chain I have listed three answers to this question:

- Product cutting guide is loose
- The guide width is set too wide
- The product expansion is greater than it should be

There may be more potential causes, but for demonstration purposes I have only listed these three. For each new answer to the "why" question, you can look at that part of the chain and see if the predicted effect exists in your reality. If it does, you can correct it and move on to the next effect. If the effect no longer exists after you have corrected it, then you have solved the problem. You can also have more than one effect in your causal chain, as demonstrated in the figure above. If you have thought through the entire potential cause-and-effect relationship, then at the end of the chain lies the true root cause or causes.

A Question to Ponder

In my next posting we will discuss another very useful tool in the Six Sigma tool kit, the Cause & Effect Diagram. Like the causal chain, the C & E diagram is a very useful tool for identifying potential root causes. As always, if you have any questions or comments about any of my postings, just leave me a message in the box below and I will respond. Thanks for reading. Until next time.
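As a programmatic illustration of the same idea, the extruder example above can be written as a small tree in which each child node answers "why?" for its parent, and walking to the leaves surfaces the candidate root causes. This is only a hypothetical sketch of the structure, not a Six Sigma tool; the one deeper link under product expansion is invented to show how a chain extends.

```python
# The extruder causal chain as a nested structure: each child answers "why?"
# for its parent, and every leaf is a candidate root cause.

causal_chain = {
    "effect": "extruded product: too wide",
    "why": [
        {"effect": "product cutting guide is loose", "why": []},
        {"effect": "guide width is set too wide", "why": []},
        {"effect": "product expansion is greater than it should be",
         "why": [  # hypothetical deeper link, purely for illustration
             {"effect": "material temperature above specification", "why": []}]},
    ],
}

def candidate_root_causes(node, path=()):
    """Walk the chain; yield each leaf with the path of effects that leads to it."""
    path = path + (node["effect"],)
    if not node["why"]:
        yield " -> ".join(path)
    for child in node["why"]:
        yield from candidate_root_causes(child, path)

for cause in candidate_root_causes(causal_chain):
    print(cause)
```

Each printed line is one branch of the chain, read from the apparent problem down to a candidate root cause that can then be checked against reality.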
<urn:uuid:0268c934-6c0a-4226-ad0c-9ce8411d9fa9>
CC-MAIN-2017-04
http://manufacturing.ecisolutions.com/blog/posts/2015/march/six-sigma-and-the-causal-chain.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00174-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953692
715
2.765625
3
The breakthrough has been detailed in the journal Nature Nanotechnology.

Researchers from Stanford University have achieved a breakthrough in the development of a lithium anode battery, which is said to triple the battery life of gadgets. In a paper published in Nature Nanotechnology, the researchers claimed that an anode of pure lithium could be used to boost battery efficiency. According to the scientists, recent work has focused on high-capacity electrode materials such as lithium metal, silicon and tin as anodes, and sulphur and oxygen as cathodes. Lithium could be the best choice as an anode material because it has the highest specific capacity, but a lithium anode tends to form dendritic and mossy metal deposits, which raises safety concerns.

Stanford professor of materials science and engineering and leader of the research team, Yi Cui, said: "Of all the materials that one might use in an anode, lithium has the greatest potential. Some call it the Holy Grail. It is very lightweight and it has the highest energy density. You get more power per volume and weight, leading to lighter, smaller batteries with more power." But engineers have long tried and failed to reach this Holy Grail, Cui added.

Doctoral researcher in Cui's lab and first author of the paper, Guangyuan Zheng, said: "Lithium has major challenges that have made its use in anodes difficult. Many engineers had given up the search, but we found a way to protect the lithium from the problems that have plagued it for so long."

Stanford professor and research team member Steven Chu said: "In practical terms, if we can triple the energy density and simultaneously decrease the cost four-fold, that would be very exciting. We would have a cell phone with triple the battery life and an electric vehicle with a 300-mile range that cost $25,000 – and with better performance than an internal combustion engine car getting 40 mpg."

To address the formation of hair-like or mossy growths called dendrites, and the overheating they can cause, the scientists created a protective layer of interconnected carbon domes, called nanospheres, on top of their lithium anode. The protective layer resembles a honeycomb: it offers a flexible, uniform and non-reactive film that protects the unstable lithium. The carbon nanosphere wall is just 20 nanometers thick, and it would take 5,000 layers stacked one atop another to match the width of a human hair.

"The ideal protective layer for a lithium metal anode needs to be chemically stable to protect against the chemical reactions with the electrolyte and mechanically strong to withstand the expansion of the lithium during charge," said Cui.

The breakthrough is claimed to be useful for handheld gadgets, phones and electric cars.
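A quick sanity check on the scale figures quoted above: 5,000 walls of 20 nanometers stacked together should indeed land near the width of a human hair, taking roughly 100 micrometers as a typical hair width (an assumed, approximate figure).

```python
# Sanity-check the quoted scale: 5,000 layers of a 20 nm nanosphere wall versus
# the width of a human hair (~100 micrometres is an assumed typical value).
wall_nm  = 20
layers   = 5_000
hair_um  = 100

stack_um = wall_nm * layers / 1_000      # nanometres -> micrometres
print(f"stacked walls: {stack_um:.0f} um vs hair: ~{hair_um} um")
# -> stacked walls: 100 um vs hair: ~100 um, consistent with the article's claim
```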
<urn:uuid:443425fe-6894-4392-b7f5-ae9476e47b73>
CC-MAIN-2017-04
http://www.cbronline.com/news/mobility/devices/researchers-claim-lithium-anode-breakthrough-can-triple-battery-life-of-gadgets-4330001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950007
591
3.390625
3
Configuring Windows 2000 Networks for Mobile Users

If you've ever installed the Remote Access Service in Windows NT, you know just how simple the process is. Setting up a Windows NT server for remote access involves little more than enabling the Grant Dial In Permissions option for each user. If you wanted more security, you could also enforce a call-back option on a per-user basis. Because of the simplicity of Windows NT's Remote Access Service, you may have given little thought to Windows 2000's remote access capabilities. However, in Windows 2000, the remote-access features are very different from those in Windows NT. Rather than using a simple radio button in User Manager to enable or disable remote access, Windows 2000 uses an entire set of policies. You can configure these policies to achieve whatever level of security your organization requires. Three types of policies control remote access security: the Local Internet Authentication Services policy, the Central Internet Authentication Service policy, and standard group policy.

Local Internet Authentication Services Policy

The Local Internet Authentication Services policy exists at the local level. These policies are delivered by a service called Remote Authentication Dial In User Service (RADIUS) and can be used to regulate client-access permissions based on a number of criteria. If you're unfamiliar with RADIUS, you're not alone. Although RADIUS has been around for a while, it was previously used primarily by ISPs. Most Windows users have never touched RADIUS unless they're running a very large dial-in service. RADIUS is the system most ISPs use to regulate logins and to keep track of who is on, when, and for how long. You've probably noticed that when you dial up to an ISP, you're prompted for a login name and password. The authentication information that you provide is almost always passed to a RADIUS server rather than a Windows-based server. Once RADIUS has authenticated the user, the user is allowed to use the ISP servers or routers that connect them to the Internet. Because Windows 2000 supports RADIUS, you can use Windows 2000 to control remote access to your network or to the Internet through your network. If you need a bit more security, or if you already have a RADIUS server, you can use a third-party RADIUS server in conjunction with Windows 2000.

Central Internet Authentication Service Policy

This second type of policy used for remote access security is also based on RADIUS. The Central Internet Authentication Service policies are stored centrally. Therefore, multiple routing and remote access servers can use one centrally stored copy of the policy without the need to replicate the policy to each of these servers.

Finally, you can use standard group policies to control remote access security. The group policy method is more in line with what you're used to if you've previously worked with Windows NT's Remote Access Service.

// Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
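As an aside on the RADIUS exchange described above: when a client forwards a user's credentials to a RADIUS server in an Access-Request, the password attribute is not sent in the clear. RFC 2865 hides it by XOR-ing the padded password with an MD5 keystream derived from the shared secret and the per-request authenticator. The sketch below implements just that hiding step with made-up values; it is not a complete RADIUS client.

```python
# Sketch of RFC 2865 User-Password hiding: pad the password to a multiple of 16
# bytes, then XOR each 16-byte block with MD5(shared_secret + previous_block),
# seeding the chain with the 16-byte request authenticator.
import hashlib, os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    padded = password + b"\x00" * (-len(password) % 16)
    hidden, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        hidden += block
        prev = block
    return hidden

authenticator = os.urandom(16)                       # per-request random value
obscured = hide_password(b"s3cret-pass", b"shared-radius-secret", authenticator)
print(obscured.hex())                                # what actually crosses the wire
```

The shared secret itself never travels over the network, which is why protecting it matters so much in a remote-access deployment.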
<urn:uuid:297024a1-492a-498c-9053-6de45ee3e9f4>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netos/article.php/624971/Configuring-Windows-2000-Networks-for-Mobile-Users.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916089
658
2.546875
3
The NOAA Center for Coastal Monitoring and Assessment is now providing public access to new digital photographs from six years of coral reef field studies. The online Coral Reef Ecosystem Database, developed and managed by the NOAA National Centers for Coastal Ocean Science, provides access to images of coral reef species and habitats, which were taken during studies in Puerto Rico and the U.S. Virgin Islands.

Elkhorn Coral in La Parguera, Puerto Rico

Funded by the NOAA Coral Reef Conservation Program, the online database facilitates a variety of coral reef research, management and educational opportunities. More than a thousand new digital images were added to the searchable database, providing high-resolution digital photographs of fish, hard and soft corals, hydroids, sea grass, sponges and other invertebrates, vertebrates and algae, which can be directly downloaded via the Internet.

"These new photographs are an additional component to a larger database providing public access to fish and habitat data for the Caribbean, and are the result of long-term research activities that have been conducted jointly with our federal, territorial and academic partners," said Tom McGrath, database developer for NOAA Center for Coastal Monitoring and Assessment. "NOAA is hopeful others in research and reef management, and the public at large will enjoy the benefits of such an expansive visual display of our nation's off-shore habitats."

Green Moray in La Parguera, Puerto Rico

Coral reefs are some of the most biologically rich and economically valuable ecosystems on Earth. Corals contribute to the food supply, jobs and income, coastal protection and other important services to billions of people worldwide. Yet they are threatened by an increasing array of impacts from overexploitation, pollution, habitat loss, invasive species, diseases, bleaching and global climate change. Rapid decline and loss of these valuable, ancient and complex marine ecosystems have significant social, economic and environmental consequences in the United States and around the world. NOAA helps coastal communities, managers, scientists and other partners to understand and sustainably manage coral reef ecosystems.

Photos courtesy of NOAA.
<urn:uuid:c61960ac-04e4-4bd2-bcaf-c5f30b3334c3>
CC-MAIN-2017-04
http://www.govtech.com/e-government/NOAA-Coral-Reef-Assessment-and-Monitoring.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920824
422
3.359375
3
Analysts have watched the international stage closely for years as nation-states compete in an ever-intensifying arms race. Unlike in the past, however, it’s not nuclear weapons or missile caches at the center of the contest. Instead, world superpowers and a number of developing nations have been fortifying their hacking capabilities and targeting one another for multiple ends: intelligence gathering, sabotage, political interference, the list goes on. Is this hacking cold war headed toward an all-out conflict? Or has the cyber war already begun, as some researchers have claimed? Certainly, the number of attacks by foreign nation states has increased in the U.S., and the effects on companies, government agencies, the 2016 presidential election and the public in general have been severe. Let’s look at some recent attacks attributed to nation-states to see just how urgent the international hacking environment is. Intellectual Property Thefts In October 2015, tensions between the U.S. and China had reached a head, following a number of Chinese hacks into major American corporations. That month, President Barack Obama met with Chinese President Xi Jinping, and the two leaders agreed to a pact prohibiting further cyber espionage activity. One year later, despite the agreement, security research firm FireEye reports that Chinese hackers continue to breach corporate networks, likely for the purpose of stealing trade secrets. If China can use this data to produce imitation products and services, it could harm the balance of the U.S. economy. Stealing Cyber Weapons In August of this year, a group called the Shadow Brokers leaked data containing a trove of exploit methods that the NSA had stockpiled for use in surveillance activities. Aside from embarrassing the NSA and shedding light on its secret operations, this hack placed sophisticated cyber weapons in the hands of anyone with basic computer skills. Experts suspect the group responsible has ties to Russia. The Democratic National Committee found itself the target of a major hack that exposed thousands of emails, some of which showed preferential treatment for Hillary Clinton in the race for the Democratic presidential nomination. Earlier in October, the U.S. officially accused Russia of hacking the DNC databases to interfere with the election. Yahoo! Inc., the struggling web services group currently negotiating its sale to Verizon Communications Inc. was dealt a major blow upon learning it had been the victim of a major hack. In fact, this breach, which compromised more than half a billion email accounts and passwords, ranks as one of the largest-scale cyber incidents of all time. When Yahoo officials initially claimed the hack was likely foreign-state-sponsored, some analysts were skeptical. However, an independent investigation by security firm InfoArmor found the hack did appear to be linked to Eastern European state-backed hackers. The firm suggests the motivation is espionage of accounts linked to U.S. military and government officials. As international hacks rise in prevalence, remember that no company or agency is too small to be a target. In fact, foreign hackers will often target smaller, less protected groups to gain access to their clients and partners. The sophistication of their methods surpasses those that can be identified by common vulnerability scans; they require proactive security measures, such as penetration testing and centralized security operations, to combat. 
Lunarline works with clients of all sizes to offer leading security capabilities at an affordable price, leveraging our state-of-the-art security operations centers and skilled professionals on behalf of your organization. For more information on our solutions, you can visit our website or contact us online today.
<urn:uuid:d98b6287-ff17-4cb1-979a-6e3603daa79a>
CC-MAIN-2017-04
https://lunarline.com/blog/2016/10/cyber-world-war/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947181
718
2.515625
3
The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation. This isn't about structural relationships between components; it's about hiding that structure and focusing instead on behavior. Nowadays, we'd say it defines architecture as the properties of a class of objects. How did we get from external properties to internal structure? That's largely the doing of Edsger Dijkstra, who in 1968 laid the foundations for the idea of software architecture. There's a good discussion of this at the Software Engineering Institute (SEI) website, http://www.sei.cmu.edu. When you consider enterprise architecture, things get even more curious. Neither IEEE nor The Open Group defines enterprise architecture explicitly. The most commonly cited first use of enterprise architecture doesn't actually call it enterprise architecture, and thus doesn't define it. John Zachman first applied the idea of architecture to an enterprise-wide (though IT-focused) scope in his paper "A Framework for Information Systems Architecture" (IBM Systems Journal, Vol. 26, No. 3, 1987). Note that Zachman did not call it enterprise architecture; rather, he called it information systems architecture. Five years later he was still not calling it enterprise architecture, but somebody else was. The first actual use of enterprise architecture I have found is by Steven Spewak in his book Enterprise Architecture Planning: Developing a Blueprint for Data, Applications and Technology (Wiley, 1992). Note that the subtitle limits the scope to data, applications and technology. Spewak loosely defines architecture as being like blueprints, drawings or models. He defines enterprise by writing that "the term enterprise should include all areas that need to share substantial amounts of data." More recent definitions of enterprise architecture tend to put less emphasis on architecture and more on the delivery of business value, in response to the pursuit of the perennially elusive business/IT alignment. For example, researchers at the MIT Sloan Center for Information Systems Research (CISR) published Enterprise Architecture as Strategy: Creating a Foundation for Business Execution (Ross, Weill and Robertson; Harvard Business School Press; 2006), where they define enterprise architecture as "the organizing logic for core business processes and IT infrastructure reflecting the standardization and integration of a company's business model." And the Wikipedia entry for enterprise architecture defines it thus: "Enterprise Architecture is the description of current and/or future structure and behavior of an organization's processes, information systems, personnel and organizational sub-units, aligned with the organization's core goals and strategic direction. Although often associated strictly with information technology, it relates more broadly to the practice of business optimization in that it addresses business architecture, performance management, organizational structure and process architecture as well." As I said earlier, I am reminded of the blind men and the elephant. Is it possible to see the whole elephant for what it really is? Is there a single useful definition of our kind of architecture that encompasses all of these different perspectives and their implied needs? I believe there is. In my next article, I'll describe my quest for it.
Len Fehskens is The Open Group's vice president and global professional lead for enterprise architecture. He has extensive experience in the IT industry, within both product engineering and professional services business units. Len most recently led the Worldwide Architecture Profession Office at Hewlett-Packard's Services business unit, and has previously worked for Compaq, Digital Equipment Corporation (DEC), Prime Computer and Data General Corporation.
<urn:uuid:8bc7f933-c6f5-4afa-b75b-12c4b1501bb1>
CC-MAIN-2017-04
http://www.cioupdate.com/insights/article.php/11049_3726166_2/The-Architecture-of-Architecture-Part-II.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00231-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918985
729
3.234375
3
The Energy Department said it would spend $10 million to help kick-start unique energy technology that converts ocean waves and currents into electricity. Perhaps the most interesting component to the announcement is $6.5 million to set up a competition that challenges individuals, universities, and existing and emerging companies to improve the performance and lower the cost of energy produced by wave energy devices. The agency has said in the past that the US could generate up to 1,400 terawatt hours of potential wave power per year. One terawatt-hour of electricity is enough to power 85,000 homes, according to the agency. The Department of Energy said it expects a prize competition to "dramatically improve the performance of wave energy converter (WEC) devices, providing a pathway to game-changing reductions in the cost of wave-based energy. In principle, prize competitions set a high technical bar for participants to be eligible for a prize, thus facilitating rapid advancements through technical innovation at a relatively low cost to the sponsoring agency." According to the DOE, the broader goal for the WEC competition is to spur innovations for new and next-generation technologies to be cost-competitive at 15 cents per kilowatt hour (¢/kWh), down from the current range of 61-77 ¢/kWh. "The wave energy industry is young and experiencing many new innovations as evidenced by a sustained growth in patent activity. While the private industry is developing these early conception wave energy converter (WEC) devices through design and benchtop prototype testing, funding is hard to secure for performance testing and evaluation of WEC devices in wave tanks at a meaningful scale. This is a problem for the industry since scaled WEC prototype tank testing, validation, and evaluation are key steps in the advancement of WEC technologies through the technical readiness levels to reach commercialization," the DOE stated. Hand-in-hand with the WEC technology, the DOE said it will spend $3.5 million to develop sensors, instruments and other technologies that collect data on the characteristics of waves, including their height, period, direction, and steepness. Such data will let WECs more accurately assess approaching waves and more efficiently harness their energy. "The wave environment experienced by a WEC can vary rapidly over very short time periods; the wave height, period, and direction are all highly variable. WECs currently rely on feedback controllers to adjust to this stochastic input. This form of reactive control could be augmented by shorter term wave statistics on a time horizon of minutes ahead of the device. Feed forward controllers have the potential to double energy capture, but require future knowledge of incoming waves on a time horizon of a few wave lengths (i.e., 30 seconds). New technologies would support the development of wave instrumentation or new processing software for current instrumentation to provide the short term wave statistics or wave-by-wave height, period, and directionality measurements that enable feed forward controls," the DOE stated. The $10 million outlay is at least the second big wave energy investment the DOE has made in the past year.
In August 2013 it spent $16 million on 17 research projects that promised to increase the power production and reliability of wave and tidal devices and help gather valuable data on how deployed devices interact with the surrounding environment.
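As a quick sanity check on the figures quoted above, the arithmetic behind "1 TWh powers about 85,000 homes" and the 15 ¢/kWh cost target works out as follows. The household consumption number is our own assumption (roughly the U.S. residential average), not a DOE figure.

```python
# Rough check of the "1 TWh powers about 85,000 homes" figure quoted above.
TWH_IN_KWH = 1_000_000_000          # 1 terawatt-hour = 1 billion kilowatt-hours
KWH_PER_HOME_PER_YEAR = 11_700      # assumed average U.S. household usage

homes_powered = TWH_IN_KWH / KWH_PER_HOME_PER_YEAR
print(f"1 TWh/year covers roughly {homes_powered:,.0f} homes")   # ~85,500

# The cost target mentioned above: 15 cents/kWh versus today's 61-77 cents/kWh.
current_low, current_high, target = 0.61, 0.77, 0.15
print(f"Target is {current_low / target:.1f}x to {current_high / target:.1f}x "
      "cheaper than current wave energy")                        # ~4.1x to ~5.1x
```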
<urn:uuid:d5b63f89-09b1-42a0-803c-cfe0bf927707>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226510/data-center/us-energy-dept--deals--10m-to-ride-ocean-wave-energy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00231-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935476
701
2.59375
3
When everybody lies: Voice-stress analysis tackles lie detection - By Susan Miller - Mar 18, 2014 As big data and analytics are increasingly considered the go-to technologies for teasing veracity from volumes of information, the real truth is that people lie -- sometimes quite effectively, essentially negating reams of data on credit worthiness, employment performance and personal references. Agencies have seen their share of headlines about rogue employees passing security clearances. An insider security threat or leak can damage business and national security, ruin reputations and even cost human lives, so organizations are keen to identify deception. Although various technologies have been applied to determining whether a person is telling the truth or not, many experts believe that no foolproof method of lie detection exists. Nevertheless, since the early 1900s people have used available technology – from measuring changes in blood pressure and pupil dilation to linguistic analysis or magnetic resonance imaging -- to try to sift fact from fiction. The polygraph, today’s disputed yet de facto standard, was invented in 1921 and is currently used by many organizations, including law enforcement, and intelligence agencies, to interrogate suspects and screen new employees. A polygraph machine looks at heartbeat, perspiration, breathing and other physical factors that are influenced by stress. Too many stress indicators could mean that a subject is feeling guilty or is worried about his response. If stress levels remain the same throughout the questioning, then no deception is detected. While the polygraph has been a standard tool for law enforcement in criminal investigations, some police departments are using computer voice stress analysis (CVSA) in their investigations and parole programs. In fact, a U.S. federal court recently ruled that sex offenders can be required to submit to CVSA examinations as part of their post-release supervision. One such voice examination tool, CVSA II manufactured by National Institute for Truth Verification, runs on a variety of platforms -- including mobile devices. The company claims it even works whether the subject is face to face with an investigator or talking over the phone. It uses a microphone plugged into a computer to quantify and analyze frequency changes in the subject’s responses that indicate vocal stress. As the subject speaks, the computer displays each voice pattern and numbers it. At the end of the evaluation, an algorithm scores the results. But criminal investigations represent only the tip of the iceberg for an automated system that can flag human deception. Such technology could be invaluable in personnel screening, defense and homeland security, border control and airport security as well as for financial institutions, contact centers and insurance providers – in short, anywhere where human deception is a liability. The Department of Homeland Security’s National Center for Border Security and Immigration at the University of Arizona developed a screening system called the Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR), which is designed to flag suspicious or anomalous behavior that warrants further investigation by a trained human agent in the field. The kiosk-based automated system conducts brief interviews in a number of screening contexts, such as trusted traveler application programs, personnel reinvestigations, visa application reviews, or similar scenarios where truth assessment is a key concern. 
AVATAR uses non-invasive sensors to track pupil dilation, eye and body movements and changes in vocal pitch in an effort identify suspicious or irregular behavior that deserves further investigation. AVATAR has been tested in several simulation exercises and at the U.S.-Mexico border. Its first field test was in December 2013 in Romania. Nemesysco, an Israel-based company specializing in voice analysis solutions, uses layered voice analysis (LVA), which identifies various types of stress levels, cognitive processes and emotional reactions that are reflected in the properties of a subject’s voice. Nemesysco emphasizes that LVA is not the same as voice stress analysis but instead uses a unique technology to detect “brain activity traces” using the voice as a medium. By using a wide range spectrum analysis to detect minute involuntary changes in the speech waveform itself, the company says, LVA can detect anomalies in brain activity and classify them in terms of stress, excitement, deception and varying emotional states. Beyond Verbal Communications, another Israel-based firm that bills itself as an emotional analytics company, is among a number of businesses that are working on adapting voice recognition technology to a variety of applications such as improving call center interactions and monitoring airline pilots for fatigue. Beyond Verbal offers its software as a cloud-based licensed service. By connecting to its API and SDK, third-party developers can use the technology for a variety of purposes in a range of fields. It has even released a “home” version of its emotion decoding voice recognition software. “With the click of a button and about 20 second of speech, the Moodies app gives users the option to analyze their own voice as well as understand the emotions of individuals around them,” the company said in its announcement of the iOS app. Similar “for-fun” emotion-analysis or lie-detection apps are available for Android. In the end, the detection method is only as good as the investigator using it and the questions posed. But there will always be doubt. So while any deception detection technology might be preferred by one investigator or another, humans can still sometimes outwit technology. -- John Breeden II contributed to this story. Susan Miller is executive editor at GCN. Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN. Miller has a BA from West Chester University and an MA in English from the University of Delaware. Connect with Susan at firstname.lastname@example.org or @sjaymiller.
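None of the commercial systems above publish their algorithms, but the raw measurement they all start from, short-time changes in vocal pitch, is easy to sketch. The following Python snippet uses a crude autocorrelation pitch tracker as a stand-in; it is illustrative only and makes no claim to detect stress or deception.

```python
# Illustrative only: estimate frame-by-frame fundamental frequency of a speech
# signal and report its variability. Assumes a mono float signal in `samples`.
import numpy as np

def frame_pitch(frame, rate, fmin=75.0, fmax=300.0):
    """Crude pitch estimate (Hz) for one frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(rate / fmax), int(rate / fmin)
    if lag_max >= len(corr):
        return None
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return rate / lag

def pitch_variability(samples, rate, frame_ms=40):
    """Mean pitch and its standard deviation across analysis frames."""
    n = int(rate * frame_ms / 1000)
    pitches = []
    for start in range(0, len(samples) - n, n):
        f0 = frame_pitch(samples[start:start + n], rate)
        if f0 is not None:
            pitches.append(f0)
    pitches = np.array(pitches)
    return pitches.mean(), pitches.std()

# Example with a synthetic 150 Hz tone standing in for a recorded answer:
rate = 16000
t = np.arange(rate * 2) / rate
samples = np.sin(2 * np.pi * 150 * t)
mean_f0, std_f0 = pitch_variability(samples, rate)
print(f"mean F0 ~ {mean_f0:.1f} Hz, variability ~ {std_f0:.1f} Hz")
```

Real products layer proprietary classification on top of measurements like these, which is exactly where the scientific dispute over voice-stress analysis lies.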
<urn:uuid:4450daf3-a031-47e4-ae1e-52c86ac07beb>
CC-MAIN-2017-04
https://gcn.com/articles/2014/03/18/voice-risk-analysis.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00561-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938708
1,279
2.515625
3
Big Data applications – once limited to a few exotic disciplines – are steadily becoming the dominant feature of modern computing. In industry after industry advanced instruments and sensor technology are generating massive datasets. Consider just one example, next generation DNA sequencing (NGS). Annual NGS capacity now exceeds 13 quadrillion base pairs (the As, Ts, Gs, and Cs that make up a DNA sequence). Each base pair represents roughly 100bytes of data (raw, analyzed, and interpreted). Turning the swelling sea of genomic data into useful biomedical information is a classic Big Data challenge, one of many, that didn’t exist a decade ago. This mainstreaming of Big Data is an important transformational moment in computation. Datasets in the 10-to-20 Terabytes (TB) range are increasingly common. New and advanced algorithms for memory-intensive applications in Oil & Gas (e.g. seismic data processing), finance (real-time trading), social media (database), and science (simulation and data analysis), to name but a few, are hard or impossible to run efficiently on commodity clusters. The challenge is that traditional cluster computing based on distributed memory – which was so successful in bringing down the cost of high performance computing (HPC) – struggles when forced to run applications where memory requirements exceed the capacity of a single node. Increased interconnect latencies, longer and more complicated software development, inefficient system utilization, and additional administrative overhead are all adverse factors. Conversely, traditional mainframes running shared memory architecture and a single instance of the OS have always coped well with Big Data Crunching jobs. “Any application requiring a large memory footprint can benefit from a shared memory computing environment,” says William W. Thigpen, Chief, Engineering Branch, NASA Advanced Supercomputing (NAS) Division. “We first became interested in shared memory to simplify the programming paradigm. So much of what you must do to run on a traditional system is pack up the messages and the data and account for what happens if those messages don’t get there successfully and things like that – there is a lot of error processing that occurs.” “If you truly take advantage of the shared memory architecture you can throw away a lot of the code you have to develop to run on a more traditional system. I think we are going to see a lot more people looking at this type of environment,” Thigpen says. Not only is development eased, but throughput and accuracy are also improved, the latter by allowing execution of more computationally demanding algorithms. Until now, the biggest obstacle to wider use of shared memory computing has been the high cost of mainframes and high-end ‘super-servers’. Given the ongoing proliferation of Big Data applications, a more efficient and cost-effective approach to shared memory computing is needed. Now has developed a technology, NumaConnect, which turns a collection of standard servers with separate memories and I/O into a unified system that delivers the functionality of high-end enterprise servers and mainframes at a fraction of the cost. - NumaConnect links commodity servers together to form a single unified system where all processors can coherently access and share all memory and I/O. The combined system runs a single instance of a standard operating system like Linux. 
- Systems based on NumaConnect support all classes of applications using shared memory or message passing through all popular high level programming models. System size can be scaled to 4k nodes where each node can contain multiple processors. Memory size is limited only by the 48-bit physical address range provided by the Opteron processors resulting in a record-breaking total system main memory of 256 TBytes. (For details of Numascale technology see http://www.numascale.com/numa_pdfs/numaconnect-white-paper.pdf ) The result is an affordable, shared memory computing option to tackle data-intensive applications. NumaConnect-based systems running with entire data sets in memory are “orders of magnitude faster than clusters or systems based on any form of existing mass- storage devices and will enable data analysis and decision support applications to be applied in new and innovative ways,” says Einar Rustad, Numascale CTO. The big differentiator for NumaConnect compared to other high-speed interconnect technologies is the shared memory and cache coherency mechanisms. These features allow programs to access any memory location and any memory mapped I/O device in a multiprocessor system with high degree of efficiency. It provides scalable systems with a unified programming model that stays the same from the small multi-core machines used in laptops and desktops to the largest imaginable single system image machines that may contain thousands of processors and tens to hundreds of terabytes of main memory. Early adopters are already demonstrating performance gains and costs savings. A good example is Statoil, the global energy company based in Norway. Processing seismic data requires massive amounts of floating point operations and is normally performed on clusters. Broadly speaking, this kind of processing is done by programs developed for a message-passing paradigm (MPI). Not all algorithms are suited for the message passing paradigm and the amount of code required is huge and the development process and debugging task are complex. Shorten Time To Solution “We have used development funds to create a foundation for a simpler programming model. The goal is to reduce the time it takes to implement new mathematical models for the computer,” says Knut Sebastian Tungland Chief Engineer IT, Statoil. To address this issue, Statoil has set up a joint research project with Numascale who has developed technology to interconnect multiple computers to form a single system with cache coherent shared memory. Statoil was able to run a preferred application to analyze large seismic datasets on a NumaConnect-enabled system – something that wasn’t practical on a traditional cluster because of the application’s access pattern to memory. Not only did use of the more rigorous application produce more accurate results, but the NumaConnect-based system completed the task more quickly. A second example is deployment of a large NumaConnect-based system at the University of Oslo. In this instance, the effort is being funded by the EU project PRACE (Partnership for Advanced Computing in Europe) and includes a 72-node cluster of IBM x3755s. Some of the main applications planned in Oslo include bioscience and computational chemistry. The overall goal is to broadly enable Big Data computing at the university. “We focus on providing our users with flexible computing resources including capabilities for handling very large datasets like those found in applications for next generation sequencing for life sciences” says Dr. 
Ole W. Saastad, Senior Analyst and HPC expert at USIT, the University of Oslo’s central IT resource department. “Our new system with NumaConnect contains 1728 processor cores and 4.6TBytes of memory. The system can be used as one single system or partitioned in smaller systems where each partition runs one instance of the OS. With proper Numa-awareness, applications with high bandwidth requirements will be able to utilize the combined bandwidth of all the memory controllers and still be able to share data with low latency access through the coherent shared memory.”
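For readers unfamiliar with the shared-memory model the article contrasts with message passing, here is a toy Python sketch using the standard multiprocessing.shared_memory module. It runs within a single machine, so it only illustrates the programming style; NumaConnect's contribution is extending this kind of single-address-space view across many physical servers.

```python
# Two worker processes update one array in place -- no copies, no messages.
import numpy as np
from multiprocessing import Process
from multiprocessing import shared_memory

def worker(name, shape, start, stop):
    # Attach to the existing shared block and operate on our slice in place.
    shm = shared_memory.SharedMemory(name=name)
    data = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
    data[start:stop] *= 2.0
    shm.close()

if __name__ == "__main__":
    n = 1_000_000
    shm = shared_memory.SharedMemory(create=True, size=n * 8)
    data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
    data[:] = 1.0

    half = n // 2
    procs = [Process(target=worker, args=(shm.name, (n,), 0, half)),
             Process(target=worker, args=(shm.name, (n,), half, n))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print(data.sum())   # 2000000.0 -- both workers saw the same array
    shm.close()
    shm.unlink()
```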
<urn:uuid:7ebaf2fd-f291-4d8c-9be5-3709ef0adfb0>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/01/affordable-big-data-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00469-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917901
1,499
3.125
3
Originally published March 7, 2006 Readers make a number of judgments when reading graphs: they may judge the length of a line, the area of a wedge of a circle, the position of a point along a common scale, the slope of a line, or a number of other attributes of the points, lines, and bars that are plotted. Cleveland and McGill (1984) identified tasks or judgments that are performed when reading graphs and conducted carefully designed experiments to determine which of these judgments we make most accurately. They then designed a graph to take advantage of the knowledge gained from their experimentation. The result was the dot plot. This article introduces the dot plot and offers before and after examples to compare presentations using bar charts and dot plots. The dot plot in Figure 1 shows the revenues of the top 60 companies from the Fortune 1000 list. Figure 2 shows these same revenues using a bar chart. Most readers would have little problem understanding either the dot plot or the bar chart. Note that the dot plot is less cluttered, less redundant, and uses less ink. Figure 1: This dot plot shows the revenues of the top 60 companies from the Fortune 1000 list. Figure 2: The same information shown in Figure 1 is displayed this time in a bar chart. The Fortune 1000 list also contains the profits of these companies. Figure 3 shows the profits for these 60 companies in the same order as in Figures 1 and 2 to help make comparisons between the charts. Figure 3: This dot plot shows the profits for these same companies. Note that the companies are ordered by revenue to ease comparisons between charts and that the scale is not the same as used for revenues. The power of the dot plot becomes evident if we wish to combine the information from Figures 1 or 2 and Figure 3 into a single chart. Both the revenues and the profits are shown in Figure 4. Showing both on the same figure gives an indication of the relative sizes and makes it easier to see those companies whose profits are not consistent with the others. However, the variation in the profits is hard to see in a scale that accommodates revenues. Therefore, Figure 3 is still needed to see this variation. It is often useful to plot the same data several ways. Each emphasizes a different aspect of the data. The presentation in Figure 4 would be much more cluttered and more difficult to interpret with a bar chart. A designer who wanted to show revenues and profits in the same figure might use a clustered bar chart (also called a grouped bar chart). However, there is no room in Figure 2 to add profits as a second group. It could be done by using a much thinner bar of a different color for profits superposed in the revenue bars, or using transparent bars so that both could be seen. However, that would result in a very busy, cluttered figure. Note that Figure 4 is not at all crowded. Another advantage of Figure 4 is that it does not depend on color so that it can be used in black and white publications with no loss of clarity. The two groups can be distinguished by using different symbols. Figure 4: This dot plot superposes the profit data on the same chart as the revenue data. Imagine how cluttered the bar chart would be if we tried to superpose the profit data on it. We have been concentrating on alternatives to simple bar charts.
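The figures themselves are not reproduced here, but a dot plot of this kind is easy to sketch. The snippet below uses Python's matplotlib rather than the S language used for the article's figures, and the revenue numbers are illustrative placeholders.

```python
# A minimal dot plot in the spirit of Figure 1 (values are illustrative only).
import matplotlib.pyplot as plt

companies = ["Wal-Mart", "Exxon Mobil", "General Motors", "Chevron", "Ford"]
revenue_bn = [316, 340, 193, 189, 177]

# Sort so the largest value appears at the top of the chart.
order = sorted(range(len(companies)), key=lambda i: revenue_bn[i])
labels = [companies[i] for i in order]
values = [revenue_bn[i] for i in order]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(values, range(len(values)), "o")          # dots judged by position
ax.set_yticks(range(len(labels)))
ax.set_yticklabels(labels)
ax.grid(axis="y", linestyle=":", linewidth=0.5)   # light gridlines running the
                                                  # full width of each row
ax.set_xlabel("Revenue ($ billions)")
plt.tight_layout()
plt.show()
```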
However, the dot plot is even more powerful when replacing clustered or stacked bar charts since these graphs forms do not communicate quantitative information as well as simple bar charts or dot plots do. Study Figure 5 and then Figure 6. A number of facts are obvious from either presentation; for example, note that Asians have the highest percent of incomes over both $50,000 and $75,000 in all three counties. There are other facts that I see immediately from the dot plot. Then when I look at the clustered bar chart, I notice these facts that I might not have noticed if not seen first in the dot plot. An example is that whites have the lowest percent of income over $75,000 in Passaic County. As you gain more experience reading dot plots, you will find that they present information much more clearly than do clustered bar charts. Figure 5: This is a clustered or grouped bar chart showing income data for various ethnic groups in several New Jersey counties. The information comes from the 2000 Census. It is hard to make comparisons across counties when there are so many bars in a group. Figure 6: This shows the data of Figure 5 in a multi-panel dot chart. Now assume that we are interested in detailed comparisons of the revenues of the seven companies with the lowest revenues displayed in Figures 1 through 4. Figure 7 shows this information. However, although it is clear that their revenues are all about $30 billion, it is difficult to be more precise. Figure 7: This shows the revenues of seven of the companies displayed in the first four figures. The bar chart has a zero baseline. We see that the revenues are similar to one another, but it is difficult to get more detail. Recall that different graph forms require different types of judgments to decode the data. When we look at a bar chart, we may judge position along a common scale by using the horizontal axis to judge the position of the right end of the bars. However, we cannot help also seeing the length of the bars. Figure 8 makes the revenues of Walt Disney appear many times larger than the revenues of Sysco, even though they are both about $30 billion. Figure 8 is a visual lie. Figure 8: This graph does not use a zero baseline so that detail can be seen. This figure is a visual lie since it makes the revenues of Walt Disney appear many times those of Sysco. A dot plot is judged by position along the horizontal axis. Length is not an issue with dot plots. Note that in Figure 9 this would not be the case if the gray gridlines ended at the dots instead of continuing across the figure. As a result, there is no distortion in this figure, and we can see the detail needed to compare these companies. Figure 9: This shows the information of Figure 8 in a dot plot. The points are no longer connected to the baseline so that we are no longer judging length. This figure shows the detail of Figure 8 without the deception. You probably learned to draw graphs with the independent variable on the horizontal or x axis and the dependent variable on the vertical or y axis. The reverse is true for the plots above. The reason is to make the company names easier to read. Either vertical bar charts or dot plots with the axes reversed would have required the labels to be rotated or drastically abbreviated. The graphs in this article were drawn using the S language with the exception of Figure 5 that used Excel. S-Plus and R are two implementations of S. S-Plus is commercial software available from Insightful Corporation in Seattle. 
R is open source software that is freely downloadable from http://www.r-project.org/. S-Plus offers both a graphical user interface and a command line language; R is a command line language. Dot plots can be drawn using Excel even though they do not appear in Excel’s menus. Send an e-mail to Naomi Robbins, email@example.com, for an Excel macro to draw dot plots. However, it is quicker and easier to use software designed to produce these plots once you have mastered the learning curve of S or other software that offers dot plots. Dot plots can be used in any situation for which bar charts are typically used. They are less cluttered, they make it easier to superpose additional data, and they do not require a zero baseline as do bar charts. See the references for additional discussion and examples of dot plots. Dot plots are a very useful addition to your graphical toolbox. Cleveland, William S. 1984. “Graphical Methods for Data Presentation: Full Scale Breaks, Dot Charts, and Multibased Logging.” The American Statistician, 38:270-280. Cleveland, William S. 1993. Visualizing Data. Hobart Press, Summit, NJ. Cleveland, William S. 1994. The Elements of Graphing Data. Revised edition. Hobart Press, Summit, New Jersey. Cleveland, William S. and Robert Mc Gill. 1984. “Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods.” Journal of the American Statistical Association 79:531-554. Robbins, Naomi B. 2005. Creating More Effective Graphs. John Wiley and Sons, Hoboken, NJ.
<urn:uuid:7b5220ec-632f-40f1-bdd3-edb4868403c0>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/2468
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00525-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936729
1,816
3.46875
3
The hard disk is the main data storage device for your computer. Hard disks are generally located in fixed bays within the computer chassis. Some disks are hot-pluggable, meaning they can be removed and replaced while the machine is running, and the operating system will recognize the new disk without rebooting. Current generation hard disks range in size from 8GB to 146GB, although smaller and larger drives can be found. The most commonly used interfaces for hard drives are ATA (IDE or EIDE) and SCSI. When selecting internal disk drive storage for your PowerEdge server, you have two options - ATA (or IDE) or SCSI. ATA is the de facto disk drive technology widely used for desktops and notebooks. ATA is also suitable for utility server functions, such as gateway, domain, and low-end file/print or caching servers. However, ATA is not optimized for running primary server applications on busy networks, especially as businesses expand and usage increases. For E-mail, database, and web servers, all with high usage, performance is needed to keep your business productive and profitable. SCSI disk drive technology is quite often a more suitable choice than ATA in these cases for a variety of reasons: SCSI disk drives handle more requests per bus, deliver higher performance (faster maximum throughput) and spin faster to find and deliver information more quickly.
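The performance gap comes largely from mechanics. The rough calculation below uses typical drive specifications of the period (assumed for illustration, not taken from any particular Dell model) to show why a faster-spinning SCSI drive services more random requests per second.

```python
# Back-of-the-envelope comparison behind the "SCSI drives spin faster" claim.
def avg_access_ms(rpm, avg_seek_ms):
    # Average rotational latency is half a revolution.
    rotational_ms = 0.5 * (60_000 / rpm)
    return avg_seek_ms + rotational_ms

ata_7200 = avg_access_ms(rpm=7_200, avg_seek_ms=9.0)    # typical ATA drive
scsi_15k = avg_access_ms(rpm=15_000, avg_seek_ms=3.8)   # typical SCSI drive

print(f"ATA 7,200 rpm  : ~{ata_7200:.1f} ms per random access")
print(f"SCSI 15,000 rpm: ~{scsi_15k:.1f} ms per random access")
print(f"~{1000 / ata_7200:.0f} vs ~{1000 / scsi_15k:.0f} random I/Os per second per drive")
```

This ignores command queuing and interface bandwidth, both of which widen the gap further under busy server workloads.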
<urn:uuid:7bd42fa7-f1d1-4764-ba5c-10117230ae1d>
CC-MAIN-2017-04
http://www.dell.com/content/topics/topic.aspx/global/learnmore/fragments/main/core/harddiskdrives?c=us&l=en&cs=04
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00525-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93898
284
3.328125
3
Tiring of its mission to "organize the world's information," Google has set itself a new objective: save the planet. The search giant unveiled a US$4.4 trillion plan Wednesday to reduce the U.S.'s dependency on fossil fuels and embrace alternative energy. The proposal would yield a net saving of $1 trillion by 2030 and slash U.S. carbon dioxide emissions by 48 percent, according to Google, which said it had been busy "crunching the numbers." The plan involves weaning the U.S. off of coal for producing its electricity and turning to wind, solar and geothermal power instead. It would also cut oil use in cars by 40 percent and use electricity for personal transportation. Google said its goal in announcing the plan, called Clean Energy 2030, was to stimulate debate. "With a new Administration and Congress -- and multiple energy-related imperatives -- this is an opportune, perhaps unprecedented, moment to move from plan to action," the company said. It's the latest and perhaps most ambitious attempt by Google to shape public policy. The company has already weighed in on issues like worker immigration, intellectual property law and net neutrality. Energy is further from its expertise, but Google has been hiring experts to help with the task, including the lead author of the proposal, Jeffery Greenblatt, a former scientist with the Environmental Defense Fund. CEO Eric Schmidt was to present the proposal in San Francisco on Wednesday evening. Google also described the plan in a blog posting and in more depth on its Wikipedia-like Knol Web site. It deals primarily with two areas -- electricity production and personal vehicles. The basics look like this: Reduce energy use today: Naturally for Google, it starts with computers. Data centers and personal computers both can be operated much more efficiently, by unplugging PCs when they are not in use, for example. Building codes can be more aggressive, and "smart meters" in homes that give real-time pricing should encourage people to use less power. Pacific Gas & Electric is already installing such meters in northern California. Electricity: The U.S. today produces half its electricity from coal, 20 percent each from natural gas and nuclear energy, and 1.5 percent from oil. The plan would replace coal and oil with primarily wind, solar and geothermal energy (using heat from inside the earth). It calls for keeping electricity demand at today's level, which would lop 30 percent off the projected demand in 2030. Onshore and offshore wind would account for a further 29 percent of demand, solar 12 percent and geothermal 15 percent. Nuclear, hydro and natural gas would make up the rest. Google acknowleged that solar energy is expensive today, but said the deserts in the southwest could be used for "concentrating solar power," which could "bring costs down fast." Geothermal energy is "the sleeping giant," according to Google. Personal vehicles: The U.S. consumes 21 million barrels of liquid fuels per day, with 60 percent going into cars and other "light personal vehicles." The plan calls for incentives to increase electric and hybrid car sales to 100,000 in 2010 (annual U.S. car sales today are about 15 million), 3.7 million in 2020 and 22 million in 2030. It proposes boosting gas mileage for conventional vehicles to 45 miles per gallon, something experts say is plausible. 
Economics: Google made several assumptions about costs and savings, including the costs of alternative energy equipment, such as the infrastructure for charging electric cars, and the savings from more efficient power sources. It assumed that gasoline will double in price to $8 per gallon by 2030, and accepted that fluctuations could add or remove billions in its calculations. Jobs: It predicted that millions of jobs in construction, operations and professional services would be created with the alternative energy industries, as well as more jobs in electric vehicle manufacture. Google isn't the first to devise such a plan. It acknowledged that former Vice President Al Gore has come up with a more ambitious proposal. It remains to be seen now if Google's effort will stir the U.S. into action.
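The electricity shares named earlier (wind 29 percent, solar 12 percent, geothermal 15 percent) leave a remainder for nuclear, hydro and natural gas; the short calculation below just does that arithmetic.

```python
# Quick arithmetic on the proposed 2030 electricity mix described above.
named_shares = {"wind": 29, "solar": 12, "geothermal": 15}   # percent of demand
remainder = 100 - sum(named_shares.values())
print(f"Nuclear, hydro and natural gas together: {remainder}% of 2030 demand")  # 44%
```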
<urn:uuid:ebcacd2b-100b-4d40-8544-cf56cb50e4aa>
CC-MAIN-2017-04
http://www.cio.com/article/2433253/infrastructure/google-proposes--4-4-trillion-clean-energy-plan.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962513
850
2.71875
3
As of right now, we're officially one step closer to Skynet. Like the computer antagonist, computer scientist Lukasz Kaiser's machine learning software (PDF) is capable of learning at accelerated speeds. Unlike everyone's favorite Cyberdyne Systems mistake, this one doesn't need military-grade hardware--it just needs a laptop with 4GB of RAM, a 2.13GHz Intel L9600 processor, and only one processor core. In his recently published paper, Kaiser outlined how a system guided by a decision-making engine of sorts can learn how to play games competently with only a minimal amount of background data. This is where things get a little technical, so bear with us: Kaiser states that while computer scientists have done a great amount of work in regards to computerized object recognition and visual scene interpretations, "only a few systems with the capacity for learning higher-level concepts has been presented thus far." According to Kaiser, our computers are pretty good at deriving sequences of higher-level symbolic data from video streams, but we still have a long way to go when it comes to learning from it. He argues that a more nuanced approach using relational structures and multiple logic systems is better suited for learning from visual data in comparison to the standard practice of utilizing formulas and singular logic systems. "These two fundamental changes allow us to demonstrate a system that--knowing only about rows, columns, diagonals and differentiating pieces--learns games like Connect Four, Gomoku, Pawns or Breakthrough, each one from a few intuitive video demonstrations, together around 2 minutes in length." Is this where we start preparing for the rise of the machines? Not quite yet. Kaiser still needs to figure out how to get the system to solve problems requiring "hierarchical, structured learning or a form of probabilistic formulas." Until then, we're safe. After that, it's anyone's game. Cassandra Khaw is an entry-level audiophile, a street dancer, a person who writes about video games for a living, and someone who spends too much time on Twitter. This story, "Computer learns board games from two-minute clips, beats humans right after" was originally published by PCWorld.
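For a sense of what "knowing only about rows, columns, diagonals and differentiating pieces" amounts to, here is a hand-written four-in-a-row check for a Connect Four grid. This is ordinary hard-coded game knowledge, shown only for flavour; it is not Kaiser's learning system, which induces this kind of rule from video instead of being given it.

```python
# Hand-coded "four in a line" predicate over rows, columns and diagonals.
ROWS, COLS = 6, 7

def wins(board, piece):
    """board[r][c] holds a piece symbol or None; True if `piece` has 4 in a line."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # row, column, two diagonals
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS and board[rr][cc] == piece
                       for rr, cc in cells):
                    return True
    return False

board = [[None] * COLS for _ in range(ROWS)]
for col in range(4):                  # four red pieces along the bottom row
    board[ROWS - 1][col] = "red"
print(wins(board, "red"))             # True
```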
<urn:uuid:27501467-629d-4999-aaf1-fa5223bf0790>
CC-MAIN-2017-04
http://www.itworld.com/article/2723790/it-management/computer-learns-board-games-from-two-minute-clips--beats-humans-right-after.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00433-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929716
527
2.578125
3
Mythology Quiz Questions with Answers - Part 1
- Who is regarded as the King of the Gods in
- (a) What was the half-man and half-horse in mythology called? (b) What was the name of the winged horse of the Gods?
- What was the name of the (a) one-eyed giant (b) the 1000-eyed monster in mythology?
- Who, in Chinese mythology, was the first ruler of the world?
- Complete the following pairs: (a) …and Leander; (b) Venus and …; (c) Echo and …; (d) Isis and …;
- (a) Who was Hermes? (b) Who was condemned to have a terrible thirst?
- State who were:
- Who were the nine Muses? Name them.
- The Classical God of Dream was given to a modern drug. What was his name and what is the name of the drug?
- State who was the Great Earth Mother in: (d) Roman mythology
- Match the mountain homes correctly: (a) Zeus Mount Zion (b) Jupiter Mount Kailash (c) Indra Mount Olympus (d) Yahweh Mount Parnassus (e) Shiva Mount Sumeru
- Which women had heads covered with snakes?
- Surya is the Sun God in Hindu mythology. Who are his counterparts in the following mythologies?
- What was the name of the ‘Plumed Serpent God of Central America’?
- (a) What was the name of Jason’s ship? (b) What was he seeking?
- Who was Anubis?
- Name the famous characters in the following: (a) Greek mythology (b) Hindu mythology who died from an arrow in the heel
- What happened to everything when Midas touched
- Who opened a box and released all the pains, disease and ills afflicting mankind?
- Who was the ruler of the winds in Greek mythology?
- (a) Jupiter
- (a) Centaur
- (a) Cyclopes
- ‘P’ an-Ku
- (a) Hero
- (a) The Greek Messenger of the Gods
- (a) The Dragon in Chinese mythology (b) Indra’ elephant (c) The ferryman who rowed the dead over the river Styx to Hades in Greek mythology.
- In Greek mythology, the nine Muses were the daughters of Zeus and Mnemosyne. Each Muse presided over one of the arts. Their names were: Clio, Euterpe, Thalia, Melpomene, Terpsichore, Erato, Polyhymnia, Urania and Calliope.
- Morpheus; morphine
- (a) Dana
- (a) Zeus -Mount Olympus (b) Jupiter -Mount Parnassus (c) Indra -Mount Sumeru (d) Yahweh -Mount Zion (e) Shiva -Mount Kailash
- The 3 Gorgon sisters – Stheno, Euryale and Medusa
- (a) Apollo
- (a) Argo (b) the Golden Fleece
- The Jackal-headed God of the Dead in Egyptian mythology.
- (a) Achilles (b) Lord Krishna
- It turned to gold
<urn:uuid:d96bcf97-d0fd-4922-a03a-553c9bf45d76>
CC-MAIN-2017-04
http://www.knowledgepublisher.com/article-711.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00029-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902101
893
2.703125
3
There are two elements in improving high performance computing with regard to scientific computing. The first is well covered over on HPCwire, and examines how the latest advancements in the greatest supercomputers push the boundaries of modeling and computation. The second component deals with making the applications that run those models and simulations more accessible to scientists who may not have access to those top-end supercomputers. Recognizing this, researchers from the physics department at the University of Washington at Seattle, through a grant from the National Science Foundation, created what they call a ‘virtual platform’ for scientific cloud computing, or SC2VP, which they simply named “SC2IT” for scientific cloud computing interface tools. “The main elements of our new platform include a virtual cloud computer blueprint or AMI, which contains preinstalled and optimized scientific codes and utilities.” The platform, according to the researchers, is meant to simulate the parallelism and extensive data storage capabilities of a large supercomputer in a cloud environment, with the emphasis being to cater toward the material sciences. “This blueprint,” the researchers noted, “contains libraries, compilers, a parallel computing environment, and preconfigured applications typically useful for materials scientists. E.g. these applications can calculate structural and electronic properties of materials.” What is important to note here is that the physicists built their toolkit with creating an HPC cluster as their top priority. Building a virtual machine is, according to the physicists, is not as difficult as making that cluster run high performance scientific applications. As they noted, “launching a set of virtual machines from a cloud provider is easy but does not produce a fully functional HPC cluster… To truly bring advanced science to a broad class of end users, another step is necessary beyond launching a parallel MS program on a cloud cluster.” That next level, according to the researchers, involves congregating various scientific computing codes that optimize certain types of problems. These codes already exist through previous computational research, but the trick was incorporating them en masse in a toolset that would allow them to be deployed on a virtual cluster. “The development of novel scientific software is often modular: computational scientists link existing codes together and combine them with new developments to produce state-of-the-art results.” The particular existing codes they used included a Density Functional Theory code, whose purpose is to assess and order the dynamic motion relationships among the coordinates in a material, essentially building a model of how a substance moves. With that, they added two codes to calculate a material’s vibrational tendencies, including “a new module to next derive vibrational properties; and thirdly, an existing spectroscopy code to finally calculate an X-ray spectrum incorporating the vibrational information.” Below is a screenshot of how the researchers implemented the spectroscopy code through the Graphical User Interface (GUI) hey set up. The red arrow denotes where the user can identify upon which resources the implementation would ideally run. Materials science has grown rapidly over the last decade, simply because the resolution and precision with which one can observe and test substances has seen a marked increase. 
However, as is seen in genomics, another popular scientific cloud computing use case, those materials and their associated tests represent a lot of information to be stored and processed. Being able to run those computations and store the necessary data in the cloud would promote cost-effectiveness, providing access to researchers without extensive in-house HPC resources. It also incidentally fosters collaboration, as retrieving data from a virtual cluster in the cloud is simpler than transmitting entire datasets over a user’s limited bandwidth or (still fairly common today) sending physical hard drives with copies of the large datasets in the mail. While the focus is on materials science, it is the hope of the University of Washington physicists that this approach of aggregating optimized codes and tools can be applied to other fields of study. “We embedded the interface and blueprint in a GUI environment that enables MS end users to perform specific SCC calculations with a few mouse clicks. The same approach can be followed for SCC enhancement of GUIs for other fields of research.” Again, this approach could prove useful to fields like genomics, where datasets are exploding and the incentive to run experiments quickly is high. The researchers here designed their toolset to run on the Amazon Elastic Compute Cloud, representing an eye toward completing high performance applications. “We tested the performance of this setup to prove that HPC calculations for materials science can be done efficiently in a cloud environment.”
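The gap between "launching VMs" and "having an HPC cluster" is easy to appreciate from how little code the launch itself takes. The sketch below uses boto3 against EC2 with a placeholder AMI, key pair and instance type; everything that actually makes the nodes a usable materials-science cluster (MPI setup, shared storage, the preinstalled codes in the SC2IT blueprint) still has to come afterwards.

```python
# Launching a handful of virtual machines is the easy part.
# AMI ID, key pair name and instance type below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # a blueprint-style AMI with codes preinstalled
    InstanceType="c5.xlarge",
    MinCount=4, MaxCount=4,            # four nodes for a small virtual cluster
    KeyName="my-keypair",
)
node_ids = [inst["InstanceId"] for inst in response["Instances"]]
print("Launched nodes:", node_ids)

# Still to do before this is an HPC cluster: placement/networking, a shared
# filesystem, MPI hostfiles and the scientific applications themselves.
```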
<urn:uuid:2fe2eef1-fa13-4315-bc91-cb5649b927ff>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/07/05/a_toolkit_for_materials_scientific_cloud_computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00267-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926753
949
2.9375
3
Host variables are data items defined within a COBOL program. They are used to pass values to and receive values from a database. Host variables can be defined in the File Section, Working-Storage Section, Local-Storage Section or Linkage Section of your COBOL program and have any level number between 1 and 48. Level 49 is reserved for VARCHAR data items. When a host variable name is used within an embedded SQL statement, the data item name must begin with a colon (:) to enable the Compiler to distinguish between host variables and tables or columns with the same name. Host variables are used in one of two ways: These are used to specify data that will be transferred from the COBOL program to the database. These are used to hold data that is returned to the COBOL program from the database. For example, in the following statement, :book-id is an input host variable that contains the ID of the book to search for, while :book-title is an output host variable that returns the result of the search: EXEC SQL SELECT title INTO :book-title FROM titles WHERE title_id=:book-id END-EXEC Before you can use a host variable in an embedded SQL statement, you must declare it. Host variable declarations should be bracketed by the embedded SQL statements BEGIN DECLARE SECTION and END DECLARE SECTION, for example: EXEC SQL BEGIN DECLARE SECTION END-EXEC 01 id pic x(4). 01 name pic x(30). EXEC SQL END DECLARE SECTION END-EXEC display "Type your identification number: " accept id. * The following statement retrieves the name of the * employee whose ID is the same as the contents of * the host variable "id". The name is returned in * the host variable "name". EXEC SQL SELECT emp_name INTO :name FROM employees WHERE emp_id=:id END-EXEC display "Hello " name. You can use data items as host variables even if they have not been declared using BEGIN DECLARE SECTION and END DECLARE SECTION. When declaring host variables, you should bear the following in mind: An array is a collection of data items associated with a single variable name. You can define an array of host variables (called host arrays) and operate on them with a single SQL statement. You can use host arrays as input variables in INSERT, UPDATE and DELETE statements and as output variables in the INTO clause of SELECT and FETCH statements. This means that you can use arrays with SELECT, FETCH, DELETE, INSERT and UPDATE statements to manipulate large volumes of data. OpenESQL and DB2 (using Host arrays are declared in the same way as simple host variables using BEGIN DECLARE SECTION and END DECLARE SECTION, but you must use the OCCURS clause to dimension the array, for example: EXEC SQL BEGIN DECLARE SECTION END-EXEC 01 AUTH-REC-TABLES 05 Auth-id OCCURS 25 TIMES PIC X(12). 05 Auth-Lname OCCURS 25 TIMES PIC X(40). EXEC SQL END DECLARE SECTION END-EXEC. . . . EXEC SQL CONNECT USERID 'user' IDENTIFIED BY 'pwd' USING 'db_alias' END-EXEC EXEC SQL SELECT au_id, au_lname INTO :Auth_id, :Auth_Lname FROM authors END-EXEC display sqlerrd(3) In this example, up to 25 rows (the size of the array) can be returned by the SELECT statement. If the SELECT statement could return more than 25 rows, then 25 rows will be returned and SQLCODE will be set to indicate that more rows are available but could not be returned. A SELECT statement should only be used when you know the maximum number of rows to be selected. When the number of rows to be returned is unknown, the FETCH statement should be used. With the use of arrays, it is possible to fetch data in batches. 
This can be useful when creating a scrolling list of information. If you use multiple host arrays in a single SQL statement, their dimensions must be the same. By default, the entire array is processed by an SQL statement, but you can use the optional FOR clause to limit the number of array elements processed to just those that you want. This is especially useful in UPDATE, INSERT and DELETE statements where you may not want to use the entire array. The FOR clause must use an integer host variable, for example: EXEC SQL BEGIN DECLARE SECTION END-EXEC 01 AUTH-REC-TABLES 05 Auth-id OCCURS 25 TIMES PIC X(12). 05 Auth-Lname OCCURS 25 TIMES PIC X(40). 01 maxitems PIC S9(4) COMP-5 VALUE 10. EXEC SQL END DECLARE SECTION END-EXEC. . . . EXEC SQL CONNECT USERID 'user' IDENTIFIED BY 'pwd' USING 'db_alias' END-EXEC EXEC SQL FOR :maxitems UPDATE authors SET au_lname = :Auth_Lname WHERE au_id = :Auth_id END-EXEC display sqlerrd(3) In this example, 10 rows (the value of :maxitems) will be modified by the UPDATE statement. The number of array elements processed is determined by comparing the dimension of the host array with the FOR clause variable. The lesser value is used. If the value of the FOR clause variable is less than or equal to zero, no rows are processed. If you are using COBSQL, this information on the FOR clause is only applicable if you are using an Oracle database. It does not apply if you are using either a Sybase or an Informix database. Embedded SQL enables you to store and retrieve null values from a database by using indicator variables. Indicator variables are always defined as: pic S9(4) comp-5. Unlike COBOL, SQL supports variables that can contain null values. A null value means that no entry has been made and usually implies that the value is either unknown or undefined. A null value enables you to distinguish between a deliberate entry of zero (for numerical columns) or a blank (for character columns) and an unknown or inapplicable entry. For example, a null value in a price column does not mean that the item is being given away free; it means that the price is not known or has not been set. Together, a host variable and its companion indicator variable specify a single SQL value. Both variables must be preceded by a colon (:). When a host variable is null, its indicator variable has the value -1; when a host variable is not null, the indicator variable has a value other than -1. Within an embedded SQL statement an indicator variable should be placed immediately after its corresponding host variable. For example, the following embedded UPDATE statement uses the host variable saleprice with a companion indicator variable, saleprice-null: EXEC SQL UPDATE closeoutsale SET temp_price = :saleprice:saleprice-null, listprice = :oldprice END-EXEC In this example, if saleprice-null has a value of -1 when the UPDATE statement executes, the statement is read as: EXEC SQL UPDATE closeoutsale SET temp_price = null, listprice = :oldprice END-EXEC You cannot use indicator variables in a search condition. To search for null values, use the is null construct. For example, you can use the following: if saleprice-null equal -1 EXEC SQL DELETE FROM closeoutsale WHERE temp_price is null END-EXEC else EXEC SQL DELETE FROM closeoutsale WHERE temp_price = :saleprice END-EXEC end-if Indicator variables serve an additional purpose if truncation occurs when data is retrieved from a database into a host variable.
If the host variable is not large enough to hold the data returned from the database, the warning flag sqlwarn1 in the SQLCA data structure is set and the indicator variable is set to the size of the data as it is held in the database. You can use indicator arrays in the same ways that you can use indicator variables, that is: In the following example, an indicator array is set to -1 so that it can be used to insert null values into a column: EXEC SQL BEGIN DECLARE SECTION END-EXEC 01 sales-id OCCURS 25 TIMES PIC X(12). 01 sales-name OCCURS 25 TIMES PIC X(40). 01 sales-comm OCCURS 25 TIMES PIC S9(9) COMP-5. 01 ind-comm OCCURS 25 TIMES PIC S9(4) COMP-5. EXEC SQL END DECLARE SECTION END-EXEC. . . . MOVE -1 TO ind-comm. . . . EXEC SQL INSERT INTO SALES (ID, NAME, COMM) VALUES (:sales_id, :sales_name, :sales_comm:ind-comm) END-EXEC Note: If you are using COBSQL, this information on indicator arrays is only applicable if you are using an Oracle database. It does not apply if you are using a Sybase database. Copyright © 2000 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
<urn:uuid:4619f2c9-f1ca-4356-8344-65fc77190b14>
CC-MAIN-2017-04
https://supportline.microfocus.com/documentation/books/sx20books/dbhost.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.749019
2,050
3.109375
3
The Truth About Flash Memory In The Datacenter August 7, 2015 Scott Davis Flash memory has proven to be the most disruptive storage technology of the past few years. It is significantly faster, with lower latencies than mechanical spinning disk drives. This supports an assumption that flash will soon be everywhere and supplant all other forms of persistent storage. However, while many experts think highly of flash, this assumption of datacenter ubiquity is a stretch. Let’s review the performance and latency landscape at a high level. On one end of the spectrum, memory access on the server side is in the nanosecond (ns) range, with great consistency and predictability. At the opposite end, traditional mechanical storage spindles in storage arrays clock in at 4 to 10 milliseconds (ms) and, under loads with poor locality of reference, can be much higher. To put that in perspective, a single millisecond equals 1 million nanoseconds, so this latency difference covers an extremely wide range. Flash is much faster than mechanical disks, but it also spans a notable latency variance primarily dependent on the bus access method in place; anywhere from 20 microseconds (μs) to more than 1 ms. However, these numbers are still nowhere near memory speeds and, as a result, some mobile and cloud architectures are now using memory as their primary tier for data processing. Remote memory shouldn’t be overlooked, as it is a critical building block for scale-out architectures. Remote access over the network is similarly speedy as flash, with today’s 10 Gb/sec Ethernet clocking in with a range of 4 to 20 μs, including protocol overhead. This low latency is critically important when discussing scale-out, cloud, and mobile application architectures. Despite its benefits, flash has various quirks that need to be accounted for to achieve predictable and optimal results. Read versus write performance rates differ substantially, and, even within write operations, there is a lot of variability. Flash’s unpredictability is due to the way write operations are handled and the way various mitigation techniques are used to address them. Flash is divided into blocks and further divided into pages. While empty, pages can be written directly, but if not they can’t be rewritten directly – they need to be erased first. This is due to the electrical properties of flash; it can only write 0 bits, while its empty state is a 1. This means a page of content can’t simply be loaded to flash. The page of flash that will hold this new content must first be erased, setting it to all 1s and then subsequently resetting some of the bits to a 0. Furthermore, erase granularity is at the block level, not the page level, which results in pages and blocks requiring multiple operations performed somewhat serially, such as moving the content of pages around to prepare to erase an entire block to perform a simple overwrite with new content. This process is called “write amplification,” and is a key contributing factor to writes being much slower than reads on a flash device and the variability. Another factor that makes flash unpredictable is that it has a cycle lifetime – flash memory can only be erased and rewritten a limited number of times before it fails. To maximize the life of a flash-based device, software must perform “load-leveling” to make sure that all blocks and pages on the device are cycled through and written to equally. Many forms of traditional RAID are poorly suited for these flash mechanics. 
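The erase-before-write mechanics described above are easier to picture with a toy model. The sketch below is purely illustrative and is not how any real flash translation layer works; all names and sizes in it are invented. It overwrites single pages in a full 64-page block and counts how many physical page writes each logical write costs, which is exactly the write amplification referred to above:

PAGES_PER_BLOCK = 64

class Block:
    """A NAND block: the erase unit, holding many individually programmable pages."""
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased page (all 1s)
        self.erase_count = 0

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count += 1

def overwrite_page(live, page_index, data, spare, counters):
    """Overwrite one page the naive way: copy valid pages out, erase, swap blocks."""
    for i, old in enumerate(live.pages):
        new = data if i == page_index else old
        if new is not None:
            spare.pages[i] = new
            counters["physical_page_writes"] += 1
    live.erase()                        # erase granularity is the whole block
    counters["logical_writes"] += 1
    return spare, live                  # the spare becomes live; the erased block becomes spare

counters = {"logical_writes": 0, "physical_page_writes": 0}
live, spare = Block(), Block()
live.pages = ["old%d" % i for i in range(PAGES_PER_BLOCK)]   # start with a full block

for n in range(10):                     # ten single-page overwrites
    live, spare = overwrite_page(live, n, "new%d" % n, spare, counters)

wa = counters["physical_page_writes"] / counters["logical_writes"]
print("write amplification ~", wa)      # roughly 64 physical page writes per logical write

Running it reports a write amplification of roughly 64, because every one-page overwrite forces the whole block to be relocated and erased; real controllers reduce this with mapping tables, over-provisioning and garbage collection.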
Fortunately, this level of complexity is usually handled by storage software, not application developers. Another dimension of flash is the choice between consumer-grade, or multi-level cell (MLC) flash, versus enterprise-grade or single-level cell (SLC) flash. Enterprise-grade flash is more expensive, has better wear characteristics and can handle more input/output (I/O) in parallel than the less expensive consumer-grade flash. These attributes are directly related to the power consumption and form factor of each technology, and a variety of commercial solutions are available with each type. Overlooked in many flash discussions is the access bus or interconnect, which can have a big impact on performance. A SAS-connected SSD access will average from a few hundred microseconds to more than one millisecond per I/O, while NVMe over PCI-Express flash will take 20 to 50 microseconds. Next-generation Memory Channel Storage flash via DIMM slots should be single digit microseconds, and looming on the horizon are DRAM/NVDIMM with memory-like access speeds in the tens of nanoseconds. This brings up an important issue. To really leverage flash technology performance, and its likely evolution, one must move the flash closer to the application, meaning in the server and not on a storage array where it needs to communicate over a relatively slower network. This is one of my key takeaways about flash in the datacenter: flash is a critical driver of a storage architecture revolution, where centralized, dedicated storage arrays, including all-flash-arrays, will be replaced in the future by loosely coupled, distributed storage stacks co-located with the applications on the server and making judicious use of relatively inexpensive server resources. Storage performance is not the only area in which demands are increasing dramatically; capacity requirements are exploding as well. It’s a world driven by big data and multimedia, a world that demands exponentially increasing storage requirements. Flash has quickly been decreasing in price, driven by the dual demands of enterprise and consumer devices. However, mechanical devices and cloud storage prices are decreasing as well and will continue to be orders of magnitude cheaper than flash for the foreseeable future (dollars per GB for flash, cents per GB for mechanical). Aggressive data deduplication and compression techniques are frequently cited as drivers for decreasing flash costs, but these software technologies are equally applicable to mechanical drives. Mechanical drives are also advancing with increasing density techniques including shingled/SMR drives. While flash is clearly a storage medium for latency-sensitive workloads, it’s not economical or optimal for all use cases, especially high-capacity uses, such as infrequently accessed data sets or multimedia files, or other data that is processed in big chunks. Application architectures are also changing dramatically in the mobile/cloud world and have different demands of storage and persistence layers. For example: Storage persistence in this new era of application architecture needs to be optimized for both cost per GB (capacity) and cost per IOP (performance), while factoring bandwidth, scale, manageability and automation into the equation. These demands push storage persistence away from centralized, dedicated arrays and toward innovative distributed architectures. Flash is an exciting component of storage architectures. 
It’s a major disruption, and it will be pervasive in many enterprise data center and cloud architectures moving forward. However, it is not a panacea for all persistence needs and its cost and complexity prevent it from being the only storage persistence option on the market. All-flash arrays will not be the only form factor, due to network interconnect latencies and scale-out application architectures. Server-side flash is already playing a major role as an important ingredient in hyperconverged and software-defined storage solutions, as well as decoupled architectures that make use of server-side memory and flash as a performance tier coupled to dense, capacity-oriented storage. Scott Davis is the chief technology officer at Infinio, where he drives product and technology strategy, while also acting as a key public-facing company evangelist. Davis joined Infinio following seven years at VMware, where he was CTO for VMware’s End User Computing business unit.
<urn:uuid:a6e14187-99f5-4d42-8d43-7e2586e38628>
CC-MAIN-2017-04
https://www.nextplatform.com/2015/08/07/the-truth-about-flash-memory-in-the-datacenter/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00112-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938489
1,582
2.671875
3
Governments are increasingly turning to tools like social media, mobile apps and new website capabilities to deliver information and services and to encourage citizen engagement. However, all the buzz around new social media capabilities can sometimes distract governments from what citizens care about most – getting the information they need, when they need it. So, how can a government improve citizen access and participation online while not being distracted from the fundamentals of information and service delivery? This Center for Digital Government issue brief offers some advice to governments looking to engage their citizens with digital tools. A digital practices checklist helps them assess what they are doing currently and discover new ideas for next steps.
<urn:uuid:b7ae8e13-eecf-46d3-96a4-4c3c30b498f5>
CC-MAIN-2017-04
http://www.govtech.com/library/papers/Engaging-the-Connected-Citizen.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00416-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943666
126
2.515625
3
Why the thinking behind turning off a single light is important By Frost & Sullivan's Energy and Power Systems Industry Analysts, Johan Muller and Megan van den Berg Earth Hour 2011 witnessed 135 countries, and more than 5,200 cities across the globe, switching off their lights for an hour to send a powerful message for action on climate change, according to the official Earth Hour website. From its "small" beginnings in Sydney, Australia in 2007, the last Saturday of March each year has become a scheduled event on the global calendar, labelled "Earth Hour". Although Earth Hour has garnered (in some instances) a type of emotive and fashionable herd mentality response from people and cities, the real issue is: what is the raison d’être behind Earth Hour and how can it impact us as South Africans? Earth Hour is an initiative with the simple idea of raising awareness as to the effects of uncontrolled energy use. These effects include global warming, loss of energy security and in general, a lackluster mindset towards energy consumption. Each country's energy mix and its associated set of energy issues is unique. South Africa, a hybrid between a purely developing and developed nation, has a multitude of energy supply issues. Earth Hour has the capacity to grow awareness amongst South Africans as to new paradigms that need to be adopted concerning electricity usage, in order to help strike a balance between energy supply and energy demand. South Africa's energy demand is highly likely to outstrip the energy supply this winter (2012), with a potential deficit of 3,000MW during peak hours. Couple this with Eskom's recent breach of the electricity margin (global best practice is a 15.0 % margin, with Eskom operating at a margin of roughly 11.6% since 2009) and its use of diesel-fired gas turbines to keep the lights on, and you have an unsustainable system with massive cost implications which are reflected in the tariffs we pay as consumers.
<urn:uuid:3c981bb8-a3f4-4d9a-8d1b-bf7fd55a3935>
CC-MAIN-2017-04
http://www.frost.com/c/10077/sublib/display-market-insight.do?bdata=aHR0cDovL3d3dy5mcm9zdC5jb20vYy8xMDA3Ny9zdWJsaWIvY2F0ZWdvcnktaW5kZXguZG8%2FYW5jaG9yPTI1Njc5NDA4NiZjYXRlZ29yeT1pbmR1c3RyeUB%2BQEluZHVzdHJ5IFJlc2VhcmNoQH5AMTMzNTU4OTQxNTA5MA%3D%3D&id=256794086
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00140-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942116
395
3.421875
3
Rendering and compositing This is a pretty huge section and there's a lot to cover, so let's jump right into the features offered by professional 3D renderers. We'll look at how they work with compositors, often the final stage of rendering and color grading. 32-bit Floating-point Rendering If you've ever done a series of levels or brightness/contrast tweaks in Photoshop, you've probably seen the kind of problems that arise when you do a lot of harsh edits on an 8-bit image: Colors start getting patchy and posterized, noise is more prominent, and everything just goes to crap eventually. This degradation is due to "rounding errors" with integer-based images. It's less problematic with 16-bit integer images but, to get the most out of a render, you should render to a floating point image, so rounding errors are no longer a problem and white or black pixels don't get thrown away once they reach either extremity. Making the same change on a 32-bit rendering of the image above yields much better results and you can darken and lighten the whole thing without posterization. I often render to 32-bit images for my different lights so I can finely control lighting without having to re-render: The other advantage in using a 32-bit frame buffer is that your rendered image will have a wider dynamic range, which is very important if you have a high-contrast daylight scene. But working with 32-bit images creates a separate problem: how to see them, since they contain data way beyond what can be displayed on the screen. This means that your renderer needs to do the job of an HDR conversion program like Photoshop or Photomatix, tone-mapping the resulting HDR image for display on the screen. In the lighting section, I mentioned that once you involve a sun/sky system, you invariably get into the complexities of a linear workflow. Basically, a linear workflow is a solution to a problem in dealing with physically simulated light. All monitors display images with a gamma correction curve of around 2.2 (sRGB). Since this is the accepted norm, all devices like digital cameras work around this standard (let's ignore larger color spaces like AdobeRGB for now). All the images you see are encoded accordingly, so that when they are shown on your screen, they appear natural. When you add light in a 3D workspace without compensation for this brightened gamma, getting natural dynamic range and contrast becomes more difficult because of the way that light is factored by the brighter gamma. The end result is that you can spend a bunch of time swinging light values back and forth, trying to get an image that's not washed out. This is especially problematic when dealing with sun and sky systems, which deal with light in realistic intensities. The solution is a linear workflow. "Linear" here doesn't mean the opposite of non-linear like games or video editing, it means working with a flat, linear gamma within your renderer. Sometimes this process of removing the sRGB gamma curve is called a "degamma." If you're finding this hard to picture, imagine the sRGB gamma curve is the water current in a stream: trying to do anything in that stream is going to done while fighting with the current. It's not that you couldn't get something done while fighting it, but your work is going to be made a whole lot easier if you just turn off the current. 
This is the appeal of a linear workflow: it may not be essential to get a good rendering but, once you sort out the workflow, it will be a lot easier to get accurate light and contrast since everything is working in the same linear space. So that involves dealing with the gamma problem. The exact workflow is particular to the renderer you're using but, for most applications, the process is similar: apply a gamma correction to the textures and colors used in your shaders. If you're ever opened an HDR image in Photoshop and it looked washed out, that's because it's encoded as linear but Photoshop assumed it's sRGB so it applied the 2.2 gamma to it. You can apply a degamma in Photoshop with an Exposure adjustment: Does that seem tedious, having to gamma-correct every single color and texture node in a scene? Yes, it's very tedious. The better implementations of a linear workflow are in renderers like Maxwell Render, V-Ray, Cinema 4D R12, and Modo—these renderers do linear workflows behind the scenes and all aspects are handled without you having to concern yourself with per-texture gamma correction: Even color swatches and procedural textures are corrected. Blender 2.5, currently in beta, has a similar linear workflow. Autodesk started to implement a linear workflow for Mental Ray in Maya 2011 but it's only half done. 3D Stereoscopic rendering Whether you're a hater or a fanboy, 3D film and television is here and it's increasing in popularity. I'll be honest—I don't know much at all about stereoscopic 3D rendering. If you're interested in learning how to do stereo 3D, it's best to learn this from a well-reputed source like fxPhD since they work closely with people in the film industry. The workflows will likely be biased towards certain programs like Nuke and Maya but the theory is the important thing to understand about 3D stereoscopic rendering. If you don't understand the theory correctly, you won't be producing awesome animations, you'll be producing migraines.
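As a rough illustration of why the degamma matters, the short sketch below uses the approximate 2.2 power curve discussed above. The true sRGB transfer function is piecewise, and no specific renderer works exactly this way, so treat it as an approximation only. Doubling the light on the encoded value blows out to white, while doubling it in linear space and re-encoding gives a plausible brighter grey:

# Simple "degamma" / "regamma" helpers using the approximate 2.2 power curve.

def to_linear(c, gamma=2.2):
    """Remove the display gamma: encoded texture/swatch value -> linear light."""
    return c ** gamma

def to_display(c, gamma=2.2):
    """Re-apply the display gamma: linear light -> monitor-ready value."""
    return c ** (1.0 / gamma)

# A mid-grey texture value as stored in an ordinary sRGB image:
texture = 0.5

# Doubling "brightness" directly on the encoded value (the non-linear workflow):
naive = min(texture * 2.0, 1.0)                  # 1.0 -> blown out to white

# Doubling the light in linear space, then converting back for display:
linear = to_linear(texture)                      # ~0.218
correct = to_display(min(linear * 2.0, 1.0))     # ~0.685

print("encoded-space doubling:", round(naive, 3))
print("linear-space doubling :", round(correct, 3))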
<urn:uuid:98fd5688-ae4d-4a12-a833-7d66d58c807c>
CC-MAIN-2017-04
http://arstechnica.com/apple/2010/09/an-intro-to-3d-on-the-mac-part-ii-animation-and-rendering/6/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9463
1,153
2.953125
3
Social Networks' Threat To Security Weak passwords and insecure personal information could put your company's data at risk. Social networks are designed to facilitate sharing of personal information, and the more data a person discloses, the more valuable he or she is to the service. Unfortunately, these sites have poor track records for security controls. They don't encourage users to select strong passwords, and passwords on these sites never expire. This wouldn't be a problem if people only used these passwords for their social lives, but it's a safe bet that many reuse the same weak passwords--or versions of them--for all of their accounts, including at work. A database breach last year at RockYou, which creates apps and games for social networking sites, illustrates just how weak passwords can be. Attackers used a SQL injection vulnerability to steal 32 million passwords that were stored in clear text and then posted them to the Internet. This large data set gave us unprecedented insight into the passwords that users select and allowed security researchers to calculate the most common ones (see box on next page). Attackers often simply try the top 20 passwords when attempting to break into a social network account. Yes, it's a simple dictionary brute-force attack, but if you have a large user base, it's likely at least one of your employees' accounts could be hacked using this method. Attacker Modus Operandi Attackers have a variety of ways to guess passwords, including: >> Brute force based on publicly disclosed information. Beyond the RockYou top 20, people often use names of family members, birthdays, and other personal but easily accessible information in their passwords. Attackers may take what they know about a potential victim and feed it into a program that generates a range of possible passwords. >> Guessing answers to password-reset questions. Social network users sometimes reveal information that could be used to reset their passwords on the social network itself, Web mail services such as Yahoo Mail, and even on online banking or software-as-a-service sites. For example, some Facebook users include "25 Random Things About You" notes in their profiles. These notes contain information--like mother's maiden name, place of birth, color of a first car--that attackers can use to reset a victim's password and get control of that person's e-mail account. >> Create a word list to narrow down keywords mentioned in the profile. Several tools can collect keywords from a Web page and put them into a word list (see Easy-To-Find Brute-Force Tools). Once an attacker has this list, he can attempt to brute force the user's password. This attack's effectiveness is largely dependent on how accurate a word list is and whether the social network employs any brute-force prevention mechanisms, such as Captchas, those challenge-response tests used on Web forms to ensure the respondent is a person, not a computer.
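As a simple illustration of why the "top 20" list matters, the sketch below checks a set of accounts against a handful of common passwords, the same way an attacker's first-pass dictionary attempt would. The passwords and account names shown are placeholders; in practice you would load a real common-password list from a file. It is intended as a defensive audit of your own accounts, not a recommendation of the attack:

# Flag accounts whose passwords appear in a common-password list.
COMMON_PASSWORDS = {
    "123456", "12345", "123456789", "password", "iloveyou",
    "princess", "rockyou", "1234567", "abc123", "nicole",
}

def weak_accounts(accounts):
    """Return the account names whose password is in the common list."""
    return [name for name, pw in accounts.items() if pw.lower() in COMMON_PASSWORDS]

if __name__ == "__main__":
    test_accounts = {"alice": "Str0ng!passphrase-17", "bob": "abc123", "carol": "iloveyou"}
    print(weak_accounts(test_accounts))   # ['bob', 'carol']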
<urn:uuid:a4a8e139-10ef-4ea1-a470-77f74287e132>
CC-MAIN-2017-04
http://www.darkreading.com/vulnerabilities-and-threats/social-networks-threat-to-security/d/d-id/1093707
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00562-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927769
595
2.953125
3
The automotive industry is on the cusp of a driverless revolution, with actual driverless vehicles being tested on the road. Now the aviation industry is debating whether pilotless planes make sense. While the notion of fully automated commercial planes no doubt has been previously kicked around, last month's Germanwings crash -- caused by a co-pilot struggling with mental health who steered a plane carrying 150 passengers straight into a French mountainside -- has caused aviation experts to seriously rethink ways to increase commercial flight security. The New York Times reports that "government agencies are experimenting with replacing the co-pilot, perhaps even both pilots on cargo planes, with robots or remote operators." (Related article: Flying cars: What could go wrong?) Commercial flights today are almost flown exclusively on auto-pilot. The Times notes that in a recent survey of commercial pilots, "those operating Boeing 777s reported that they spent just seven minutes manually piloting their planes during the typical flight. Pilots operating Airbus planes spent half that time." But they're still on the plane, and still able to take control. The idea of boarding a plane that's going to fly hundreds or thousands of miles without a human pilot even on the craft is going to be a tough sell for much of the population. Even within the aviation industry, there's great skepticism. Here's Mary Cummings, director of the Humans and Autonomy Laboratory at Duke University, being quoted by The Times: “You need humans where you have humans. If you have a bunch of humans on an aircraft, you’re going to need a Captain Kirk on the plane. I don’t ever see commercial transportation going over to drones.” We agree with Dr. Cummings. This story, "Would you fly in a plane with no human pilots?" was originally published by Fritterati.
<urn:uuid:db990c35-2046-4714-b079-45e43368273c>
CC-MAIN-2017-04
http://www.itnews.com/article/2906766/would-you-fly-in-a-plane-with-no-human-pilots.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00378-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955462
381
2.640625
3
Understanding Flash: What Is NAND Flash? June 6, 2014 1 Comment In the early 1980s, before we ever had such wondrous things as cell phones, tablets or digital cameras, a scientist named Dr Fujio Masuoka was working for Toshiba in Japan on the limitations of EPROM and EEPROM chips. An EPROM (Erasable Programmable Read Only Memory) is a type of memory chip that, unlike RAM for example, does not lose its data when the power supply is lost – in the technical jargon it is non-volatile. It does this by storing data in “cells” comprising of floating-gate transistors. I could start talking about Fowler-Nordheim tunnelling and hot-carrier injection at this point, but I’m going to stop here in case one of us loses the will to live. (But if you are the sort of person who wants to know more though, I can highly recommend this page accompanied by some strong coffee.) Anyway, EPROMs could have data loaded into them (known as programming), but this data could also be erased through the use of ultra-violet light so that new data could be written. This cycle of programming and erasing is known as the program erase cycle (or PE Cycle) and is important because it can only happen a limited number of times per device… but that’s a topic for another post. However, while the reprogrammable nature of EPROMS was useful in laboratories, it was not a solution for packaging into consumer electronics – after all, including an ultra-violet light source into a device would make it cumbersome and commercially non-viable. A subsequent development, known as the EEPROM, could be erased through the application of an electric field, rather than through the use of light, which was clearly advantageous as this could now easily take place inside a packaged product. Unlike EPROMs, EEPROMs could also erase and program individual bytes rather than the entire chip. However, the EEPROMs came with a disadvantage too: every cell required at least two transistors instead of the single transistor required in an EPROM. In other words, they stored less data: they had lower density. The Arrival of Flash So EPROMs had better density while EEPROMs had the ability to electrically reprogram cells. What if a new method could be found to incorporate both benefits without their associated weaknesses? Dr Masuoka’s idea, submitted as US patent 4612212 in 1981 and granted four years later, did exactly that. It used only one transistor per cell (increasing density, i.e. the amount of data it could store) and still allowed for electrical reprogramming. If you made it this far, here’s the important bit. The new design achieved this goal by only allowing multiple cells to be erased and programmed instead of individual cells. This not only gives the density benefits of EPROM and the electrically-reprogrammable benefits of EEPROM, it also results in faster access times: it takes less time to issue a single command for programming or erasing a large number of cells than it does to issue one per cell. However, the number of cells that are affected by a single erase operation is different – and much larger – than the number of cells affected by a single program operation. And it is this fact that, above all else, that results in the behaviour we see from devices built on flash memory. In the next post we will look at exactly what happens when program and erase operations take place, before moving on to look at the types of flash available (SLC, MLC etc) and their behaviour. 
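A rough way to picture the "write 0s only, erase back to 1s" behaviour described above is to model a page as a handful of bits. This is a simplification for illustration only, not a description of any particular chip:

ERASED = 0xFF          # an erased page is all 1s

def program(page, data):
    """Programming can only turn 1s into 0s, so the result is a bitwise AND."""
    return page & data

def needs_erase(page, data):
    """True if writing `data` would require turning some 0 bit back into a 1."""
    return (page | data) != page

page = ERASED
page = program(page, 0b10110100)     # fresh page: any pattern can be programmed

new_data = 0b11110000
print(needs_erase(page, new_data))   # True - some 0 bits would have to become 1 again

page = ERASED                        # erase first (on real flash, a whole block at once)
page = program(page, new_data)
print(bin(page))                     # 0b11110000

The needs_erase check is the crux: once a bit has been programmed to 0, the only way to get it back to 1 is an erase, and on a real device that erase covers an entire block of pages rather than a single page.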
NAND and NOR To try and keep this post manageable I’ve chosen to completely bypass the whole topic of NOR flash and just tell you that from this moment on we are talking about NAND flash, which is what you will find in SSDs, flash cards and arrays. It’s a cop out, I know – but if you really want to understand the difference then other people can describe it better than me. In the meantime, we all have our good friend Dr Masuoka to thank for the flash memory that allows us to carry around the phones and tablets in our pockets and the SD cards in our digital cameras. Incidentally, popular legend has it that the name “flash” came from one of Dr Masuoka’s colleagues because the process of erasing data reminded him of the flash of a camera. Presumably it was an analogue camera because digital cameras only became popular in the 1990s after the commoditisation of a new, solid-state storage technology called …
<urn:uuid:93478344-4bdc-414d-bf52-1f82e5431344>
CC-MAIN-2017-04
https://flashdba.com/2014/06/06/understanding-flash-what-is-nand-flash/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00066-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963109
982
3.140625
3
NASA's Cassini spacecraft has detected propylene, a key ingredient in plastics, on Saturn's moon Titan. This is the first time the chemical has been definitively found on any moon or planet, other than Earth. The discovery fills in what NASA called a "mysterious gap" in scientists' knowledge of the makeup of Titan's atmosphere and gives them confidence that there are more chemicals there still to discover. Propylene is an ingredient in many consumer plastic products like car bumpers and food storage containers. The interest lies in the small amount of propylene that was discovered in Titan's lower atmosphere by one of Cassini's scientific instruments called the the composite infrared spectrometer (CIRS), which measures the infrared light, or heat radiation, emitted from Saturn and its moons. The instrument can detect a particular gas, like propylene, by its thermal markers, which are unique like a human fingerprint. Scientists have a high level of confidence in their discovery, according to NASA. "This chemical is all around us in everyday life, strung together in long chains to form a plastic called polypropylene," said Conor Nixon, a planetary scientist at NASA's Goddard Space Flight Center. "That plastic container at the grocery store with the recycling code 5 on the bottom -- that's polypropylene." The discovery gives NASA scientists the missing piece of the puzzle for determining the chemical makeup of Titan's atmosphere. In 1980, NASA's Voyager 1 spacecraft, which has flown past Jupiter, Saturn, Uranus and Neptune, did a fly-by of Titan. According to the space agency, Voyager identified many of the gases in Titan's hazy brownish atmosphere as hydrocarbons, the chemicals that primarily make up petroleum and other fossil fuels on Earth. Titan has a thick atmosphere, clouds, a rain cycle and giant lakes. However, unlike on Earth, Titan's clouds, rain, and lakes are largely made up of liquid hydrocarbons, such as methane and ethane. When Titan's hydrocarbons evaporate and encounter ultraviolet radiation in the moon's upper atmosphere, some of the molecules are broken apart and reassembled into longer hydrocarbons like ethylene and propane, NASA said. Voyager's instruments detected carbon-based chemicals in the atmosphere but not propylene. Now scientists have that piece of the puzzle. "This measurement was very difficult to make because propylene's weak signature is crowded by related chemicals with much stronger signals," said Michael Flasar, a scientist at NASA's Goddard Space Flight Center and chief investigator for CIRS. "This success boosts our confidence that we will find still more chemicals long hidden in Titan's atmosphere." This article, NASA's Cassini finds plastic ingredient on Saturn's moon, was originally published at Computerworld.com. NASA's Cassini spacecraft detected propylene, an ingredient of household plastics here on Earth, in the atmosphere of Saturn's moon, Titan. (Video: NASA) Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is email@example.com. Read more about government/industries in Computerworld's Government/Industries Topic Center. This story, "NASA's Cassini finds plastic ingredient on Saturn's moon" was originally published by Computerworld.
<urn:uuid:d5eae51d-abbd-4ae1-be9e-b88edaf331a2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2170400/data-center/nasa--39-s-cassini-finds-plastic-ingredient-on-saturn--39-s-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918884
712
3.9375
4
Multipath TCP is an extension of TCP that will soon be standardized by the IETF. It is a successful attempt to resolve major TCP shortcomings that emerged from the change in the way we use our devices to communicate. In particular, there's the change in the way our new devices like iPhones and laptops talk across the network. The devices, like the networks, are becoming multipath. Network redundancy and devices with multiple 3G and wireless connections made that possible. Almost all of today's web applications use TCP to communicate. This is due to TCP's virtue of reliable packet delivery and its ability to adapt to variable network throughput conditions. Multipath TCP is created so that it is backwards compatible with standard TCP. In that way it's possible for today's applications to use Multipath TCP without any changes. They think that they are using normal TCP. We know that TCP is single path. It means that there can be only one path between two devices that have a TCP session open. That path is sealed as a communication session defined by the source and destination IP addresses of the communicating end devices. If some device wants to switch the communication from 3G to wireless, as happens on smartphones when they come into range of a known WiFi connection, the TCP session is disconnected and a new one is created over WiFi. By using multiple paths/subsessions inside one TCP communication, MPTCP will enable that new WiFi connection to open a new subsession inside the established MPTCP connection without breaking the TCP that's already in place across 3G. Basically, the more different paths that are available, the more subsessions there will be inside one MPTCP connection. A device connected to 3G will expand the connection to WiFi and then use an algorithm to decide whether it will use 3G and WiFi at the same time or stop using 3G and put all the traffic onto the cheaper and faster WiFi. TCP's single-path property is TCP's fundamental problem In a datacenter environment there is a tricky situation where two servers are talking to each other using TCP and that TCP session is created across a random path between servers and switches in the datacenter. If there are multiple paths, of course. If there are (and there are!) another two servers talking at the same time, it can easily happen that this second TCP session is established using partially the same path as the first TCP session. In that situation there will be a collision that reduces the throughput for both sessions. There is actually no way to control this phenomenon in the TCP world. As in our datacenter example, the same thing applies to every multipath environment, so it is true for the Internet as well. The answer is MPTCP! Multipath TCP - MPTCP is better than TCP in that it enables the use of multiple paths inside a single transport connection. It meets the goal of working well at any place where "normal" TCP would work. Multipath TCP, as the name says, enables the creation of multiple paths within one MPTCP session and in that way achieves better performance and adaptation of sessions. There was probably a great effort to make MPTCP compatible with TCP so that there is no need for any change in networks, devices and apps. After all, without this compatibility there would be no deployment and furthermore no use of MPTCP. Nobody wants to change the whole Internet because of a better protocol! That is the main reason for the creation of the multipath capability of TCP: performance improvement by distributing traffic load to more than one subflow across different paths.
That of course additionally requires that MPTCP always performs at least as well as standard TCP with one path. There are examples where that goal was not met, but there are solutions in buffer size customization and the implementation of algorithms that mitigate the reduced-performance issues. MPTCP works if both sides of the communication (user and server) have support for MPTCP. If only one side has MPTCP deployed, that device will try to use MPTCP but it will only succeed in establishing a normal TCP session. It will always try MPTCP first and, if there is no answer from the other side, it will use TCP. Applications are not MPTCP aware; Apple iOS 7 is the first operating system in production that uses MPTCP, and solely for the Siri application, which is now able to use WiFi and 3G simultaneously. But how does this work? Multipath TCP is an evolution of standard TCP that makes multipath data packet transport over one connection possible. Multipath TCP is made for next-generation devices like iPhones and other smart devices that are multihomed (they use different Internet access options like WiFi and 3G). To make this kind of communication over multiple different paths possible there is still a need for a data sequence number. From normal TCP you know that the data sequence number is used to put the segments in order when they all arrive at the receiver. But now they can arrive at the receiver using more than one subflow inside one MPTCP connection. How will it then be possible to put them in order at the receiver? And more interesting than that, how will this whole MPTCP connection keep track of lost packets across the connection? One more thing: some IDS middleboxes will not allow a TCP subflow with gaps in the sequence space (from the MPTCP architecture's point of view this subflow will be seen as a normal TCP flow). There should be a way to keep track of loss detection and retransmission of packets across every separate subflow. So the only answer to that question is to use a sequence number for each subflow that will track loss and retransmission, and a separate data sequence number for reordering of packets at the receiver side. In the image below we are looking at the standard TCP header from RFC 793, which must stay as it is to keep the backwards compatibility of MPTCP. The creators of MPTCP did play with some header parts and they decided to put the data sequence number and data ACK inside TCP as a new option. There was a second way of doing that, by encoding the data sequence number and data ACK inside the payload (data). Fortunately they decided to use new TCP options so that there will be no chance of deadlocks (I will explain this in some other article). One more thing is that by using new TCP options there is more chance that traversing different strange firewall middleboxes will be successful. Middleboxes like firewalls and others tend to remove strange payloads and sometimes even TCP options if they don't understand what they are. What does that mean? Here's an example with two subflows inside one MPTCP connection. Data is sent in three data frames, of which two take the red subflow and one takes the green subflow. Data sequence numbers are 1, 2, 3 for the whole MPTCP connection so that packets on the receiver side can be ordered and reassembled into the whole data. Subflow sequence numbers are 200, 201 for the red subflow and 300 for the green subflow. In that way each subflow can have loss detection and retransmission of lost frames.
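To make the two sequence-number spaces concrete, here is a small sketch of the receiver's bookkeeping using the numbers from the example above. It is only an illustration of the idea, not how the Linux kernel implementation actually stores its state:

# Each subflow keeps its own sequence numbers for loss detection, while data
# sequence numbers (DSNs) let the receiver rebuild one ordered stream.
segments = [
    # (subflow, subflow_seq, data_seq, payload)
    ("red",   200, 1, "DATA:1"),
    ("green", 300, 2, "DATA:2"),
    ("red",   201, 3, "DATA:3"),
]

per_subflow = {}    # subflow -> subflow sequence numbers seen (loss detection per path)
reassembly = {}     # data_seq -> payload (connection-level reordering)

for subflow, sseq, dseq, payload in segments:
    per_subflow.setdefault(subflow, []).append(sseq)
    reassembly[dseq] = payload

# Connection-level, in-order delivery regardless of which subflow carried what:
stream = [reassembly[d] for d in sorted(reassembly)]
print(stream)          # ['DATA:1', 'DATA:2', 'DATA:3']
print(per_subflow)     # {'red': [200, 201], 'green': [300]}

Each subflow's own sequence numbers (200, 201 and 300) stay gapless, so middleboxes see ordinary-looking TCP, while the data sequence numbers 1-3 let the receiver rebuild the original stream no matter which subflow carried each frame.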
If for some reason one of the subflows (in our case the green subflow) breaks down, the frame with DATA:2 will be redirected to the other subflow so that it can still be sent to the receiver. Its subflow sequence number will be changed in order to go across the red subflow, but the data sequence number will stay the same (2) as this is the same frame. All of this will happen without breaking the MPTCP connection, so the device will practically not see anything going on, maybe only a little delay in loading content across that connection. This article was written by me in the last few days as I got into learning how this new TCP technology works. I would like to post here all the materials I used to get to know MPTCP theory and the way it functions. - People behind all the material on MPTCP that I used: - Costin Raiciu, Universitatea Politehnica Bucuresti; also the speaker in the USENIX video on the link below - Christoph Paasch and Sebastien Barre, Université Catholique de Louvain; - Alan Ford; Michio Honda, Keio University; - Fabien Duchene and Olivier Bonaventure, Université Catholique de Louvain; - Mark Handley, University College London - MultiPath TCP – Linux Kernel implementation project - Great video session: How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP
<urn:uuid:8735f7d1-3d81-4d96-ae96-8d074f18b250>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2013/multipath-tcp
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00186-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930228
1,762
3.40625
3
Although many companies in the information security industry prefer to tackle challenges with sophisticated hardware, the art of lying continues to be a towering risk difficult to deal with. The ancient threat of social engineering is in the news all the time, often used by cybercriminals, but also by those without malicious intent. Recently, two students from Savannah State University managed to social engineer their way into Super Bowl XLVII and posted a video of their adventure online. While I’m sure that the level of security at one of the world’s biggest sport events must be impressive, weak links will always be taken advantage of. “The formal study of social engineering as we know it today has only occurred in recent years, but it was thousands of years in the making. The underlying principles and science behind why we do what we do doesn’t change much, but the tactics employed by attackers do,” said Dale Pearson, Founder of SubliminalHacking.net. The insecurity of an individual can become a peril for the company that employs him. Since Internet users tend to share too much of their personal information, especially on sites like Facebook, skilled liars can take advantage of the data and social engineer their way into the corporate world. Jason Hong, CTO at Wombat Security comments: “A common tactic by an attacker is to slowly build up trust over time. For example, the attacker might call a person in an organization and using fake caller ID so that it looks like it’s from a company number. The attacker might also start out by being friendly and just asking for innocuous information at first. Over time, though, the attacker would slowly escalate, requesting more sensitive information over a period of months.” Watch your back With the cyber underworld executing targeted hacks in search for profit, they’re not going to try and break down the front door, they’re going to try and sneak their way in. “Many of the most highly publicized security breaches in the past few years have been due to spear-phishing attacks, which are the most common form of social engineering attacks today. These include RSA, Epsilon, the White House, and more. The early reports about how the New York Times computer systems were hacked also suggest that spear-phishing was involved,” according to Hong. Privacy equals protection We should all be aware of what we post online and never give out more information than necessary. It sounds simple, but most people don’t even realize the dangers. “When you receive an email asking you to share something or do something, consider what could be done with that information. If the email came from someone you know, is the format and the language consistent with previous exchanges? When people make request for access or information in person or on the phone, be confident enough to challenge them in a friendly and respectful way,” says Pearson. A typical scam involves a fraudster calling the victim up and trying to get confidential information over the phone. Pearson warns: “When you receive a call from your bank, asking for seemingly viable information, take a moment to think what the person of the phone could use this information for and whether you really know who they are? Ask politely for their name, extension number and call reference and call the bank back on the number from your statement and ask to be put through to the extension of the original caller.” Security awareness can strengthen a security policy by making people aware of the dangers. 
Hong agrees: “The underlying strategy and rationale for social engineering attacks is to circumvent all of the security measures in place by tricking people. For this reason, it’s critical for organizations to train people to be aware of the tactics that bad guys use, so that they can identify them and know how to react in given situations.”
<urn:uuid:d6383d2d-c4c8-44c8-be9b-8442d4016512>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/02/11/social-engineering-clear-and-present-danger/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00268-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961732
791
2.765625
3
What is MongoDB? MongoDB is an open source database management system. It has been developed since 2007 and is currently one of the most popular and well known NoSQL databases on the market. Introduction to MongoDB's REST API Mongodb features a REST API which can be activated by starting MongoDB with --rest switch. The port number of the HTTP interface is 1000 more than the configured MongoDB port, so it is 28017 by default. Developers are recommended to turn off the rest API and the HTTP interface on production servers, since they allow direct access to the database. What does the MongoDB Rest API Do? The REST API allows the developer to query the database and get back a JSON string with the result, therefore can develop the front end of a website without the need to develop a web application that queries MongoDB first. Known Vulnerabilities in MongoDB Rest API The Rest API of MongoDB really comes in handy in the development phase of a website. However, as briefly mentioned above the developers are strongly suggested to deactivate the REST API in production environments. This is due to the fact that the HTTP interface probably wasn't made for production in the first place. By default it lacks authentication and gives you direct access to the whole database. And as early as 2012 a presentation about vulnerabilities in MongoDB's REST API was held. It featured vulnerabilities ranging from CSRF to Cross-site Scripting (XSS). What Was Patched? What Is Still Vulnerable? Unfortunately after all these years the Cross-site Request Forgery (CSRF) issues are still there. These issues allow a malicious hacker to query the MongoDB database, just by redirecting the victim to a website under his control. However, since such an attack is carried out by the developer's browser there is no way to get a response to the attacker's server. Therefore the attackers can't extract data, so the attack is useless, almost! However, there are still some other ways around SOP. Attacking the Developer The attacker can target the developers directly, who typically run the REST API on their local machines. Typically, even though it is a development environment the developer have sensitive data saved in the database and might even have a copy of the database that is used in production. An attacker can retrieve the data from the developer's database by utilizing several different methods. Cross Site Timing Side Channel - Record the current time - Create an invisible iframe with an onload event - Set the src to MongoDB's REST API on the localhost or internal network of the victim and query the data, for example the database version. The query tries to guess the version and sleeps for two seconds on a correct guess, similar to a time based SQL injection. - The onload event fires when the page finishes loading. It contains a function that records the current time again. - The attacker compares the times from step 4 and step 2. If the difference is two seconds or more there was a correct guess. As mentioned above this is the same principle as on a time based SQL Injection Attack. The only difference is, that the victim's browser makes the request and sends it to the attacker, as it is already behind the firewall. This is a proof of concept of such a timing attack on a MongoDB REST API instance. It issues a command to verify the MongoDB version on a given IP on the localhost or local network. 
Out-Of-Band Data Exfiltration The above attack is very reliable, but it is pretty slow and takes thousands of requests if you want to exfiltrate big datasets. A better option is an Out-Of-Band data exfiltration. Those are already known from regular SQL Injections or XXE. Since the hacker can't directly see the response of his query he sends it through other channels back to his server where he can analyze it. This can either be through a direct request to his server or - like he can do with MongoDB - through DNS requests. MongoDB has a function called cloneCollection, which allows a developer to clone a collection from a remote server. To do the clone the developer has to pass the hostname of the server to that function, which the MongoDB tries to resolve. Of course it is also possible to query the database and extract data such as usernames, password, email addresses etc. Here is an example of such a command using the REST API: And this is the explanation of the query above: ret="collections_" marks the beginning of the collections in the DNS request db.getCollectionNames().forEach(...) appends each collection name to the return value db.runCommand() just runs the following command cloneCollection this command tries to clone a collection from a remote host from the hostname to clone from, appended to our return value and db version That way data can be exfiltrated out of bounds over DNS, effectively bypassing SOP restrictions as explained in this Proof of Concept video, and by using this sample code. Below is a proof of concept video of how to exploit a CSRF vulnerability and extract data from the MongoDB database. Even though the obvious XSS vulnerabilities were fixed in MongoDB's HTTP interface, the CSRF issues are present to this day. A firewall is not a sufficient protection against such attacks since it can be bypassed by CSRF or SSRF (on a production system) and even an attack utilizing DNS Rebinding is possible. We strongly recommend developers to deactivate the REST API and the HTTP interface of MongoDB, even on local developer machines since as this article shows, it is still possible to extract data. All it takes to become a victim of such an attack is clicking a malicious link. If a REST API is needed the developer should put one of the alternatives suggested on MongoDB's website into consideration.
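If you want to check your own development machine for the exposure described above, a quick probe of the HTTP interface port (1000 above the default mongod port, so 28017) is usually enough. The following is a minimal sketch using only the Python standard library; the host and port are assumptions to adjust for your own setup:

import urllib.request
import urllib.error

def http_interface_exposed(host="127.0.0.1", port=28017, timeout=2):
    """Return True if something answers HTTP on MongoDB's default HTTP interface port."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if http_interface_exposed():
        print("HTTP interface reachable on 28017 - consider starting mongod without "
              "--rest and with the HTTP interface disabled.")
    else:
        print("No HTTP interface answering on 28017.")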
<urn:uuid:6406b651-5069-495e-aca8-077df90660b0>
CC-MAIN-2017-04
https://www.netsparker.com/blog/web-security/exploiting-csrf-vulnerability-mongodb-rest-api/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00268-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929435
1,218
2.859375
3
Frame Relay Components, part 2 Network Consultants Handbook - Frame Relay by Matthew Castelli Frame Relay Components (Cont'd) As one would expect, determining the number of Virtual Circuits required for a network configuration is based on the number of end nodes and the communication requirements, such as fully meshed (all-to-all), partial meshed (some-to-all), or hub-and-spoke (all-to-one), as illustrated by Figure 15-8. In a fully meshed network environment, the number of VCs required can be represented by the formula [N x (N - 1)] / 2, where N is the number of end nodes in the network. This formula is sometimes referred to as the "N² formula" because it is derived from (N² - N) / 2. In a partial meshed network environment, the number of VCs required is not easily represented by a mathematical formula. You must consider many variables, the least of which is based on the determination of which end nodes require communication with which other end nodes. It is a fair assumption to estimate that the number of VCs required would fall between those of a fully meshed and those of a hub-and-spoke environment: (N - 1) <= X <= (N² - N) / 2, where N is the number of end nodes in the network and X is the number of VCs required to support a partially meshed configuration. An approximation formula (based on N, the number of network nodes) can also be used to estimate the number of VCs necessary to support a partial mesh configuration. This is useful from a network planning and cost-determination standpoint; however, because partial mesh connectivity is determined by application and user requirements at the end node, an exact number of partial-mesh VCs is almost impossible to determine. In a hub-and-spoke network environment, the number of VCs required can be represented by the formula N - 1, where N is the number of end nodes in the network. Figure 15-9 illustrates the number of VCs necessary to support fully meshed ((N² - N) / 2), partially meshed (approximation), and hub-and-spoke (N - 1) Frame Relay network configurations. VCs often incur a financial obligation to the service provider. As Figure 15-9 illustrates, even relatively small networks can become quite costly very quickly based on the number of VCs alone. For this reason, hub-and-spoke network configurations are fairly common. As illustrated here, a 30-node network would require approximately 450 VCs in a fully meshed configuration, compared to the 29 VCs necessary to support a hub-and-spoke configuration. To illustrate the potential cost savings of deploying a hub-and-spoke network over a fully meshed network environment, see Figure 15-10. As is reflected here, a difference of nearly 500 VCs exists between a fully meshed and a hub-and-spoke configuration. If it is not mandated that a fully meshed network be used, it is certainly more cost effective to design and implement a hub-and-spoke or partial-mesh configuration. FECN and BECN Congestion is inherent in any packet-switched network. Frame Relay networks are no exception. Frame Relay network implementations use a simple congestion-notification method rather than explicit flow control (such as that provided by the Transmission Control Protocol, or TCP) for each PVC or SVC, effectively reducing network overhead. Two types of congestion-notification mechanisms are supported by Frame Relay: forward-explicit congestion notification (FECN) and backward-explicit congestion notification (BECN). NOTE: A Frame Relay frame is defined as a variable-length unit of data, in frame-relay format, that is transmitted through a Frame Relay network as pure data. Frames are found at Layer 2 of the OSI model, whereas packets are found at Layer 3.
The FECN bit is set by a Frame Relay network device, usually a switch, to inform the Frame Relay networking device that is receiving the frame that congestion was experienced in the path from origination to destination. The Frame Relay networking device that is receiving frames with the FECN bit will act as directed by the upper-layer protocols in operation. Depending on which upper-layer protocols are implemented, they will initiate flow-control operations. This flow-control action is typically the throttling back of data transmission, although some implementations can be designed to ignore the FECN bit and take no action. Much like the FECN bit, a Frame Relay network device sets the BECN bit, usually a switch, to inform the Frame Relay networking device that is receiving the frame that congestion was experienced in the path traveling in the opposite direction of frames encountering a congested path. The upper-layer protocols (such as TCP) will initiate flow-control operations, dependent on which protocols are implemented. This flow-control action, illustrated in Figure 15-11, is typically the throttling back of data transmission, although some implementations can be designed to ignore the BECN bit and take no action. NOTE: The Cisco IOS can be configured for Frame Relay Traffic Shaping, which will act upon FECN and BECN indications. Enabling Frame Relay traffic shaping on an interface enables both traffic shaping and per-VC queuing on the interface's PVCs and SVCs. Traffic shaping enables the router to control the circuit's output rate and react to congestion notification information if it is also configured. To enable Frame-Relay Traffic shaping within the Cisco IOS on a per-VC basis, use the frame-relay traffic-shaping command. Cisco also implements a traffic control mechanism called ForeSight. ForeSight is the network traffic control software used in some Cisco switches. The Cisco Frame Relay switch can extend ForeSight messages over a UNI, passing the backward congestion notification for VCs. ForeSight allows Cisco Frame Relay routers to process and react to ForeSight messages and adjust VC level traffic shaping in a timely manner. ForeSight must be configured explicitly on both the Cisco router and the Cisco switch. ForeSight is enabled on the Cisco router when Frame Relay traffic shaping is configured. The router's response to ForeSight is not applied to any VC until the frame-relay adaptive-shaping foresight command is added to the VC's map-class. When ForeSight is enabled on the switch, the switch will periodically send out a ForeSight message based on the time value configured. The time interval can range from 40 to 5,000 milliseconds (ms). For router ForeSight to work, the following conditions must exist on the Cisco router: - Frame Relay traffic shaping must be enabled on the interface. - The traffic shaping for a circuit must be adapted to ForeSight. Frame Relay Router ForeSight is enabled automatically when the frame-relay traffic-shaping command is used. However, the map-class frame-relay command and the frame-relay adaptive-shaping foresight command must both be issued before the router will respond to ForeSight and apply the traffic shaping effect on a specific interface, subinterface, or VC. When a Cisco router receives a ForeSight message indicating that certain DLCIs are experiencing congestion, the Cisco router reacts by activating its traffic shaping function to slow down the output rate. 
The router reacts as it would if it were to detect the congestion by receiving a frame with the BECN bit set. Our next segment from Cisco Press' Network Consultants Handbook will look at Frame Relay Virtual Circuit (VC) Parameters.
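As a rough illustration of the congestion-notification behavior described in this segment, the toy Python sketch below throttles a sender's rate when BECN is seen and lets it creep back up otherwise. This is only a simplified model for intuition; it is not Cisco's traffic-shaping or ForeSight algorithm, and the rate constants and function name are invented.

def adapt_rate(rate_bps, becn_seen, floor_bps=64000, ceiling_bps=1536000):
    # Back off on a congestion notification, recover slowly otherwise.
    rate_bps = rate_bps * 0.75 if becn_seen else rate_bps * 1.10
    return max(floor_bps, min(ceiling_bps, rate_bps))

rate = 1536000.0
for becn in (False, True, True, False, False):
    rate = adapt_rate(rate, becn)
    print("BECN" if becn else "clear", round(rate / 1000), "kbps")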
<urn:uuid:5330ee74-4859-4604-9970-3220248c5b2a>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/956221/Frame-Relay-Components-part-2.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00113-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907778
1,547
2.734375
3
Linux has an interesting relationship with file systems. Because Linux is open, it tends to be a key development platform both for next-generation file systems and for new, innovative file system ideas. Two interesting recent examples include the massively scalable Ceph and the continuous-snapshotting file system NILFS2 (and, of course, evolutions in workhorse file systems such as the fourth extended file system, ext4). It's also an archaeological site for file systems of the past—DOS VFAT, Macintosh HFS, VMS ODS-2, and Plan 9's remote file system protocol. But with all of the file systems you'll find supported within Linux, there's one that generates considerable interest because of the features it implements: Oracle's Zettabyte File System (ZFS).

ZFS was designed and developed by Sun Microsystems (under Jeff Bonwick) and was first announced in 2004, with integration into Sun Solaris occurring in 2005. Although pairing the most popular open operating system with the most talked-about, feature-rich file system would seem an ideal match, licensing issues have restricted the integration. Linux is protected by the GNU General Public License (GPL), while ZFS is covered by Sun's Common Development and Distribution License (CDDL). These license agreements have different goals and introduce restrictions that conflict. Fortunately, that doesn't mean that you as a Linux user can't enjoy ZFS and the capabilities it provides. This article explores two methods for using ZFS in Linux. The first uses the Filesystem in Userspace (FUSE) system to push the ZFS file system into user space and so avoid the licensing issues. The second method is a native port of ZFS for integration into the Linux kernel while avoiding the intellectual property issues.

Calling ZFS a file system is a bit of a misnomer, as it is much more than that in the traditional sense. ZFS combines the concepts of a logical volume manager with a feature-rich and massively scalable file system. Let's begin by exploring some of the principles on which ZFS is based. First, ZFS uses a pooled storage model instead of the traditional volume-based model. This means that ZFS views storage as a shared pool that can be dynamically allocated (and shrunk) as needed. This is advantageous over the traditional model, where file systems reside on volumes and an independent volume manager is used to administer these assets. Embedded within ZFS is an implementation of an important set of features such as snapshots, copy-on-write clones, continuous integrity checking, and data protection through RAID-Z. Going further, it's possible to use your own favorite file system (such as ext4) on top of a ZFS volume. This means that you get ZFS features such as snapshots on an independent file system (one that likely doesn't support them directly).

But ZFS isn't just a collection of features that make up a useful file system. Rather, it's a collection of integrated and complementary features that make it an outstanding file system. Let's look at some of these features, and then see some of them in action. As discussed earlier, ZFS incorporates a volume-management function to abstract the underlying physical storage devices from the file system. Rather than viewing physical block devices directly, ZFS operates on storage pools (called zpools), which are constructed from virtual drives that can physically be represented by whole drives or portions of drives. Further, these pools can be constructed dynamically, even while the pool is actively in use.
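Before looking at the individual features, a toy Python sketch may help fix the pooled-storage idea: every dataset draws from one shared pool of free space rather than from a pre-sized volume. This is purely illustrative (the class and method names are invented) and is not how ZFS is implemented.

class StoragePool:
    # All datasets share one pool of free space, loosely like a zpool.
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self.datasets = {}

    def write(self, dataset, size_gb):
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.datasets[dataset] = self.datasets.get(dataset, 0.0) + size_gb
        self.used_gb += size_gb

pool = StoragePool(capacity_gb=100)
pool.write("home", 30)        # no fixed per-volume size decided up front
pool.write("projects", 50)    # every dataset consumes the same free space
print(pool.datasets, "free:", pool.capacity_gb - pool.used_gb, "GB")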
ZFS uses a copy-on-write model for managing data on the storage. This means that data is never written in place (never overwritten), but instead new blocks are written and the metadata updated to reference it. Copy-on-write is advantageous for a number of reasons (not only for some of the capabilities like the snapshots and clones that it enables). By never overwriting data, it's simpler to ensure that the storage is never left in an inconsistent state (as the older data remains after the new Write operation is complete). This allows ZFS to be transaction based, and it's much simpler to implement features like atomic operations. An interesting side effect of the copy-on-write design is that all writes to the file system become sequential writes (because remapping is always occurring). This behavior avoids hot spots in the storage and exploits the performance of sequential writes (faster than random writes). Storage pools made up of virtual devices can be protected using one of ZFS's numerous protection schemes. You can mirror a pool across two or more devices (RAID 1) protect it with parity (similar to RAID 5) but across dynamic stripe widths (more on this later). ZFS supports a variety of parity schemes based on the number of devices in the pool. For example, you can protect three devices with RAID-Z (RAID-Z 1); with four devices, you can use RAID-Z 2 (double parity, similar to RAID6). For even greater protection, you can use RAID-Z 3 with larger numbers of disks for triple parity. For speed (but no data protection other than error detection), you can employ striping across devices (RAID 0). You can also create striped mirrors (to mirror striped drives), similar to RAID 10. An interesting attribute of ZFS comes with the combination of RAID-Z, copy-on-write transactions, and dynamic stripe widths. In a traditional RAID 5 architecture, all disks must have their data within the stripe, or the stripe is inconsistent. Because there's no way to update all disks atomically, it's possible to produce the well-known RAID 5 write hole problem (where a stripe is inconsistent across the drives of the RAID set). Given ZFS transactions and never having to write in place, the write hole problem is eliminated. Another convenient quality of this approach is what happens when a disk fails and a rebuild is required. A traditional RAID 5 system uses data from other disks in the set to rebuild data for the new drive. RAID-Z traverses the available metadata to read only the data that's relevant for the geometry and avoids reading the unused space on the disk. This behavior becomes even more important as disks become larger and rebuild times increase. Although data protection provides the ability to regenerate data on a failure, it says nothing about the validity of the data in the first place. ZFS solves this issue by generating a 32-bit checksum (or 256-bit hash) for metadata for each block written. When a block is read, its checksum is verified to avoid the problem of silent data corruption. In a volume that has data protection (mirroring or RAID-Z), the alternate data can be read or regenerated automatically. Checksums are stored with metadata in ZFS, so phantom writes can be detected and—if data protection is provided (RAID-Z)—corrected. Snapshots and clones Given the copy-on-write nature of ZFS, features like snapshots and clones become simple to provide. 
Because ZFS never overwrites data but instead writes to a new location, older data can be preserved (though in the nominal case it is marked for removal to conserve disk space). A snapshot is a preservation of older blocks to maintain the state of a file system at a given instant in time. This approach is also space efficient, because no copy is required (unless all data in the file system is rewritten). A clone is a form of snapshot in which the snapshot taken is writable. In this case, original unwritten blocks are shared by each clone, and blocks that are written are available only to the specific file system clone.

Variable block sizes

Traditional file systems are made up of statically sized blocks that match the back-end storage (512 bytes). ZFS implements variable block sizes for a variety of uses (commonly up to 128KB in size, but you can change this value). One important use of variable block sizes is compression (because the resulting block size when compressed will ideally be less than the original). This functionality minimizes waste in the storage system in addition to providing better utilization of the storage network (because less data emitted to storage requires less time in transfer). Beyond compression, supporting variable block sizes also means that you can tune the block size to the particular workload expected, for improved performance.

ZFS incorporates many other features, such as de-duplication (to minimize copies of data), configurable replication, encryption, an adaptive replacement cache for cache management, and online disk scrubbing (to identify and fix latent errors while they can still be fixed, or at least to detect them when protection isn't used). It does this with immense scalability, supporting 16 exabytes of addressable storage (2^64 bytes).

Using ZFS on Linux today

Now that you've seen some of the abstract concepts behind ZFS, let's look at some of them in practice. This demonstration uses ZFS-FUSE. FUSE is a mechanism that allows you to implement file systems in user space without kernel code (other than the FUSE kernel module and existing file system code). The module provides a bridge from the kernel file system interface to user space for user and file system implementations. First, install the ZFS-FUSE package (the following demonstration targets Ubuntu). Installing ZFS-FUSE is simple, particularly on Ubuntu using apt. The following command line installs everything you need to begin using ZFS-FUSE:

$ sudo apt-get install zfs-fuse

This command installs ZFS-FUSE and all other dependent packages (in my case, libaio1), performs the necessary setup for the new packages, and starts the zfs-fuse daemon. In this demonstration, you use the loop-back device to emulate disks as files within the host operating system. To begin, create these files (using /dev/zero as the source) with the dd utility (see Listing 1). With your four disk images created, use losetup to associate the disk images with the loop devices. Listing 1.
Setup for working with ZFS-FUSE $ mkdir zfstest $ cd zfstest $ dd if=/dev/zero of=disk1.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 1.235 s, 54.3 MB/s $ dd if=/dev/zero of=disk2.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 0.531909 s, 126 MB/s $ dd if=/dev/zero of=disk3.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 0.680588 s, 98.6 MB/s $ dd if=/dev/zero of=disk4.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 0.429055 s, 156 MB/s $ ls disk1.img disk2.img disk3.img disk4.img $ sudo losetup /dev/loop0 ./disk1.img $ sudo losetup /dev/loop1 ./disk2.img $ sudo losetup /dev/loop2 ./disk3.img $ sudo losetup /dev/loop3 ./disk4.img $ With four devices available to use as your block devices for ZFS (totaling 256MB in size), create your pool using the zpool command. You use the zpool command to manage ZFS storage pools, but as you'll see, you can use it for a variety of other purposes, as well. The following command requests a ZFS storage pool to be created with four devices and provides data protection with RAID-Z. You follow this command with a list request to provide data on your pool (see Listing 2). Listing 2. Creating a ZFS pool $ sudo zpool create myzpool raidz /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 $ sudo zfs list NAME USED AVAIL REFER MOUNTPOINT myzpool 96.5K 146M 31.4K /myzpool $ You can also investigate some of the attributes of your pool, as shown in Listing 3, which represent the defaults. Among other things, you can see the available capacity and portion used. (This code has been compressed for brevity.) Listing 3. Reviewing the attributes of the storage pool $ sudo zfs get all myzpool NAME PROPERTY VALUE SOURCE myzpool type filesystem - myzpool creation Sat Nov 13 22:43 2010 - myzpool used 96.5K - myzpool available 146M - myzpool referenced 31.4K - myzpool compressratio 1.00x - myzpool mounted yes - myzpool quota none default myzpool reservation none default myzpool recordsize 128K default myzpool mountpoint /myzpool default myzpool sharenfs off default myzpool checksum on default myzpool compression off default myzpool atime on default myzpool copies 1 default myzpool version 4 - ... myzpool primarycache all default myzpool secondarycache all default myzpool usedbysnapshots 0 - myzpool usedbydataset 31.4K - myzpool usedbychildren 65.1K - myzpool usedbyrefreservation 0 - $ Now, let's actually use the ZFS pool. First, create a directory within your pool, and then enable compression within it (using the zfs set command). Next, copy a file into it. I've selected a file that's around 120KB in size to see the effect of ZFS compression. Note that your pool is mounted at the root, so treat is just like a directory within your root file system. Once the file is copied, you can list it to see that the file is present (but is the same size as the original). Using the dh command, you can see that the size of the file is half the original, indicating that ZFS has compressed it. You can also look at the compressratio property to see how much your pool has been compressed (using the default compressor, gzip). Listing 4 shows the compression. Listing 4. 
Demonstrating compression with ZFS $ sudo zfs create myzpool/myzdev $ sudo zfs list NAME USED AVAIL REFER MOUNTPOINT myzpool 139K 146M 31.4K /myzpool myzpool/myzdev 31.4K 146M 31.4K /myzpool/myzdev $ sudo zfs set compression=on myzpool/myzdev $ ls /myzpool/myzdev/ $ sudo cp ../linux-2.6.34/Documentation/devices.txt /myzpool/myzdev/ $ ls -la ../linux-2.6.34/Documentation/devices.txt -rw-r--r-- 1 mtj mtj 118144 2010-05-16 14:17 ../linux-2.6.34/Documentation/devices.txt $ ls -la /myzpool/myzdev/ total 5 drwxr-xr-x 2 root root 3 2010-11-20 22:59 . drwxr-xr-x 3 root root 3 2010-11-20 22:55 .. -rw-r--r-- 1 root root 118144 2010-11-20 22:59 devices.txt $ du -ah /myzpool/myzdev/ 60K /myzpool/myzdev/devices.txt 62K /myzpool/myzdev/ $ sudo zfs get compressratio myzpool NAME PROPERTY VALUE SOURCE myzpool compressratio 1.55x - $ Finally, let's look at the self-repair capabilities of ZFS. Recall that when you created your pool, you requested RAID-Z over the four devices. You can check the status of your pool using the zpool status command, as shown in Listing 5. As shown, you can see the elements of your pool (RAID-Z 1 with four devices). Listing 5. Checking your pool status $ sudo zpool status myzpool pool: myzpool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM myzpool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 loop0 ONLINE 0 0 0 loop1 ONLINE 0 0 0 loop2 ONLINE 0 0 0 loop3 ONLINE 0 0 0 errors: No known data errors $ Now, let's force an error into the pool. For this demonstration, go behind the scenes and corrupt the disk file that makes up the device (your disk4.img, represented in ZFS by the device). Use the dd command to simply zero out the entire device (see Listing 6). Listing 6. Corrupting the ZFS pool $ dd if=/dev/zero of=disk4.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 1.84791 s, 36.3 MB/s $ ZFS is currently unaware of the corruption, but you can force it to see the problem by requesting a scrub of the pool. As shown in Listing 7, ZFS now recognizes the corruption (of the loop3 device) and suggests an action to replace the device. Note also that the pool remains online, and you can still get to your data, as ZFS self-corrects through RAID-Z. Listing 7. Scrubbing and checking the pool $ sudo zpool scrub myzpool $ sudo zpool status myzpool pool: myzpool state: ONLINE status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Replace the device using 'zpool replace'. see: http://www.sun.com/msg/ZFS-8000-4J scrub: scrub completed after 0h0m with 0 errors on Sat Nov 20 23:15:03 2010 config: NAME STATE READ WRITE CKSUM myzpool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 loop0 ONLINE 0 0 0 loop1 ONLINE 0 0 0 loop2 ONLINE 0 0 0 loop3 UNAVAIL 0 0 0 corrupted data errors: No known data errors $ wc -l /myzpool/myzdev/devices.txt 3340 /myzpool/myzdev/devices.txt $ As recommended, introduce a new device to your RAID-Z set to act as the new container. Begin by creating a new disk image and representing it as a losetup. Note that this process is similar to adding a new physical disk to the set. You then use zpool replace to exchange the corrupted device loop3) with the new device loop4). Checking the status of the pool, you can see your new device with a message indicating that data was rebuilt on it (called resilvering), along with the amount of data moved there. Note also that the pool remains online with no errors (visible to the user). 
To conclude, you scrub the pool again; after checking its status, you'll see that no issues exist, as shown in Listing 8. Listing 8. Repairing the pool using zpool replace $ dd if=/dev/zero of=disk5.img bs=64M count=1 1+0 records in 1+0 records out 67108864 bytes (67 MB) copied, 0.925143 s, 72.5 MB/s $ sudo losetup /dev/loop4 ./disk5.img $ sudo zpool replace myzpool loop3 loop4 $ sudo zpool status myzpool pool: myzpool state: ONLINE scrub: resilver completed after 0h0m with 0 errors on Sat Nov 20 23:23:12 2010 config: NAME STATE READ WRITE CKSUM myzpool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 loop0 ONLINE 0 0 0 loop1 ONLINE 0 0 0 loop2 ONLINE 0 0 0 loop4 ONLINE 0 0 0 59.5K resilvered errors: No known data errors $ sudo zpool scrub myzpool $ sudo zpool status myzpool pool: myzpool state: ONLINE scrub: scrub completed after 0h0m with 0 errors on Sat Nov 20 23:23:23 2010 config: NAME STATE READ WRITE CKSUM myzpool ONLINE 0 0 0 raidz1 ONLINE 0 0 0 loop0 ONLINE 0 0 0 loop1 ONLINE 0 0 0 loop2 ONLINE 0 0 0 loop4 ONLINE 0 0 0 errors: No known data errors $ This short demonstration explores the consolidation of volume management with a file system and shows how easy it is to administer ZFS (even in the face of failures). Other Linux-ZFS possibilities The advantage of ZFS on FUSE is that it's simple to begin using ZFS, but it has the downside of not being efficient as it could be. This lack of efficiency is the result of the multiple user-kernel transitions required per I/O. But given the popularity of ZFS, there is another option that provides greater performance. A native port of ZFS to the Linux kernel is well under way at the Lawrence Livermore National Lab. This port still lacks some elements, such as the ZFS Portable Operating System Interface (for UNIX®) Layer, but this is under development. Their port provides a number of useful features, particularly if you're interested in using ZFS with Lustre. (See Resources for details.) Hopefully, this article has whetted your appetite to dig farther into ZFS. From the earlier demonstration, you can easily get ZFS up and running on most Linux distributions—even in the kernel, with some limitations. Topics such as snapshots and clones were not demonstrated here, but the Resources section provides links a interesting articles on this topic. In the end, Linux and ZFS are state-of-the-art technologies, and it will be difficult to keep them apart. - This exceptional presentation from Jeff Bonwick and Bill More provides a detailed overview of ZFS and why it's the last work in file systems. - You can learn more about ZFS in the various Oracle Web sites for Solaris and ZFS. The OpenSolaris ZFS community site provides useful information on ZFS and where to learn more. Wikipedia also provides a nice, compact introduction to ZFS. You can read about RAID-Z from Jeff Bonwick and the specific problems it solves over traditional RAID 5. - FUSE provides a user space framework for the development and execution of file systems. FUSE is used with ZFS, as demonstrated in this article with ZFS-FUSE, but it's also widely used as a means of experimenting with file system development. You can learn more about FUSE and file system development in Develop your own filesystem with FUSE (Sumit Singh, developerWorks, February 2006). - One of the simplest means of integrating ZFS into Linux is a straight port of the Solaris implementation, but licensing contention precludes this. You can learn more about the licenses at Wikipedia for the GPL and CDDL. 
- The FreeBSD Handbook provides a nice introduction to ZFS as it applies to BSD. - Outside of running ZFS on FUSE, there is one native implementation of ZFS within the Linux kernel. The ZFS on Linux project is growing and already provides an impressive set of features. - Although ZFS provides checksums on each block written to storage, there is also a standardized SCSI end-to-end integrity scheme called DIF. You can learn more about DIF in this presentation from Oracle on data integrity or in Linux Kernel Advances (M. Tim Jones, developerWorks, March 2009). - For anyone who has read any of Tim's other articles on developerWorks, you already know he's a fan of file systems. Check out these other articles for all aspects of Linux file - Anatomy of the Linux file system (October 2007) - Next-generation Linux file systems: NiLFS(2) and Exofs (October 2009) - Ceph: A Linux petabyte-scale distributed file system (May 2010) - Anatomy of ext4 (February 2009) - Anatomy of Linux journaling file systems (June 2008) - Anatomy of the Linux virtual file system switch (August 2009) - In the developerWorks Linux zone, find hundreds of how-to articles and tutorials, as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators. - Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics. - Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools, as well as IT industry trends. - Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers. - Follow developerWorks on Twitter, or subscribe to a feed of Linux tweets on developerWorks. Get products and technologies - Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently. - Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
<urn:uuid:16cc5b74-9af8-4ee8-8606-b5d400a08d7a>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/linux/library/l-zfs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.858585
5,484
2.90625
3
Assessing and Understanding Performance

This set of slides is based on Chapter 4, Assessing and Understanding Performance, of the book Computer Organization and Design by Patterson and Hennessy.

Here are some typical comments.
1. I want a better computer.
2. I want a faster computer.
3. I want a computer or network of computers that does more work.
4. I have the latest game in "World of Warcraft" and want a computer that can play it.

QUESTION: What does "better" mean? What does "faster" really mean, beyond its obvious meaning? What does it mean for a computer or network to do more work?

The last requirement is quite easy to assess if you have a copy of the game. Just play the game on a computer of that model and see if you get performance that is acceptable. Are the graphics realistic? The reference to the gaming world brings home a point on performance: in games one needs both a fast computer and a great graphics card. It is the performance of the entire system that matters.

The Need for Greater Performance

Aside from the obvious need to run games with more complex plots and more detailed and realistic graphics, are there any other reasons to desire greater performance? We shall study supercomputers later. At one time supercomputers, such as the Cray 2 and the CDC Cyber 205, were the fastest computers on the planet. One distinguished computer scientist gave the following definition of a supercomputer: "It is a computer that is only one generation behind the computational needs of the day." What applications require such computational power?
1. Weather modeling. We would like to have weather predictions that are accurate up to two weeks after the prediction. This requires a great deal of computational power; it was not until the 1990's that the models (and the computers to run them) began to approach the required accuracy.
2. We would like to model the flight characteristics of an airplane design before we actually build it. We have a set of equations for the flow of air over the wings and fuselage of an aircraft, and we know these produce accurate results. The only problem here is that simulations using these exact equations would take hundreds of years for a typical airplane. Hence we run approximate solutions and hope for faster computers that allow us to run less-approximate solutions.

The Job Mix

The World of Warcraft example above is a good illustration of a job mix. Here the job mix is simple: just one job – run this game. This illustrates the very simple statement "The computer that runs your job fastest is the one that runs your job fastest". In other words, measure its performance on your job. Most computers run a mix of jobs; this is especially true for computers owned by a company or research lab, which service a number of users. In order to assess the suitability of a computer for such an environment, one needs a proper "job mix", which is a set of computer programs that represents the computing needs of one's organization.

Your instructor once worked at Harvard College Observatory. The system administrator for our computer lab spent considerable effort in selecting a proper job mix, with which he intended to test computers that were candidates for replacing the existing one. This process involved detailed discussions with a large number of users, who (being physicists and astronomers) all assumed that they knew more about computers than he did. Remember that this was in the 1970's; the purchase was for a few hundred thousand dollars. These days, few organizations have the time to specify a good job mix.
The more common option is to use the results of commonly available benchmarks, which are job mixes tailored to common applications.

What Is Performance?

Here we make the observation that the terms "high performance" and "fast" have various meanings, depending on the context in which the computer is used. In many applications, especially the old "batch mode" computing, the measure was the number of jobs per unit time. The more user jobs that could be processed, the better. For a single computer running spreadsheets, the speed might be measured in the number of calculations per second. For computers that support process monitoring, the requirement is that the computer correctly assesses the process and takes corrective action (possibly to include warning a human operator) within the shortest possible time. Some systems, called "hard real time", are those in which there is a fixed time interval during which the computer must produce an answer or take corrective action. Examples of this are patient monitoring systems in hospitals and process controls in oil refineries.

As an example, consider a process monitoring computer with a required response time of 15 seconds. There are two performance measures of interest.
1. Can it respond within the required 15 seconds? If not, it cannot be used.
2. How many processes can be monitored while guaranteeing the required 15 second response time for each process being monitored? The more, the better.

The Average (Mean) and Median

In assessing performance, we often use one or more tools of arithmetic. We discuss these here. The problem starts with a list of N numbers (A1, A2, …, AN), with N ≥ 2. For these problems, we shall assume that all are positive numbers: AJ > 0, 1 ≤ J ≤ N.

Mean and Median

The most basic measures are the average (mean) and the median. The average is computed by the formula (A1 + A2 + … + AN) / N. The median is the "value in the middle": half of the values are larger and half are smaller. For a small even number of values, there might be two candidate median values. For most distributions, the mean and the median are fairly close together. However, the two values can be wildly different. For example, consider a high-school class in the state of Washington whose graduates include Bill Gates: the average income of the class is enormous, yet it says little about the typical graduate. If the salaries of Bill Gates and one of his cohorts at Microsoft are removed, the average becomes about $50,000. This value is much closer to the median. Disclaimer: This example is from memory, so the numbers are not exact.

In certain averages, one might want to pay more attention to some values than others. For example, in assessing an instruction mix, one might want to give a weight to each instruction that corresponds to its percentage in the mix. Each of our numbers (A1, A2, …, AN), with N ≥ 2, has an associated weight. So we have (A1, A2, …, AN) and (W1, W2, …, WN). The weighted average is given by the formula (W1·A1 + W2·A2 + … + WN·AN) / (W1 + W2 + … + WN). NOTE: If all the weights are equal, this value becomes the arithmetic mean.

Consider the table adapted from our discussion of RISC computers. The weighted average 0.532·TA + 0.038·TL + 0.086·TC + 0.278·TI + 0.047·TG is used to assess an ISA (Instruction Set Architecture) to support this job mix.

The Geometric Mean and the Harmonic Mean

The geometric mean is the Nth root of the product: (A1·A2·…·AN)^(1/N). It is generally applied only to positive numbers, as we are considering here. Some of the SPEC benchmarks (discussed later) report the geometric mean.
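A short, illustrative Python check of the definitions above (the sample values and the per-class times are made up for the example; only the 0.532/0.038/0.086/0.278/0.047 weights come from the text):

import statistics

values = [3, 5, 5, 8, 100]                  # one large outlier
print(statistics.mean(values))               # 24.2 -- dragged up by the outlier
print(statistics.median(values))             # 5    -- the "value in the middle"

def weighted_average(values, weights):
    return sum(w * a for w, a in zip(weights, values)) / sum(weights)

weights = [0.532, 0.038, 0.086, 0.278, 0.047]
times = [2.0, 4.0, 3.0, 5.0, 6.0]            # hypothetical TA, TL, TC, TI, TG
print(weighted_average(times, weights))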
The harmonic mean is N / ( (1/A1) + (1/A2) + … + (1/AN) ). This is more useful for averaging rates or speeds. As an example, suppose that you drive at 40 miles per hour for half the distance and 60 miles per hour for the other half. Your average speed is 48 miles per hour. If you drive 40 mph for half the time and 60 mph for half the time, the average is 50.

Drive 300 miles at 40 mph. The time taken is 7.5 hours. Drive 300 miles at 60 mph. The time taken is 5.0 hours. You have covered 600 miles in 12.5 hours, for an average speed of 48 mph. But: Drive 1 hour at 40 mph. You cover 40 miles. Drive 1 hour at 60 mph. You cover 60 miles. That is 100 miles in 2 hours; 50 mph.

Measuring Execution Time

Whatever it is, performance is inversely related to execution time. The longer the execution time, the less the performance. Again, we assess a computer's performance by measuring the time to execute either a single program or a mix of computer programs, the "job mix", that represents the computing work done at a particular location. For a given computer X, let P(X) be the performance measure, and E(X) be the execution time measure. What we have is P(X) = K / E(X), for some constant K. The book has K = 1. This raises the question of how to measure the execution time of a program. Start the program and note the time on the clock. When the program ends, again note the time on the clock. The difference is the execution time. This is the easiest measure to obtain, but the time measured may include time that the processor spends on other users' programs (it is time-shared), on operating system functions, and similar tasks. Nevertheless, it can be a useful measure. Some people prefer to measure the CPU Execution Time, which is the time the CPU spends on the single program. This is often difficult to estimate based on clock time.

Reporting Performance: MIPS, MFLOPS, etc.

While we shall focus on the use of benchmark results to report computer performance, we should note that there are a number of other measures that have been used. The first is MIPS (Million Instructions Per Second). Another measure is the FLOPS sequence, commonly used to specify the performance of supercomputers, which tend to use floating-point math fairly heavily. The sequence is:
MFLOPS  Million Floating Point Operations Per Second
GFLOPS  Billion Floating Point Operations Per Second (the term "giga" is the standard prefix for 10^9)
TFLOPS  Trillion Floating Point Operations Per Second (the term "tera" is the standard prefix for 10^12)

The reason we do not have "BLOPS" is a difference in the usage of the word "billion". In American English, the term "billion" indicates "one thousand million". In some European countries the term "billion" indicates "one million million". This was first seen in the high energy physics community, where the unit of measure "BeV" was replaced by "GeV", but not before the Bevatron was named. I leave it to the reader to contemplate why the measure "MIPS" was not extended to either "BIPS" or "GIPS", much less "TIPS".

Using MIPS as a Performance Measure

There are a number of reasons why this measure has fallen out of favor. One of the main reasons is that the term "MIPS" had its origin in the marketing departments of IBM and DEC, to sell the IBM 370/158 and VAX-11/780. One wag has suggested that the term "MIPS" stands for "Meaningless Indicator of Performance for Salesmen". A more significant reason for the decline in the popularity of the term "MIPS" is the fact that it just measures the number of instructions executed and not what those instructions do.
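Returning to the averaging formulas for a moment, here is a quick numeric check of the 40/60 mph example in Python (the function names are mine):

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

def arithmetic_mean(values):
    return sum(values) / len(values)

speeds = [40.0, 60.0]
print(harmonic_mean(speeds))     # 48.0 -- equal distances at each speed
print(arithmetic_mean(speeds))   # 50.0 -- equal time at each speed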
Part of the RISC movement was to simplify the instruction set so that instructions could be executed at a higher clock rate. As a result these instructions are less complex and do less than equivalent instructions in a more complex instruction set. For example, consider the instruction A[K++] = B as part of a loop. The VAX supports an auto-increment mode, which uses a single instruction to store the value into the array and increment the index. Most RISC designs require two instructions to do this. Put another way, in order to do an equivalent amount of work a RISC machine must run at a higher MIPS rating than a CISC machine. The MIPS measure does have merit when comparing two computers with the same instruction set architecture.

GFLOPS, TFLOPS, and PFLOPS

The term "PFLOP" stands for "PetaFLOP" or 10^15 floating-point operations per second. This is the next goal for the supercomputer community. The supercomputer community prefers this measure, due mostly to the great importance of floating-point calculations in their applications. A typical large-scale simulation may devote 70% to 90% of its resources to floating-point calculations. The supercomputer community has spent quite some time in an attempt to give a precise definition to the term "floating point operation". As an example, consider the following code fragment, which is written in a standard variety of the FORTRAN language.

      DO 200 J = 1, 10
  200 C(J) = A(J)*B(J) + X

The standard argument is that the loop represents only two floating-point operations: the multiplication and the addition. I can see the logic, but have no other comment. The main result of this measurement scheme is that it will be difficult to compare the performance of the historic supercomputers, such as the CDC Cyber 205 and the Cray 2, to the performance of the modern day "champs" such as a 4 GHz Pentium 4. In another lecture we shall investigate another use of the term "flop" in describing the classical supercomputers, when we ask "What happened to the Cray 3?"

CPU Performance and Its Factors

The CPU clock cycle time is possibly the most important factor in determining the performance of the CPU. If the clock rate can be increased, the CPU is faster. In modern computers, the clock cycle time is specified indirectly by specifying the clock cycle rate. Common clock rates (speeds) today include 2 GHz, 2.5 GHz, and 3.2 GHz. The clock cycle times, when given, need to be quoted in picoseconds, where 1 picosecond = 10^-12 second and 1 nanosecond = 1000 picoseconds. The following table gives approximate conversions between clock rates and clock times.

Rate      Clock Time
1 GHz     1 nanosecond = 1000 picoseconds
2 GHz     0.5 nanosecond = 500 picoseconds
2.5 GHz   0.4 nanosecond = 400 picoseconds
3.2 GHz   0.3125 nanosecond = 312.5 picoseconds

We shall return later to factors that determine the clock rate in a modern CPU. There is much more to the decision than "crank up the clock rate until the CPU fails and then back it off a bit".

CPI: Clock Cycles per Instruction

This is the number of clock cycles, on average, that the CPU requires to execute an instruction. We shall later see a revision to this definition. Consider the Boz-5 design used in my offerings of CPSC 5155. Each instruction would take either 1, 2, or 3 major cycles to execute: 4, 8, or 12 clock cycles. In the Boz-5, the instructions referencing memory could take either 8 or 12 clock cycles; those that did not reference memory took 4 clock cycles.
A typical Boz-5 instruction mix might have 33% memory references, of which 25% would involve indirect references (requiring all 12 clock cycles). The CPI for this mix is CPI = (2/3)·4 + (1/3)·(3/4·8 + 1/4·12) = (2/3)·4 + (1/3)·(6 + 3) = 8/3 + 3 = 17/3 ≈ 5.67. The Boz-5 has such a high CPI because it is a traditional fetch-execute design. Each instruction is first fetched and then executed. Adding a prefetch unit to the Boz-5 would allow the next instruction to be fetched while the present one is being executed. This removes the three clock cycle penalty for instruction fetch, so that the number of clock cycles per instruction may now be either 1, 5, or 9. For this design, CPI = (2/3)·1 + (1/3)·(3/4·5 + 1/4·9) = 2/3 + (1/3)·(15/4 + 9/4) = 2/3 + (1/3)·(24/4) = 2/3 + (1/3)·6 = 2.67. This is still slow.

A Preview of the RISC Design Problem

We shall discuss this later, but might as well bring it up now. The RISC designs focus on lowering the CPI, with the goal of one clock cycle per instruction. There are two measures that must be considered here: (Clock cycles per instruction)·(Clock cycle time), and (Machine language instructions per high-level instruction)·(Clock cycles per instruction)·(Clock cycle time). The second measure is quite important. It is common for the CISC designs (those, such as the VAX, that are not RISC) to do poorly on the first measure, but well on the second, as each high-level language statement generates fewer assembly language statements. The textbook's measure that is equivalent to this second measure is (Instruction count)·(CPI)·(Clock cycle time). This really focuses on the same issue as my version of the measure.

Now, from the comedy department: some wags label the computers of the VAX design line as "VAXen". We get our laughs where we can find them.

More on Benchmarks

As noted above, the best benchmark for a particular user is that user's job mix. Only a few users have the resources to run their own benchmarks on a number of candidate computers. This task is now left to larger test labs. These test labs have evolved a number of synthetic benchmarks to handle the problem. These benchmarks are job mixes intended to be representative of real job mixes. They are called "synthetic" because they usually do not represent an actual workload.

The Whetstone benchmark was first published in 1976 by Harold J. Curnow and Brian A. Wichman of the British National Physical Laboratory. It is a set of floating-point intensive applications with many calls to library routines for computing trigonometric and exponential functions. Supposedly, it represents a scientific work load. Results of this are reported either as KWIPS (Thousand Whetstone Instructions per Second) or MWIPS (Million Whetstone Instructions per Second).

The Linpack (Linear Algebra Package) benchmark is a collection of routines that solve linear equations using double-precision floating-point arithmetic. It was published in 1984 by a group from Argonne National Laboratory. Originally in FORTRAN 77, it has been rewritten in both C and Java. These results are reported in FLOPS: GFLOPS, TFLOPS, etc. (See a previous slide.)

Games People Play (with Benchmarks)

Synthetic benchmarks (Whetstone, Linpack, and Dhrystone) are convenient, but easy to fool. These problems arise directly from the commercial pressures to quote good benchmark numbers in advertising copy. This problem is seen in two areas.
1. Compiler writers can equip their compilers with special switches to emit code that is tailored to optimize a given benchmark at the cost of slower performance on a more general job mix.
“Just get us some good numbers!” 2. The benchmarks are usually small enough to be run out of cache memory. This says nothing of the efficiency of the entire memory system, which must include cache memory, main memory, and support for virtual memory. Our textbook mentions the 1995 Intel special compiler that was designed only to excel in the SPEC integer benchmark. Its code was fast, but incorrect. Bottom Line: Small benchmarks invite companies to fudge their results. Of course they would say “present our products in the best light.” More Games People Play (with Benchmarks) Your instructor recalls a similar event in the world of Lisp machines. These were specialized workstations designed to optimize the execution of the LISP language widely used in artificial intelligence applications. The two “big dogs” in this arena were Symbolics and LMI (Lisp Machines Incorporated). Each of these companies produced a high–performance Lisp machine based on a microcoded control unit. These were the Symbolics–3670 and the LMI–1. Each company submitted its product for testing by a graduate student who was writing his Ph.D. dissertation on benchmarking Lisp machines. Between the time of the first test and the first report at formal conference, LMI customized its microcode for efficient operation on the benchmark code. At the original test, the LMI–1 had only 50% of the performance of the Symbolics–3670. By the time of the conference, the LMI–1 was now “officially” 10% faster. The managers from Symbolics, Inc. complained loudly that the new results were meaningless. However, these complaints were not necessary as every attendee at the conference knew what had happened and acknowledged the Symbolics–3670 as faster. design survived the Intel 80386, which was much cheaper than the specialized machines, had an equivalent GUI, and was much cheaper. At the time, the costs were about $6,000 vs. about $100,000. The SPEC Benchmarks The SPEC (Standard Performance Evaluation Corporation) was founded in 1988 by a consortium of computer manufacturers in cooperation with the publisher of the trade magazine The Electrical Engineering Times. (See www.spec.org) As of 2007, the current SPEC benchmarks were: 1. CPU2006 measures CPU throughput, cache and memory access speed, and compiler efficiency. This has two components: SPECint2006 to test integer processing, and SPECfp2006 to test floating point processing. 2. SPEC MPI 2007 measures the performance of parallel computing systems and clusters running MPI (Message–Passing Interface) applications. 3. SPECweb2005 a set of benchmarks for web servers, using both HTTP and HTTPS. 4. SPEC JBB2005 a set of benchmarks for server–side Java performance. 5. SPEC JVM98 a set of benchmarks for client–side Java performance. 6. SPEC MAIL 2001 a set of benchmarks for mail servers. The student should check the SPEC website for a listing of more benchmarks. The SPEC CINT2000 Benchmark Suite The CINT2000 is an earlier integer benchmark suite. It evolved into SPECint2006. Here is a listing of its main components. The SPEC CFP2000 Benchmark Suite The CFP2000 is an earlier integer benchmark suite. It evolved into SPECfp2006. Here is a listing of its main components. Some Concluding Remarks on Benchmarks First, remember the great temptation to manipulate benchmark results for commercial advantage. As the Romans said, “Caveat emptor”. Also remember to read the benchmark results skeptically and choose the benchmark that most closely resembles your own workload. 
Finally, do not forget Amdahl's Law, which computes the improvement in overall system performance due to the improvement in a specific component. This law was formulated by George Amdahl in 1967. One formulation of the law is given in the following equation.

S = 1 / [ (1 - f) + (f / k) ]

where S is the speedup of the overall system, f is the fraction of work performed by the faster component, and k is the speedup of the new component. It is important to note that as the new component becomes arbitrarily fast (k → ∞), the speedup approaches the limit S∞ = 1 / (1 - f). If the newer and faster component does only 50% of the work, the maximum speedup is 2. The system will never exceed being twice as fast due to this modification.
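A small Python sketch of Amdahl's Law as given above, confirming the 50%-of-the-work limit (the function name is mine):

def amdahl_speedup(f, k):
    # S = 1 / ((1 - f) + f / k)
    return 1.0 / ((1.0 - f) + f / k)

for k in (2, 10, 100, 1000000):
    print("f=0.5, k =", k, "-> S =", round(amdahl_speedup(0.5, k), 3))
# Even as k grows without bound, S approaches 1 / (1 - 0.5) = 2.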
<urn:uuid:5ed139f8-a6db-4f4b-bb7b-64edf92095ff>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155_Slides/Chapter13/AssessingPerformance.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926635
5,273
3.609375
4
SecureWorks reported that attempted hacker attacks launched at its healthcare clients doubled in the fourth quarter of 2009. Attempted attacks increased from an average of 6,500 per healthcare client per day in the first nine months of 2009 to an average of 13,400 per client per day in the last three months of 2009. In the Fall of 2009, the security community began tracking a new wave of attacks involving the latest version of the Butterfly/Mariposa Bot malware. If a computer is infected with the Butterfly malware, it can be used to steal data stored by the victim’s browser (including passwords), launch DDoS attacks, spread via USB devices or peer to peer, and download additional malware onto the infected computer. SQL Injection attacks target vulnerabilities in organizations’ web applications. “We also saw a resurgence of SQL Injection attacks beginning in October,” said Hunter King, security researcher with SecureWorks. “They were being launched at legitimate websites so as to spread the Gumblar Trojan. Although SQL Injection is a well known attack technique, we continue to read news reports where it has been used successfully by cyber criminals to steal sensitive data,” said King. One of the most recent cases reported involved American citizen Albert Gonzalez who was charged, along with two unnamed Russians, with the theft of 130 million credit card numbers using SQL Injection. Factors contributing to healthcare attacks: 1. Valuable data stores – Healthcare organizations often store valuable data such as a patient’s Social Security number, insurance and/or financial account data, birth date, name, billing address, and phone, making them a desirable target to cyber criminals. 2. Large attack landscape – Because of the nature of their business, healthcare organizations have large attack surfaces. Healthcare entities have to provide access to many external networks and web applications so as to stay connected with their patients, employees, insurers and business partners. This increases their risk to cyber attacks.
<urn:uuid:d63f2162-e36a-4298-91c2-0ff9eae96a4e>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2010/01/27/hacker-attacks-on-healthcare-organizations-double/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00049-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947585
398
2.546875
3
The Internet of Things wants to connect you to the future This feature first appeared in the Summer 2014 issue of Certification Magazine. The term Internet of Things, or IoT, was coined in 1999 by Kevin Ashton. Ashton, a British technologist who co-founded the RFID-pioneering (radio frequency identification) Auto-ID Center at Massachusetts Institute of Technology (MIT), used the term to envision a world where the internet would be connected to everyday objects via sensors. Today, objects can and do collect, receive and send information, both to users and other connected objects. Over 100 million vending machines, vehicles, smoke alarms and more are sharing information. Market analysts at Berg Insight expect this number to rise to 360 million by 2016. An M2M (Machine-to-Machine) device can assist with inventory control, or alert a technician to broken equipment. Take photocopiers, for example: An M2M chip can order fresh toner and paper automatically. When the photocopier goes down (a monthly occurrence in many offices), the M2M chip can alert a repair technician, and even tell him which parts to bring. Manufacturers of products are climbing onto the IoT bandwagon as end-users clamor for this embedded technology to help businesses thrive, or improve quality of live. Many homes, for instance, depend on a sump pump to remove ground water from basements or crawl spaces. When the sump goes out, water damage can occur quickly. Now imagine that the same vital pump is equipped with a chip that relays operating information to an app on the homeowner’s phone. Near instantaneous notification of an impending failure could save the homeowner thousands of dollars. The future holds countless opportunities for software and hardware engineers to bring the IoT to more devices. The device itself needs a hardware engineer, while the app on the phone requires a software engineer. Networking, however, is what brings them together to drive the bottom line. In an advertisement titled “Create the Internet of Your Things,” Microsoft encourages business owners to “empower your business and gain a competitive edge by connecting data from devices and sensors with the cloud and business intelligence tools. Small things have the potential for big impact. The Internet of Things promises vast opportunities, but it also poses challenges for businesses that seek to take action and realize tangible results as it can seem overwhelming, complicated, and expensive.” Using an elevator company to demonstrate its message, Microsoft has posted a short video to illustrate how IoT technology can help businesses. As new technologies are born, new certifications are needed to validate a candidate’s grasp of that new subject. Existing certification exams are frequently modified to accommodate technology trends. CompTIA’s A+ certification has evolved through the years, once requiring that candidates understand how floppy disks function, or the proper procedure for installing EISA (Extended Industry Standard Architecture) cards. These questions have been removed from the A+ exam in favor of the more relevant SSD (solid state drive) and the ubiquitous USB drive. The basis of all data communication, however, comes down to a candidate’s knowledge of networking. Back in 1999, candidates seeking the coveted MCSE (Microsoft Certified Systems Engineer) credential had to pass an entire exam on TCP/IP alone. 
This was a grueling exam that required the candidate to configure CIDR (classless inter-domain routing) and VLSM (variable length subnet mask) without a calculator. Today, there is no TCP/ IP exam, since that subject is embedded within almost every computer certification exam. In 1996, Bill Gates said, “There might as well be an electricity division at Microsoft: The Internet will be everywhere, in everything we make.” Like electricity, TCP/ IP is in everything now, and TCP/IP means networking. Remember IPv4, the standard for assigning a unique 32-bit address to every device on the internet? “Remember it,” you say, “I’m still using it!” Not for long: With IPv4 limited to a mere 4,294,967,296 addresses, the IoT will demand a thorough conversion to IPv6. In 1994, a Band-Aid called NAT (network address translation) added some years to the life of IPv4, thanks to the clever people at the IETF (Internet Engineering Task Force). IPv6, on the other hand, can accommodate 340,282,366,920,938,463,463,374,607,431,768,211,456 unique addresses. That’s a big number, so allow me to restate it this way: IPv6 can provide many trillions of addresses to each human on the planet. Here’s another way to get your head around the immensity of that number: Draw a box on a piece of paper that measures 1.6 inches square. Put all of the IPv4 addresses into that box. Using the same scale, you would need to draw a box the size of our solar system to fit all of the IPv6 addresses. Game over; IPv6 wins. A certification candidate’s mastery of networking concepts is crucial, and IoT is making it even more so. The term “Internet of Things” intimates that tangible things are involved, but this may be a misnomer. Cisco prefers the term IoE, or the Internet of Everything. According to Cisco, “More than 99 percent of our world remains unconnected. Tomorrow, we will be connected to almost everything. Thirty-seven billion devices will be connected to the Internet by 2020, from trees to water to cars; the organic and the digital will work together for a more intelligent and connected world. If traffic, transportation, networking, and space exploration depend on digital information sharing, how will that information be identified from its source to its destination?” Are you prepared for IoT? Do you want to have a hand in connectivity that extends from orchards to space shuttles? Networking certification can punch your ticket to the future of IT. Get started today!
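A quick way to sanity-check the address-space arithmetic above is a few lines of Python (the world-population figure is an assumption used only for scale):

ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(format(ipv4, ","))   # 4,294,967,296
print(format(ipv6, ","))   # 340,282,366,920,938,463,463,374,607,431,768,211,456
world_population = 8_000_000_000
print(ipv6 // world_population)   # roughly 4 x 10^28 addresses per person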
ENISA calls for a joint effort between end-users and service providers to protect our online identity In the cyber world our identity is reflected by our usernames and passwords. For users, keeping their passwords safe is vital to avoid security incidents such as identity theft. But online service providers (who store usernames and passwords) are expected to do the same. Problems arise when security is compromised at either end of the chain. Passwords protect sensitive information – whether it be financial or health data, private material, intellectual property, customer lists, etc. Yet, just halfway through 2012, data breaches have already exposed millions of citizens’ personal data including password information. ENISA is urging service providers to take preventive actions to better protect sensitive data. More information on how service providers should improve the safety of their users’ information, prevent data leaks and offer a more secure service to citizens is contained in this pdf download below:
STOP RUN terminates the run unit and deletes all dynamically called programs in the run unit, along with all programs link-edited with them. (It does not delete the main program.) STOP RUN does not have to be the last statement in a sequence, but any statements following it will not be executed. STOP RUN also closes all files defined in any of the programs.

GOBACK specifies the logical end of a called program or invoked method. It should appear as the only statement, or as the last of a series of imperative statements, in a sentence, because any statements following the GOBACK are not executed.

In COBOL, once a STOP RUN is reached, control leaves the program entirely; with GO TO, control passes to the statement named in the GO TO.

STOP RUN can only be used in the main program. When executed, it returns control to the operating system. GOBACK can be used both in the main program and in a subprogram: it returns control either to the calling program or to the operating system. Statements following a GOBACK are not executed, just as statements following STOP RUN are not executed. In a subprogram, GOBACK functions like EXIT PROGRAM.
Suraud J.-P.,CNRS Biometry and Evolutionary Biology Laboratory | Fennessy J.,Giraffe Conservation Foundation | Bonnaud E.,University Paris - Sud | Issa A.M.,Direction de la Faune | And 2 more authors. ORYX | Year: 2012 Abstract The West African giraffe is a genetically unique population represented only by the subspecies Giraffa camelopardalis peralta, categorized as Endangered on the IUCN Red List. These giraffes live outside protected areas, without natural predators and share their habitat with local people and their livestock. This study provides demographic data on this poorly studied megaherbivore and documents its recovery. We analysed the results of photo-identification censuses from 1996 to 1999 (count data) and from 2005 to 2008 (count and demographic data). From 1996 to 1999 the annual growth rate was c. 19% because of an unbalanced population structure after a period of severe poaching. From 2005 to 2008 an annual growth rate of c. 12-13% was estimated from both count data and demographic parameters. This value fits with the maximum growth rate calculated for a browser species based on the allometric relationship linking growth rate and body mass. During the period 2005-2008 adult and subadult females had a constant survival rate of 0.94 and a constant recapture rate of 0.97. Annual calf survival rate was 1. Observed sex ratio at birth was 0.57 and mean reproductive success was 0.257. Generation time was estimated to be 9.66 years. This spectacular population growth was mostly attributed to the absence of predators and the ongoing monitoring to limit illegal hunting. © 2012 Fauna & Flora International. Source News Article | September 9, 2016 For quite some time, giraffes were all thought to belong to a single species that was divided into several sub-species. However, as it turns out, we've not been entirely accurate about the world's tallest land animal from the very beginning. In a recent study published in the journal Current Biology, it has been revealed that, rather than one species of giraffe, which is split up into several sub-species, there are actually four species of the animal, mirroring the genetic differences observed in polar bears and brown bears. "We were extremely surprised," said conservationist Julian Fennessy, co-founder of the Giraffe Conservation Foundation and lead author of the study. He also noted that the conservation implications are immense, and that their findings will hopefully help put giraffe conservation on the map. To be fair, there were already indications that all of these giraffes could be different species, but there was nothing distinct enough about them that would definitively prove it. For example, the reticulated giraffe of Somalia, with its polygonal, liver-colored spots, can be easily distinguished from the Rothschild's giraffe of Uganda and Kenya, with patches that are not as sharply defined. Similarly, while the Rothschild's giraffe and Masai giraffe of Kenya and Tanzania are similarly marked, a close look at their skulls reveals that the former has five ossicones rather than the usual three — in fact, this feature is unique to the Rothschild's giraffe. However, it wasn't until the Giraffe Conservation Foundation was looking into the potential results of different giraffe subspecies mixing together when they're moved into protected areas, that they realized there was more to giraffes than just their looks. 
Following a process that took almost seven years, during which time 190 tissue samples were collected, an analysis of nuclear genetic markers and mitochondrial DNA revealed that giraffes can effectively be divided into four species: the southern giraffe (Giraffa giraffa), which has a population of about 52,000; the Masai giraffe (Giraffa tippelskirchi), with an approximate population of 32,500; the reticulated giraffe (Giraffa reticulata), which numbers to about 8,700; and the northern giraffe (Giraffa camelopardalis), with about a 4,750 population. While monumental, the discovery also revealed something rather disconcerting: the reticulated giraffe and northern giraffe are in a rather precarious position. Though not considered endangered, they are dangerously close to that point, and these results should encourage the International Union for the Conservation of Nature (IUCN), which classifies giraffes as a single species, to take a stronger stance in their protection. "Northern giraffe number less than 4,750 individuals in the wild, and reticulated giraffe number less than 8,700 individuals — as distinct species, it makes them some of the most endangered large mammals in the world," Fennessy said in a press release. Fennessy said the biggest threats to the giraffe population include destruction of their habitat due to human population growth as well as poaching for bush meat, tail hair and "medicinal" parts. © 2016 Tech Times, All rights reserved. Do not reproduce without permission. News Article | September 10, 2016 Giraffes appear to be one single creature but findings of a new study have revealed that there are actually four different species of the long-necked mammals. Geneticist Axel Janke, from the Senckenberg Biodiversity and Climate Research Centre and Goethe University in Germany, said that the findings change the status of the animals in terms of how endangered their species are. About a third of the giraffe population was lost over the past three decades alone but the International Union for Conservation of Nature (IUCN) does not consider giraffes as endangered. They remain classified as least concern. The discovery that the animal is composed of at least four different species, which was published in the journal Current Biology on Thursday, Sept. 8, can have crucial implications in giraffe conservation campaigns. Giraffes taken as one number nearly 100,000 but when they are considered to be four separate species, the animals would appear to be in more dire need of help and support. The southern giraffe only numbers about 52,000, the reticulated giraffe has a population of about 8,700, the Masai giraffe has about 32,500 individuals and the northern giraffe has a bleak number of only 4,750. Scientists also said that the different species are about as distinct as polar bears and brown bears and because populations are composed of different species, giraffes could not reproduce with one another. "They normally don't hybridize and have fertile offspring in nature," Janke said. Conservationists would have to take into account that the different species do not commonly crossbreed when planning for strategies that aim to help improve the number of the animals. "The remaining former giraffe subspecies cluster genetically into four highly distinct groups, and we suggest that these should be recognized as discrete species," Janke and colleagues wrote in their study. 
"The conservation implications are obvious, as giraffe population numbers and habitats across Africa continue to dwindle due to human-induced threats." The population of the giraffe has long been declining. The decline in their population is widely blamed on habitat loss, excessive hunting and poaching. The skin of the giraffe is used for clothing items and in some countries like Tanzania, the hunt is driven by beliefs that some parts of the animal can treat HIV infection. "With now four distinct species, the conservation status of each of these can be better defined and in turn added to the IUCN Red List," said Julian Fennessy, from the Giraffe Conservation Foundation in Namibia. Nubian giraffes are seen in Murchison Falls, Uganda in this undated handout picture. Courtesy Julian Fennessy/Handout via REUTERS WASHINGTON (Reuters) - Genetic research on the world's tallest land animal has found that there are four distinct species of giraffe, not just one as long believed, with two of them at alarmingly low population levels. Scientists on Thursday unveiled a comprehensive genetic analysis of giraffes using DNA from 190 of the towering herbivores from across their range in Africa. The genetic data showed that four separate species of giraffes that do not interbreed in the wild inhabit various parts of the continent. "We were extremely surprised," said conservationist Julian Fennessy, co-director of the Namibia-based Giraffe Conservation Foundation. Beyond genetics, the researchers identified differences among the four species including body shape, coloration and coat patterns. Genetic differences among the four species were comparable to those between polar bears and brown bears, said geneticist Axel Janke of the Senckenberg Biodiversity and Climate Research Centre and Goethe University in Germany. Until now, scientists had recognized a single species, with the scientific name Giraffa camelopardalis. The study identified the four separate species as: the southern giraffe (Giraffa giraffa), with a population of 52,000; the Masai giraffe (Giraffa tippelskirchi), with 32,500; the reticulated giraffe (Giraffa reticulata), with 8,700; and the northern giraffe (Giraffa camelopardalis), with 4,750. "The conservation implications are immense and our findings will hopefully help put giraffe conservation on the map," Fennessy said. The giraffe currently is not listed as endangered, although its population has declined dramatically over the past three decades from more than 150,000 to fewer than 100,000, the researchers said. But the low population levels of the northern giraffe and reticulated giraffe make them some of the world's most endangered large mammals and of high conservation importance, Fennessy said. Giraffes stand up to about 18 feet (5.5 meters) tall, with long necks and legs, a sloped back and two to five short knobs called ossicones atop the head. They have a tan, white or yellowish coat blotched with brownish patches. They roam the savannas of central, eastern and southern Africa, as far north as Chad, south to South Africa, east to Somalia and west to Niger. Fennessy said the biggest threats to the giraffe include habitat destruction due to human population growth as well as poaching for bush meat, their tail hair and "medicinal" parts. Their closest relative is the long-necked African mammal called the okapi. The research was published in the journal Current Biology. 
Bercovitch F.B.,Kyoto University | Deacon F.,Giraffe Conservation Foundation | Deacon F.,University of the Free State African Journal of Ecology | Year: 2015 Giraffe are popular animals to watch while on wildlife safaris, and feature prominently in zoos, advertisements, toys and cartoons. Yet, until recently, few field studies have focused on giraffe. We introduce this giraffe topic issue with a review essay that explores five primary questions: How many (sub) species of giraffe exist? What are the dynamics of giraffe herds? How do giraffe communicate? What is the role of sexual selection in giraffe reproduction? How many giraffe reside in Africa? A confluence of causes has produced drastic declines in giraffe population numbers in Africa, and we conclude that guiding giraffe conservation plans depends upon evaluation of the five key quandaries that we pose. © 2015 John Wiley & Sons Ltd. Source
3.1.5 How large a key should be used in the RSA cryptosystem? The size of a key in the RSA algorithm typically refers to the size of the modulus n. The two primes, p and q, which compose the modulus, should be of roughly equal length; this makes the modulus harder to factor than if one of the primes is much smaller than the other. If one chooses to use a 768-bit modulus, the primes should each have length approximately 384 bits. If the two primes are extremely close1 or their difference is close to any predetermined amount, then there is a potential security risk, but the probability that two randomly chosen primes are so close is negligible. The best size for a modulus depends on one's security needs. The larger the modulus, the greater the security, but also the slower the RSA algorithm operations. One should choose a modulus length upon consideration, first, of the value of the protected data and how long it needs to be protected, and, second, of how powerful one's potential threats might be. A good analysis of the security obtained by a given modulus length is given by Rivest [Riv92a], in the context of discrete logarithms modulo a prime, but it applies to the RSA algorithm as well. A more recent study of RSA key-size security can be found in an article by Odlyzko [Odl95]. Odlyzko considers the security of RSA key sizes based on factoring techniques available in 1995 and on potential future developments, and he also considers the ability to tap large computational resources via computer networks. In 1997, a specific assessment of the security of 512-bit RSA keys shows that one may be factored for less than $1,000,000 in cost and eight months of effort [Rob95c]. Indeed, the 512-bit number RSA-155 was factored in seven months during 1999 (see Question 2.3.6). This means that 512-bit keys no longer provide sufficient security for anything more than very short-term security needs. RSA Laboratories currently recommends key sizes of 1024 bits for corporate use and 2048 bits for extremely valuable keys like the root key pair used by a certifying authority (see Question 22.214.171.124). Several recent standards specify a 1024-bit minimum for corporate use. Less valuable information may well be encrypted using a 768-bit key, as such a key is still beyond the reach of all known key breaking algorithms. Lenstra and Verheul [LV00] give a model for estimating security levels for different key sizes, which may also be considered. It is typical to ensure that the key of an individual user expires after a certain time, say, two years (see Question 126.96.36.199). This gives an opportunity to change keys regularly and to maintain a given level of security. Upon expiration, the user should generate a new key being sure to ascertain whether any changes in cryptanalytic skills make a move to longer key lengths appropriate. Of course, changing a key does not defend against attacks that attempt to recover messages encrypted with an old key, so key size should always be chosen according to the expected lifetime of the data. The opportunity to change keys allows one to adapt to new key size recommendations. RSA Laboratories publishes recommended key lengths on a regular basis. Users should keep in mind that the estimated times to break the RSA system are averages only. A large factoring effort, attacking many thousands of moduli, may succeed in factoring at least one in a reasonable time. 
Although the security of any individual key is still strong, with some factoring methods there is always a small chance the attacker may get lucky and factor some key quickly. As for the slowdown caused by increasing the key size (see Question 3.1.2), doubling the modulus length will, on average, increase the time required for public key operations (encryption and signature verification) by a factor of four, and increase the time taken by private key operations (decryption and signing) by a factor of eight. The reason public key operations are affected less than private key operations is that the public exponent can remain fixed while the modulus is increased, whereas the length of the private exponent increases proportionally. Key generation time would increase by a factor of 16 upon doubling the modulus, but this is a relatively infrequent operation for most users. It should be noted that the key sizes for the RSA system (and other public-key techniques) are much larger than those for block ciphers like DES (see Section 3.2), but the security of an RSA key cannot be compared to the security of a key in another system purely in terms of length.

1. Put m = (p + q)/2. With p < q, we have 0 ≤ m − √n ≤ (q − p)²/(8p). Since p, q = m ∓ √(m² − n), the primes p and q can easily be determined if the difference q − p is small.
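The footnote's warning is easy to demonstrate with Fermat's factoring method, which searches for exactly the m described above. The sketch below is an illustrative Python implementation with a deliberately toy-sized modulus; it is not taken from the FAQ, and the prime values are chosen only to show the effect of picking primes that lie close together.

from math import isqrt

def fermat_factor(n, max_steps=1_000_000):
    # Try to factor n = p*q by looking for m such that m*m - n is a perfect square.
    m = isqrt(n)
    if m * m < n:
        m += 1                      # start at the ceiling of sqrt(n)
    for _ in range(max_steps):
        diff = m * m - n
        r = isqrt(diff)
        if r * r == diff:           # found m and r with n = (m - r) * (m + r)
            return m - r, m + r
        m += 1
    return None                     # gave up; the factors are not close together

# Toy example: two primes that are far too small and far too close for real use.
p, q = 1_000_003, 1_000_033
print(fermat_factor(p * q))         # -> (1000003, 1000033), found on the first iteration

Because the search starts at the ceiling of √n, a modulus built from close primes falls out after only a handful of iterations, whereas independently chosen random primes of equal length keep q − p astronomically large. This is why, as the text notes, the probability that two randomly chosen primes are dangerously close is negligible.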
Yellowstone Super-Volcano More Active Than Previously Thought / May 1, 2012 According to a story from gizmodo.com, the Yellowstone super-volcano is "more active than previously thought," which means that eruptions are more frequent -- and the next one is likely closer than previously predicted. New research using a new high-precision argon isotope dating technique (a technology that's like getting a sharper lens on a camera) shows that what scientists thought was Yellowstone's biggest eruption -- the one that created the 2 million year old Huckleberry Ridge deposit -- was actually two eruptions 6,000 years apart. Ultimately because two smaller eruptions occurred versus one giant one, scientists now believe that the next eruption won't wipe out half of the U.S. -- there will just be more frequent eruptions that are 12 percent less powerful than one large eruption. Photo courtesy of the University of Aberdeen
You use a fingerprint to access your smartphone, an access card to enter your place of work, and the last four digits of your Social Security number to verify you are who you say you are. But can any of these things really determine your identity? Not really, according to a new Defense Department guidance document. Many of you have heard of “joint operations,” “information operations,” and “intelligence collection activities” when it comes to the world of defense and national security. Well, let me introduce you to the Pentagon’s latest buzz phrase–Identity Activities. “Identity is the summary (or sum total) of multiple aspects of an entity’s characteristics, attributes, activities, reputation, knowledge, and judgments–all of which are constantly evolving,” according to the new guidance. “Identity is the sum of gathered descriptors and assertions and not simply a physical or current manifestation of limited attributes.” What does all of this psychobabble really mean? Well, it means the Pentagon wants to employ data analysis tools that will leverage biographical, biological, behavioral, and reputational data inputs to help the military determine the identity of a person they encounter on the battlefield and whether that person poses any kind of threat. So what data elements would actually feed such an identity system? Here’s what the Defense Department says: Biographical: Name, address, passport number, tax records, etc…. Biological: Fingerprints, facial images, iris images, DNA, etc…. Behavioral: Cellphone records, social media, travel patterns, etc…. Reputational: Statements attesting or vouching for character, criminal records, credit scores, security clearances, organizational position, etc…. But that summary is really just the tip of the identity iceberg. According to DoD, “approximately 500 separate data types and subtypes of identity attributes that support relevant national security activities have been identified.” Second only to the expanded categories and subcategories of identity attributes is the number of databases maintained at the Federal level that are dedicated to one or more methods of tracking identities. - DOD Automated Biometric Identification System (ABIS). - National DNA Index System (NDIS) and Joint Federal Agencies Intelligence DNA Database (JFAIDD). - Biometric Identity Intelligence Resource (BI2R). - Detainee Reporting System (DRS). - FBI’s Next Generation Identification (NGI). - Department of Homeland Security (DHS) Automated Biometric Identification System (IDENT). - DHS TECS. - Terrorist Identities Datamart Environment (TIDE).
How children are educated, enabled and empowered to understand what's around helps determine the future of any nation. Investment in education is investment in the seeds of future prosperity, and technology is playing an active part in this, with Apple [AAPL] solutions leading the way.

[ABOVE: Here's a Kickstarter project I like the look of: Markup for iPad could become a useful tool for teachers. While I note the project's a few dollars short of its goal, I should also say I've not taken an in-depth look at it, and have no connection whatsoever with the developer.]

Education in the DNA

The relationship between Apple and the education markets is as old as the company itself. The connection is proven by the company's news this week that iTunes U downloads now number over a billion. Apple's effect on how children learn has been proven in numerous studies, with iMacs, iPods and now iPads and iPhones significantly improving children's interest in what they are doing in school. The theory is pretty simple:

- Send your child to a boring school with boring teachers and boring lessons and don't be too surprised to see your child get bored and learn little.
- Send your child to an interesting school that's kitted out with the equipment they are already used to using at home, staffed by teachers with an interest in positive use of these tools and lessons crafted for the digital age, and your child will very likely demonstrate better learning outcomes, including qualifications, social interaction, team working and attendance.

These statements have been proved several times. I tend to think back to the BECTA-sponsored UK studies of 2002 which helped prove the link between switched-on tech and switched-on learning. As film director Lord David Puttnam said at that time: "Moving images are the key drivers of the information society. However, we have failed to capitalize on digital technologies in education. There is a 'disconnect' between the lives of pupils inside schools and the lives of the students outside. Why shouldn't kids learn French from kids in France via video conferencing? Our education systems must respond to changes in technology, and I believe it will revolutionize the way information is taught and learnt."

The Internet impact

Education is not immune to the impact of technology. Enterprises (barring Yahoo!) are embracing BYOD and remote working practices; whole industries are being transformed; education is no different. We're moving fast to an age of Post-PC in schools. This doesn't mean no PCs in schools at all, of course, but likely means children will be doing the majority of their studies on iPads, iPhones or competing devices from other manufacturers. (Though bear in mind that Apple's devices are very much preferred by young people, meaning if you buy your son an Android tablet in a class full of iPad users, don't be too surprised if he doesn't look too thrilled.)

This evolution of the Post-PC school is clearly evidenced within the latest Pew Internet report, "How Teachers Are Using Technology At Home and In Their Classrooms". This report shares a huge quantity of data capturing some of the changes taking place within the sector as it attempts to deliver educational attainment to the new generations of 'digital natives' -- not least news that older teachers are finding it hardest to adjust to new technologies in the classroom.

[ABOVE: A second Kickstarter project (funded) seeks to create free apps to help teach environmental literacy in Californian schools.]
Beware the digital divide

The other important takeaway within the research report is the steady evolution of a large gap between digital haves and have-nots -- addressing this gap is precisely why initiatives to deploy iPads within schools make sense, as they enable a level playing field of learning for kids whatever income bracket their parents may fall into. The Pew Internet survey of 2,462 teachers fed back this statement: "They report that there are striking differences in the role of technology in wealthier school districts compared with poorer school districts."

A selection of stats which suggest the impact of Post-PC technologies and the Internet on learning (verbatim, from the report):

- 92% of teachers say the Internet has a "major impact" on their ability to access content, resources, and materials for their teaching
- 69% say the internet has a "major impact" on their ability to share ideas with other teachers
- 67% say the internet has a "major impact" on their ability to interact with parents
- 57% say it has had such an impact on enabling their interaction with students
- 75% say the internet and other digital tools have added new demands to their lives, agreeing with the statement that these tools have a "major impact" by increasing the range of content and skills about which they must be knowledgeable.
- 41% report a "major impact" by requiring more work on their part to be an effective teacher.

The Post-PC school is coming

That's the Internet, but mobile devices access the Internet too, and teachers report growing use of these among their pupils. 73 percent of teachers say they and their students use their mobile phones in the classroom or to complete assignments. The teachers also note that 43 percent of children use tablet computers in the classroom or to complete assignments (mainly iPads, I imagine).

The danger is the marked difference in technology use between low-income and well-provided schools. There's a huge difference in the use of tech by kids from less well-heeled families. This is a danger because the impact on educational attainment is such that this digital divide could threaten available future opportunity for children. The report also shows increased use of search engines and/or Wikipedia to complete assignments.

Technology can't solve every educational challenge. Teachers overall reject the opinion that all that's required is that educators throw technology at problems. Most teachers already find time to be a big limitation on what they can achieve with these post-PC solutions. In order to be effective, technology needs to be introduced appropriately.

[ABOVE: A third project looks to empower other elements of the connected classroom, this time to teach "mindfulness" to children.]

"There was fairly widespread agreement in focus groups that new technologies should be incorporated into classrooms and schools, as long as they enhance the lesson plan and encourage learning. Some teachers expressed concern that technology is sometimes 'forced upon them' for the sake of 'keeping up' rather than for actually improving learning."

This opinion matches Apple's strategy in the education sector over at least the past decade. Apple understands that throwing technology at a problem isn't enough: you need to throw solutions at a problem. These solutions might be based on technology, but also include help and advice in its deployment; lesson plans; marking and assignment; and more.
The intention is to plan technology deployment before making an investment in the new gadgets, not after they have been acquired. In Europe, Apple runs a network of Apple Distinguished Educators and 150 European training centers. "We teach teachers not just about Apple solutions, but also how to create content that's suitable for digital learning," explained Apple's then director of EMEA education markets, Herve Marchet. "If you want to play in the education market, you need to be a solutions provider. You aren't just bringing in the machine, you must also offer appropriate software, content and models for best practice in content creation. We even offer lesson plans," he said. - Apple has been making substantial inroads in education markets in recent years. Last year, for example, it sold 4.5 million iPads directly into educational institutions in the US, with another 3.5 million sold directly outside the US. Apple continues to explore education markets worldwide, most recently meeting with the president of Turkey to discuss a $4.5 billion iPad education plan, which may involve deployment of 15 million iPads to Turkish schoolchildren. In the UK the Essa Academy school has deployed 2,000 Apple devices for use by staff and students. This school is deeply switched-on to what switches on the digital natives it is trying to teach. Abdul Chohan, the director, said: "The academy's goal is to ensure every pupil has access to Twenty-First century learning resources and to move away from buying printed text books. Imagine being in an environment where a student can use a 'textbook' that's completely personalized for them, flick digital pages over, watch video, look at content, listen to audio, interact with it and then capture their own learning -- this is what we are hoping to achieve. Our vision means that technology will become embedded as the foundation upon which all teaching and learning takes place." The future of education, like that of the enterprise, is the evolution of these technologies to provide logically-crafted and appropriate engaging educational experiences that can be accessed anywhere. The classroom may remain the main hub for teaching, but learning becomes a 24/7 experience. With, or without, the PC. Got a story? Drop me a line via Twitter or in comments below and let me know. I'd like it if you chose to follow me on Twitter so I can let you know when these items are published here first on Computerworld. Note: To understand the appalling English used in the headline to this report, please check here.
Zang K.,Key Laboratory of 3D Information Acquisition and Application | Zang K.,Engineering Center for Spatial Information Technology | Sun Y.-H.,Key Laboratory of 3D Information Acquisition and Application | Sun Y.-H.,Engineering Center for Spatial Information Technology | And 7 more authors. Journal of Natural Disasters | Year: 2010

The miniature unmanned aerial vehicle remote sensing system (MUAVRSS) plays an important role in remote sensing data acquisition, offering advantages such as low operating cost and high operational flexibility. This article first introduces the system composition of the MUAVRSS, then designs a technical workflow for monitoring serious natural disasters with it. Finally, the MUAVRSS was used to photograph the town of Beichuan County, one of the areas most heavily damaged by the Wenchuan earthquake, and 107 clear aerial photos were acquired. After image mosaicking and geometric correction, the damage was preliminarily assessed by comparison with spaceborne images taken before the earthquake, with good results. Source
Dutch water experts have teamed up with IBM to launch a new initiative called Digital Delta, which will investigate how to use Big Data to prevent flooding. The Netherlands is a very flat country with almost a quarter of its land at or below sea level, and 55 percent of the Dutch population is located in areas prone to flooding. The government already spends over 7 billion in water management every year, and this is expected to increase 1-2 billion by 2020 unless urgent action is taken. While large amounts of data are already collected, relevant data can be difficult to find, data quality can be uncertain and with data in many different formats, this creates costly integration issues for water managing authorities, according to IBM. The Digital Delta initiative will see Rijkswaterstaat (the Dutch Ministry for Water), local Water Authority Delfland, Deltares Science Institute and the University of Delft using IBM's Smarter Water Resource Management solution to combine data from new and existing water management projects, in order to prepare for imminent difficulties. Delft University of Technology will use IBM Intelligent Operations for Water to access weather predictions, real-time sensor data, topography and information about asset service history to make more informed and timely decisions on maintenance schedules. This will save costs while preventing flooding of tunnels, buildings and streets. Rijkswaterstaat and local water authorities will manage water balance data and share the information centrally through the Digital Delta platform, making it possible for the Dutch water system to optimise the discharge of water and improve the containment of water during dry periods, and prevent damage to agriculture. HydroLogic Research and IBM together with the Delfland Water Board will develop a scalable early flood warning method, through integration of a large amount of real-time measurement data from the water system, as well as weather information and water system simulation models. Meanwhile, Digital Delta will enable Deltares' Next Generation Hydro Software (which facilitates the numerical modeling of rivers, seas and deltas) to access large volumes of data in multiple formats, by maintaining a catalogue of frequently used data and converting it into a standardised form. IBM will use data visualisation and deep analytics to provide a real-time dashboard that can be shared across organisations and agencies. This will enable authorities to coordinate and manage response efforts and, over time, enhance the efficiency of overall water management. With better integrated information, IBM claims that water authorities will be able to prevent disasters and environmental degradation, while reducing the cost of managing water by up to 15 percent. "Aggregating, integrating and analysing data on weather conditions, tides, levee integrity, run off and more, will provide the Dutch government with detailed information that better prepares it to protect Dutch citizens and business, as well as homes, livestock and infrastructure," said Jan Hendrik Dronkers, Director General of Rijkswaterstaat. "As flooding is an increasing problem in many regions of the world, we hope that the Digital Delta project can serve as a replicable solution to better predict and control flooding anywhere in the world." Michael J Dixon, general manager of Global Smarter Cities at IBM added that the implications for this work are global, as cities around the world adopt smarter solutions to better manage the water cycle. 
"With this innovative collaboration, IBM is setting a worldwide example using the power of Big Data, analytics and optimisation to better manage water quality, flood risk and drought impact, while also stimulating new innovations in this crucial area of technology," he said. This story, "IBM Uses Big Data to Improve Dutch Flood Control" was originally published by Techworld.com.
If you are using the Routed network connection on your VM, you will need to set up a NAT rule that maps the internal IP address of your server to an address in your public IP allocations. Follow these steps to set up a NAT rule:

1. Click on the Administration tab from your cloud interface.
2. Open up the Virtual Datacenter where your network exists, and click the "Org VDC Networks" tab.
3. Then right-click on the network and select "Configure Services".
4. On the "NAT" tab you will be able to create rules to give your machine public access.

To create a one-to-one NAT rule for a machine, set up both a source (SNAT) and a destination (DNAT) rule to map traffic in each direction. To allow the entire internal subnet internet access, create a Source NAT rule specifying the internal subnet as the source, and one of your available IPs as the destination. Ports can also be specified in the NAT rules to direct specific ports of a public IP across multiple internal IPs if desired. Public IPs can be viewed by checking the properties of the edge device on the Edge Gateways screen, and selecting the "Sub-Allocate IPs" tab.
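Conceptually, the SNAT and DNAT rules form a matched pair: outbound packets have their source rewritten to the public address, and inbound packets have their destination rewritten to the internal one. The toy Python model below is included only to make that two-direction mapping concrete; the addresses are placeholders and this is not the vCloud Director API.

# Toy model of a one-to-one NAT rule pair; addresses are illustrative placeholders.
INTERNAL_IP = "192.168.10.25"   # the server's address on the Org VDC network (assumed)
PUBLIC_IP = "203.0.113.40"      # one of the sub-allocated public IPs (assumed)

def apply_snat(packet):
    # Outbound direction: rewrite the source address to the public IP.
    if packet["src"] == INTERNAL_IP:
        packet = dict(packet, src=PUBLIC_IP)
    return packet

def apply_dnat(packet):
    # Inbound direction: rewrite the destination address to the internal IP.
    if packet["dst"] == PUBLIC_IP:
        packet = dict(packet, dst=INTERNAL_IP)
    return packet

outbound = apply_snat({"src": INTERNAL_IP, "dst": "198.51.100.7", "dport": 443})
inbound = apply_dnat({"src": "198.51.100.7", "dst": PUBLIC_IP, "dport": 443})
print(outbound)  # source now appears as the public IP
print(inbound)   # destination now points at the internal server

The same pairing is what the "Configure Services" NAT tab expresses through its rule forms; only the public-facing address is ever visible from the internet.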
Dick Tracy used a two-way wristwatch to communicate; smartwatches have been available to the masses for years now. Star Trek replicators were used to create both objects and food on demand, a role that 3D printers are now starting to fill. Life imitating fiction is hardly new. But technologists and scientists keep taking this to the next level, bringing technologies that were previously only imagined ever closer to reality. And now researchers are working on creating Iron Man’s ARC fusion reactor. Well, maybe not exactly Iron Man’s ARC reactor, but let’s not squabble over the details. The fact is, scientists at MIT are getting close to actually creating an ARC reactor (sadly, not one that will likely power a super-suit donned in order to fight the evils of the world). Luckily for researchers at MIT's Plasma Science and Fusion Center, creating a smaller fusion reactor is meant to yield clean energy, not something they have to do to keep shrapnel from reaching their hearts. Despite federal funding being cut to the project, the MIT scientists are continuing to make progress. Maybe they should call Tony Stark for a loan -- I hear he has deep pockets. To see even more awesome videos on topics like fusion reactors, head over to our IT insights channel on IDG.TV.
NASA this week will send its first espresso-making machine into space, letting astronauts onboard the International Space Station brew coffee, tea or other hot beverages for those long space days.

+More on Network World: The zany world of identified flying objects+

Making espresso in space is no small feat, as heating the water to the right temperature – 208F – and generating enough pressure to make the brew are critical in the brewing process. And then getting it into a "cup," well that's nearly impossible in gravity-free space. [for a NASA discussion on liquids in space go here]

NASA, the Italian space agency ASI, aerospace firm Argotec, and coffee company Lavazza have come up with an experimental machine that will deliver the espresso into what basically amounts to a sippy pouch.

"Proving this technology in microgravity may lead to new or improved brewing methods. Crew members may enjoy an ISSpresso beverage using specially designed space cups as part of the Capillary Beverage study—an improvement to the standard drinking pouch with a straw. These specially designed containers use fluid properties such as surface tension to control the beverage in the cup. This test may add to the field of micro-fluidics, used in Earth-based medical and drug delivery applications," NASA stated.

Other details on the machine, from NASA:

- The ISSpresso requires 120V DC power which is obtained at the Utility Outlet Panel (UOP) on the ISS.
- The ISSpresso requires use of a NASA standard drink bag that interfaces with the Potable Water Dispenser (PWD) in the US LAB. A few minutes of crew time is required for each drink to brew.
- ISSpresso is installed near a UOP that supplies 120V DC power. After ISSpresso is physically and electrically connected, a Water Pouch is installed, and the unit is powered on. In order to utilize the ISSpresso, a NASA standard drink bag is installed, along with a capsule containing the beverage item that the crew member wishes to drink.
- After the item has been brewed, the used capsule and the drink bag are removed. ISSpresso is then powered off, the Water Pouch removed. ISSpresso is then disconnected from the UOP, and it is removed and stowed.

The espresso machine is onboard the sixth SpaceX commercial resupply mission to the International Space Station slated for today. SpaceX's Falcon 9 rocket and Dragon spacecraft are expected to deliver 4,000lbs of research equipment for physical science, biology, biotechnology, human research and myriad technology demonstrations to the station.
There are two basic cable designs used for building fiber optic networks in North America. One is loose tube fiber cable, applied in many outside-plant, duct and direct-buried applications. The other is tight-buffered fiber optic cable, primarily used inside buildings. Before selecting a cable design, there are still many more factors to consider after determining whether the cables will be used inside or outside.

The modular design of loose tube cables typically holds up to 12 fibers per buffer tube with a maximum per-cable fiber count of more than 200 fibers. In a loose-tube cable design, color-coded plastic buffer tubes house and protect optical fibers, which also helps in the identification and administration of fibers in the system. A gel filling compound impedes water penetration. Excess fiber length (relative to buffer tube length) insulates fibers from stresses of installation and environmental loading. Loose-tube cables can be all-dielectric or optionally armored. The cable core, typically surrounded by aramid yarn, is the primary tensile strength member. The outer polyethylene jacket is extruded over the core. If armoring is required, a corrugated steel tape is formed around a single jacketed cable with an additional jacket extruded over the armor.

Loose-tube cables typically are used for outside-plant installation in aerial, duct and direct-buried applications. These cables are excellent for outside-plant applications since they can be made with the loose tubes filled with water-absorbent powder or gel that withstands high moisture conditions. They also give more stable transmission under continuous mechanical stress. Buffer tubes are stranded around a dielectric or steel central member, which serves as an anti-buckling element.

With tight-buffered cable designs, the buffering material is in direct contact with the fiber. It has low crush and impact resistance along with a low attenuation change at lower temperatures. The tight-buffered design is well-suited for "jumper cables" that connect outside plant cables to terminal equipment, and also for linking various devices in a premises network. As with loose-tube cables, optical specifications for tight-buffered cables also should include the maximum performance of all fibers over the operating temperature range and life of the cable.

The breakout design and distribution design are the two typical constructions of tight-buffered cables. The breakout design has an individual jacket for each tight-buffered fiber, and the distribution design has a single jacket protecting all of the tight-buffered fibers. The modular buffer-tube design permits easy drop-off of groups of fibers at intermediate points, without interfering with other protected buffer tubes being routed to other locations. The tight-buffered design provides a rugged cable structure to protect individual fibers during handling, routing and connectorization. Yarn strength members keep the tensile load away from the fiber. Multi-fiber, tight-buffered cables often are used for intra-building, risers, general building and plenum applications.

There are single-fiber and multi-fiber tight-buffered cables available. Single-fiber cables have a single fiber strand surrounded by a tight buffer. To terminate loose-tube cables directly into receivers and other active and passive components, single-fiber tight-buffered cables are used as pigtails, patch cords, and jumpers.
Multi-fiber cables have two or more tight-buffer cables that are contained in a common outer jacket. General building, risers, and plenum applications often use multi-fiber, tight-buffered cables. These cables are also used for handling ease and flexibility within buildings and alternative handling and routing. With these innovative network designs, optical fiber cables have paved the way for easier, more efficient custom cable assembly. Whether for an administrative, medical, or industrial network, fiber optics networking is quickly becoming the number one choice.
Since the server room is always in operation, consumes copious amounts of energy, and houses equipment containing hazardous materials, there is great opportunity for green improvements. Greening the server room will not only reduce harm to the environment, but it will also reduce energy costs through improved efficiencies. This note is the first in a three-part series that examines what IT can do to improve overall energy efficiency and reduce power requirements. Key topics include: - Rising energy costs. - Server room energy consumption. - Power supply efficiency. - Emerging energy standards for servers. - High value, energy-saving tips for the server room. Understanding how to improve energy efficiency in the server room saves money and the environment.
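To illustrate the kind of arithmetic behind topics such as power supply efficiency and rising energy costs, here is a small illustrative Python calculation. It is not part of the original note, and the wattage, efficiency and tariff figures are assumptions chosen only for the example.

# Illustrative estimate of annual energy cost for one server at two PSU efficiencies.
# All figures below are assumptions for the sake of the example.
it_load_watts = 400          # power actually consumed by the server's components
hours_per_year = 24 * 365
price_per_kwh = 0.12         # assumed electricity tariff, in dollars

def annual_cost(psu_efficiency):
    # Wall-socket cost per year: the PSU draws it_load / efficiency from the mains.
    wall_watts = it_load_watts / psu_efficiency
    kwh = wall_watts * hours_per_year / 1000
    return kwh * price_per_kwh

for eff in (0.80, 0.92):     # a basic PSU vs. a high-efficiency (e.g. 80 PLUS-style) unit
    print(f"{eff:.0%} efficient PSU: ${annual_cost(eff):,.2f} per year")

print(f"Savings per server: ${annual_cost(0.80) - annual_cost(0.92):,.2f} per year")

Under these assumed figures the saving is roughly $70 per server per year before cooling is counted; multiplied across a room full of servers and the cooling load that tracks them, even modest efficiency gains of this kind add up.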