Overview on APIs
Application Programming Interfaces (APIs) are a vital part of what makes business communication operate efficiently and securely in an online environment.
APIs establish guidelines for software communication and operation. Without APIs, software would use wildly different methods to accomplish the same goals, requiring programmers to learn a whole new set of rules for each implementation.
In other words, it’s a set of standards that make it easier for new programmers to understand the work of their peers.
APIs are powerful tools that businesses can use to move incredible amounts of data across the Internet. However, API-based communication can introduce problems when uninvited third parties intercept data containing private information like financial records and passwords.
Unless the API implementation is secure, there’s a risk of exposing internal data and customer information.
SOAP and REST: Cross-Device, Cross-Platform Communication API Pros and Cons
Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) are two competing API standards that allow one application to communicate with another over a network such as the Internet using platform-agnostic transfer methods. Which standard you choose depends on your business needs, but most third-party services use one of these two.
If you’re getting ready to choose an API, use the comparison points below as a guide:
1) REST supports multiple data output types, including XML, CSV, and JSON. SOAP can only handle XML. Because the JSON format is easier to parse than XML, using REST to send data in JSON can actually save on infrastructure costs by requiring less computing power to do the same job. JSON and CSV data are also considered easier to work with from a programming standpoint (a brief parsing sketch appears after this comparison).
2) REST is also able to cache data transfers, so when another endpoint requests an already-completed query, the API can reuse the data from the previous request. SOAP implementations, by contrast, have to process the query every time.
3) SOAP offers better support for Web Services specifications, often making it a stronger option when standardization and security are primary concerns. Both formats support Secure Sockets Layer for data protection during the transfer process, but SOAP also supports WS-Security for enterprise-level protection.
When you’re dealing with crucial private information like bank account numbers, it makes more sense to use SOAP. However, SOAP’s extra security isn’t necessary if you’re sending the day’s forecast to a mobile application.
While SOAP may sound like it has a total advantage over REST in this case, it comes down to how well the API is implemented. A good REST implementation can be more secure than a poorly-designed SOAP implementation. SOAP also has built-in error handling for communication errors via the WS-ReliableMessaging specification. REST, on the other hand, has to resend the transfer whenever it encounters an error.
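To make the first comparison point concrete, here is a minimal Python sketch that parses the same hypothetical payload delivered once as JSON and once as a SOAP-style XML envelope. The field names and values are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical forecast payload in both formats.
json_body = '{"city": "Stockholm", "temp_c": 21.5}'
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <Forecast>
      <City>Stockholm</City>
      <TempC>21.5</TempC>
    </Forecast>
  </soap:Body>
</soap:Envelope>"""

# REST/JSON: a single call yields native dicts and numbers.
rest = json.loads(json_body)
print(rest["city"], rest["temp_c"])

# SOAP/XML: namespace bookkeeping and tree traversal add parsing work.
ns = {"soap": "http://www.w3.org/2003/05/soap-envelope"}
body = ET.fromstring(soap_body).find("soap:Body", ns)
forecast = body.find("Forecast")
print(forecast.findtext("City"), float(forecast.findtext("TempC")))
```

The JSON branch is one call; the XML branch needs namespace handling and explicit traversal, which is where the extra computing cost creeps in.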
Testing for API Security and Stability Is Essential
API testing is very different in nature from debugging a website or an application, because whether the software works or not depends on the processing servers and systems handling the heavy lifting.
APIs move a lot of data behind the scenes, and it’s not always obvious whether an implementation is working reliably. Errors in the request-handling code can cause incorrectly formatted responses, which client software won’t be able to use.
It’s extremely important that the API platform can handle all the concurrent users that will be accessing the services at the same time. Bottlenecks in the API can cause the service to respond slowly—and the negative effects can rebound in application functionality, website performance, and customer satisfaction. These problems can be compounded when it’s unclear which API endpoint is experiencing the problem.
A service like Apica’s API testing platform can simulate SOAP and REST API users in the testing portal to make sure your implementation is efficient and able to handle the workload. If it isn’t, the service can pinpoint any problematic areas.
A World Wide Name (WWN) or World Wide Identifier (WWID) is a unique identifier used in storage technologies including Fibre Channel, Advanced Technology Attachment (ATA) or Serial Attached SCSI (SAS). A WWN may be employed in a variety of roles, such as a serial number or for addressability; for example, in Fibre Channel networks, a WWN may be used as a WWNN (World Wide Node Name) to identify a switch, or a WWPN (World Wide Port Name) to identify an individual port on a switch. Two WWNs which do not refer to the same thing should always be different even if the two are used in different roles, i.e. a role such as WWPN or WWNN does not define a separate WWN space. The use of burned-in addresses and specification compliance by vendors is relied upon to enforce uniqueness.
World Wide Port Name, WWPN, or WWpN, is a World Wide Name assigned to a port in a Fibre Channel fabric. Used on storage area networks, it performs a function equivalent to the MAC address in Ethernet protocol, as it is supposed to be a unique identifier in the network.
A World Wide Node Name, WWNN, or WWnN, is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric. It is valid for the same WWNN to be seen on many different ports (different addresses) on the network, identifying the ports as multiple network interfaces of a single network node.
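As an illustration of the naming format, here is a small Python sketch for validating and normalizing WWNs. It assumes the common 64-bit form written as eight colon-separated hex octets (128-bit WWNs also exist), and the sample value is made up rather than taken from a real device:

```python
import re

# A 64-bit WWN is commonly written as eight colon-separated hex octets,
# e.g. "50:06:01:60:90:20:1e:7a" (illustrative value only).
WWN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$")

def normalize_wwn(raw: str) -> str:
    """Strip separators, lowercase, and re-insert colons every two digits."""
    digits = re.sub(r"[^0-9a-fA-F]", "", raw).lower()
    if len(digits) != 16:
        raise ValueError(f"expected 16 hex digits for a 64-bit WWN, got {len(digits)}")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwn("5006016090201E7A"))          # -> 50:06:01:60:90:20:1e:7a
assert WWN_RE.match(normalize_wwn("50:06:01:60:90:20:1e:7a"))
```

Normalizing to one canonical form like this is a common first step before comparing WWNs for the uniqueness the specification relies on.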
Oak Ridge National Laboratory Storage System Delivers Over One Terabyte Per Second in Throughput to Drive Radical Advances in Science and Big Data Analysis, Essential to DOE and Office of Science Missions
ORNL is a national multi-program research and development facility managed by UT-Battelle for the U.S. Department of Energy. The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of providing leadership computing for scientists working on some of the world’s most pressing problems.
In support of its new Titan supercomputer, Oak Ridge National Laboratory (ORNL) has selected DataDirect Networks (DDN) to build the world’s fastest storage system to power the fastest supercomputer in the world. Titan is designed to deliver a peak capability of over 27,000 trillion calculations per second, or 27 petaflops, a system that is over ten times more powerful than previous generations of ORNL computers.
Buddy Bland, project director for the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory:
“When building the world’s fastest system for data intensive computing, we carefully considered all aspects of high-throughput I/O infrastructure and how efficient storage platforms can complement our supercomputer’s efficiency. The ORNL and DDN teams have worked together to architect a file system designed to enhance the performance of our Titan supercomputer and enable our users to achieve unprecedented simulations and big data insights through massively scalable computing.”
For the growing number of problems where experiments are impossible, dangerous, or inordinately costly, advances of this compute magnitude offer the benefit of immediate and transformative insights in energy, national security, the environment and the economy, as well as to answer fundamental scientific questions. Using DDN’s SFA12K-40 storage systems as the backbone for Spider II, this new file storage system is designed with 40 petabytes of raw capacity and is capable of ingesting, storing, processing and distributing research data at unprecedented speed. This amount of storage capacity is equivalent to more than 227,000 miles of stacked books – or the distance from ORNL’s facility in Oak Ridge, TN to the moon – and enables ORNL to dramatically increase Titan’s computational efficiency and deliver vastly more accurate predictive models than ever before. As the de facto standard in storage for the world’s leading supercomputers, DDN continues to push the frontiers of science and technology from laptop to petaflop, building on its $100M investment in extreme scale computing and commitment to the DOE’s FastForward program to pave the road to exascale.
DDN Sets Standard for High Performance Computing
- ORNL selected the DDN SFA12K-40 as the high-throughput building block for its Lustre* parallel file system. The platform delivers performance in excess of 10x what is achievable with contemporary scale-out NAS systems.
- Building on a decade of ORNL and DDN optimizations for the Lustre file system, the DDN system is configured with Lustre performance of over one terabyte per second to meet the demands of Titan’s 299,008 CPU cores.
- The ORNL Spider II configuration from DDN includes:
- 36 DDN SFA12K-40 systems, each with 1.12PB of raw storage capacity;
- Over 40PB of raw capacity in only 36 data center racks;
- A combined 20,000 disk drives in a single system.
- The combination of DDN’s and ORNL’s expertise in scaling Lustre in production environments will enable Titan to perform approximately 6x faster with 3x the capacity of its predecessor, Spider.
- Architecturally unique in many ways, Titan’s power, scalability and efficiency serve as a showcase for the requirements of tomorrow as high performance computing (HPC) technologies continue to be adopted across the enterprise for Big Data computing.
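As a quick sanity check of the figures quoted above, the short Python sketch below recomputes the headline numbers from the listed building blocks. The implied per-drive capacity is a back-of-the-envelope estimate, not a published DDN specification:

```python
# Spider II building blocks as listed above.
systems = 36                    # DDN SFA12K-40 systems
capacity_per_system_pb = 1.12   # raw PB per system
total_drives = 20_000

raw_capacity_pb = systems * capacity_per_system_pb
print(f"raw capacity: {raw_capacity_pb:.2f} PB")   # ~40.3 PB, i.e. "over 40PB"

# Implied average raw capacity per drive (illustrative only).
pb_to_tb = 1000
print(f"avg drive: {raw_capacity_pb * pb_to_tb / total_drives:.2f} TB")  # ~2 TB
```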
Two researchers have developed a mathematical model for discovering the optimal moment to deploy specific cyber weapons in their arsenal.
In a research paper recently published in Proceedings of the National Academy of Sciences (PNAS), Robert Axelrod, Professor of Political Science and Public Policy at the University of Michigan, and postdoctoral research fellow Rumen Iliev have described the equation they created and the factors it takes into consideration:
- The weapon’s stealth, i.e. the probability that if you use it now it will not be detected and will still be usable in the next time period
- The weapon’s persistence, i.e. the probability that if you refrain from using it now, it will still be useable in the next time period
- The value of the weapon, which is directly tied to its stealth and persistence
- The current and likely future stakes
- The threshold of stakes that will cause you to use the weapon
- The discount rate – a reflection of the fact that a given payoff is worth less a year from now than it is today.
“Both stealth and persistence depend not only on the resource itself, but also on the capacity and vigilance of the intended target,” they explained. “The stealth of resource used against a well-protected target is likely to be less than the stealth of the same resource against a target that is not particularly security conscientious. Likewise, a resource will typically have less persistence against a target that keeps up-to-date on security patches than one that does not.”
The equation shows a number of (fairly obvious) things. For one, the more stealthy the weapon, the better is to use it sooner rather than later. Secondly, the more persistent the weapon is, the longer its use can be postponed.
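To see how those implications fall out of the trade-off, here is a toy Python sketch that compares the expected value of striking now against holding the weapon one more period. The payoff structure and every number here are simplifications invented for illustration; the published PNAS model is considerably more elaborate:

```python
def value_use_now(stakes, stealth, discount, future_value):
    # Collect the current stakes now; with probability `stealth` the weapon
    # survives detection and keeps its (discounted) future value.
    return stakes + stealth * discount * future_value

def value_wait(persistence, discount, future_value):
    # Forgo today's stakes; with probability `persistence` the weapon is
    # still usable next period, again discounted.
    return persistence * discount * future_value

# A Stuxnet-like resource: good stealth, poor persistence, high stakes.
stakes, stealth, persistence, discount, future_value = 10.0, 0.8, 0.3, 0.9, 12.0
print("use now:", value_use_now(stakes, stealth, discount, future_value))  # 18.64
print("wait:   ", value_wait(persistence, discount, future_value))         # 3.24
```

With good stealth and poor persistence, striking now dominates waiting, which matches the Stuxnet discussion below.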
The researchers tested their model on past attacks – Stuxnet, the Iranian attack on Saudi Aramco, and your garden-variety, everyday Chinese cyber espionage – and it held up, they claim.
The Stuxnet worm had low persistence because it used four different zero-day exploits, and it was designed to be very stealthy. The stakes were high: it was better to delay Iran’s ability to attain enough enriched uranium for nuclear weapons now than to throw wrenches in its plans later.
“Our model predicts that a resource like Stuxnet that was expected to have poor persistence and comparatively good stealth would be used as soon as possible, and certainly in a high-stakes situation. This is apparently just what happened,” they pointed out.
In Saudi Aramco’s case, the weapon used wasn’t stealthy, but the stakes were high enough to warrant swift action, which was, again, what happened.
On the other hand, Chinese cyber espionage campaigns are usually not performed at the optimal moment, but it’s difficult to say why. “Second-guessing a nation’s choice is always problematic,” the researchers noted.
“This paper clarified some of the important considerations that should be taken into account in any decision to use a method of exploiting a target’s vulnerability. The focus has been on optimal timing for such use,” the researchers shared.
“This kind of analysis can help users make better choices and help defenders better understand what they are up against. In some situations, one may want to mitigate the potential harm from cyber conflict, and in other situations, one may want to harness the tools of cyber conflict. In some cases, one might want to do both. In any case, an important step is to understand the logic inherent in this new domain.”
Recently, Fiberstore launched an excellent new product named the DWDM Red/Blue Band Filter. Questions such as “What is a red/blue band filter?”, “How does it work?” and “Where should I use it?” may be on many users’ minds. Based on these questions, we will give a brief introduction to the DWDM Red/Blue Band Filter in today’s blog.
What Is a DWDM Red/Blue Band Filter?
In recent years, filter-based, Wavelength Division Multiplexing (WDM) related products have become a popular method of increasing bandwidth throughout optical networks. As an important component of DWDM systems, the DWDM red/blue band filter is the result of years of telecommunications experience in interference filter technology. Like other DWDM filters, the red/blue band filter is based on environmentally stable thin-film filter technology. This micro-optics device offers a variety of benefits: a wide passband, low insertion loss, high return loss, excellent environmental stability and high power-handling capability.
How Does a DWDM Red/Blue Band Filter Work?
A typical DWDM red/blue band filter is a three-port device. One port is called the “common” port, while the other two, called “pass” and “reflect,” provide the conduits for the two wavelength bands: the Blue band (λ < 1543 nm) and the Red band (λ > 1547 nm). The pass and reflect ports must each be designated either Red or Blue; one band then exits through the reflect port and the other through the pass port. In general, the band that requires the highest isolation goes on the pass port. The device can be used as a band combiner or a band splitter. As a combiner, the two bands are combined in the filter and sent out the common port. Conversely, as a splitter, both bands are fed into the common port and split out to the corresponding pass or reflect port. It is therefore typically used as a bidirectional WDM (i.e., one band is sent to the common port, while the other band is delivered from the common port).
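The routing logic can be sketched in a few lines of Python. The port assignment below is illustrative only, since, as noted, either band may be placed on the pass port in a real part:

```python
BLUE_MAX_NM = 1543.0   # Blue band: wavelengths below ~1543 nm
RED_MIN_NM = 1547.0    # Red band: wavelengths above ~1547 nm

def split_port(wavelength_nm: float) -> str:
    """Output port for a wavelength when the filter acts as a splitter
    (light entering the common port)."""
    if wavelength_nm < BLUE_MAX_NM:
        return "pass port (Blue band)"
    if wavelength_nm > RED_MIN_NM:
        return "reflect port (Red band)"
    return "guard band (transition region between the two bands)"

for wl in (1535.0, 1545.0, 1555.0):
    print(f"{wl} nm -> {split_port(wl)}")
```

Used as a combiner, the same device simply runs in reverse: the two bands enter the pass and reflect ports and exit together on the common port.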
Where Should I Use a DWDM Red/Blue Band Filter?
According to its working principle, the red/blue band filter can be used in several ways, for example as a red and blue band separator, in DWDM system monitoring, and in erbium-doped fiber amplifiers. In addition, when using a red/blue filter in a DWDM module, a Mux may be combined with a Demux. For example, the Mux combines DWDM channels in the Red band, while the Demux separates DWDM channels in the Blue band. In other words, using a red/blue filter, one can combine the Red transmit channels and the Blue receive channels onto a single fiber.
DWDM Red/Blue Band Filter in Fiberstore
Fiberstore recently launched a 1×2 DWDM Red/Blue Band Filter in three package types: typical, plastic ABS box package and 1RU 19″ rack mount package. In addition, to better meet your special control and application needs, Fiberstore supplies custom channel and custom band filters; all units are available as steel tubes, modules, rack mounts or custom mount designs at a more competitive price. We welcome any product consultations if you are interested in our products.
Warm Tips: You can find more details about our new DWDM Red/Blue Band Filters online or contact us directly over e-mail at firstname.lastname@example.org.
Simple Mail Transfer Protocol (SMTP) is the Internet standard for transmitting e-mail across IP (Internet Protocol) networks. SMTP was first defined in 1982 by RFC 821 and most recently updated in 2008 by RFC 5321. Today its extended form, known as Extended SMTP (ESMTP), is in widespread use. SMTP typically runs over TCP port 25 and is used primarily for outgoing mail transport. The aim of the protocol is to transmit e-mail reliably and efficiently; as a transport protocol it is independent of other communication subsystems and requires only a reliable, ordered data-stream channel.
Secure Sockets Layer (SSL) can be used to protect SMTP connections, and today’s e-mail servers use the protocol for both sending and receiving electronic mail. The relevant IETF SMTP specifications include RFC 821 (published in 1982), RFC 1123 (1989), RFC 1425 (1993), RFC 1651 (1994, replacing RFC 1425) and RFC 1869 (1995, replacing RFC 1651).
One important thing to note about SMTP is that it is a text-based transport protocol: the sender communicates with the receiver by issuing a series of commands such as MAIL and RCPT, along with the required data, over a reliable, ordered data-stream channel such as TCP (Transmission Control Protocol).
As mentioned above, SMTP uses various commands to communicate with a mail server, but the most common SMTP transaction consists of three commands (a minimal scripted example follows below):
- MAIL indicates the sender’s address. In other words, it sets up the return address, e.g. <[email protected]>
- RCPT indicates the recipient’s e-mail address. This command may be issued more than once to address multiple recipients
- DATA carries the message content, with the message header separated from the message body by an empty line
Other SMTP commands include BDAT, BURL, DSN, EHLO, AUTH, ONEX, SIZE, SOML and NOOP, while SMTP reply codes include 211 (system status or help reply), 214 (help message) and 453 (no mail for you).
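Here is the minimal scripted example promised above, a sketch using Python's standard smtplib, whose low-level mail(), rcpt() and data() methods map directly onto the MAIL, RCPT and DATA commands. The host name and addresses are placeholders, and error handling is omitted:

```python
import smtplib

# Placeholder host and addresses; a real server will differ.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.set_debuglevel(1)                          # print the command/reply dialogue
    smtp.ehlo()                                     # identify ourselves (Extended SMTP)
    code, _ = smtp.mail("sender@example.com")       # MAIL FROM: sets the return address
    assert code == 250                              # 250 = requested action completed
    code, _ = smtp.rcpt("recipient@example.org")    # RCPT TO: repeat once per recipient
    assert code in (250, 251)
    smtp.data("Subject: test\r\n\r\nHello via SMTP.\r\n")  # DATA: header, blank line, body
```

Running this with the debug level set prints the full command and reply-code exchange described in this article.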
While processing SMTP transactions, a server answers each command with either a positive or a negative reply, in addition to the transactional reply for the data itself. Furthermore, an SMTP client may be an e-mail client (also known as a mail user agent) or a server-side mail transfer agent. To relay messages, SMTP servers act as SMTP clients during a session.
Session establishment can be illustrated with the following SMTP model: at the sender’s request, a two-way communication channel is established by the sender’s SMTP, which then issues commands to the receiver’s SMTP. Once the channel is organized, the MAIL command identifies the sender’s mailbox and, if the receiver accepts, OK is sent back as a reply. The sender’s SMTP then identifies each recipient with the RCPT command, and again an OK reply is returned, this time by the receiver’s SMTP. There can be one or more recipients in such a negotiation session.
While most parents say they want to be better financial role models for their teens, too few parents are capitalizing on key opportunities to talk with teens about important money management topics, according to a new survey of parents and teens from Capital One.
More than 80 percent of parents surveyed say they want to be better role models for their children regarding money management and that they have made money mistakes they want their own children to avoid. Yet, more than a third (35 percent) of teen respondents said that when they ask for money, their parents rarely or never use these occasions to discuss topics such as budgeting and planning for the future. More than half (53 percent) of parents surveyed report that their teen asks them for money at least once a week, providing ample opportunities for parents to talk with teens about making positive financial choices. However, for many parents and teens, these conversations instead turn into disagreements. Nearly one-third (29 percent) of parents of teens surveyed said they argue with their teen about money at least every month.
While talking about money can be difficult, more than half of teen respondents (53 percent) said they want to learn more about topics like budgeting, saving and credit. Teens also reported that their parents are their primary resource for learning about money management.
In recognition of Financial Literacy Month and the importance of developing solid money management skills as part of a teen's successful development, Capital One and Search Institute have partnered to create Bank It, a multimedia financial literacy program that helps parents and teens talk about, understand and manage money. Through an interactive web site (www.bankit.com) and local face-to-face workshops, the program empowers families to explore twelve key topics, including budgets, goals and strategies for making financial choices that count. Bank It is also designed to reach families where they are most comfortable, whether online or through a local community-based organization. The survey results found that the number one preferred setting for both parents and teens to learn about how to discuss money management topics is through an online resource.
"Bank It was designed to help parents and teens more easily and effectively talk about financial choices, challenges and dreams. Through free, easy-to-use online tools, parents and teens can work together to learn practical skills for making positive money choices and avoiding common mistakes," said Carolyn Berkowitz, vice president, community affairs, Capital One. "Capital One is committed to investing in programs that help children, teens and adults increase their money management skills through innovative, interactive learning opportunities. Our goal with these programs is to help to set individuals of all ages on the path to a life of fiscal responsibility and economic success."
Bank It blends financial information with Search Institute's research-based framework of Developmental Assets, a widely used approach to youth development in communities, schools, families and youth organizations. The Institute's Developmental Assets identify the relationships, opportunities, skills, values and commitments young people need to make wise choices and succeed in life.
Braun Research was engaged to conduct 500 interviews with parents as well as their 13-18 year olds throughout the U.S. In order to achieve two interviews per household, Braun Research interviewed 802 households with parents and continued until 500 teens were reached in the same households.
As the exascale barrier draws ever closer, experts around the world turn their attention to enabling this major advance. Providing a truly deep dive into the subject matter is the Harvard School of Engineering and Applied Science. The institution’s summer 2014 issue of “Topics” takes a hard look at the way that supercomputing is progressing.
In the feature article “Built for Speed: Designing for exascale computers,” Brian Hayes considers all of the remarkable science that will be enabled if only the computer is fast enough.
Hayes explains that the field of hemodynamics is poised for a breakthrough, where a surgeon would be able to perform a detailed simulation of blood flow in a patient’s arteries in order to pinpoint the best repair strategy. Currently, however, simulating just one second of blood flow takes about five hours on even the fastest supercomputer. To have a truly transformative effect on medicine, scientists and practitioners need computers that are one thousand times faster than the current crop.
Getting to this next stage in computing is high up on the list of priorities of SEAS. Hayes writes that science and engineering groups in the school are contributing to software and hardware projects to support this goal while researchers in domains such as climatology, materials science, molecular biology, and astrophysics are gearing up to use such powerful resources.
From here, Hayes details the numerous challenges that make exascale a more onerous challenge than previous 1000x milestones. For a while, chipmakers relied on increasing clock rates to drive performance gains, but this era is over.
“The speed limit for modern computers is now set by power consumption,” writes Hayes. “If all other factors are held constant, the electricity needed to run a processor chip goes up as the cube of the clock rate: doubling the speed brings an eightfold increase in power demand.”
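The cubic relationship in that quote is easy to see numerically; the baseline clock and wattage below are arbitrary illustrative values:

```python
# Dynamic power rising with the cube of clock rate, all else held constant:
# P(f) = P0 * (f / f0) ** 3
base_clock_ghz, base_power_w = 2.0, 100.0

for clock_ghz in (2.0, 3.0, 4.0):
    power_w = base_power_w * (clock_ghz / base_clock_ghz) ** 3
    print(f"{clock_ghz:.1f} GHz -> {power_w:6.1f} W")
# 2.0 GHz -> 100.0 W; 3.0 GHz -> 337.5 W; 4.0 GHz -> 800.0 W (2x speed, 8x power)
```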
Shrinking transistors and putting multiple cores on each chip (multicore) has helped boost the total number of operations per second since about 2005. However, there is of course a fundamental limit as to how small the feature sizes can be before reliability becomes untenable.
From an architecture perspective, systems have gone from custom-built hardware in the 1980s to vanilla off-the-shelf components through the 1990s and 2000s. Now there is a swing back to specialized technologies again. The first petaflop system, Roadrunner, used a hybrid design with CPUs working in tandem with specialized Cell BE coprocessors. Now most of the top supercomputers are based on a heterogeneous architecture, using some combination of CPUs and accelerators/coprocessors.
The challenges are not just on the hardware side. Hanspeter Pfister, the An Wang Professor of Computer Science and director of IACS, who was interviewed by Hayes, believes getting to exascale will require fundamentally new programming models. Pfister points out that LINPACK, the benchmark used to rate and rank machines, is the only program that runs at full speed; other software may harness only 10 percent of a system’s potential. There are also issues with operating systems, file systems and the middleware that connects databases and networks.
Pfister is also quite skeptical of the future of programming tools like MPI and CUDA. “We can’t be thinking about a billion cores in CUDA,” he says. “And when the next protocol emerges, I know in my heart it’s not going to be MPI. We’re beyond the human capacity for allocating and optimizing resources.”
Some believe that the only tenable solution to extreme-scale computing is getting the hardware and software folks in the same room. This approach, called “co-design” will help bridge the gap between what users want and what manufacturers can supply. The US Department of Energy has established three co-design centers to facilitate this kind of approach.
The US DOE originally intended to field an exascale machine sometime around 2018, but that timeline slipped due primarily to a lack of political will to fund the effort. Since then 2020 has been bandied about as a target, but that may also be overly optimistic. One data point in support of getting to exascale sooner rather than later is the need to conduct virtual nuclear testing in support of stockpile stewardship. This program alone, according to one expert interviewed for the piece, is enough to ensure that exascale machines are built. There are other applications that could also come to be regarded as critical for national security, for example climate modeling.
2.3.2 What is a one-way function?
A one-way function is a mathematical function that is significantly easier to compute in one direction (the forward direction) than in the opposite direction (the inverse direction). It might be possible, for example, to compute the function in the forward direction in seconds but to compute its inverse could take months or years, if at all possible. A trapdoor one-way function is a one-way function for which the inverse direction is easy given a certain piece of information (the trapdoor), but difficult otherwise.
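A classic candidate (not a proven one, as noted below) is modular exponentiation: computing y = g^x mod p is fast even for enormous exponents, while recovering x from y (the discrete logarithm) is believed hard for large p. The Python sketch below uses a deliberately small prime so that the brute-force inversion actually finishes:

```python
# Forward direction (easy): y = g**x mod p, via fast modular exponentiation.
p, g = 2_147_483_647, 5        # small demo prime (2**31 - 1); real systems use far larger p
x = 1_234_567                  # the secret exponent

y = pow(g, x, p)               # fast: milliseconds even for huge exponents

# Inverse direction (hard): brute-force discrete log, O(p) in the worst case.
def discrete_log(y, g, p):
    acc = 1
    for candidate in range(p):         # infeasible at real key sizes
        if acc == y:
            return candidate
        acc = (acc * g) % p
    return None

assert pow(g, discrete_log(y, g, p), p) == y   # noticeably slow even for this toy prime
```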
Public-key cryptosystems are based on (presumed) trapdoor one-way functions. The public key gives information about the particular instance of the function; the private key gives information about the trapdoor. Whoever knows the trapdoor can compute the function easily in both directions, but anyone lacking the trapdoor can only perform the function easily in the forward direction. The forward direction is used for encryption and signature verification; the inverse direction is used for decryption and signature generation.
In almost all public-key systems, the size of the key corresponds to the size of the inputs to the one-way function; the larger the key, the greater the difference between the efforts necessary to compute the function in the forward and inverse directions (for someone lacking the trapdoor). For a digital signature to be secure for years, for example, it is necessary to use a trapdoor one-way function with inputs large enough that someone without the trapdoor would need many years to compute the inverse function (that is, to generate a legitimate signature).
All practical public-key cryptosystems are based on functions that are believed to be one-way, but no function has been proven to be so. This means it is theoretically possible to discover algorithms that can compute the inverse direction easily without a trapdoor for some of the one-way functions; this development would render any cryptosystem based on these one-way functions insecure and useless. On the other hand, further research in theoretical computer science may result in concrete lower bounds on the difficulty of inverting certain functions; this would be a landmark event with significant positive ramifications for cryptography.
Nebula today unveiled Nebula One, the first enterprise cloud computer, built from the ground up to power big data, web, and mobile applications.
Nebula One is a turnkey private cloud system that provides compute, network and storage services through a simple self-service interface and popular APIs, using industry-standard servers from vendors such as HP, IBM, and Dell.
At the core of the Nebula One cloud system is the Nebula Cloud Controller, a hardware appliance that turns racks of certified industry-standard servers into a scalable on-premise infrastructure-as-a-service cloud system.
Nebula One runs Cosmos, Nebula’s distributed enterprise cloud operating system, which builds on OpenStack to provide a rich self-service user experience and compatibility with Amazon Web Services and OpenStack APIs.
While a single-rack deployment is enough for many medium-sized businesses, the Nebula One system can scale to multi-rack deployments to meet the needs of large enterprises.
PARC, the R&D lab that invented the computer mouse and the graphical user interface, has selected Nebula to power their private cloud infrastructure. “PARC researchers can now use and reuse the readily-available compute resources they need from the Nebula One cloud, provisioning in minutes what once took days to manually provision or months to procure,” said Walt Johnson, Vice President, Intelligent Systems Lab, PARC.
The Most Important Thing to Know about Tests
A certification test is like a piece of sophisticated equipment, such as a cell phone. We usually don’t know what’s inside the phone or what goes into producing it, but we like that it works. And we do have some basic understanding of its function, which helps. It’s the same with tests. Their purpose is clear, but how they are built and why are generally mysteries to those who take the tests, including IT certification candidates.
Because the tests have such profound effects on us, we need to understand them better. Not completely, of course. That would take more time and effort than it would be worth. But enough to get some basic questions answered. Here are a few of the more important questions:
- How is a passing score set? How can a single score separate people who are competent from those who aren’t?
- Why are there so many stupid questions in tests?
- Why aren’t the tests shorter? Or longer?
- Why does it cost so much to take a test?
- What percentage of people pass the test?
- Why are the tests mostly multiple-choice?
- Aren’t there better ways to test someone?
- How is the test scored?
- How is the time limit set?
- Why doesn’t the test cover all of the material I studied?
These are all good questions. And many others could be on the list. It’s a good idea to cover these questions in this column, which I’ll do from time to time. Today, however, there is space to answer only one of them, and I picked the last one on the list: Why doesn’t the test cover all the material I studied?
The answer to this has to do with the test’s design. So first I need to contrast the terms “test” and “test form.” If you register to take a test at a testing center, you are actually signing up to take one of several possible test forms. A test form for a typical IT certification exam contains questions sampled from the entire domain of skills. If you were able to gather them and list all of the questions from all of the test forms, you would see all of the domain’s topics represented. If all of the questions were part of only one big test form, it could be several hundred questions long and take you several hours to complete.
Programs divide the entire set of questions more or less randomly into test forms. The purpose is to build test forms of sufficient length to measure you well, at the same time covering a reasonable and representative sample of the domain of skills. And part of the test design makes sure that the forms are not so large that they take too much time and cost more to take.
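As a toy illustration of that random division, the Python sketch below deals an item pool into non-overlapping forms with the same number of questions per topic, which is one simple way to keep forms representative. The pool, topics and counts are invented:

```python
import random

# Invented pool: 90 items spread evenly over three topics.
pool = {f"Q{i:03d}": topic
        for i, topic in enumerate(["networking", "security", "storage"] * 30)}

def build_forms(pool, n_forms=3, per_topic=8, seed=42):
    rng = random.Random(seed)
    by_topic = {}
    for question, topic in pool.items():
        by_topic.setdefault(topic, []).append(question)
    forms = [[] for _ in range(n_forms)]
    for topic, questions in by_topic.items():
        rng.shuffle(questions)
        for f in range(n_forms):   # disjoint slices: no question appears on two forms
            forms[f].extend(questions[f * per_topic:(f + 1) * per_topic])
    return forms

for n, form in enumerate(build_forms(pool), 1):
    print(f"form {n}: {len(form)} items")   # 24 items each, balanced across topics
```

Real programs go further and statistically equate the forms so that scores are interchangeable, but the sampling idea is the same.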
One important additional point about test forms: They are built and validated to produce equivalent scores for any test taker. So it doesn’t matter which one is randomly selected for you.
The test taker—you in this case—should be unaware of the exact test form and therefore the exact questions to be presented that day. This will properly motivate you to study all of the material, complete the full training course, gather as much experience as possible, etc. Obviously, that’s not a bad thing for you or the certification program. And it’s a good thing for anyone who hires someone who passed the test and got certified.
To reinforce the point, if you were given the subset of topics covered in a specific test form you were scheduled to take, you would probably only prepare to answer questions on those topics. If every test taker did the same thing, there would be no consistency in the quality of those who are certified. And each person’s capability would be too narrow to be effective on the job.
To summarize, a test as published by an IT certification program contains all of the questions that cover the domain. The test forms (what you will see) contain reasonable and equivalent samples of that content.
Here’s another, somewhat dubious, advantage: Say you study all of the material you can, but you still fail the test. The good news is that you already have a head start on preparing for the retake.
David Foster, Ph.D., is president of Caveon (www.caveon.com) and is a member of the International Test Commission, as well as several measurement industry boards.
Michigan State University's Institute for Health Care Studies, in collaboration with the Michigan Department of Community Health, Tomorrow's Child, and the Michigan Public Health Institute has developed a Web-based Safe Sleep Course to provide individuals caring for pregnant women, infants and care-givers with education strategies and interventions to promote a consistent safe sleep environment.
Development of the course is one of many steps multiple state agencies are taking in response to a rise in the preventable infant deaths. Every year in Michigan, close to 50 infants, or one child every week, dies due to unsafe sleep practices. Risk factors identified by local Child Death Review Teams include:
By making training readily available to providers, more mothers and families will receive the consistent unified messages on how to keep their babies safe while sleeping. Registered nurses who successfully complete the Safe Sleep Course and submit an evaluation will receive .50 nursing continuing education contact hours through MSU, an approved provider of continuing education by the Michigan Nurses Association.
The Web-based Safe Sleep Course can be accessed for free by going to http://learning.mihealth.org.
5 modifications of the worm have already been detected
Since "Nimda" was discovered on September 18, 2001 Kaspersky Lab has detected 5 more modifications of this network worm. Some of them have already been seen "in-the-wild" but fortunately none of them has caused an epidemic compared to the original one. Kaspersky Lab recommends users to carefully read the descriptions of the recently discovered Nimda modifications and to download the latest KasperskyTM Anti-Virus database updates to prevent infection.
The original worm was discovered on September 18, 2001.
"Nimda" penetrates a computer in several different ways:
First of all, via e-mail: an infected e-mail in HTML format, containing several embedded objects, enters a target computer. Upon viewing the e-mail, one of the objects (named README.EXE, about 57 KB in size) is automatically executed unbeknownst to the user. In order to accomplish this, the worm exploits a breach in Internet Explorer's security that was first detected in March of this year.
Secondly, while surfing infected Web sites: in place of the original Web site, a user is shown its modified version containing a malicious Java program, which downloads and starts the "Nimda" copy on a remote computer, using the aforementioned breach.
Thirdly, via the local network: the worm scans all accessible network resources, dropping thousands of copies of itself here. This is done with the idea that upon finding the file on a disk or server, a user will single-handedly infect his/her own computer.
In addition to penetrating workstations, "Nimda" also carries out an attack on Web servers running under Microsoft Internet Information Server (IIS). To do this it exploits a breach in IIS called "Web Server Folder Traversal" as described in the corresponding Microsoft security bulletin.
A slightly modified version of the original "Nimda" worm, compressed with the PCShrink utility. The filenames "README.EXE" and "README.EML" are replaced with "PUTA!!.SCR" and "PUTA!!.EML".
This is exactly the original "Nimda" worm, compressed with the UPX compressor.
A slightly modified version of the original "Nimda" worm, compressed with the PECompact utility. The only difference from the original worm is that the "copyright" text strings are patched in this version with the following text: "HoloCaust Virus.! V.5.2 by Stephan Fernandez.Spain".
This is a recompiled "Nimda" variant with several subroutines fixed and optimized. It was found in the wild at the end of October 2001. The visible differences from the original worm version are:
The attached file name: SAMPLE.EXE (instead of README.EXE)
The DLL files are: HTTPODBC.DLL and COOL.DLL (instead of ADMIN.DLL)
The "copyright" text is replaced with:
Concept Virus(CV) V.6, Copyright(C)2001, (This's CV, No Nimda.)
A more detailed description of the worm is available in the Kaspersky Virus Encyclopedia.
Defense procedures thwarting all known modifications of "Nimda" have already been added to the Kaspersky Anti-Virus database update.
Universal XSS (UXSS) is a particular type of Cross-Site Scripting that has the ability to be triggered by exploiting flaws inside browsers, instead of leveraging the vulnerabilities against insecure web sites.
One of these UXSS flaws was disclosed earlier today on the Russian forum rdot. The flaw takes advantage of the Data URI scheme to execute script using the MIME type ‘text/html’, which makes the browser render it as a webpage.
So, how would an attacker exploit this fancy new bug?
The first trick here is to use the Data URI Scheme in combination with another (less dangerous) flaw called “Open Redirection” which happens when an attacker can use the webpage to redirect the user to any URI of his choice.
So if you don’t have one of these “Open Redirection” bugs on your website, you’re safe, right? Not so fast. There are websites whose entire purpose is to redirect: URI shorteners like bit.ly and tinyurl.com.
Here’s a proof-of-concept link on tinyurl: http://tinyurl.com/operauxss. If you open this link in Opera, you will find yourself looking at an alert box saying “tinyurl.com”.
Hang on, there’s more! The original author of the forum post, M_script, pointed out that you could take this one step further.
This is where the clever part of this vulnerability comes in play. If you embed a script in the payload that calls the method location.reload() in Opera, it will update the current domain to the original domain where the link was clicked.
This means that an attacker may execute script not only from the domain containing the open redirect, but also from all domains allowing links to other domains. Yes, you read that right.
Here’s a proof-of-concept link with the second stage of this vulnerability: http://tinyurl.com/operauxssstep2.
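For readers curious about the mechanics (the bug has long since been patched), here is a Python sketch of how the two pieces combine into a single link. The victim URL, parameter name and payload ordering are illustrative assumptions, not the exact proof-of-concept:

```python
from urllib.parse import quote

# Script delivered via a data: URI; in unpatched Opera, location.reload()
# re-bound execution to the domain the link was clicked from.
script = "<script>location.reload();alert(document.domain);</script>"
data_uri = "data:text/html," + quote(script)

# Hypothetical open redirect endpoint (URL shorteners behave the same way).
link = "http://example.com/redirect?url=" + quote(data_uri)
print(link)
```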
Other browsers block redirects to the Data URI scheme or change the domain where the script is executed from, avoiding the XSS issue.
What can you do to protect yourself against this bug?
If you don’t want to change browser, you can head over to Tools->Preferences->Advanced->Network and uncheck the checkbox labeled “Enable automatic redirection”.
Update: Opera has now released a patch for this problem. Update your Opera browser to version 12.10.
By: Mathias Karlsson
The Human Brain project officially begins this month with more than 130 research institutions from Europe and around the world and hundreds of scientists from a diverse range of fields. With €1.2 billion in funding from the EU, this is unquestionably the most ambitious neuroscience effort ever launched. Project backers anticipate that a deep understanding of how the human brain operates will open the way for tremendous advances in medical and information technologies.
On Monday, October 7, 2013, six months after the Human Brain project was selected by the EU as one of its FET Flagships, the project partners met at EPFL (Ecole polytechnique fédérale de Lausanne), the coordinating institution, to begin the first leg of this exciting journey. Gathering for this one-week intensive workshop are a multitude of domain experts, including neuroscientists, doctors, computer scientists and roboticists.
The initial project goal is to create six research platforms matched to the following specialities: neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing and neurorobotics.
During the next 30 months team members will build and test the platforms. They are scheduled for completion by 2016 at which point they will be turned over to Human Brain Project scientists as well as researchers from around the world – and that’s when the science begins. As with most preeminent academic and government computational resources and tools, these platforms will be available on a competitive basis.
In honor of this launch, the Brain Project has published several high-quality videos describing the different aspects of this endeavor.
In this video, called “Future Computing,” the narrator explains that simulating the complete human brain will require supercomputers 100 times more powerful than any that exist today. The project’s high-performance computing platform will offer simulation scientists unprecedented exascale capabilities and multi-scale technologies will enable each part of the brain to be modeled at the appropriate level of detail. An emphasis on interactive supercomputing will allow scientists to work with the new platforms – to visualize and engage with simulations – in the same way that they would utilize other lab devices and instruments.
“New high-performance computing technologies from the Human Brain Project will have an impact that goes far beyond brain research,” the narrator reports. “No engineered system can match the flexibility, resilience and energy-efficiency of the human brain. None can match its ability to effortlessly learn new tasks without programming.
“One of the Human Brain Project’s most important goals is to develop a completely new category of neuromorphic computing systems – chips, devices and systems – directly inspired by detailed models of the human brain. Neuromorphic computing has the potential to make an enormous impact in industry, transport services, health care and in our daily lives.”
The explosive growth in data coming out of experiments in cosmology, particle physics, bioinformatics and nuclear physics is pushing computational scientists to design novel software tools that will help users better access, manage and analyze that data on current and next-generation HPC architectures.
One increasingly popular approach is container-based computing, designed to support flexible, scalable computing. Linux containers, which are just now beginning to find their way into the HPC environment, allow an application to be packaged with its entire software stack, including portions of the base operating system files, user environment variables and application “entry points.”
With the growing adoption of container-based computing, new tools to create and manage containers have emerged — notably Docker, an open-source, automated container deployment service that allows users to package an application with all of its dependencies into a standardized unit for software development. Docker containers wrap up a piece of software in a complete filesystem that houses everything it needs to run, including code, runtime, system tools and system libraries. This guarantees that it will always operate the same, regardless of the environment in which it is running.
While Docker has taken the general computing world by storm in recent years, like container-based computing it has yet to be fully recognized in HPC. To facilitate the use of both of these tools in HPC, NERSC is enabling Docker-like container technology on its systems through a new, customized software package known as Shifter. Shifter, designed as a scalable method for deploying containers and user-defined images in an HPC environment, was developed at NERSC to improve flexibility and usability of its HPC systems for increasingly data-intensive workloads. It is initially being tested on NERSC’s Edison system — a Cray XC30 supercomputer — by users in particle physics and nuclear physics and will eventually be made available as an open source tool for the general HPC community.
Here, NERSC’s Doug Jacobsen, Shane Canon, Lisa Gerhardt and Deborah Bard — who have been instrumental in developing, deploying and testing Shifter — discuss how Shifter works and how it will help the scientific community better utilize resources at NERSC and other supercomputing facilities and increase their scientific productivity in the process.
Q. How is container-based computing changing the way applications are developed and deployed in HPC?
Canon: That remains to be seen, but my expectation is that, similar to the way it is having an impact in the enterprise space, it will carry over into scientific computing and HPC as well. One reason I think it will be powerful is because it is a productivity enhancer. It makes it easier for users to develop something locally on their laptop and push to another place. Another key factor is scientific reproducibility; being able to take that image and know that you can reliably instantiate it over and over or share it with others is really powerful. I think the enterprise world wants the same things but for different reasons — for them it is about compliance and stability, while for scientists it’s about reproducibility and verifiability. But it wouldn’t surprise me if, five years from now, the way people build and deliver software is through something like containers, that it becomes the predominant way people share their software, both in general and in HPC.
Q. With the growing adoption of container-based computing, Docker is gaining popularity in the HPC community as well. What motivated you to develop Shifter rather than modify Docker for use on HPC systems?
Jacobsen: Shifter is strongly focused on the needs and requirements of the HPC workload, which means it can deliver the functionality we are looking for while still meeting the overall performance requirements and constraints that a large-scale HPC platform imposes. Shifter allows the user to supply us with a Docker image that we can then convert to a form that is easier and more efficient for us to distribute to the compute nodes. We are leveraging the user interface that Docker makes available to people to create their software images and leveraging that ecosystem, but not directly using their software internally. Shifter is not a replacement for Docker functionality; it is specifically focused on the HPC use cases.
Canon: What Docker has done is develop a framework that makes it easy for people to create images and then publish those to something like Dockerhub, which makes it easy for them to share. So we are leveraging that and trying to preserve the things we think are most useful for scientists. We felt it was important for the implementation to make it easy for users to create a software environment and then instantiate that on our systems and also leverage what is happening in the Docker ecosystem so they can use some of the existing images out there or publish to Docker and share with other scientists but also easily run on NERSC systems. So we are preserving the best parts of Docker.
Q. How does Shifter work?
Jacobsen: Shifter works by converting user- or staff-generated images in Docker, virtual machines or CHOS (another method for delivering flexible environments) to a common format that provides a tunable point to allow images to be distributed on the Cray supercomputers at NERSC. Through the user interface, a user can select an image from their Dockerhub account or private Docker registry. An image manager at NERSC then automatically converts the image to the common format based on an automated analysis of the image. The image is then copied to the Lustre scratch filesystem and the user can begin submitting jobs — all of which run entirely within the container — specifying which image to use.
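In rough outline, that pipeline might look something like the Python sketch below. To be clear, this is an outside guess at the moving parts, using plain Docker commands to stand in for NERSC's internal image manager; all names and paths are invented:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def stage_image(docker_ref: str, scratch: Path) -> Path:
    """Pull a Docker image, flatten it, and copy it to the parallel filesystem."""
    workdir = Path(tempfile.mkdtemp())
    tarball = workdir / "image.tar"
    # 1. Pull the user's image and export a container as a single flat tarball.
    subprocess.run(["docker", "pull", docker_ref], check=True)
    cid = subprocess.run(["docker", "create", docker_ref], check=True,
                         capture_output=True, text=True).stdout.strip()
    with open(tarball, "wb") as out:
        subprocess.run(["docker", "export", cid], check=True, stdout=out)
    # 2. Copy the flattened image to scratch (Lustre), where compute nodes
    #    can reach it; jobs then name this image at submission time.
    dest = scratch / (docker_ref.replace("/", "_").replace(":", "_") + ".tar")
    shutil.copy(tarball, dest)
    return dest
```

The essential idea, converting the layered image once and distributing it via the parallel filesystem, underlies the startup and scalability benefits Jacobsen describes.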
Q. In addition to enabling user-defined images and automating the image conversion process, what other advantages does Shifter bring to NERSC users?
Jacobsen: What makes this software a big deal is that it is enabling science on our systems that has been inaccessible in the past. For example, for data-intensive users such as researchers with experimental apparatus who want to analyze data versus just-run simulations on our systems — their codes typically tend to be very different from the way most calculations on Edison run. They tend to have very large, complex software stacks with many different dependencies.
With Shifter, users can prepare a Docker container on their own system, bring it onto the Edison system through already constructed pipelines and it just works. Applications go from not working on Edison to immediately working. And as a critical side benefit, Shifter provides a lot of performance benefits to data-intensive codes that rely on many different dependencies because of the way the software shifts to the compute nodes. With Shifter, it is very performant to start them up and run them. Previously, we relied a lot on the centralized resources at NERSC to make that happen.
Bard: I’ve been working with Shifter since I came to NERSC, evaluating it for running simulations for the LSST (Large Synoptic Survey Telescope). One of the things I have learned is that you have root access when you use Docker, which is not good because you can accidentally screw things up. This is a barrier to running Docker in a lot of places (in HPC) that Shifter fixes, which is huge. People don’t want to deploy Docker because of security issues, but Shifter controls the external connections in a way that means it works at NERSC. And they are going to make it open source, which is brilliant. When you’re thinking about software for a large collaboration, for example, you want to be able to develop a software environment that people can run anywhere, and with Shifter you can run it safely anywhere.
I’ve also been learning about how to incorporate Shifter into workflow tools, and it is very easy, which is nice. I am particularly interested in how we are going to be supporting it from the users’ perspective, not just within large collaborations but for all users of Cori (NERSC’s next-generation Cray XC40). With Shifter, users will be able to get running very quickly on Cori.
Q. Are there certain applications or science domains in which Shifter will have greater impact, or is it designed to improve data management across the board?
Canon: I think it can work for a large range of applications. Initially it’s more important for some of the nontraditional and data-intensive areas because they are the ones that often come in with the most challenging software requirements, and trying to take those requirements and deploy their software on an HPC system can sometimes be very challenging. It’s the thing they stumble on right out of the gate. They don’t even get to the point where they are effectively running their applications because they can’t get all the prerequisite software requirements satisfied first.
What’s happened is that as datasets have gotten larger these communities’ computing demands have grown, so that they now have problems that are similar in scale to traditional HPC users. In the past maybe they could have gotten by just running something on a workstation or very small cluster, but now they have problems that have gotten big enough that they can take advantage of facilities like NERSC. But then when they tried to make that leap they would struggle. We may be able to work around the problems, but a lot of the time it is very time consuming and tedious. So now we believe we have a solution — Shifter — that will allow them to get past those problems more quickly, and the early results are very encouraging.
Q. Can you share some of those results and/or successes?
Jacobsen: There are two ways in which Shifter has already been successful. First, we have two major groups beginning to use it right now: the LCLS experimental facility at SLAC, and the high energy physics community at CERN. The LCLS, for example, has its own software environment, and it is rather challenging to take it out of its context and put it into our environment. LCLS spent a lot of time trying to adapt to Edison, which they eventually did, but it took them months to make a Docker image. Using Shifter, however, we were able to create a Docker image in one day and demonstrated that the staff effort in migrating applications is greatly reduced, which was the original purpose of Shifter: to make our system more adaptable to external software.
The other benefit is that, because of the way we present images to the Edison system in Shifter, it turns out that the software can load much faster than before. So in the case of the LCLS, before Shifter once an image had been ported to NERSC it could sometimes take up to half an hour just for the software to start. With Shifter, everything starts in a matter of seconds — somewhere between 5-20 seconds. So this results in much better utilization of our resources.
Gerhardt: With scientists from the Large Hadron Collider’s (LHC) ALICE, ATLAS and CMS experiments at CERN, we are testing Shifter in conjunction with their CVMFS (CERN Virtual Machine File System) software package. I’ve been working to bring the CVMFS software onto Edison, but it’s a huge software repository. For example, with ATLAS (one of two general-purpose detectors at the LHC), if we do a straight-up rsync over scratch, we’re working with more than 3.5 TB of data and 20 million inodes. We found that Shifter is a good tool for handling this because you can build a filesystem image on the local node and deliver startup times around half a minute on a single node. So with Shifter the jobs run as efficiently, if not more so, than they do in their current configurations and without the user having to jump through any special hoops. It just works.
Canon: From a big-picture perspective, Shifter is really about trying to enable and simplify the process of science. Scientists really struggle with the fact that they create some sort of code or simulation and it can be really difficult for another user to replicate the computing environment that was used. It’s just as challenging as it used to be for scientists to replicate experimental conditions. Shifter is potentially one way to address that challenge.
Q: One last question; are you going to make this available to other centers? How can people learn more about it?
Canon: Doug has already gone through the steps to open source it and release it through a BSD (Berkeley Software Distribution) license. The intent is that others can download it and use it at their centers. While that might take away from it being a unique capability for NERSC, we think in the end it is important for scientists because the more available this capability is for users, the more they will adopt it and make it a standard for how they operate.
Also, as we’ve been developing Shifter we’ve been discussing it with Cray, and hopefully this is something that will become a mainstream capability for Cray systems. The fact that they are working with us on this means they recognize the potential for it.
We also plan to offer demonstrations of Shifter in the Department of Energy booth at SC15 in November.
The idea of making payments using a mobile phone has long been in the heads of credit card companies, financial institutions, wireless carriers, merchants, and consumers. The McLean Report research notes, “Mobile Payments: Will That Be Cash, Credit, or Cell Phone?” and “Mobile Payments: Conquering the Western Frontier?” previously touched on this topic in 2004 and 2006 respectively. Mobile payments have been gaining popularity in countries like Japan and South Korea over the past few years, and trials are now underway in North America. It looks like the hype is finally starting to become reality, but it will be a few years yet before the technology is ubiquitous.
Mobile Payment Technology Overview
There are a couple of methods for making payments using a mobile device. One involves using Short Message Service (SMS) to exchange text messages to carry out a transaction – such as the PayPal and Amazon TextBuyIt mobile payment systems. The other method – and the focus of this research note – involves using Radio Frequency Identification (RFID) based Near Field Communication (NFC).
IT must know how thoroughly their source code has been tested. Without this knowledge there will be costly and embarrassing surprises when users run defective parts of the code that IT never executed before release. Code coverage analysis is one of the best ways to measure the amount of source code tested, since it measures how many of the program's constructs have been executed. IT can use code coverage to identify parts of the system that may warrant more rigorous testing and to allow testers to judge when the source code has been tested enough.
Testing Without Code Coverage Analysis
Where test measurement is done, the most common way to measure testing is to keep a list of scenarios in the form of use cases (based on requirements) that should work with the system, and then test the system with these scenarios. The use cases describe a main scenario, alternate scenarios and error scenarios that are useful for test planning. However, this approach can miss the underlying complexities of code that is being executed.
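To illustrate, consider a minimal example (in Python, with hypothetical names) where a scenario-based test passes yet leaves an entire branch unexecuted; a coverage run exposes the gap:

```python
def apply_discount(price, is_member):
    """Return the final price; members get 10 percent off."""
    if is_member:
        return round(price * 0.90, 2)   # this branch is never executed by the test below
    return price

def test_regular_customer():
    # The use case "regular customer pays list price" passes...
    assert apply_discount(100.0, is_member=False) == 100.0

# ...but a coverage run (for example, `coverage run -m pytest` followed by
# `coverage report`) would show the member branch as unexecuted, flagging code
# that scenario-based testing alone never touched.
```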
Earlier this week, the Florida Power & Light company (FPL) received approval from the Florida Public Service Commission (PSC) to begin construction of three solar energy centers that will make Florida the second largest supplier of utility-generated solar power in the nation.
Earlier this year, the governor signed into law a comprehensive energy bill which provided for the development of renewable energy, subject to PSC approval. At that time, FPL presented a proposal to the PSC for three solar energy centers that includes the world's largest photovoltaic solar array and the first "hybrid" energy center that will couple solar thermal technology with an existing natural gas combined-cycle generation unit.
DeSoto Next Generation Solar Energy Center
Planned for construction on FPL-owned property in DeSoto County, Fla., the DeSoto project will provide 25 megawatts of photovoltaic solar capacity, making it the world's largest photovoltaic solar facility. DeSoto is expected to be in service by December 2009.
Martin Next Generation Solar Energy Center
Planned for construction at FPL's existing Martin Plant site, the Martin project will provide up to 75 megawatts of solar thermal capacity in an innovative "hybrid" design that will connect to an existing combined-cycle power plant. It is the world's first project to integrate solar thermal steam generation into a combined-cycle steam turbine. When the power of the sun is being harnessed to produce electricity from steam, less natural gas is required. The Martin facility is expected to be on-line at the end of 2009 and completed by 2010.
Space Coast Next Generation Solar Energy Center
Planned for construction at the Kennedy Space Center, the Space Coast project will provide 10 megawatts of photovoltaic solar capacity in an innovative public-private partnership. Space Coast Solar will be operating by the first quarter of 2010.
These facilities will prevent the release of nearly 3.5 million tons of greenhouse gases over the life of the projects, FPL said in a press release. This is the equivalent of removing 25,000 cars from the road per year, according to the U.S. Environmental Protection Agency. In addition, photovoltaic solar systems, which convert sunlight directly to electricity, consume no fuel, use no water, and produce no waste. FPL's unique solar thermal design, which uses the power of the sun to produce electricity from steam, uses no fossil fuel, no additional cooling water and produces zero greenhouse gas emissions.
F codes provide many facilities for arithmetic, relational, logical, and concatenation operations. All operations are expressed in Reverse Polish notation and involve the use of a "stack" to manipulate the data.
There are three forms of the F code:
F Uses only the integer parts of stored numbers unless a scaling factor is included. If the JBCEMULATE environment variable is set to "ROS", the operands for "-", "/" and concatenate are used in the reverse order.
FS Uses only the integer parts of stored numbers.
FE Uses both the integer and fraction parts of stored numbers.
n is a number from 1 to 9 used to convert a stored value to a scaled integer. The stored value's explicit or implied decimal point is moved n digits to the right, with zeros added if necessary. Only the integer portion of this operation is returned.
elem is any valid operator. See later.
F codes use the Reverse Polish notation system. Reverse Polish is a postfix notation system where the operator follows the operands. The expression for adding two elements is "a b + ". (The usual algebraic system is an infix notation where the operator is placed between the operands, for example, "a + b").
The F code has operators to push operands on the stack. Other operators perform arithmetic, relational, and logical operations on stack elements. There are also concatenation and string operators.
Operands that are pushed on the stack may be constants, field values, system parameters (such as data and time), or counters (such as record counters).
F codes work with a pushdown stack.
Note: All possible F correlative operators push values onto the stack, perform arithmetic and other operations on the stack entries, and pop values off the stack.
The term "push" is used to indicate the placing of an entry (a value) onto the top of the stack so that existing entries are pushed down one level. "Pop" means to remove an entry from the top of the stack so that existing entries pop up by one level. Arithmetic functions typically begin by pushing two or more entries onto the stack. Each operation then pops the top two entries, and pushes the result back onto the top of the stack. After any operation is complete, the result will always be contained in entry 1.
ORDER OF OPERATION
F code operations are typically expressed as "F;stack2;stack1;operation".
Under most emulations, this will be evaluated as stack2 operation stack1; that is, the first operand pushed becomes the left-hand operand.
Note that the FE and FS codes are evaluated in the same way for all emulations.
Input conversion is not allowed.
Push a value of 3 onto the stack. Push a value of 5 onto the stack.
Take entry 1 from entry 2 (3 - 5) and push the result (-2) back onto the stack as entry 1. ROS emulations will subtract 3 from 5 and return a result of 2.
Push a value of 3 onto the stack. Push a value of 5 onto the stack. Take entry 2 from entry 1 (3 - 5) and push the result (-2) back onto the stack. This works in the same way for all emulations.
Push a value of 2 onto the stack. Push a value of 11 onto the stack. Push a value of 3 onto the stack. Subtract entry 1 from entry 2 (11 - 3) and push the result (8) back onto the stack. Now divide entry 2 by entry 1 (2 divided by 8) and push the result (0) back onto the stack.
Under ROS emulation, this would evaluate as 3 - 11 = -8, followed by -8 / 2 = -4.
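The stack behavior can be modeled in a few lines of ordinary code. The following sketch (in Python, purely illustrative and not part of jBASE) evaluates the same example using the standard, non-ROS order of operation:

```python
def eval_f_code(tokens):
    """Evaluate a simplified F code expression given as a list of tokens.
    Numbers are pushed; '-' and '/' pop entries 1 and 2 and push the result
    of (entry 2 operator entry 1), using integer arithmetic like the plain F code."""
    stack = []
    for tok in tokens:
        if tok in ('-', '/'):
            entry1, entry2 = stack.pop(), stack.pop()
            result = entry2 - entry1 if tok == '-' else int(entry2 / entry1)
            stack.append(result)
        else:
            stack.append(int(tok))
    return stack[0]

# The example above: push 2, push 11, push 3, subtract, then divide.
print(eval_f_code(['2', '11', '3', '-', '/']))   # 11 - 3 = 8, then 2 / 8 = 0
```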
A push operator always pushes a single entry onto the stack. Existing entries are pushed down one position. The push operators are listed below.
"literal" is a literal value: any text string enclosed in double or single quotes.
field-number is the value of the field from the current record.
R specifies that the last non-null value obtained from this field is to be applied repeatedly for each multivalue that does not exist in a corresponding part of the calculation.
RR specifies that the last non-null value obtained from this field is to be applied repeatedly for each subvalue that does not exist in a corresponding part of the calculation.
(format-code) is one or more format codes (separated by value marks) enclosed in parentheses. Applied to the value before it is pushed onto the stack.
The arithmetic F code operators work on just the top stack entry or the top two stack entries. They are:
Miscellaneous operators control formatting, exchanging stack entries, popping the top entry, concatenation, and string extraction. They are:
Relational operators compare stack entries. The result is pushed onto stack entry 1 and is either 1 (true) or 0 (false). Relational operators are:
Logical operators include a logical AND test and a logical inclusive-OR test.
Logical operators are:
& AND stack entries 1 and 2. If both entries contain non zero, a 1 is pushed onto stack entry 1, otherwise, a 0 is pushed.
! OR stack entries 1 and 2. If either of the entries contains non zero, a 1 is pushed onto stack entry 1; otherwise, a 0 is pushed.
A powerful feature of F and FS code operations is their ability to manipulate multivalues. Individual multivalues can be processed, one by one, or you can use the R (or RR) modifier after a field number, to repeat a value and thus combine it with each of a series of multivalues. Field operands may be valued and subvalued. When mathematical operations are performed on two multivalued lists (vectors), the result is also multivalued. The result has an many values as the longer of the two lists. Zeros are substituted for the null values in the shorter list if the R option is not specified.
To repeat a value for combination with multivalues, follow the field number with the R operator. To repeat a value for combination with multiple subvalues, follow the FMC with the RR operator.
Format codes can be used in three ways. One transforms the final result of the F code, another transforms the content of a field before it is pushed on the stack, and the third transforms the top entry on the stack.
f-code is a complete F Code expression.
field-number is the field number in the record from which the data is to be retrieved.
format-code is any valid format codes.
] represents a value mark (ctrl ]) that must be used to separate each format-code.
To process a field before it is pushed on the stack, follow the FMC with the format codes enclosed in parentheses.
To process the top entry on the stack, specify the format codes within parentheses as an operation by itself.
To specify more than one format code in one operation, separate the codes with the value mark, (ctrl ]).
All format codes will convert values from an internal format to an output format.
Obtain the value of field 2. Apply an MD2 format code. Then apply a group extract to acquire the integer portion of the formatted value, and push the result onto the stack. Subtract 100 from the field 2 formatted, group-extracted value. Return this value. Note that under ROS emulation, the value returned would be the result of subtracting the integer value from the group extract, from 100. In other words:
100 - OCONV(OCONV(Field2, "MD2"), "G0.1" ).
SQL injections are one of the most dangerous attacks used against web applications. In 2010, they were ranked in the top spot for the OWASP Top Ten and second in the CWE/SANS Most Dangerous Programming Errors.
The reasons SQL injections are so dangerous are:
The following walkthrough will take you through a simple SQL injection attack to see just how this works. This is not meant to be a “hacking” tutorial. It is simply a way to show you just how easily a web site can be compromised using this technique.
While this walkthrough shows you exactly how the attack works, you can also try it for yourself by downloading HacMe Casino from Foundstone. HacMe Casino is a web site that was built with multiple vulnerabilities left in it.
The application presents a login section, and this is an ideal entry point for this type of attack because the authentication process of most web applications checks the credentials against a database that makes use of SQL.
In order to properly execute this attack, we need to trick the database into believing that what we enter is true. For example, a user could enter the Login John_Doe and the Password winbig into the text boxes and click Login. These credentials are checked against the database with a query along the lines of SELECT * FROM users WHERE (username='John_Doe' AND password='winbig').
The result is an unsuccessful login because the statement was not TRUE: no row exists where the Login equals John_Doe and the Password equals winbig. We could keep trying to brute force our way in, but even with a tool to automate this, we could be waiting for a long time. Instead, what if we entered something we knew to be true? Take, for instance, the statement 1=1. That is a true statement, but on its own it is not going to work; we need to enter a valid SQL string, so let's use ') OR 1=1-- instead. By entering this as the value for the Login, and leaving the Password value blank, the result is quite different.
We are now successfully logged in as Andy Aces. And why Andy? Because he had the unfortunate luck of being the first entry in the users table of the database, and the query SELECT * FROM users WHERE (username='') OR 1=1--' AND password='') returns just that.
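The root cause is how the query string is assembled. Below is a minimal sketch (using Python and SQLite as stand-ins; the table and column names are assumptions) contrasting the vulnerable pattern with a parameterized query that defeats the injection:

```python
import sqlite3

def login_vulnerable(conn, username, password):
    # User input is concatenated directly into the SQL text, so it can change the query.
    query = ("SELECT * FROM users WHERE (username='" + username +
             "' AND password='" + password + "')")
    return conn.execute(query).fetchone()

def login_safe(conn, username, password):
    # Parameterized query: the input is bound as data and is never parsed as SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone()

# With username = "') OR 1=1--" and an empty password, the vulnerable version runs
#   SELECT * FROM users WHERE (username='') OR 1=1--' AND password='')
# which is always true and returns the first row in the table. The safe version
# simply looks for a user whose name is literally "') OR 1=1--" and finds nothing.
```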
Another way this can be put in place is to look at the URL when successfully logged into a site. For instance, an attacker can create a legitimate account on a shopping site. When they login, he notices that the URL reads: http://onlinestore.com/accountView?id=Gf4b3sO8550. Changing the id to the string used earlier, ') OR 1=1--, could cause the server to display a page with all the account records, not only the attacker’s.
The examples listed above represent a SQL injection vulnerability in its simplest form. Most attacks launched against web sites involve much more sophistication, but that doesn't mean something as simple as a=a won't work.
Seeing just how easy this type of attack is to perform makes it clear why it is something that needs to be taken seriously and defended against.
On July 9th, 2008 a massive effort was made among software and hardware vendors to release a simultaneous patch to their products. This patch was created to mitigate or minimize the effects of a vulnerability discovered in the basic operation of the Internet Domain Name System or DNS. This subsystem is critical to the operation of the Internet and provides for the translation of human readable names into computer usable IP addresses.
Vendors of most major operating systems and network hardware participated in the effort. Each vendor also released their own advisories and patches using their existing patch processes. The US CERT also played a major role in coordinating the release and advises all organizations to test the patches from the vendors and get them applied as soon as possible. While no known malicious activity exists as of the time of this writing, it is largely known that attackers are assembling the details that have been made public and are attempting to recreate the vulnerability and exploitation techniques that the initial researcher discovered.
Download the paper in PDF format here.
It is expected that, in the near future, individual customers will commonly require symmetric bandwidth for Bi-Directional (BiDi) signals in optical transport networks, access networks, wireless backhaul networks, and private transmission networks. Network operators have to meet this need while making every effort to save on CAPEX and OPEX. The single wavelength BiDi transmission technology offers a unique solution that meets these apparently conflicting goals at the same time, particularly in access networks such as FTTx and in wireless backhaul networks between a base station and an antenna tower or a Remote Radio Head (RRH), compared with the two-wavelength BiDi transmission and the duplex transmission currently in use. This article presents the pros and cons of the competing technologies, the operating principles of the single wavelength transmission technology and its applications, and Fiberstore's BiDi transmission products.
A full duplex transmission technology uses a pair of fibers for simultaneous communication in both directions. For example, in a P2P access network, one fiber carries the downstream signal from the CO to the subscriber and the other carries the upstream signal from the subscriber to the CO. The optical transceivers at the two ends of a transmission link can be identical if one wavelength is used for both directions. However, the CAPEX and OPEX are much higher, due to the cost of two fibers and their installation, compared with the other BiDi technologies described below, which use a single fiber. This technology can be used in Wavelength Division Multiplexing (WDM) communication as well as in P2P communication.
A two-wavelength BiDi transmission system uses one fiber, but two wavelengths, for simultaneous communication in both directions. These wavelengths are widely separated from each other. For example, in a P2P access network, the downstream signal from the CO to a subscriber is at 1550 nm and the upstream signal from a subscriber to the CO is at 1310 nm. The fact that a different signal wavelength must be used in each direction of transmission imposes two disadvantages on network operators: they must stock and manage two different transceiver types, and there is always a chance that the wrong type of transceiver will be installed.
A single wavelength BiDi transmission system, on the other hand, uses one fiber and one wavelength for simultaneous communication in both directions. For example, in a P2P access network, the wavelength can be 1550 nm (or 1310 nm) for both downstream and upstream signals. This reduces CAPEX and OPEX for network operators, since they need to deploy only one kind of optical transceiver at 1550 nm (or 1310 nm). It also guarantees foolproof installation of transceivers, without any confusion, since all the transceivers are identical and there is only one fiber. In a WDM BiDi system, this is the only viable approach for providing each channel a fully bi-directionally dedicated (or symmetric) bandwidth. This technology may face crosstalk between upstream and downstream signals and an interferometric beat noise, both coming from reflections at the interface between a transceiver and a channel link fiber with PC (or UPC) type connectors, which may impose a limit on the maximum allowable channel loss, or in other words, the maximum transmission distance. These reflections, however, can be mitigated by using APC type connectors.
Here is a table that summarizes pros and cons of various BiDi transmission technologies. The single wavelength BiDi clearly shows its own unique advantages over two other competing technologies, two-wavelength BiDi and Duplex.
| | Single Wavelength BiDi | Two-Wavelength BiDi | Duplex |
| --- | --- | --- | --- |
| Transmission Distance Limited By | Return Loss / Allowable Channel Loss | Allowable Channel Loss | Allowable Channel Loss |
| Number of Fibers (P2P) | 1 | 1 | 2 |
| Minimum Number of Transceiver Types | 1 | 2 | 1 |
| Foolproof Installation of Transceiver | Yes | No | No |

Notes:
- If PC or UPC type connectors are used, the transmission distance may be limited by return loss. If APC type connectors are used, the transmission distance is limited mainly by allowable channel loss.
- There is always a chance that the wrong type of transceiver will be installed if more than one type is available.
- Each duplex transceiver has two optical receptacles, one for the Tx and the other for the Rx. There is always a chance that the Tx at the CO is connected to the fiber carrying the upstream signal from the subscriber.
- A TDM for one direction (e.g. upstream) is necessary.
- CAPEX and OPEX are high due to two pairs of optical MUX and DEMUX for a link.
The single wavelength BiDi transmission technology allows simultaneous communication in both directions over a single fiber at almost the same wavelength. A simple example of such a transmission system is a P2P optical communication system composed of an OutSide Plant (OSP) fiber link over a single fiber as the medium of transmission and identical transceivers at both ends of the fiber link. The signal wavelengths from the two transceivers, the downstream signal from Tx 1 and the upstream signal from Tx 2, are very close to each other, which explains why this approach is named "single wavelength BiDi transmission".
The single wavelength BiDi transmission technology finds its applications broadly in the optical transport networks, access network s such as FTTx networks, wireless backhaul networks, and private transmission networks even though the transmission distance may be limited since most deployed optical transmission networks are equipped with PC type connectors and might have finite reflections. However, it may be still very attractive for P2P and WDM transmission systems with the distance up to 20 km because of its unique advantages over other technologies. Furthermore, the transmission distance can be extended much longer up to 120 km once the reflection is minimized using APC connectors.
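The distance limits quoted above come down to a simple optical power budget. The sketch below (in Python, with assumed example figures that are not specifications of any particular product) shows how reflections at PC/UPC connectors eat into the budget and how APC connectors give the distance back:

```python
def max_reach_km(tx_power_dbm, rx_sensitivity_dbm, connector_loss_db,
                 reflection_penalty_db, fiber_loss_db_per_km, margin_db=3.0):
    """Estimate the reach of a BiDi link from its optical power budget."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    usable_db = budget_db - connector_loss_db - reflection_penalty_db - margin_db
    return usable_db / fiber_loss_db_per_km

# Assumed values: 0 dBm launch power, -24 dBm receiver sensitivity, 1 dB connector
# loss, 0.25 dB/km attenuation at 1550 nm, and a 3 dB design margin.
print(max_reach_km(0, -24, 1.0, reflection_penalty_db=6.0, fiber_loss_db_per_km=0.25))  # PC/UPC: ~56 km
print(max_reach_km(0, -24, 1.0, reflection_penalty_db=0.5, fiber_loss_db_per_km=0.25))  # APC:    ~78 km
```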
In the WDM BiDi transmission application, this single wavelength BiDi transmission is only a viable approach for providing each channel a fully bi-directionally dedicated (or symmetric) bandwidth. The two-wavelength BiDi transmission technology cannot allocate for each channel a fully dedicated bandwidth in both directions simultaneously since all the subscribers must share a common wavelength in one direction, e.g., 1310 nm in upstream with the TDM technology.
The single wavelength BiDi transmission technology is also well poised to support wireless backhaul networks, such as links between a CO and a base station, between a base station and an RRH connected through an optical WDM BiDi system, between a base station and many picocells along the streets in metropolitan areas, and between a base station and antennas on a tower.
The single wavelength bi-directional transmission can be very cost-effective for a P2P system with link length up to 20 km and for a WDM system with link length up to 120 km. The transmission distance can be extended much longer once the reflection is minimized using APC connectors. This technology will be the only viable solution for WDM BiDi systems when each channel needs a fully bi-directionally dedicated bandwidth. In particular, it is well poised for wireless backhaul networks to meet the ever increasing demand for bandwidth and volume of traffic.
One of the more recent evolutions in network security has been the movement away from protecting the perimeter of the network toward protecting data at the source. The reason behind this change is that perimeter security no longer works in today's environment. Today, more than just your employees need access to data; partners and customers must have access to it as well, meaning that your database cannot simply be hidden behind a firewall.
Of course, as your databases become more exposed to Internet, it is imperative that you properly secure them from the vulnerabilities of the outside world. Securing your databases involves not only establishing a strong policy, but also establishing adequate access controls. In this paper, we will cover various ways databases are attacked, and how to prevent them from being “hacked”.
Download the paper in PDF format here.
To learn more about database security, take a look at the “Anatomy of a Database Attack” webcast by Application Security.
Before the invention of nylon cable ties, early methods of securing wiring were disorganized. Manufacturers had to rely on friction tape and on hand wrapping with lacing cords and twines. Although these methods seemed secure at first, adhesive tape was at risk of peeling off once it dried, and lacing cords put the insulation of wires at risk of being cut through.
Therefore, there was a need for better products that would not only hold wires together but also protect them from damage during use.
The inventor of the cable tie, Thomas & Betts, was established in 1898 by two engineers, Robert Thomas and Hobart Betts, after they graduated from Princeton University. The company's purpose had always been to educate end users about its products, and it has been very successful in this respect.
The company's strategy was to generate customer pull while keeping distributors' shelves stocked with T&B products. In 1930, the company began pursuing mutually beneficial commercial partnerships with prominent dealers in order to reach its target audience.
Cable ties were first invented in 1958 by an electrical company known as Thomas & Betts. Designed for aeroplane wiring harnesses, they were introduced under the brand name 'Ty-Rap' and were patented in the same year. The main difference from today's product was in the manufacturing: at the beginning, the ratchet was made not of nylon but of metal. The first cable ties had a steel pawl attached, which made the production process more time consuming and relatively inefficient.
The result was the need for two separate manufacturing processes: first molding the tie, and then inserting the metal pawl to complete a single tie. In addition, the metal pawl was vulnerable to damage, could come loose in the process, or could cause damage if it fell onto a circuit.
Progress in industrial production paved the way for fine tuning a complete cable tie. The new design was self-locking and consisted of two components. Although it offered an innovative design, fine adjustment, and reduced installation time, it still required the time-consuming two-step manufacturing process. As time passed, however, the cable tie industry saw further development, leading to the self-locking nylon cable tie. Later came a period of modifying the design of cable ties, and the manufacturing process was further enhanced to produce what became known as a 'wire bundling device'.
In 1968, a manufacturer patented a unique one-piece design and became the first company in the United States to produce a one-piece nylon tie. The production process was also far less time consuming than the earlier setup, ushering in the era of the varied applications and colors of cable ties that we see today.
Buy cable management products online at FiberStore.com
I guess this really is NASA's bag, baby! Early this year I wrote about the space agency's nascent plan to send a robot spacecraft out to retrieve a small asteroid and place it into lunar orbit so we Earthlings can study it at our leisure without getting radiation all over us. The really odd thing about the plan is that NASA imagines hauling the asteroid back toward Earth not with a tractor beam or something else appropriately high-tech, but with a lowly bag. C'mon, space dudes, you're not going to the mall! But the top space dude at NASA's Jet Propulsion Laboratory recently told the Washington Post that the bag plan ... Just. Might. Work. “It’s not as crazy as it seemed at the beginning,” said Charles Elachi, who runs the JPL and who probably doesn't have the time or budget for crazy. NASA's plan, as explained in the Post, goes like this:
The mission, which could cost upward of $2 billion, would use a robotic spacecraft to snag the small rock and haul it into a stable orbit around the moon. Then astronauts would blast off in a new space capsule atop a new jumbo rocket, fly toward the moon, go into lunar orbit, and rendezvous with the robotic spacecraft and the captured rock. They’d put on spacewalking suits, clamber out of the capsule and examine the rock in its bag, taking samples. This would ideally happen, NASA has said, in 2021.
However, Republicans in Congress are putting up resistance to the project, and NASA itself faces hard choices due to budget constraints, so there's a good chance the asteroid-bagging plan may never lift off. Unless, of course, a corporate sponsor steps up. If you're the CEO of Hefty or Glad, wouldn't you want your company's logo on the first space bag? Just think about it.
Earlier this week, Forbes released an article titled “Insider Trading on the Dark Web”. BrightPlanet was mentioned within the article, and BrightPlanet was introduced as a company that collects content from what is called the Dark Web. While we appreciate being mentioned in Forbes, there are a few definitions we want to clear up for readers.
With recent emphasis being on the Silk Road shutdown by the media, we’ve found a significant misunderstanding of the terms Surface Web, Deep Web, and Dark Web. We hope that this posting can help guide you through these often confused terms and get a better understanding of how the web works. You’ll understand that Forbes’ definition of Dark Web content was indeed inaccurate. Let’s get started.
Starting with the Surface
To start on our journey of the different aspects of the web, we’ll begin with the surface; the parts you’re most familiar with. The Surface Web is anything that can be indexed by a typical search engine like Google, Bing or Yahoo. Google has a great interactive story explaining how they index and search the web in depth.
To help you understand how search engines work, I want you to open a traditional news or blog site (CNN, BBC, etc.) and begin clicking different links to new article pages. Once you have finished doing that, come back to the blog posting.
If you're done clicking links, you've just behaved the way search engines' crawling technology behaves when it finds and identifies websites. Search engines rely on pages that contain links to find and identify content. You'll find that this is a great way to discover the new content on the web that most people generally care about (blogs, news, etc.). But this technique of navigating links also misses a lot of content. Let's go a little deeper to find out exactly what type of content is missed.
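Before we do, here is a bare-bones illustration in Python (using the requests and BeautifulSoup libraries) of what that link-following crawl looks like; anything reachable only through a search box simply never shows up:

```python
from collections import deque
import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=50):
    """Discover pages the way a search engine does: by following links only."""
    seen, queue = {seed_url}, deque([seed_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = requests.compat.urljoin(url, anchor["href"])
            if link not in seen:
                seen.add(link)
                queue.append(link)   # pages behind search boxes never end up here
    return seen
```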
Moving a Little Deeper
From a purist’s definition standpoint, the Surface Web is anything that a search engine can find while the Deep Web is anything that a search engine can’t find. The Forbes article that we mentioned previously used BrightPlanet’s definition for the Deep Web as the definition for the Dark Web. There are a number of reasons that a search engine can’t find data on the web, today we plan on covering the most common one.
Remember how we had you open up a web page and crawl links? Now I want you to stop and open up a different web page, let’s use the travel site Hotwire this time. I have a challenge for you – I want you to attempt to find the price of a hotel in Sioux Falls, S.D. (BrightPlanet’s headquarters) from April 10 to 12 (Sioux Falls is still cold in April). But wait, there’s a catch, you can only interact with the site like a standard search engine would – meaning, you can only click links to get there.
There’s a nice search box that Hotwire allows users to fill out, but you can’t use it. Search engines don’t use search boxes, they just use links. You’ll quickly find that you can’t find the search results you are looking for without a search box. The results of a Hotwire search are perfect examples of Deep Web content.
Other examples of Deep Web content can be found almost anytime you navigate away from Google and do a search directly in a website – government databases and libraries contain huge amounts of Deep Web data. Here’s a few other examples:
Google search can’t find the pages behind these website search boxes. Most of the content located in the Deep Web exists in these websites that require a search and is not illicit and scary like the media portrays. However, if you go a little deeper in the Internet you’ll find the Dark Web.
Getting a Little Darker
Continuing with our definitions, we’ve learned that the Surface Web is anything that a search engine can access and the Deep Web is anything that a search engine can’t access. The Dark Web then is classified as a small portion of the Deep Web that has been intentionally hidden and is inaccessible through standard web browsers.
The most famous content that resides on the Dark Web is found in the TOR network. The TOR network is an anonymous network that can only be accessed with a special web browser, called the TOR browser. This is the portion of the Internet most widely known for illicit activities because of the anonymity associated with the TOR network.
The key thing to keep in mind is the Dark Web is a small portion of the Deep Web. Some media is inaccurately defining both and we want to do our best to clear up the confusion.
Want to learn more about the Deep Web? Download our whitepaper on Understanding the Deep Web in 10 Minutes which includes some of the information you just read and builds on it.
At BrightPlanet, we help customers find the data they want on the Deep Web, harvest it and make it usable. The buzzword Big Data is permeating every industry and we provide data-as-a-service to help organizations harness and use Big Data from the web.
Learn more about our Data-as-a-Service here.
- Users have too many passwords:
  - On different systems,
  - with different policies,
  - expiring at different times.
- Complexity leads users to do bad things:
  - Write down passwords ("sticky notes").
  - Forget/lock out passwords and call the help desk.
  - Reuse old passwords.
- Password synchronization pushes password updates from one system to another:
  - Multiple physical passwords.
  - Same value everywhere.
- Password synchronization allows users to:
  - Remember a single password value.
  - Manage it on a single schedule.
  - Comply with a single password policy.
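A sketch of the synchronization idea (illustrative Python; the target-system API shown here is hypothetical):

```python
import re

# One unified policy: at least 12 characters with upper case, lower case and a digit.
POLICY = re.compile(r"^(?=.*[A-Z])(?=.*[a-z])(?=.*\d).{12,}$")

def on_password_change(user, new_password, target_systems):
    """Validate once against a single policy, then push the same value everywhere."""
    if not POLICY.match(new_password):
        raise ValueError("New password does not meet the unified policy")
    results = {}
    for system in target_systems:                    # e.g. AD, LDAP, mainframe, SaaS apps
        results[system.name] = system.set_password(user, new_password)  # hypothetical API
    return results
```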
Energy & Climate Change Strategy
EMC’s primary GHG emissions arise from the generation of the electricity needed to run our business—including our supply chain—and power our products. Therefore, our energy and climate change strategy focuses on the following key areas:
- Reducing emissions from our own operations by:
  - Decreasing the demand for energy
  - Maintaining a highly efficient infrastructure
  - Optimizing logistics routes and modes to decrease carbon intensity and footprint
  - Designing and operating our data centers and facilities for energy efficiency
  - Identifying opportunities to adopt renewable energy sources that are economically and environmentally sound
- Reducing emissions in our supply chain by:
  - Engaging suppliers in measuring and reporting
  - Collaborating with suppliers in taking measures to reduce emissions
  - Working with the IT industry to develop standards for reporting supply chain emissions
- Helping customers reduce emissions by:
  - Supplying energy-efficient products
  - Developing innovative approaches to manage the exponential growth of data in their operations
  - Delivering services to help customers implement the most energy-efficient solutions for their businesses
  - Supplying information solutions to optimize business functions, accelerate research, leverage data assets, and enhance public infrastructure
American Business Act on Climate Pledge
In 2015, EMC joined the American Business Act on Climate Pledge to encourage global leaders to reach an international climate change agreement at COP 21 in December 2015. In addition to EMC’s commitment (see below), Chief Sustainability Officer Kathrin Winkler participated in a White House panel on climate change as part of the announcement.
In support of our goal to achieve 80% absolute reduction in greenhouse gas emissions by 2050 in accordance with the 2007 Bali Climate Declaration, EMC Corporation pledges to:
- Realize a 40 percent absolute reduction of global Scope 1 and 2 GHG emissions below 2010 levels by 2020
- Obtain at least 20 percent of global grid electricity needs from renewable sources by 2020
- Have all hardware and software products achieve increased efficiency in each subsequent version by 2020
- Reduce energy intensity of storage products 60 percent at a given raw capacity and 80 percent for computational tasks from 2013 to 2020
In 2015, EMC continued our engagement with the Renewable Energy Buyers’ Alliance (formerly the Corporate Renewable Buyers’ Partnership) sponsored by the World Wildlife Fund, World Resources Institute, Rocky Mountain Institute, and Business for Social Responsibility. In addition, EMC joined the Business Renewables Center, an initiative of the Rocky Mountain Institute, to learn from and share best practices with other corporate purchasers of renewable energy.
Climate Change Policy Statement
Our Goals and Performance
We began measuring our GHG emissions in 2005. Since then, our energy intensity by revenue – the amount of global GHG we emit per $1 million we earn – has declined by more than 46 percent, from 32.99 to 17.55 metric tons. In 2015, EMC purchased 40,000 MWh of Green-e® Energy certified Renewable Energy Certificates (RECs) which helped us achieve our 2015 target.
- 40 percent reduction of global Scopes 1 and 2 GHG emissions per revenue intensity below 2005 levels by 2015. Achieved in 2012, 2013, 2014, and 2015
- 20 percent of global electricity needs served by renewable sources by 2020 (excluding VMware)
- 40 percent absolute reduction of global Scopes 1 and 2 GHG emissions below 2010 levels by 2020 (excluding VMware)
- 50 percent of global electricity needs to be obtained from renewable sources by 2040 (excluding VMware)
- 80 percent absolute reduction of global Scopes 1 and 2 GHG emissions below 2000 levels by 2050 (excluding VMware)
Determining Our Goals
To set our long-term goals, we began with the imperative to achieve an absolute reduction of at least 80 percent by 2050 in accordance with the Intergovernmental Panel on Climate Change’s (IPCC’s) Fourth Assessment Report recommendations. We then modeled various reduction trajectories; our goal was to identify a solution that would be elastic enough to adjust to changes in our business, while achieving a peak in absolute emissions by 2015, in accordance with recommendations from the 2007 Bali Climate Declaration.
Our model was based on the Corporate Finance Approach to Climate-stabilizing Targets (C-FACT) proposal presented by Autodesk in 2009. The model calculates the annual percentage reduction in intensity required to achieve an absolute goal. We selected this approach because intensity targets better accommodate growth through acquisitions (in which net emissions have not changed but accountability for them has shifted), and aligns business performance with emissions reductions performance rather than forcing tradeoffs between them. Setting an intensity trajectory also drives investment beyond one-time reductions to those that can be sustained into the future.
The C-FACT system, however, is “front-loaded” as it requires a declining absolute reduction in intensity each year. EMC developed a variant of the model that requires reductions to be more aggressive than the previous year. This makes better economic sense for the company as it takes advantage of the learning curve for alternative fuels as they become more efficient and cost effective. Please see the “Trajectory Diagram” in this section for more information.
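A rough sketch of the intensity-target arithmetic behind this kind of model (illustrative Python with made-up numbers, not EMC's actual model or targets) shows why absolute emissions still fall as long as intensity declines faster than revenue grows:

```python
def intensity_trajectory(base_emissions_t, base_revenue_musd, revenue_growth,
                         annual_intensity_cut, years):
    """Project absolute emissions when the target is a yearly cut in
    emissions per unit of revenue rather than a cut in absolute tonnes."""
    intensity = base_emissions_t / base_revenue_musd
    revenue = base_revenue_musd
    path = []
    for _ in range(years):
        intensity *= (1 - annual_intensity_cut)   # e.g. 7% lower tCO2e per $M each year
        revenue *= (1 + revenue_growth)           # the business keeps growing
        path.append(round(intensity * revenue))   # absolute emissions that year
    return path

# Made-up example: 200,000 tCO2e on $24B revenue, 3% annual growth, 7% intensity cut.
print(intensity_trajectory(200_000, 24_000, revenue_growth=0.03,
                           annual_intensity_cut=0.07, years=5))
```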
While EMC put much thought into setting our long-term goals, some stakeholders felt that they were too distant for most people to conceptualize. In response to this feedback, in 2014, we established our new 2020 targets to mark progress.
The basis of our mid-term targets is an understanding of the contribution that businesses must make to GHG mitigation to avoid dangerous climate change, as described in the CDP and World Wildlife Fund report “3% Solution.” We believe these mid-term goals are aggressive and aspirational, particularly given the anticipated growth in our business. However, we also realize the potential for a combination of escalating effects of climate change and a lack of collective action could require that all businesses, including EMC, accelerate their mitigation plans. We will continue to monitor conditions and adjust our targets accordingly.
Energy Management and Renewable Energy
EMC’s reduction targets will best be achieved through a holistic approach to all aspects of energy management – including supply, demand and procurement. We continue to explore strategies for meeting our renewable energy goals by investigating renewable energy options that are economically and environmentally sound. In 2015, our efforts included:
- Establishing cross-functional representation for a Global Energy Team to drive long-term energy strategy for EMC. This body is tasked with long-term planning of our energy supply, demand and procurement in all of our four global theaters – Asia Pacific and Japan (APJ), Europe, Middle East and Africa (EMEA), Latin America, and North America.
- Retaining a new Global Energy Advisory firm to assist our Global Energy Team with mapping out a strategy for global energy management. This service includes identifying global renewable energy programs that we can investigate as part of our renewable energy and emission reduction goals.
- Implementing a new tool for managing our global carbon accounting and reporting. The tool can also be expanded to assist us in global water accounting and reporting.
- Breaking ground on a solar field project in Bellingham, Massachusetts. EMC, working in conjunction with Borrego Solar, is installing three 650 kilowatt ground-mounted solar photovoltaic (PV) arrays totaling 1.95 megawatts on EMC-owned property. The system is comprised of more than 6,000 solar PV panels, and is expected to generate 2,520,000 kilowatt hours of energy. The solar farm project is expected to be completed by mid-2016.
- Conducting more detailed research on other opportunities for solar photovoltaic (PV) energy generation in the U.S., including potential hosting of more solar PV generation facilities, becoming a consumer of solar PV generated off-site through purchased power agreements (PPAs), and other possible solar PV models. These efforts are continuing into 2016.
- Continuing to investigate other potential alternative energy purchasing in the U.S., India, Ireland and other locations where we have large global facilities and a greater proportion of Scope 2 GHG emissions.
During 2015, EMC purchased 40,000 MWh of Renewable Energy Certificates (RECs) in support of renewable energy generated in the U.S. The RECs purchased supported renewable electricity delivered to the national power grid by alternative energy sources. The RECs are third-party verified by Green-e Energy to meet strict environmental and consumer protection standards. The 40,000 MWh represents 7 percent of the grid electricity consumed at all EMC facilities in the U.S. during 2015. Although we purchased fewer RECs in 2015, this aligns with our long-term strategy to invest in on-site renewable projects.
Also in 2015, in alignment with our position on national climate policy, EMC approved the use of an internal price on carbon. This price has been initially set at $30 per metric ton CO2e, and will be reviewed periodically to adjust for market and regulatory conditions. The intention of this shadow price is to educate and inform business decision makers about the expected future costs associated with GHG emissions. As an initial step, training has been deployed to employees in our finance organizations who support major electricity-consuming or -producing capital expenditures such as new buildings, lease renewals, data center relocation, and lab consolidation.
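The arithmetic behind such a shadow price is straightforward. A sketch (illustrative figures only, not an EMC financial model) of how the expected carbon cost might be folded into a capital decision:

```python
CARBON_PRICE_USD_PER_TONNE = 30.0   # the initial internal shadow price noted above

def shadow_carbon_cost(annual_kwh, grid_factor_kg_per_kwh, years):
    """Expected lifetime carbon cost of an electricity-consuming investment."""
    tonnes_per_year = annual_kwh * grid_factor_kg_per_kwh / 1000.0
    return tonnes_per_year * years * CARBON_PRICE_USD_PER_TONNE

# Assumed example: a project adding 2 GWh per year on a 0.4 kg CO2e/kWh grid,
# evaluated over 10 years, carries an implied carbon cost of about $240,000
# for decision makers to weigh alongside the conventional costs.
print(shadow_carbon_cost(2_000_000, 0.4, 10))
```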
Reporting & Accountability
We are committed to reporting our progress transparently and disclosing our GHG emissions annually to CDP. To learn more, see our 2016 CDP Climate Change questionnaire response.
Our Ireland Center of Excellence (COE) has continued to participate in the European Emissions Trading Scheme (ETS), which is a cap and trade Scope 1 emissions program that has now entered the third trading phase from 2013 to 2020. This COE has consistently remained within its operating allowance for the previous phases since 2005, but phase three of trading has, as expected, proved to be challenging, and the Ireland COE produced 2,594 metric tons of CO2e against an allowance of 2,550. Previous years of strong performance against our allowance ensured that we have more than adequate additional spare allowances available to cover this excess.
Further energy reduction projects commissioned during 2015 within the Ireland COE have brought our total thermal rated input to below 20 MW. As a result, we now fall outside of the criteria to be a member of the EU ETS. We will, however, continue to monitor and drive reductions in our CO2e emissions.
- EMC 2016 CDP Climate Change Information Request Response
- EMC 2015 CDP Climate Change Response
- EMC 2014 CDP Climate Change Response
- EMC 2013 Investor CDP Response
- EMC 2012 Investor CDP Response
EMC RECOGNIZED FOR CLIMATE DISCLOSURE AND GHG MANAGEMENT
CDP 2015 S&P 500 Climate Disclosure Leadership Index (CDLI)
For the seventh time, EMC was included on the CDLI, earning a score of 100 for the depth and quality of the climate change data disclosed to investors and the global marketplace. To learn more, read the Press Release.
In addition, EMC was recognized as a world leader in supplier action on climate change by CDP in 2015, securing a position on the CDP Supplier Climate A List.
Scope 3 Emissions
At EMC, we continually strive to increase the breadth and depth of our GHG reporting. In our 2015 CDP Climate Change questionnaire response, we reported estimated global corporate emissions for eight of the 15 categories of Scope 3 emissions based on the WRI Greenhouse Gas Protocol Corporate Value Chain (Scope 3) Accounting and Reporting Standard. The following five reported categories represent the greatest opportunity to drive improvement and minimize emissions through our own actions and influence.
In 2015, the GHG emissions associated with business travel was 145,726 metric tons CO2e, including VMware. We track global corporate business travel miles from commercial flight and rail via our corporate travel booking tool. In addition, we estimate the GHG emissions associated with global business travel car rentals and global hotel stays based on data provided by our Travel department. The methodology for calculating the emissions associated with business travel is aligned with the GHG Protocol Corporate Accounting and Reporting Standard.
We continually seek to reduce GHG emissions associated with employee business travel by implementing advances in technology, business processes and resource management. We apply technology to allow us to perform changes remotely to customer technical environments, resulting in reduced emissions from travel. To learn more, visit Employee Travel & Commuting.
As of the publication of this report, our 2015 global GHG emissions from employee commuting have not yet been estimated. Please refer to EMC’s 2015 CDP Climate Change response for updated information. EMC maintains a comprehensive employee commuter services program focused on minimizing single-occupancy vehicles and unnecessary local employee travel. To learn more about our employee commuting programs, visit Employee Travel & Commuting.
Direct Tier 1 Suppliers
In 2015, the GHG emissions associated with EMC’s direct material suppliers was 211,809 metric tons CO2e. This reflects Scope 1 and Scope 2 GHG emissions data reported by direct Tier 1 suppliers comprising 96 percent of our annual spend. Using economic allocation, we use their data to calculate our share of their GHG emissions. Because this allocation approach requires access to supplier revenues, a small number of private companies are excluded from the analysis. The total reported metric tons of CO2e is extrapolated to provide an estimated figure for 100 percent of our direct materials supplier emissions. To learn more, visit Supply Chain Responsibility.
EMC’s Global Logistics Operations generated approximately 76,989 metric tons CO2e in 2015, a 26 percent reduction in absolute carbon footprint from 2014. We attribute this reduction to various factors including our continuous efforts to shift to lower-emitting modes of transport. This number covers inbound, outbound, interplant, and customer service transportation and logistics, but excludes in-country goods freighting for Brazil, Japan, and Russia. In 2015, we collected data related to carrier operations representing 92 percent of our logistics spend and extrapolated total emissions proportionately based on the reports we received. To learn more, visit Logistics.
Use of Sold Products
Environmental Lifecycle Analyses conducted prior to 2012 confirmed our expectations that more than 90 percent of lifecycle impacts are due to electricity consumed during the product use phase. EMC estimates that the lifetime GHG emissions from use of EMC products shipped to customers during 2015 will be approximately 3,392,352 metric tons CO2e, including VMware. This value represents our customers’ Scope 2 GHG emissions from the generation of electricity that is powering our equipment. To learn more about how we provide ongoing information to end-use customers about how to use our products more efficiently, visit Our Products. | <urn:uuid:8960725c-5027-4db5-9198-d2b0edecefe4> | CC-MAIN-2017-04 | https://www.emc.com/corporate/sustainability/operations/energy-climate-change-strategy.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931175 | 3,032 | 2.546875 | 3 |
Definition: Computation based on quantum mechanical effects, such as superposition and entanglement, in addition to classical digital manipulations.
Specialization (... is a kind of me.)
Shor's algorithm, Grover's algorithm, Simon's algorithm, Deutsch-Jozsa algorithm.
See also model of computation.
Note: Quantum computers can, in theory, make computations that are impossible to do exactly with classical computers or achieve exponential speedup. Since nobody has yet (as of February 2005) implemented more than a few bit operations, problems such as decoherence and measurement error may limit quantum computation to a few specialized roles.
Ethan Bernstein and Umesh Vazirani, Quantum Complexity Theory, SIAM Journal on Computing, 26(5):1411-1473, 1997.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 17 December 2007.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black, "quantum computation", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2007. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/quantumComputation.html | <urn:uuid:be244d50-ec1a-429c-b9f9-e0150ce0c803> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/quantumComputation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00391-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868688 | 286 | 3.015625 | 3 |
National Oceanic and Atmospheric Administration
To support the federal response to the oil disaster in the Gulf of Mexico, the Naval Oceanographic Office has deployed sensor systems to monitor surface currents and measure physical properties of the deeper Gulf waters to better analyze the disbursement of the millions of barrels of oil that has poured into the environment.
The instruments support a modeling program in the Gulf, providing information to the national response team's effort to assess the status and drift of an oil slick now the size of Maryland, said Capt. Brian Brown, commanding officer of the Naval Oceanographic Office at Stennis Space Center in Mississippi.
His organization is part of a joint federal effort that includes a number of agencies, primarily the National Oceanic and Atmospheric Administration. The mission has personal implications for the 1,000 military, civilian and contract workers at the office because "this is our backyard," Brown said. "We work and live on the Gulf. Many of our employees grew up here."
With information from sea sensors and satellites, the office provides the Defense Department and other federal agencies a global ocean forecast model that is useful for 72 hours.
The model, which has a 7-mile grid resolution, includes the Gulf of Mexico and provides information on currents and ocean structure from the surface of the sea down to the bottom of the ocean, Brown said.
To better understand and predict smaller scale ocean circulations in the Gulf of Mexico as part of the response to the oil spill, the Naval Oceanographic Office has accelerated the planned development of a higher resolution 1.5-mile grid resolution model, currently undergoing final verification and validation.
Both models depend on data obtained by measuring key variables, including currents, temperature and salinity both on and below the surface of the ocean. The Navy recently deployed 20 instruments in the Gulf of Mexico with support from NOAA to gather additional data for the higher resolution model, Brown said.
These include two subsurface Seagliders deployed from the NOAA ship Thomas Jefferson this week, said Dan Berkshire, technical lead for the ocean measurements department at the Naval Oceanographic Office.
Similar to aerial gliders, the 110-pound, 9-foot long gliders "fly" through the sea on predetermined compass headings, using internal bladders to adjust their buoyancy and sensors to measure temperatures and salinities from the surface to depths of 3,200 feet. A fluorometer was installed on each glider to measure dissolved organic material in the sea, including potential oil.
Berkshire said the gliders surface every six hours to transmit the data they have collected via a built-in iridium satellite telephone satellite chip to a Defense satellite gateway in Hawaii, which then relays the information back to the Naval Oceanographic Office in near real time. While the Seaglider is on the surface, Berkshire said it is updated via satellite with a new course.
Brown said the Naval Oceanographic Office also deployed six subsurface APEX profiling floats, which gather the same data the Seagliders collect, as well as 12 Davis drifting surface buoys to measure surface ocean currents.
But he pointed out the two systems, unlike the Seagliders, are at the mercy of currents to control their location. Both systems transmit their data via the Argos satellite system operated by NOAA, NASA and the French governments space agency in a partnership. The data is available to the global oceanographic community via the World Meteorological Organization's Global Telecommunications Service. The surface drifters provide positions several times a day and the profiling floats send observations every five days.
The data the seagliders transmits is fed to the supercomputing complex at Stennis, which includes a 12,872 processor Cray machine capable of 117 trillion operations per second and a 5,312 processor IBM machine capable of 92 million operations per second, said Robert Lorens, director of the ocean prediction department at the Naval Oceanographic Office.
The supercomputers use the data to run in a sophisticated ocean forecast model to produce representations of currents and forecast ocean structure. The data is stored in standard oceanographic file formats that can be displayed as standalone graphics, incorporated into geospatial information systems or a variety of other systems, Brown said.
Lorens said the Navy, with its ocean modeling and supporting data collection program, can assist the national response team's efforts in determining the long-term transport of the oil in the Gulf of Mexico, including whether the the area's loop current could carry oil around the southern tip of Florida and up its eastern coast.
There is always an inherent risk in operating in the maritime environment, especially beneath the ocean surface. Brown said he is concerned the oil could render the seagliders and their sensors close to inoperable, but he believes the potential benefits outweigh the risks. | <urn:uuid:e730ebed-44ae-492d-b203-865eaf53d9f3> | CC-MAIN-2017-04 | http://www.nextgov.com/defense/2010/05/navy-joins-effort-to-track-moving-oil-slick-in-the-gulf/46819/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00299-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929867 | 975 | 2.875 | 3 |
A new report from the Chartered Institute of Insurance warns that Big Data could create an “underclass” of people who cannot afford insurance.
Big Data has been heralded as the future of business, with computer algorithms and machine-learning increasingly being used in tandem to enable companies to gather and act upon new insights. For example, such data could tell a business how a machine is performing on a production floor, or which product is selling best in a retail store.
There has been excitement for Big Data in the insurance world too, with risk managers seeing the potential to know more about their policyholders, and how much of a risk they really are. By knowing more about their customers, and how they behave, insurers believe they can improve risk management, reduce the likelihood of having to pay out on a claim, and ultimately improve their own bottom line.
In addition, through Big Data, insurers believe that they can price more effectively and ultimately advise their policyholders on how to lead a healthier and safer lifestyle.
Despite this, the new paper from CII warns that the Big Data approach threatens to destroy the insurance market model of pooling risk.
“Data is a double-edged sword,” said David Thomson, director of policy and public affairs at the CII, in the report. “The insurance sector needs to be careful about moving away from pooled risk into individual pricing. They need to think about the broader public interest.”
The report says that the concept of pooling risk “underpins the effectiveness of insurance cover”.
“Some people may be identified as such high risk to insurers that they are priced out of insurance altogether,” adds the report.
“Big Data could, in effect, create groups of ‘uninsurable’ people. While in some cases this may be to do with modifiable behaviour, like driving style, it could easily be due to factors that people can’t control, such as where they live, age, genetic conditions or health problems.”
The ethical issues of Big Data
Experts say that pricing around health and, in particular, genetic data is contentious. For example, some insurance professionals have questioned at what point an insurer intervenes in the event of a serious incident – like a heart attack – while basing pricing on genetic conditions seems unfair and exclusive to many.
The UK government acted on the latter in 2000, signing an agreement with the Association of British Insurers (ABI) in order to stop the insurance industry from using predictive genetic test results. That agreement runs until 2019, although a review is due later this year.
“You could price people out of the market for health products. There’s a danger insurers will not offer health cover to some people. The government would intervene if people are doing social sorting,” added Thomson.
Swiss Re customer technology manager Oliver Werneyer touched on some of the difficulties of IoT and Big Data at our recent Internet of Insurance summit.
“It’s great to figure out those people that are now healthier than they were, and for those you can give discounts. That’s exciting, except you now have people that are not as healthy as you thought they were.”
Spiros Margaris, VC and senior advisor at http://moneymeets.com, kapilendo.de, dser.de and ranked No. 1 Fintech Influencer by Onalytica, told Internet of Business that these dangers could be allied by emerging InsurTech start-ups.
“There is a danger that with Big Data insurance companies will not insure some people anymore and therefore some people might fall through the cracks. Though I truly hope and believe that InsurTech (insurance technology) start-ups would pick up where others fail.
“The Fintech (financial technology) industry’s greatest achievement will be to provide the unbanked – or in this case the uninsured – a possibility for a better life.”
Taking place on 27-28 September in New York, the Internet of Insurance is exploring the profound impact of IoT on insurance business models and customer relationships. Featuring case studies from USAA, Progressive, Liberty Mutual and more – email [email protected] for more information | <urn:uuid:7a6a6dfd-ee70-4cc9-8d40-9c48003d568f> | CC-MAIN-2017-04 | https://internetofbusiness.com/big-data-make-people-uninsurable/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00318-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956235 | 884 | 2.6875 | 3 |
There is no doubt that flash technology is changing the shape of data centers (a.k.a. data centres). Even the Rock Band Queen knew this more than 30 years ago. Flash changes the economics of deploying high performance applications and removes the performance bottlenecks that exist with more traditional storage systems based on spinning disks. As I demonstrated in Real World Data Reduction from Hybrid SSD + HDD Storage you don’t need flash to get data reduction, but flash storage with data reduction allows you to improve the capacity economics of flash (performance of flash at or cheaper than disk) in addition to the power and cooling savings, and the reliability improvements due to no moving parts. Something you may not know is that today modern flash devices, such as certain SSD’s, may have better reliability than hard drives. From a physical capacity stand point SSD capacity is increasing at similar rates to processor increases in performance due to the number of transistors per dye reducing over time. Some vendors are already shipping 16TB 2.5″SSD drives, which will be at $ per GB cheaper than 2.5″ drives, at lower power, lower cooling and much less space. The M.2 Device from Intel in the image above has a capacity of 3.5TB and is an incredibly small form factor. So what does networking have to do with this?
Whether you are using fibre channel or ethernet, networking will play a big role in how well flash technology helps you and more importantly your applications. The reason for this is quite simple. It is something that database administrators have known for a while. The reason is this, the further away from your application that the data lives the higher the latency and lower the throughput, and consequently the less performance and response time to your users. This is even more acute with flash, especially as the capacity and performance of the technology marches ever forward as it only takes one or a few drives to bring a high performance network to saturation point. This factor is also one of the reasons that some engineered systems have been using Infiniband networks and RDMA for some time, but even that is too slow. Here is a graph comparing the throughput of three different flash devices with current ethernet network technologies.
The common types of flash devices of the 2.5″ hot pluggable variety we see in enterprise systems today can deliver about 500MB/s throughput and up to 50K or more IOPS at low latency. So it would take only 2 drives to saturate a 10GbE ethernet network or 4 drives to saturate a 16Gb/s FC network. Lucky we usually have multiple NIC ports or HBA’s per server. But this doesn’t help us when a storage array could have 10’s to hundreds of individual drives, or 12 to 24 drives in a single server or storage shelf. Even with the common flash technology today, any time you connect it to a network, you are creating a performance bottleneck and can’t possibly achieve close to full performance.
If we look at NVMe now, which is the next generation of flash technology that is going to become more popular by the end of 2016 and mainstream into 2017. Each device can deliver enough throughput and IOPS to saturate a 40GbE NIC. If you have 2 devices in a system, you can saturate a dual port 40GbE NIC. This is one of the primary reasons NVMe based storage systems such as EMC’s DSSD are not using traditional networks to connect the storage to servers. Instead they are using lots of direct PCIe Gen3 connections. They have realized the network is a major bottleneck and is too slow to deliver the kind of performance capability that flash based on NVMe can deliver. Each individual NVMe device is 6x to 8x faster than the common flash we see in most enterprise storage systems today. How many customers have multiple 40GbE NIC’s or 32Gb/s FC HBA’s per server in their datacenters today?
SSD’s are fast, NVMe based SSD’s are faster, but 3D Xpoint, a joint development between Intel and Micron is mind boggling fast. 3D Xpoint, which was announced in 2015 and expected to be in enterprise platforms by 2018/2019 is 1000x faster than todays common SSD’s used in most enterprise systems. At the sort of performance that 3D Xpoint can deliver motherboards, processor technology, memory bus and everything else will have to have a massive boost as well. Each device could more than saturate a multi port 400GbE network (400GbE is the next thing after 100GbE). As soon as you put this on a network you are waiting an age. 3D Xpoint is expected to deliver latency as low as 150ns or less, faster than the enterprise 40GbE and 100GbE switch ports today. Even Gen3/Gen4 PCIe is not fast enough to keep up with this sort of performance. Don’t even start thinking about the impact of In-memory DB’s, which are running at DRAM speeds.
As the image from Crehan Research Inc. above shows 10GbE and 40GbE ports are increasing and the cost of 100GbE ports are coming down. But 100GbE still isn’t widely adopted, and neither is 40GbE in servers just yet. Crehan Research expects 100GbE to start being more widely adopted from 2017 according to their 2015 report. But this will be at the switching / backbone and not at the server. With NVMe becoming mainstream and 3D Xpoint only a couple of years away from being deployed, network connectivity to each server has no hope of increasing 1000 fold in this short amount of time. We would effectively need dual port TbE connectivity to every server.
So we can see from the evidence that if you connect flash to a network you are going to have a bottleneck that impacts performance and limits the usefulness of the investment to some fraction of its potential. At the same time you want to make sure you have data protection, while still getting closer to the potential performance that can be achieved from the flash you have. How can we do both? Have high performance, low latency, the data as close to the applications as possible, and still maintain data protection?
The simple answer would be to connect SSD’s to local RAID cards. This would work with every day 2.5″ SSD’s (although you’d need multiple RAID cards per server for performance), but that doesn’t work with NVMe or 3D Xpoint. Multiple local RAID controllers in every server would create hundreds or thousands of silos of storage capacity that then have to be individually managed. We spent a long time creating architectures that could be centrally managed to eliminate this management overhead. We shouldn’t be going backwards to take advantage of new technologies.
The real answer is by two fold, firstly virtualization and secondly investing in system architectures that are distributed in nature and have at the heart of them a concept of data locality. An architecture with data locality ensures that data is kept local to the applications, on the local server, while being distributed for data protection. The reason we need virtualization is because we have so much abundance of high performance storage now there are few single applications that can actually make use of it. By using virtualization we can make use of the compute capacity and the storage capacity and performance. We’re fortunate that Intel keeps increasing the power of their processors every year for us to make ever better use of the cores for compute and now also for high performance storage (no proprietary components required).
The concept of data locality is used by many web scale applications that need to grow and scale while maintaining data protection and high performance. By having a concept of data locality you reduce the burden on the network, removing it from being an acute bottleneck, and future proof your datacenter for the new types of flash technology. Data is accessed locally by the application from the local PCIe bus through memory and only the writes or changes are sent across the network to protect them. Architectures based on data locality when implemented properly will scale linearly with predictable and consistent performance. This eliminates a lot of guess work and troubleshooting and reduces business risk as the architecture is more capable of adapting to changing requirements quickly by adding or subtracting systems from the environment.
You can adopt distributed architectures with data locality by building it into your custom written apps, or by implementing some of the new web scale big data applications (Hadoop and the like). But if you don’t have an army of developers how can you benefit form this type of architecture? An architecture that will be adaptable to new storage technologies and is future proof for the changes coming in the data center? The answer isn’t a SAN, because as we have covered, if you connect flash on the end of a network you can’t achieve anywhere near it’s potential. The only current solutions that exist are hyperconverged systems where the server and storage are combined into a single unit and then combined into a distributed architecture.
Not all hyperconverged systems have a concept of data locality as part of their architecture. So you need to be careful which vendor you choose. You should evaluate each vendor with regards to your requirements and business needs and look at who can protect your investment into the future without major architecture disruption. Some vendors are promoting anti-locality and recommending customers go all flash and just buy more network ports. Unfortunately the network can’t and won’t keep up with flash technology (400GbE is too slow). So you are guaranteeing substandard performance and an architecture that won’t be able to seamlessly adapt to the rapidly changing flash technologies.
Also note once you invest in flash and you move it closer to your applications you will find that your overall CPU utilization increases, in some cases very dramatically. This is because storage is no longer your bottleneck. Your applications are no longer waiting for IO to complete and are therefore much more responsive to users, and are able to complete batch jobs much more quickly, process more transactions in less time. As a result your CPU’s are a lot less idle. Ultimately you get a lot more useful work done in less time. So don’t be alarmed if suddenly your CPU utilization hits 80% when you run on flash. This is expected. After all isn’t this just good use of your investment, getting the most out of the assets?
You can watch this and other topics being discussed over beers with Tony Allen, one of the development engineers at Nutanix in the Beers with Engineers series below.
Data Locality is the only way to future proof your architecture and get the most out of the continuing evolution of flash technology that is continuing to disrupt datacenters. Nutanix (where I work) has had data locality as a core part of the architecture since the very first release. This is the primary reason the architecture has been able to continue to scale and adapt customer environments through different generations of technology over the past 5 years without changing the underlying architecture, and is future proofed for the changes that are coming. We allow our customers to mix and match platforms, while keeping the data local to the applications, making the path between the applications and the data as short as possible and thereby lowering application latency.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster +. Copyright © 2012 – 2016 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission. | <urn:uuid:b1cabece-a91c-4ad9-aa61-01f4c28419b5> | CC-MAIN-2017-04 | http://longwhiteclouds.com/2016/06/05/your-network-is-too-slow-for-flash-and-what-to-do-about-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00556-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950071 | 2,396 | 2.609375 | 3 |
The series of tornadoes that killed at least six in Oklahoma and raised havoc in other parts of the Central Plains April 14-15 was preceded by a rare early, high-risk warning.
For the second time in U.S. history, the National Weather Service’s Storm Prediction Center issued a high-risk warning more than 24 hours in advance of the more than 75 tornadoes that later hit Oklahoma as well as Kansas, Iowa, Nebraska and north Texas.
High-risk warnings are issued on average two to five times a year but not with a 24-hour lead time. So what was different about these events? The storm was “well forecast” by the computer models as potentially dangerous and forecasters have been gaining confidence in forecasting every year, according to Russ Schneider, director of the National Oceanic and Atmospheric Administration’s Storm Prediction Center in Norman, Okla.
“The conditions were favorable, based on computer model guidance, for a potentially major tornado outbreak,” Schneider said. “When there is sufficient confidence, what we want is to state what we’re seeing in the data and what our expert forecast is so that people can prepare.”
The increased confidence comes from: a better understanding of science; improved observations, including satellite, radar and other observing systems; and greatly improved numerical forecast models and numerical modeling or ensemble forecasting. The latter is numerical modeling on a smaller scale (down to individual thunderstorms) that involves taking numerous computer forecasts and determining whether the storm will be isolated and a potential super cell or a solid line squall. Solid line storms tend to have less potential to become tornadoes than the isolated super cells.
“This will be an increasing trend within severe weather forecasts where we’ll be able to give the emergency management community a greater range of information on what the worst possible outcomes are and what some of the less severe outcomes are and the chances of each,” Schneider said.
He said most communication to the public will be the day of the event, but FEMA will continue to receive forecasts up to three days and sometimes four days in advance of storms. “We certainly want to stay responsible in our communication both to the emergency management community and to the public. We certainly don’t want to ‘over warn’ so to speak.”
The early spring storms like the one that prompted the early warning on April 13, are more likely candidates for early warnings than later storms when the jet stream weakens, Schneider said.
“We certainly won’t always be able to makes these kinds of decisions, particularly with a high-risk storm so many days in advance, although we are able to identify areas that may prove to be troublesome many days in advance, particularly very early in spring.”
Although Schneider sees a trend of more early warnings, he said predicting the future is a difficult business. “We don’t actually know the complete state of the atmosphere at any one moment with certainty, so we can’t calculate the future with certainty.” | <urn:uuid:14dc9dad-292b-45ff-972b-975e159cb216> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Early-High-Risk-Severe-Weather-Warnings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955705 | 631 | 3.046875 | 3 |
As you might expect of such complex and successful software, IBM Lotus Notes and Domino share a long and rich history. In some respects, this history mirrors the evolution of the computing industry itself -- the development and widespread adoption of PCs, networks, graphical user interfaces, communication and collaboration software, and the Web. Lotus Notes and Domino have been there nearly every step of the way, influencing (and being influenced by) all these critical developments.
This article briefly retraces the history of Lotus Notes and Domino, starting with the earliest conceptual and development stages and continuing through major feature releases. Along the way, it examines:
- Where the idea of Notes originated
- Notes pre-release development
- Release 1.0
- Release 2.0
- Release 3.0
- Release 4.0 and 4.5
- Release 5.0
- Release 6 and 6.5
- Release 7
- Release 8
The early days: The birth of an idea
You may find this a little surprising, but the original concept that eventually led to the Notes client and Domino server actually pre-dates the commercial development of the personal computer by nearly a decade. Lotus Notes and Domino find their roots in some of the first computer programs written at the Computer-based Education Research Laboratory (CERL) at the University of Illinois. In 1973, CERL released a product called PLATO Notes. At that time, the sole function of PLATO Notes was to tag a bug report with the user's ID and the date and to make the file secure so that other users couldn't delete it. The system staff could then respond to the problem report at the bottom of the screen. This kind of secure communication between users was the basis of PLATO Notes. In 1976, PLATO Group Notes was released. Group Notes took the original concept of PLATO Notes and expanded on it by allowing users to:
- Create private notes files organized by subject
- Create access lists
- Read all notes and responses written since a certain date
- Create anonymous notes
- Create director message flags
- Mark comments in a document
- Link notes files with other PLATO systems
- Use multiplayer games
PLATO Group Notes became popular and remained so into the 1980s. However, after IBM introduced the PC running Microsoft's MS-DOS in 1981, the mainframe-based architecture of PLATO became less cost-effective, and Group Notes began to evolve into many other "notes type" software products.
Ray Ozzie, Tim Halvorsen, and Len Kawell worked on the PLATO system at CERL in the late 1970s. All were impressed with its real-time communication. Halvorsen and Kawell later took what they learned at CERL and created a PLATO Notes-like product at Digital Equipment Corporation.
At the same time, Ray Ozzie worked independently on a proposal for developing a PC-based Notes product. At first, he was unable to obtain funding for his idea. However, Mitch Kapor, founder of Lotus Development Corporation, saw potential in Ozzie's work and decided to invest Lotus's money for its development. Kapor's business acumen, creativity, and foresight were critical in changing Ozzie's vision into reality.
Development on Notes begins
Near the end of 1984, Ozzie founded Iris Associates Inc., under contract and funded by Lotus, to develop the first release of Lotus Notes. In January 1985, shortly after Iris Associates began, Tim Halvorsen and Len Kawell joined Ozzie, followed soon after by Steven Beckhardt. All brought extensive knowledge and vision to the company as well as career-long interests in collaboration and messaging software at a time when such concepts were considered novel at best and impractical at worst. They modeled Lotus Notes after PLATO Notes, but expanded it to include many more powerful features. Alan Eldridge from Digital Equipment Corporation soon joined Iris Associates, contributing to the database and security features of the Notes architecture.
The original vision of Notes included on-line discussion, email, phone books, and document databases. However, the state of the technology at the time presented two serious challenges. First, networking was rudimentary and slow compared to today. Therefore, the developers originally decided to position Lotus Notes as a personal information manager (PIM), like Lotus Organizer, with some sharing capability. Second, PC operating systems were immature, so Iris had to write a lot of system-level code to develop things such as the Name Server and databases. Eventually, as networking became more capable, Iris began to speak of Notes as groupware. The term groupware (which eventually grew virtually synonymous with Lotus Notes) refers to applications that enhance communication, collaboration, and coordination among groups of people.
To meet these goals, Lotus Notes offered users a client/server architecture that featured PCs connected to a LAN. A group could set up a dedicated server machine (a PC) that communicated with other groups' server machines (either on the same LAN or through switched networks). Servers exchanged information through replicated data (that is, there were potentially many copies of the same database resident on different servers, and the Notes server software continuously synchronized them). This made it just as easy for users to exchange information with co-workers in a branch office as with those in their own office.
The vision of the founders quickly evolved into the idea of creating the first virtual community. Tom Diaz, former Vice President of Engineering at Iris, said, "It was eccentric to think about group communication software in 1984, when most people had never touched an email system...the product was very far ahead of its time. It was the first commercial client/server product."
Another key Notes feature was customization. According to Tim Halvorsen, early on there was debate over the structure of Lotus Notes. He said the developers wondered, "Should we build applications in the product or should we allow it to be flexible and let users do it because we don't know what they will want?" They eventually opted for a flexible product that allowed users to build the applications they needed. Thus, the Notes architecture used a building-block approach; you could construct group textual applications by piecing together the various services that were available. "This was big in the success of the product," stated Halvorsen. "In no case do we say, 'no, this is the only way you can do it.'" Lotus Notes has survived the changes in the industry because it is a flexible product users can customize to fit their changing needs.
Around this time, Apple Computer released the Macintosh with a new easy-to-use graphical user interface. This influenced the developers of Lotus Notes, and they gave their new product a character-oriented graphical user interface.
Most of the core development was completed within two years, but the developers spent an additional year porting the code for the client and the server from the Windows operating system to OS/2. During this time, the developers at Iris used Lotus Notes to communicate remotely with people at Lotus. Halvorsen said, "Simply using the product every day helped us develop key functionality." For example, the developers needed to synchronize data between the two different locations, so they invented replication. "This wasn't in the original plan, but the problem arose and we solved it," said Halvorsen.
The development of Lotus Notes took a long time by today's standards. But according to Steve Beckhardt, this extended development period helped ensure the success of Lotus Notes, producing a very solid product with no real competition in the market.
In August 1986, the product was complete to the point that it demonstrated all of its unique capabilities and had preliminary documentation, and it was ready to ship to the first internal Lotus users. At that time, Lotus evaluated and accepted the product, and in 1987 Lotus bought the rights to Notes.
Lotus Notes was successful even before its first release. The head of Price Waterhouse viewed a pre-release demo of Lotus Notes and was so impressed he bought 10,000 copies. At that time, it was the largest PC sale ever of a single software product. As the first large Lotus Notes customer, Price Waterhouse predicted that Lotus Notes would transform the way we do business. As we now know, they were right.
Release 1.0: A star is born
The first release of Lotus Notes shipped in 1989. During its first year on the market, more than 35,000 copies of Lotus Notes were sold. The Notes client required DOS 3.1 or OS/2, and the Notes server required DOS 3.1, DOS 4.0, or OS/2. Figure 1 shows the Notes client user interface.
Figure 1. Release 1.0 screen
Release 1.0 provided several ready-to-use applications such as Group Mail, Group Discussion, and Group Phone Book. Lotus Notes also provided templates that assisted you in the construction of custom applications. This ability to design customizable applications using Lotus Notes led to a business partner community that designed Notes applications. Today, thousands of companies build their own software products that run on top of Lotus Notes, but the founders didn't expect Lotus Notes to be a developers' product. They envisioned a shrink-wrapped PC communications product that would run right out of the box. In reality, it became both.
Release 1.0 offered the following functionality, much of it revolutionary in 1989:
- Encryption, signing, and authentication using RSA public-key technology, which allows you to sign a document so that the recipient can verify that it was not modified in transit (a minimal sign-and-verify sketch follows this list). Lotus Notes was the first important commercial product to use RSA cryptography, and from that point on, users considered security a prime feature of Lotus Notes.
- Dial-up functionality, including the ability to use the dial-up driver for interactive server access, the ability to allow users to specify modem strings, support for operator-assisted calling, and automatic logging of phone call activity and statistics.
- Import/export capability, including Lotus Freelance Graphics metafile import, structured ASCII export, and Lotus 1-2-3/Symphony worksheet export.
- Ability to set up new users easily, including allowing system/server administrators to create a user mailbox, to create a user record in the Name and Address database, and to notarize the user's ID file through dialog boxes. You can also automatically create a user's private Name and Address database, in case that user wants to use private distribution lists.
- An electronic mail system that allows you to send mail without having to open your private mail file, to receive return receipts, to be notified when new mail arrives, and to automatically correct ambiguous or misspelled names when creating a mail message.
- On-line help, a feature not offered in many products at this time.
- Inclusion of the formula language, making the programming of Notes applications easier.
- DocLinks providing "hotlink" access between Notes documents.
- Keyword (checkbox and radio button) features.
- Access Control Lists (ACLs) determining who can access each database and to what extent.
- Ability to administer remote replicas of databases from a central place, if the database manager desired that behavior. You can replicate ACLs as an entire list, not just individual entries, to remote copies of the database.
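To illustrate the sign-and-verify idea behind that first RSA feature, here is a minimal sketch in modern Java. It is only a conceptual illustration: the java.security classes, the 2048-bit key size, and the SHA256withRSA algorithm are present-day choices made for the example, not how Lotus Notes itself implemented signing in 1989 (in Notes, the key pair lives in the user's ID file and signing is built into the client).

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignAndVerifySketch {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair; in Notes, this role is played by the keys in the user's ID file.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] document = "Status report, final draft".getBytes(StandardCharsets.UTF_8);

        // The author signs the document bytes with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        // The recipient verifies with the author's public key; any change to the
        // document bytes after signing causes verification to fail.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(document);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```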
The first set of enhancements to Lotus Notes became available in 1990. Release 1.1 was not a feature release, but an internal restructuring of the code that added new portability layers. The developers made a large architectural investment in Lotus Notes as a multi-platform product, writing much of it to insulate the functional parts of Lotus Notes from the operating system. This meant that, although Lotus Notes ran on many platforms, the developers didn't port the code from platform to platform; they developed the code for the different operating systems in parallel. Already, this investment began to pay off. Release 1.1 supported additional operating systems: OS/2 1.2 Extended Edition, Novell NetWare Requester for OS/2 1.2, and Novell NetWare/386. The biggest achievement and focus of this release, however, was the added support for Windows 3.0, accomplished by working closely with Microsoft as an influential beta site.
Release 2.0: Bigger and better
The next major release of Lotus Notes shipped in 1991. For Release 2.0, scalability became the focus. After Release 1.0 sold to large companies, Iris realized Lotus Notes needed to scale to support 10,000 users. Lotus Notes was initially intended for small- to medium-sized businesses. The founders' original vision did not include large companies as users; they only expected 25 or so people logging in to one server. The reason for this was that the PCs of the day didn't have a lot of power. As the PCs and their networks became more powerful, so did Lotus Notes. Figure 2 shows the Release 2.0 user interface.
Figure 2. Release 2.0 screen
Throughout the 1990s, as Lotus Notes accommodated more and more users, larger companies bought it. Sales growth was slow, but steady as Lotus sold the product to high-end customers willing to invest time and effort getting large groups of users up and running. As these early customers used Lotus Notes with great success, the installed user-base grew.
Originally, there was a 200-license minimum for Lotus Notes; Lotus did not sell individual copies. As a result, the minimum purchase price was $62,000. Lotus targeted big companies because they felt that only those companies would comprehend and exploit the potential of the product. Price Waterhouse and other early test sites showed that the big companies got it.
Tim Halvorsen remembers that as Lotus Notes slowly began to grow, so did the development team. By the second release, there were approximately 12 developers working on Lotus Notes. For the early releases, Halvorsen said, "We were very responsive to the needs of our customers, but then we also tried to build it with the ability to accommodate future changes in the industry."
Release 2.0 included the following enhancements:
- C API
- Column totals in views
- Tables and paragraph styles
- Rich text support
- Additional formula language @functions
- Address look-up in mail
- Multiple Name and Address books
- Return receipt for mail memos
- Forwarding documents from mail
- Larger databases and desktop files
Release 3.0: Lotus Notes for everyone
Lotus Notes Release 3.0 shipped in May 1993. By this time, Iris had about 25 developers working on Lotus Notes. Release 3.0 was build number 114.3c. This means that it was the 114th successful build of Lotus Notes ever and that it took three tries to complete the final build.
At the time of the release, more than 2,000 companies and nearly 500,000 people used Lotus Notes. The goal of Release 3.0 was to build further on what Lotus Notes already was, to make the user interface cooler and more up-to-date, and to evolve it further as a cross-platform product. Lotus aimed the product at a larger market and reduced the price accordingly. Release 3.0 featured the first of a series of rewrites of the database system, NIF, to make the product scale to even larger user populations. This release could support roughly 200 simultaneous users per server. Figure 3 shows the Release 3.0 user interface.
Figure 3. Release 3.0 screen
Release 3.0 also added greater design capability and many additional features, including:
- Full-text search
- Hierarchical names, views, forms, and filters
- Additional mobile features, including background replication
- Enhanced scalability
- Alternate mail capability
- Development of common API strategies for cross-platform Notes applications
- Selective replication
- Support for AppleTalk networks
- Deployment and administrative improvements
- Support for the Macintosh client
- A server for the Windows operating system
Lotus SmartSuite shipped in 1993 with a Bonus Pack called Notes F/X, which allowed applications to share data and integrate that data into a Notes database using OLE.
In May 1994, Lotus purchased Iris Associates, Inc. This had very little effect on the product itself, but it did simplify some of the pricing and packaging issues surrounding Lotus Notes. In May 1995, Lotus released InterNotes News, a product that provided a gateway between the Internet news sources and Lotus Notes. This was the first project that reflected the growing influence of the Internet on Lotus Notes.
Release 4.0: A whole new look
In January 1996, Lotus released version 4.0. This release offered a completely redesigned user interface based on customer feedback. This interface exposed and simplified many Notes features, making it easier to use, program, and administer. When the developers gave a demonstration of the new user interface at Lotusphere (a yearly user group meeting), they received a standing ovation from the crowd of customers.
The product continued to become more scalable, and it became faster and faster as companies added additional processors to multiprocessor servers. Lotus cut the price of Lotus Notes in half and thus successfully gained a larger market share. Figure 4 shows the new user interface introduced in Release 4.0.
Figure 4. Release 4.0 screen
In addition, Lotus Notes began to integrate with the Web, and many new features reflected the prominence of Web technology in the industry. Ray Ozzie, the first Notes developer and founder of Iris Associates, saw the importance of the Web before the Web became the phenomenon it is today. This was a key element in the success of Lotus Notes. A new product called the Server Web Navigator allowed an Internet-connected Notes server to retrieve pages from the Web and then let users view those pages in the Notes client.
Another product that leveraged the Web was a server "add-in" called the InterNotes Web Publisher, which could take a Notes document, convert it to HTML, and display it in a Web browser. The publishing process was static rather than dynamic: documents were written to the file server and published to the Web only after a delay.
Release 4.0 also offered:
- LotusScript, a programming language built into Lotus Notes
- A three-paned UI for mail and other applications with document preview ability
- Pass-thru servers
- A new graphical user interface for server administrators
- Built-in Internet integration, including Web browser accessible Notes databases
- Upward mobility, including locations and stacked icons
- An enhanced replicator page
- Rapid application development and programmability as a result of an integrated development environment (IDE), infoboxes, and redesigned templates
- View, folder, and design features, including the ability to create action bars, the ability to create navigators that allow easy graphical navigation among views, and improved table support
- Search features, such as the ability to search a database without indexing it, and the ability to add conditions to a search with the Search Builder without writing a formula
- Security features, such as the ability to keep local databases secure and the ability to restrict who can read selected documents
- Internet server improvements, including SOCKS support, HTTP proxy support, and Notes RPC proxy support
In July 1995, IBM purchased Lotus, primarily to acquire the Notes technology. The buyout impacted Lotus Notes in a positive way. Prior to the buyout, the Notes developers felt that they were facing some strategic uncertainty as a result of the growing prominence of the Web and increasing competition in the market. The IBM acquisition provided solid financial backing, access to world class technology, including the HTTP server that became IBM Lotus Domino, and an increased sales force. Lotus Notes now sold to very large Fortune 500 companies, and it sold to entire corporations instead of just departments. These positive gains gave the developers of Lotus Notes the freedom to invest in long-term projects. In 1996, following the release of Lotus Notes 4.0, the business and technological competition exploded -- for messaging products, for Web servers, and for development systems for these products.
The development of Release 4.0 took more than two years, which, in light of the growing competition and the shorter development cycles of competitors using the Web to release products, was now too long. To give large enterprises a highly stable Notes system and to ensure that Iris Associates would continue its tradition of technical leadership, the developers divided the Notes product line into the following two branches:
- A product line of new feature releases, beginning with Release 4.5, offered first-rate new functionality in the fastest development cycle possible while still maintaining good quality. Market competition and the needs of the software vendors building applications on top of Lotus Notes influenced these releases.
- 90-day releases, also called quarterly maintenance releases, contained few or no new features. Maintenance input from existing Notes customers almost entirely drove this second product line. Many of these customers were the large-enterprise users who heavily stressed the server and were among the first to find deployment-blocking bugs. The sole purpose of these releases was to gather up fixes for bugs, test them in an integrated manner, and make them available to licensed customers. These releases were more conservatively managed than the new feature releases, and they were appropriate for large companies who were more interested in a highly stable release of the product than in pioneering brand new technology. A third digit in the product release number designated maintenance releases, such as the 3 in 4.5.3.
Even today, at any particular time, there are two Notes families (or two code streams) maintained this way, while a third code stream is under development for the next feature release.
New users had a choice as to which release of Lotus Notes to buy; most received the current feature release. As time passed, many users began to combine the releases, running the new feature release on some machines while other machines ran a maintenance release version. The two product lines did merge at certain points in the development process: when coding started for a new feature release, all the code from past releases, including the bug fixes, was merged together and a new code stream began. This merging happened a few times early in the development of a new feature release and ensured that the reliability of feature releases was high.
Release 4.5: The Domino theory
Lotus changed the brand name of the Notes 4.5 server product to "Domino 4.5, Powered by Notes" in December 1996 and shipped the Lotus Domino 4.5 server and the Lotus Notes 4.5 client. Lotus Domino transformed the Notes Release 4.0 server into an interactive Web applications server. This server combined the open networking environment of Internet standards and protocols with the powerful application development facilities of Lotus Notes. Lotus Domino provided businesses and organizations with the ability to rapidly develop a broad range of business solutions for the Internet and for intranets. The Domino server made the ability to publish Notes documents to the Web a dynamic process. Figure 5 shows the Release 4.5 calendar user interface.
Figure 5. Release 4.5 screen
Release 4.5 provided the following improvements:
- Messaging, including native Notes calendar and scheduling, SMTP/MIME support (SMTP MTA), cc:Mail network integration (cc:Mail MTA), POP3 support (on the Notes server), and a Mobile corporate directory
- Internet server, including Domino.Action and multi-database full-text searching
- Personal Web Navigator, including client-side retrieval of HTML pages over HTTP, Personal Web Navigator database, Java applet execution, Netscape plug-in API support, and HTML 3.2 support
- Scalability and manageability, including Domino server clusters, directory assistance, Administration Process enhancements, new database management tools, Windows NT single logon support, and Notes/NT user management
- Security, including Execution Control Lists and password expiration and reuse
- Programmability, including Script Libraries, OLE2 support on the Macintosh, extended OCX support, LotusScript enhancements, and IDE enhancements
- Enhanced application development capability with support for Java 1.1 agents and Java-based access to Notes objects (a minimal agent sketch follows this list)
- Seamless Web access from the Notes client
- Ability to hide design elements from a Web browser or a Notes client if necessary
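To give a flavor of what these Java agents made possible, here is a minimal sketch of an agent that creates a document in the database it runs in. It follows the skeleton Domino Designer later generated for Java agents and uses the lotus.domino class names that subsequent releases standardized on (in the 4.x era these classes lived in the lotus.notes package); the form and field names here are hypothetical.

```java
import lotus.domino.*;

public class JavaAgentSketch extends AgentBase {
    // NotesMain() is the entry point the Agent Manager calls when the agent runs.
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            Database db = agentContext.getCurrentDatabase();

            // Create and save a simple document in the current database.
            Document doc = db.createDocument();
            doc.replaceItemValue("Form", "Memo");
            doc.replaceItemValue("Subject", "Created by a Java agent");
            doc.save(true, false);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

An agent like this would be stored in a database and run on a schedule or in response to an event, with the server's Agent Manager (or the Notes client) invoking it.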
Release 5.0: Web integration by design
Lotus Notes and Domino Release 5.0 shipped in early 1999 as the 160th build since 1984. The Release 5 code was a direct descendant of Release 1.0, and parts of its architecture still supported Release 1.0 clients. But, while backwards compatible, Release 5 was definitely moving into the future.
With Release 5's continued Web integration, it was no longer a question of Lotus Notes versus the Internet -- they became inseparable. The new user interface for Release 5 illustrated this by taking on more browser-type characteristics. Release 5 also supported more Internet protocols and extended its reach to include access to information stored in enterprise systems as well as Notes databases.
Figure 6 shows the improved Lotus Notes Release 5 Welcome page.
Figure 6. Release 5.0 screen
For application developers, Lotus Domino Designer, the successor to Lotus Notes Designer for Domino, offered significant enhancements that make development more productive. Lotus Domino Designer is an integrated development environment with the tools needed to rapidly build and deploy secure e-business applications.
The new Lotus Domino Administrator made Domino network administration easier with a redesigned user registration and new tools for server monitoring and message management. Important enhancements to the Domino server included:
- Internet messaging and directories, including full-fidelity messaging, native MIME and SMTP support, the new Directory Catalog, and LDAP features
- Database improvements, such as transaction logging and a new on-disk structure (ODS)
Release 5.0 was available on Windows NT, Windows 95, Windows 98, OS/2, NetWare, and UNIX. This wide availability, combined with its ability to entwine Lotus Notes with the Internet, set a new standard for:
- Easy access to all the information that is important to you, be it personal or public
- Server independence because of the ability to use Lotus Notes with Lotus Domino Release 5 as well as other Internet-standard servers
- The ability to read and send messages to any Internet mail server without needing to know about Internet standards, thanks to one, consistent interface
- The latest innovations in Internet messaging with native support for all the major Internet standards
On the Notes client-side, Release 5 provided easy access to all the information that is important to you -- whether that information is personal (such as your email and calendar) or public (such as your favorite Web sites and Internet newsgroups). The Notes client included a new browser-like user interface with a customizable Welcome page for tracking your important daily information. It also included improvements to the applications you use in your daily work, such as mail, calendar and scheduling, Web browsing, and discussions. As interface designer Robby Shaver said when discussing the Release 5 client, "The number one goal is to just make the client easier."
Release 6.0: Doing things faster, better, and cheaper
When Lotus Notes 6 and Lotus Domino 6 were introduced in October, 2002, the business world was dominated by talk of lower cost of ownership, increased productivity, and faster deployment and turnaround. This reflected both the direction of business software as well as each corporation's need to perform more efficiently in the face of ever-increasing time and financial pressures. The message from our customers was clear: We need to do more with less, and we need to do it faster.
As usual, Lotus Notes and Domino were in the forefront of this trend. The Domino 6 server offered improved installation as well as scalability and performance enhancements designed to streamline maintenance and lower administration overhead. Lotus Domino Designer 6 made it easier to create complex applications and to reuse code, reducing development and deployment time. And Lotus Notes 6 remained the collaboration tool of choice for tens of millions of users worldwide with enhanced calendar and scheduling as well as other personal productivity improvements.
For example, the Notes 6 default Welcome page was redesigned to increase ease of use and to make more functionality accessible:
Figure 7. Notes 6 Welcome screen
The Notes 6 Welcome page had many new features, including:
- Welcome page action buttons, for example, to create a new mail memo or calendar entry
- A preview pane similar to standard Notes databases
- A wizard for customizing and personalizing your Welcome page
- The Launch Pad for quick access to applications, tasks, and links
- A Tip of the Day about using the Notes client
- Quick Notes interface to create mail, contacts, journal entries, and reminders without having to open the respective databases
One of the more significant Notes 6 enhancements was improved calendar and scheduling, offering new functionality to help manage your time more effectively. For example, the new mini-view, colors, and summary features help to quickly identify the most pressing items. Lotus Notes 6 offered multiple options for creating and editing meetings and other calendar entries. Rescheduling could be done primarily through a new point-and-click interface. These and other Notes 6 C&S features are described in detail in the article, "Saving time with Notes 6 Calendar and Scheduling." And for a complete rundown of new Notes 6 features, see the article, "Notes 6 Technical Overview."
Lotus Domino Designer 6 also focused on the trend of doing more with less, offering enhancements in the following areas:
- Reusability features that allow designers to take code written for one application and to reuse it in another.
- Agent design and management with a redesigned agent interface and enhanced agent properties along with the ability to attach and debug agents running on the server.
- Presentation development, introducing new features that bring the creation and management of new presentation elements, such as layers and style sheets, into the integrated design environment.
- Managing complex applications with better support both for applications that span multiple databases and that include objects that aren't traditional elements of an NSF file and for third-party tools for use on the design elements of these applications.
- Database development, making it easier for developers to do the basic work of building an application -- from small UI changes to major additions like type-ahead for @functions, HTML in the programmer's pane, the Data Connections resource type, and features to support mobile applications.
These and other Lotus Domino Designer 6 features are described in the article, "Domino Designer 6 Technical Overview."
But perhaps the most significant enhancements in Lotus Notes/Domino 6 were in the Domino server. As with the Notes client and Lotus Domino Designer, our primary theme was helping you work more efficiently. For example, installation and setup offered more options and an improved interface to allow administrators to get servers up and running faster. And we made it easier for an administrator to centrally manage multiple remote servers through features such as policy-based management. Policies help you maintain standard settings and configurations for registration, setup and desktop, archiving, and security. For more information about policy-based management, see the article, "Policy-based system administration with Domino 6."
Server scalability and performance was another major issue. To address these needs, Lotus Domino 6 introduced features such as network compression, which can reduce the number of bytes sent during transactions by up to 50 percent, and statistics monitoring and analysis to help you plan and run individual systems (as well as your whole domain) more efficiently. In Lotus Domino 6, you can monitor performance statistic profiles using charts that display the statistics in real-time or historically. And the Domino Server Monitor includes server profiles that monitor tasks and processes specific to a certain subset of servers.
Of course, security remained an overarching concern for all administrators. Lotus Domino 6 boasted new security functionality, such as the new certificate authority, delegated server administration, and improved password management. And you could push Admin ECLs to clients dynamically on an as-needed basis, making it easier to deliver timely updates and to update clients who received the default ECL during setup.
Other new Lotus Domino 6 features included:
- Messaging enhancements, including iNotes Web Access and Domino Everyplace servers, extending access to Domino's messaging infrastructure, as well as new features for the Web server that expand the capabilities for Web application development and deployment.
- Changes to directories, for example the ability to use LDAP, NameLookup, or both to serve up directories, and a directory indexer task that updates views in the Domino Directory.
- Domino hosting features that allow multiple organizations to be transparently hosted by a single logical Domino server.
- Server cluster enhancements, including making the Cluster Administrator a server thread, adding new settings to control the number of active Cluster Replicators, and adding new Cluster Replicator commands for better control over cluster replication and information gathering.
- Domino Off-Line Services (DOLS) enhancements.
The article, "Domino 6 Technical Overview" describes these and all other new Lotus Domino 6 features.
Release 6.5: Everybody's talking
In September 2003, IBM released Lotus Notes/Domino 6.5. This version offered tighter integration with other IBM/Lotus technologies, such as IBM Lotus Sametime instant messaging and IBM Lotus Domino Web Access (formerly iNotes Web Access). And we expanded on the "faster, better, cheaper" theme of release 6.
In keeping with the Notes/Domino release plan of alternating between focusing on the server in one release and the client in the next, much of the effort around Release 6.5 involved end-user productivity enhancements for the Notes 6.5 client. One of the more significant of these enhancements (the one that inspired the title of this section) was Lotus Sametime instant messaging integration. From within the Notes 6.5 client, you could now log into Lotus Sametime, check whether a user was online, start a chat with one or more users, and conduct online meetings. This significantly extended the reach of the Notes client, allowing you to instantly communicate and collaborate with others. The inclusion of instant messaging at no additional charge remains a unique advantage of Lotus Notes in Release 6.5 and beyond.
Another example of the Notes 6.5 commitment to productivity was its expanded calendar and scheduling functionality. You could now create a calendar entry or To Do item from a mail message, simply by dragging and dropping the message from any view in your mail file onto the Calendar or To Do bookmark. You could also use drag and drop to create a mail message out of a calendar entry, or a calendar entry out of a To Do item. Other calendar and scheduling improvements included the ability to reschedule one or more instances of a repeating meeting without affecting the other meetings, and printing distribution lists in mail messages or calendar entries.
In Notes 6.5 mail, you could now mark a mail message with the Follow Up flag to indicate that you need to take future action on that message. Icon indicators helped you determine more quickly whether or not you have already replied to a message or forwarded it. You could also specify that mail received from a specific sender be automatically sent to your Junk mail folder. And you could more easily create QuickRules.
The Lotus Domino Web Access client was improved to help bring its level of functionality closer to the Notes client experience. New Notes-like features included Lotus Sametime integration, better calendar and scheduling, the ability to copy messages into calendar entries or To Do items, template customization, one-click sending and filing of messages, adding a person to a Contacts list, and local archiving.
For Lotus Domino Designer 6.5, Domino application developers could now add Lotus Sametime person awareness to their applications by enabling a names field in a form to show online status. You could also add awareness to views by enabling columns to show online status. Another application development feature was Lotus Domino Toolkit for WebSphere Studio 1.1, a set of Eclipse plug-ins for creating JavaServer Pages (JSPs) with Domino Custom Tags. Lotus Domino Designer 6.5 also offered LotusScript classes for Java/CORBA and COM bindings and an enhanced LotusScript NotesRegistration class.
The Lotus Domino 6.5 server expanded the number of supported platforms on which Lotus Domino runs. New platforms included Linux on zSeries (S390) and Windows Server 2003. And Lotus Domino 6.5 added support for the Mozilla 1.3.1 browser on Linux, including support for offline access in Lotus Domino Web Access on a Linux client.
Of course, performance was as important as ever. To address this need, Lotus Domino 6.5 added new Server.Load workloads, including workloads for Lotus Domino Web Access, Mail, and IMAP. Linux administrators welcomed the ability to monitor platform statistics for Linux and Linux on zSeries platforms. And you now had better control over database replications. Lotus Domino 6.5 for iSeries added support for multiple versions of Lotus Domino on one partitioned machine. And Lotus Domino for z/OS added hardware cryptography capability to reduce CPU rates when SSL is enabled.
Other server-related enhancements included the Unified Fault Recovery/Cleanup Scripts interface, the ability to enable/disable NSD to collect diagnostic and other data, free-running Memcheck to validate in-memory data structures, timestamps in SEMDEBUG.TXT, and the ability to collect and record system and server data at startup.
Simultaneous with the ship of Lotus Notes/Domino 6.5 was the release of Lotus Enterprise Integrator 6.5. New Lotus Enterprise Integrator 6.5 features included the ability to assign reader-level access to Activity documents and Connection documents; a dependent activity report for showing subordinate relationships for all activities in the Lotus Enterprise Integrator Administrator; support for Linux Red Hat 7.2, United Linux 1.0, Windows 2003, and Sun Solaris 9i; the ODBC Connector for iSeries; and improved performance for Virtual Documents.
One final note: In Release 6.5.1, we synchronized the release of Lotus Notes/Domino with the Lotus extended products, including Lotus Sametime, IBM Lotus QuickPlace, and IBM Lotus Domino Document Manager.
Release 7.0: New horizons
Lotus Notes/Domino 7 was released in August, 2005, and customers' expectations were never higher. They wanted us to continue the trend of making Lotus Notes and Domino easier to deploy and manage with fewer resources. At the same time, users increasingly looked at Lotus Notes and Domino as critical components of an all-encompassing on-demand workplace, fully integrated with other IBM technologies, such as IBM WebSphere Portal and IBM DB2.
Many of the most significant enhancements in Release 7 were for the Domino 7 server. For example, Domino 7 server administration tools now supported DB2 databases. In addition, Lotus Domino 7 offered better integration with IBM WebSphere Application Server and WebSphere Portal. And Lotus Domino 7 provided better integration for Web standards.
The new Domino Domain Monitoring (DDM) feature provided administrators one location within the Domino Administrator to view the status of multiple servers across a domain or multiple domains. DDM used probes to gather information across multiple servers, checking for any issues. This information was then collected and presented in a special database (DDM.NSF). DDM provided ongoing, 24/7 monitoring of all your servers with fast recognition and reporting of critical server and client issues.
Another important addition to server management in Lotus Domino 7 was Activity Trends. This feature collected and stored statistics on activities involving the server, databases, users, and so on. This information allowed you to review Activity Trends information to better judge how database workload was distributed among the servers in your environment. Activity Trends even provided recommendations for balancing database workload, based on resource goals that you specified, and included a workflow to help implement these recommendations.
Lotus Domino 7 offered autonomic diagnostic collection, allowing you to evaluate call stacks generated when a Notes client or Domino server crashed, using the automatic diagnostic collection functionality introduced in Lotus Notes/Domino 6.0.1. Autonomic diagnostic collection extended the capability of automatic data collection by analyzing call stacks located in the Fault Report mail-in database, and then evaluating this data to determine whether or not other instances of the same problem had occurred.
Lotus Notes Smart Upgrade was another area of improvement. Lotus Domino 7 provided a mail-in database to notify administrators of the Smart Upgrade status (success, failed, or delayed) for each user and machine. If a server in a cluster failed, Smart Upgrade could switch to another member of the cluster. To avoid excessive server load, the Smart Upgrade governor limited the number of downloads from a single server. Other Domino 7 administration enhancements included InstallShield Multiplatform (ISMP) installation and Linux/Mozilla support for the Web Administration client.
New security functionality in Lotus Domino 7 included stronger keys for encryption (1024-bit RSA keys and 128-bit RC2). Lotus Domino 7 also provided better support for single sign-on (SSO) and new security-related APIs for handling of encrypted mail. (See the developerWorks Lotus article, "Security APIs in Notes/Domino 7.") Other security features included private blacklist/whitelist filters for SMTP connections, and DNS whitelist filters for SMTP connections. Whitelist filters could be enabled on the client and at the DNS level. Mail Rules allowed users to select blacklists.
Some of the most important work in Lotus Domino 7 was done behind the scenes to improve server performance, and this work paid off. In tests done with the NotesBench R6Mail and R6iNotes workloads on one Domino partition on all platforms, server scalability improved by a whopping 80 percent compared to Release 6.5 (and a 400 percent improvement on Linux). Our tests also showed that Lotus Domino 7 reduced server CPU utilization (up to 25 percent). Other performance-related enhancements included Linux thread pools, IIOP performance improvements, networking performance improvements, better mail rule scalability, and improved scalability for Lotus Domino Web Access mail servers. All these were designed to help reduce the cost and overhead of maintaining your Lotus Notes/Domino environment.
Lotus Notes 7 provided users with enhanced calendar and scheduling, better Lotus Sametime integration, and improvements to mail, desktop, and interoperability. For calendar and scheduling (C&S), Lotus Notes 7 added a Calendar Cleanup feature for calendar maintenance. Calendar Cleanup let you delete entries based on creation/last modified dates. You could also select the type of entries (Calendar or To Do) to delete. You could have your calendar accept a meeting even if it conflicts with an existing meeting, and cancel C&S workflow when responding to a meeting invitation. Lotus Notes 7 also significantly improved Rooms and Resources functionality to better manage your rooms and resources. (For more on this topic, see the developerWorks Lotus articles, "Rooms and Resources design in Lotus Notes/Domino 7" and "New Rooms and Resource features in Lotus Notes/Domino 7.")
Lotus Notes 7 further expanded Lotus Sametime integration. Presence awareness was added to C&S views, Team Rooms, Discussions, To Do documents, the Personal Name and Address Book, the Rooms and Resources template, and the Domino Directory. In addition, Notes instant messaging chat windows were now in a separate thread. Notes instant messaging meetings provided features such as screen sharing, whiteboard, audio, and video. And you could now paste Notes URLs into chat windows.
For mail users, Lotus Notes 7 offered a Quick Follow Up feature, allowing you to select one or more mail messages and mark them for follow up without displaying the Follow Up dialog box. Follow Up actions were also available from the right-click mouse menu. Mail Rules now supported the Stop Processing action and blacklist/whitelist spam. A new status bar icon indicated whether email you receive was digitally signed, encrypted, or both. You could also work with Notes mail through the Smart Tags feature in Microsoft Office XP. (For more information, see the tip, "Using Smart Tags in Lotus Notes/Domino 7.0.")
Other Notes 7 enhancements included improved archiving, enhanced Meetings view, and AutoSave (see the developerWorks Lotus article, "All about Autosave in Lotus Notes/Domino 7").
As we mentioned earlier, Lotus Notes/Domino 7 provided the ability to use DB2 as a data store. To support this, Lotus Domino Designer 7 featured two new types of views for DB2-enabled databases: DB2 Access views and DB2 Query views. DB2 Access views define how your data is organized, and DB2 Query views use an SQL query to populate their data (instead of a view formula that selects documents from within the NSF file). You could define fields to be accessed relationally on a per-form or per-database basis.
A new design element let you maintain the function of a Web service. This design element includes all the attributes typically expected of a Web service. For more information, see the article, "Lotus Notes/Domino 7 Web Services."
Lotus Domino Designer 7 offered several new usability features to its interface. For example, you could now sort the Comments column. You could also define the name, alias, and comment directly in the design list and add view actions to right-click menus. Lotus Domino Designer 7 also provided a toolbar icon to toggle the LotusScript debugger state (on or off). Lotus Domino Designer 7 also included programmability enhancements, including new functions, properties, and methods.
Lotus Domino Designer 7 added support for JVM 1.4.2 and the Java debugger. Other new features included WebSphere Portal integration improvements, View Shared Column support, and support for multiple User Profile columns in a view.
Lotus Domino Web Access 7 provided a number of new features, including a new Lotus Domino Web Access client template (dwa7.ntf). Lotus Sametime instant messaging awareness integration now more closely matched the Notes client awareness features. Productivity enhancements included single-click Follow Up, Quick Mail Rule, and forwarding any Lotus Domino Web Access object in an email.
Release 8: Built on Eclipse
Lotus Notes and Domino 8, first announced at the IBM Lotus Technical Forum in Hannover, Germany, in June, 2005, was released in August, 2007. This latest version of Lotus Notes and Domino shows significant changes over earlier versions and builds on the strengths of the collaboration and messaging product with a new user interface, powerful new functionality, innovative productivity tools, and expanded support for business solutions.
The Lotus Notes 8 client is based on the Eclipse framework, making it possible to run Eclipse-based code within Lotus Notes. This fundamental innovation facilitates a significant leap: Eclipse plug-ins can be wired together with Lotus Notes applications as composite applications. And by building composite applications, you can get quick access to your business information in one view. Similarly, you can extend the client program and customize the user interface.
Lotus Notes 8 is built on IBM Lotus Expeditor, IBM's universal managed client software, which, in turn, is built on Eclipse. In essence, Lotus Notes 8 is now on an open-source, Java-based platform. New features in Lotus Notes 8 include:
- Open button for fast access to the applications you use most often
- Sidebar that displays critical information and alerts including Lotus Sametime V7.5.1 contacts, day-at-a-glance, RSS, and ATOM feeds
- Context-sensitive toolbars and customizable view preferences
- Support for activity-centric computing
- Word processing, spreadsheet, and presentation applications, which support Open Document Format (ODF), Microsoft Office, and Lotus SmartSuite file formats
- Omnipresent search center for email, calendar, the Web, and your desktop
- Collaboration history that lets you search and view your collaboration with specific people
- Mail recall feature
- Conversation mode that lets you collect and review email threads based on subject headings
The user interface in the Lotus Notes 8 client has been updated as shown in figure 8.
Figure 8. Release 8 user interface
Lotus Domino 8 includes improvements in performance, administration, and serviceability. Many of the changes in Lotus Domino 8 support new features in Lotus Notes 8, such as message recall, improved user registration, and mail threading. Support for changes to application development includes managed deployment of composite applications to Lotus Notes 8 clients and the ability for Lotus Domino to be both a Web service consumer and a Web service provider.
Lotus Domino 8 supports an open application infrastructure, lets you deploy composite applications in Lotus Notes 8, and extends Web services support. New features in Lotus Domino 8 include:
- Policy management of inbox cleanup to help manage inbox sizes
- Integration with IBM Tivoli Enterprise Console software
- Support for RedHat Linux 5
- Improved Internet security features, including the ability to prevent access to Internet password fields in the Domino Directory and Internet account lockout due to password entry failure
Lotus Domino Designer 8 offers new capabilities that are in step with the new features of Lotus Notes and Domino 8. It includes new features and functions that enable you to provide more value through Lotus Notes and Domino applications and support service-oriented architectures (SOA). In addition, working with Lotus Domino 8, Lotus Domino Designer 8 refines the native Web service provider support first introduced in Lotus Domino 7, offering more options that allow other systems to make use of Domino data and business logic.
Working with composite applications in Lotus Notes and Domino 8 extends the "more, better, faster" model of previous releases. Lotus Notes and Domino 8 make it easy to integrate existing and new solutions and data into composite applications. These new applications aggregate components on the screen to present content from different systems -- Lotus Notes databases, Java applications, and the Web, for example -- to the user, all in a single context. See figure 9.
Figure 9. A composite application screen in Lotus Notes 8
The new productivity editors are applications for creating, editing, and sharing documents, presentations, and spreadsheets; they are also included in the standard Lotus Notes 8 license. The editors, closely integrated with Lotus Notes, support several file formats; their default is the same Open Document Format used by OpenOffice 2.0 and other products that are based on open-source code. Figure 10 shows a sample document.
Figure 10. The Lotus Document Editor
The release of Lotus Notes and Domino 8 completes a process that started in 2002 when IBM embraced standards-based computing. The latest versions of the product enhance the Lotus Notes user interface, offer activity-centric computing, and introduce composite applications.
- Read the developerWorks Lotus article, "What's new in IBM Lotus Notes and Domino V8."
- Read the developerWorks Lotus article, "New features in Lotus Domino 7.0."
- Read the developerWorks Lotus article, "New features in Lotus Notes and Domino Designer 7.0."
- Read the developerWorks Lotus article, "New features in Notes/Domino 6.5."
- Read the developerWorks Lotus article, "Domino 6 Technical Overview."
- Read the developerWorks Lotus article, "Notes 6 Technical Overview."
- Read the developerWorks Lotus article, "Domino Designer 6 Technical Overview."
- Read the developerWorks Lotus article, "Notes R5 Technical Overview."
- Read the developerWorks Lotus article, "Domino R5 Technical Overview."
- Read the developerWorks Lotus article, "Domino Designer R5 Technical Overview."
- Get started with IBM Lotus Notes and Domino V8 technical content.
- Refer to the developerWorks Lotus Composite Applications page. | <urn:uuid:65c6b7c8-12e2-4b76-b236-72a32a0abb3e> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/lotus/library/ls-NDHistory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00006-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927532 | 10,795 | 2.8125 | 3 |
Johns Hopkins University engineers are using diamonds to change the properties of an alloy used in phase-change memory, a change that could lead to the development of higher-capacity storage systems that store data more quickly and last longer than current media.
The process, explained this month in the online edition of Proceedings of the National Academy of Sciences (PNAS), focused on changes to the inexpensive GST phase-change memory alloy that's composed of germanium, antimony and tellurium.
"This phase-change memory is more stable than the material used in current flash drives. It works 100 times faster and is rewritable about 100,000 times," said the study's lead author, Ming Xu, a doctoral student at the Whiting School of Engineering at Johns Hopkins University.
"Within about five years, it could also be used to replace hard drives in computers and give them more memory," he suggested.
GST has been in use for two decades and today is widely used in rewritable optical media, including CD-RW and DVD-RW discs.
IBM and others are already developing solid-state chip technology using phase-change memory, which IBM says can sustain up to 5 million write cycles. High-end NAND flash memory systems used today can sustain only about 100,000 write cycles.
By using diamond-tipped tools to apply pressure to the GST, the researchers found they could change the properties of the alloy from an amorphous to a crystalline state and thus reduce the electrical resistivity by about four orders of magnitude. By slowing down the change from an amorphous state to a crystalline state, the scientists were also able to produce many varying states allowing more data to be stored on the alloy.
GST is called a phase-change material because, when exposed to heat, an area of the alloy can change from an amorphous state, in which the atoms lack an ordered arrangement, to a crystalline state, in which the atoms are neatly lined up in a long-range order.
An illustration of how the diamond-tipped tools were used to compress GST
The two states are then used to represent the computer digital language of ones and zeros.
In its amorphous state, GST is more resistant to electric current. In its crystalline state, it is less resistant.
The two phases of GST, amorphous and crystalline, also reflect light differently, allowing the surface of a DVD to be read by a tiny laser.
While GST has been used for some time, the precise mechanics of its ability to switch from one state to another have remained something of a mystery because it happens in nanoseconds once the material is heated.
To solve this mystery, Xu and his research team used the pressure from diamond tools to cause the change to occur more slowly.
The team used a method known as X-ray diffraction, along with a computer simulation, to document what was happening to the material at the atomic level. By recording the changes in "slow motion," the researchers found that they could actually tune the electrical resistivity of the material during the time between its change from amorphous to crystalline form.
"Instead of going from black to white, it's like finding shades or a shade of gray in between," said En Ma, a professor of materials science and engineering, and a co-author of the PNAS paper. "By having a wide range of resistance, you can have a lot more control. If you have multiple states, you can store a lot more data."
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian, or subscribe to Lucas's RSS feed. His e-mail address is email@example.com.
This story, "Researchers use diamonds to boost computer memory" was originally published by Computerworld. | <urn:uuid:e5b9fd57-9048-4b82-bb1f-335489637d8b> | CC-MAIN-2017-04 | http://www.itworld.com/article/2726187/storage/researchers-use-diamonds-to-boost-computer-memory.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950035 | 822 | 3.4375 | 3 |
The Body condition is used to define criteria for the message body (the content of an email) that, once met, will trigger an action to be processed by the program.
To define the message body within the Conditions tab, expand the Field's context menu and choose Body (Fig. 1.).
|Fig. 1. Choosing Body in the Conditions tab.|
There are two factors (Fig. 2.) that need to be configured within this condition:
- Operator - defines how the condition will be evaluated. For this condition, the execution method may only be set to true.
In this particular condition, the operator may be set to trigger an action if the message body:
- contains keyword(s)
- Actual condition value - here you define the expected value of the condition that will trigger the rule to apply the action.
|Fig. 2. Configuring the message Body.|
Please note that the value field depends on the operator, so the way it is defined may differ depending on the operator chosen. What is more, this field is always case-insensitive.
Choosing the contains keyword operator, you will be able to define strings of characters to be searched for within the message body. Additionally, besides defining strings of characters (letters and numbers) as keywords, you can also make use of wildcards as a prefix or a suffix. Either way, if the defined keywords are found while processing messages, the condition will be met and the defined action executed.
The program will not recognize keywords defined with $ (dollar) and ; (semicolon) characters.
This feature also lets you decide whether the found keyword should be removed or left unchanged. To define a keyword for the contains keyword operator, click Edit and then the Add button. In the window that opens, enter the keyword and mark the Remove from message checkbox to delete it from the message body if found. Otherwise, leave the checkbox unmarked if you do not want to remove the found keyword. Be aware that this option works regardless of other conditions and exceptions.
|Fig. 3. Defining a keyword to be searched for within the body of the message.|
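The product does not document its matching engine, so the following is only an illustrative sketch of the behavior described above: case-insensitive keyword search with optional wildcard prefixes or suffixes, and optional removal of the matched keyword. The function names, the use of * as the wildcard character, and the regular-expression approach are assumptions made for this example, not part of the product.

```python
import re

def build_pattern(keyword: str) -> re.Pattern:
    """Turn a keyword that may start or end with a '*' wildcard into a
    case-insensitive regular expression (assumed convention for this sketch)."""
    prefix = keyword.startswith("*")
    suffix = keyword.endswith("*")
    core = re.escape(keyword.strip("*"))
    pattern = (r"\w*" if prefix else "") + core + (r"\w*" if suffix else "")
    return re.compile(pattern, re.IGNORECASE)

def apply_body_condition(body: str, keyword: str, remove_from_message: bool):
    """Return (condition_met, possibly_modified_body)."""
    pattern = build_pattern(keyword)
    if not pattern.search(body):
        return False, body              # condition not met, body unchanged
    if remove_from_message:
        body = pattern.sub("", body)    # strip every occurrence of the keyword
    return True, body

met, new_body = apply_body_condition("Please treat this as CONFIDENTIAL-internal.",
                                     "confidential*", remove_from_message=True)
print(met)       # True: the keyword was found, so the rule's action would run
print(new_body)  # the matched keyword has been removed from the body
```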
Message Direction - this article describes how to configure the Message Direction condition. | <urn:uuid:a9f838b7-62b9-46e5-bd8a-251fc0ad1081> | CC-MAIN-2017-04 | https://www.codetwo.com/userguide/exchange-rules-family/body.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00310-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.819033 | 451 | 2.53125 | 3 |
The Efficiency of Water
Sun, of Santa Clara, Calif., already offered its Grid Rack to customers, where racks populated with technology ordered by the customer were put together by Sun and then delivered to the user's site. The vendor was looking to transfer that capability to an entire data center setup, and the largest size that made sense was a standard shipping container, Papadopoulos said.

Water is more efficient than air, which is the cooling method most widely used in traditional data centers, he said. Inside the shipping containers, the systems are set up front-to-back along the wall of the container, with heat exchangers between each one, Papadopoulos said. The warm air from one is passed through the exchanger, where it's chilled and then used to cool the next server, he said. "It forms this kind of perfect cyclonic flow inside the box, and it's very quiet, it's very efficient," Papadopoulos said.

Charles King, an analyst with Pund-IT Research, said the concept addresses a lot of concerns that businesses have, but that Sun is going to need to answer some key questions on issues such as security before the shipping container business takes off. "It's an interesting idea because it addresses a lot of the challenges that people have concerning data center facility costs, in particular the real estate component," said King, in Hayward, Calif. "The whole cost issues around data centers have little to do with the technology and everything to do with the support and construction of the facility." Being able to run multiple containers together, even stacking them, would help address those issues, he said.

However, most data centers have several layers of security, and at a time when disaster recovery and compliance are key issues, having a data center that's housed inside a shipping container might not be enough security for many enterprises, King said. Gadre admitted that the Blackbox idea won't be for everyone, including some who might want the highest levels of security. But in the areas of Web serving and HPC, it should find customers, he said.

The idea of integrating cooling, networking and power distribution in a central fashion with the hardware is getting looks from a number of OEMs. Hewlett-Packard, of Palo Alto, Calif., with its Lights Out Project, is looking to do something similar on a smaller scale, looking at an infrastructure model that brings power and cooling closer to the compute nodes themselves. The goal is similar: to create an environment that addresses power and cooling concerns while increasing flexibility inside the facility. Papadopoulos said this is a trend in the industry that is going to grow in importance. "I think there is a huge pent-up demand for somebody to figure this out," he said.

Power and cooling have become key issues in data centers as system form factors have become more dense, particularly with the rise of blade computing. One of the key promises of blades, being able to pack more compute power into smaller areas, is hindered by the amount of power consumed and heat generated. Major technology consumers like Google predict that soon they will be paying more to power and cool the systems than for buying the machines themselves. IT vendors are addressing these issues in a number of ways.
Chip makers like Advanced Micro Devices and Intel are producing more energy-efficient processors; OEMs are building systems with power consumption in mind; and software makers are putting power monitoring and managing functions into their products. Virtualization, the ability to run multiple operating systems and applications on single physical machines, also is an important technology. Sun has been vocal on these issues. The company is promoting its UltraSPARC T1 "Niagara" chip, which offers up to eight processing cores while consuming less power (about 70 watts) than many other processors. The company also is using AMD's Opteron in its x86 servers.
A key to Sun being able to put the technology into such a compact space is the ability to use water to cool the systems, Gadre said. | <urn:uuid:995d12e3-404a-47a8-85b7-7139ed62ffcc> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/IT-Infrastructure/Sun-Unveils-Data-Centers-in-a-Box/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967721 | 838 | 2.703125 | 3 |
American space agency NASA has built an Artificial Intelligence (AI) platform capable of aiding firefighters when they enter a burning building.
The platform, called Audrey, is the product of a partnership between the agency's Jet Propulsion Lab (JPL) and the Department of Homeland Security (DHS).
This project forms part of the Next Generation First Responder program, which aims to identify ways firefighters, police and paramedics can stay safe while in the field.
Mass of data from AI platform
Audrey collects data about heat, gases and other signs of danger to help first responders get through the flames safely and quickly, letting them save victims.
To make the AI platform possible, the designers used several technologies developed by NASA and the Department of Defence. It’s been in the works for nine months.
Mark James, lead scientist of the Audrey project at JPL, explained that the platform works with mobile devices and fire equipment. “As a firefighter moves through an environment, Audrey could send alerts through a mobile device or head-mounted display,” he said.
Integrated with IoT
What makes it innovative is the fact that it’s not limited to one user and can track an entire team of firefighters. It sends recommendations to individuals on how they can work together more effectively.
It’s been designed to work alongside the Internet of Things, utilising devices and sensors that communicate with each other. For example, wearable tech attached to a firefighter’s jacket could provide information on their location.
The cloud plays a pivotal role here and allows Audrey to watch situations as they develop. It can analyse them and predict the exact resources that’ll be needed next, saving the firefighters much needed time.
The platform was demonstrated in June at the Public Safety Broadband Stakeholder Meeting, which was held by the Department of Commerce. During the presentation, it utilised several sensors and made safety recommendations, with field tests to follow within a year.
John Merrill, NGFR program manager for the DHS Science and Technology Directorate, said the technology improves the skillset of first responders in the field and helps them build new strengths.
“The proliferation of miniaturized sensors and Internet of Things devices can make a tremendous impact on first responder safety, connectivity, and situational awareness,” Merrill said. “The massive amount of data available to the first responder is incomprehensible in its raw state and must be synthesized into useable, actionable information.”
A guardian angel
Edward Chow, manager of JPL’s Civil Program Office and program manager for Audrey, said: “When first responders are connected to all these sensors, the AUDREY agent becomes their guardian angel. Because of all this data the sensor sees, firefighters won’t run into the next room where the floor will collapse.”
“Most A.I. projects are rule-based. But what if you’re only getting part of the information? We use complex reasoning to simulate how humans think. That allows us to provide more useful info to firefighters than a traditional A.I. system.” | <urn:uuid:da9a76a6-388e-4f70-8da6-edd1b25ec121> | CC-MAIN-2017-04 | https://internetofbusiness.com/nasa-builds-ai-platform-firefighters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00034-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92833 | 645 | 2.90625 | 3 |
Big data is all data
What is “big data?” One convenient litmus test to answer the question is: when the volume, velocity or variety of data becomes too great to handle with conventional data processing tools and techniques, you know you’re dealing with big data. Indeed, the technologies most often associated with big data such as Hadoop and MapReduce are important because they make data characterized by the “three Vs” of volume, velocity and variety more cost efficient and effective. However, this way of looking at the issue limits big data to a technical challenge and misses what has become the real significance of big data: finding new ways to use data to create business value. So if an organization is setting aside certain types of data (such as sensor data captured from their delivery vehicles, or clickstreams from a heavily-accessed website) to the exclusion of other data, they’re missing the real benefit of big data.
Think of it this way: for every physical device generating machine data, or every customer sharing information about themselves in social media, there is a network of data that defines the business context and enriches any analytics that we might perform on the subject. Consider, for example, a delivery vehicle that generates geo-location and temporal data, as well as sensor readings from its mechanical systems such as engine performance, temperature and fuel consumption. A company might use that data to perform analytics to optimize routing, delivery schedules, service agreements, staffing and more. While we can marvel today at the amount of sensor data emanating from a modern delivery vehicle, in fact a lot of useful data about a vehicle (or any other major piece of equipment) is already percolating through many other systems. For example, purchase information about the vehicle, along with technical specifications, might be stored in an ERP system; information about the driver (training curriculum, years of experience, driving record, etc.) could be in an HR system; maintenance records could be in another system.
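To make that concrete, here is a minimal sketch of how raw telematics readings might be joined with vehicle master data from an ERP system and driver records from an HR system before analysis. All of the field names and values are invented for illustration; a real implementation would pull from the actual systems of record.

```python
# Raw sensor readings from the vehicles (hypothetical values)
sensor_readings = [
    {"vehicle_id": "V42", "engine_temp_c": 104, "fuel_l_per_100km": 11.2},
    {"vehicle_id": "V17", "engine_temp_c": 88,  "fuel_l_per_100km": 9.4},
]

# Business context from other systems of record (also hypothetical)
erp_vehicles = {"V42": {"model": "Box truck", "purchased": 2011},
                "V17": {"model": "Panel van", "purchased": 2014}}
hr_drivers   = {"V42": {"driver": "A. Jones", "years_experience": 3},
                "V17": {"driver": "B. Smith", "years_experience": 12}}

# Enrich each reading with the context needed to make the analytics actionable
for reading in sensor_readings:
    vid = reading["vehicle_id"]
    enriched = {**reading, **erp_vehicles.get(vid, {}), **hr_drivers.get(vid, {})}
    if enriched["engine_temp_c"] > 100:
        print(f"Check {enriched['model']} {vid} "
              f"(driver {enriched['driver']}, bought {enriched['purchased']})")
```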
I could go on with examples but you likely get the picture that most of the things generating new or raw data types are also referenced in many other systems around your enterprise. The same principle applies to clickstreams on websites, machine log files and other things. Being able to connect to these systems and augment analytics with additional business context can add a very powerful element.
And you shouldn’t ignore so-called unstructured content when considering data to include in a big data project. While social media analytics is a popular and widely-explored use case in the big data world, there is a really a whole universe of human-generated content to be mined. Most organizations manage vast amounts of human-generated content ranging from mundane things like operating manuals to more interesting things like message archives, wikis, lab reports, customer interaction summaries, comments in survey responses and strategy documents—the list is endless. This data is usually stored in content management systems and other secured repositories that don’t lend themselves to easy access with typical big data analytical tools.
Here’s a simple taxonomy, or checklist, you should be considering when deciding what types of data to include in your next big data project. Not all types of data are available or relevant for every project, but it may be helpful to go through the step of considering these categories:
- Sensor and machine data, which reflects the physical world or the performance of devices across the “Internet of Things”
- Business applications and systems of record which contain the transactional records of the organization as well as information about business practices
- Human-generated language and content of all kinds, whether in formal documents, message logs, reports or internal discussions
- And finally there’s the data outside the organization such as social media content
For some more thoughts on how organizations can expand the sources of data they incorporate into their big data projects, I invite you listen to my podcast on the topic. Also review the big data exploration use case, as well as other use cases that the big data team at IBM has identified to help organizations like yours. | <urn:uuid:a9df5d3f-a991-4be4-800a-17978ac463b4> | CC-MAIN-2017-04 | http://www.ibmbigdatahub.com/blog/big-data-all-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00520-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925026 | 837 | 2.65625 | 3 |
Virtualization: From Concept to Reality
Virtualization technology is possibly one of the most important issues in IT and has started a top to bottom overhaul of the computing industry. Virtualization refers to the act of creating a virtual (rather than actual) version of something. In computing, it is a proven software technology that makes it possible to run multiple operating systems and applications on the same server at the same time. It began in the 1960s, as a method of logically dividing the system resources provided by mainframe computers between different applications.
Virtualization is achieved in various forms; some of them are briefly discussed below. Hardware Virtualization, which typically refers to Server Virtualization, is most commonly used today. It is all about converting one physical server into several individual and isolated virtual spaces that can be taken up by multiple users based on their respective requirements. The concept of isolating a logical operating system (OS) instance from the client used to access it is known as Desktop Virtualization. Application Virtualization, the third important type, is a technology that encapsulates computer programs from the underlying operating system on which they are executed, thus enhancing their portability. Virtualization is also one of the hot topics in storage management (Storage Virtualization). It deals with the amalgamation of multiple network storage devices into what appears to be a single storage unit. Network Virtualization, sometimes referred to as Software-Defined Networking, is similar to Storage Virtualization. It is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network.
Virtualization is transforming the IT landscape and fundamentally changing the way that people utilize technology. It's the most effective way to reduce IT expenses while boosting efficiency and agility – not just for large enterprises, but for small and midsize businesses too. Workloads get deployed faster, performance and availability increase, and operations become automated, resulting in IT that is simpler to manage and more affordable.
When it comes to virtualization technology offerings, there are quite a few brands known for it; some of the commonly heard names being VMware, Microsoft, Citrix, Red Hat, Oracle, Amazon and Google. Their solutions scale from a few virtual machines that host a handful of websites, virtual desktops or intranet services all the way up to tens of thousands of virtual machines serving millions of Internet users.
Despite all the benefits, one must consider certain issues before opting for virtualization. Not all server applications are virtualization-friendly, which means that some parts of a business's technology may not offer virtualization as an option. It is also important to select a solution that offers adequate data protection, as third-party servers often put data at risk.
“Virtualization is very similar to outsourcing,” says Lynne Ellyn, CIO, DTE Energy. tweet
Thus, virtualization continues to provide the next byte, never knowing where it is coming from. | <urn:uuid:6e457036-dc35-4a1a-8261-d50bde101411> | CC-MAIN-2017-04 | http://www.altencalsoftlabs.com/blogs/2016/03/26/virtualization-from-concept-to-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943653 | 599 | 3.171875 | 3 |
"Big Data" is here, and it is even more confusing than the “cloud." Incomplete and obsolete definitions are being used to define Big Data, which confuses customers and vendors alike. How can we get past the confusion in the market and identify opportunities to successfully help companies implement Big Data?
First, let's start with the definition of Big Data. More than a decade ago, META Group analyst Doug Laney introduced the challenges of data growth with the three Vs: volume, velocity and variety. This is the description that is still used today as the starting point of describing Big Data. However, as data has evolved over the past decade, we now start to ask ourselves what each of these Vs really means:
- Does "volume" refer to the size of a database? Or size of a data object? Or the cumulative size of all data within an organization?
- Does "velocity" refer to the speed of data coming in? The speed of data acquisition? The speed of data processing? The speed of visualizing and charting data?
- And does "variety" refer to a variety of data types? Or a variety of sources? Or a variety of applications supported?
Although the answer could be "All of the Above," this isn't a helpful way to think about Big Data or to start designing potential product suites to help companies with their business needs. To be less clever and more straightforward, let's just say that Big Data is any sort of data that can't be stored or analyzed by a business's existing database or analytics solution. And let's start breaking out the concept of Big Data into different types of use cases based on the type of Big Data being used and the tools involved in supporting Big Data.
The increasing size of data volumes we can call Expanding Big Data. This refers to data that is outgrowing its original database or data warehouse. As this data grows from gigabytes to terabytes, companies typically seek Big Data appliances that combine hardware with either a Hadoop distribution (open-source software for distributed computing) or a data warehouse. Examples include the Dell Apache Hadoop solution, EMC Greenplum Data Computing Appliance, IBM Netezza, Netapp Hadoopler, Oracle Big Data Appliance and Teradata Extreme Data Appliance.
Networked and correlated data on a massive scale is Social Big Data. The most obvious examples of Social Big Data come from social media monitoring, where marketing and service professionals seek to monitor Facebook, Twitter and other social networks to identify brand sentiment and business opportunities. Although social media monitoring solutions such as Salesforce's Radian6, Marketwire's Sysomos and Crimson Hexagon can provide social monitoring and analytics, companies seeking to integrate social information with existing product and customer information need to bring this information in-house. In this case, data integration technologies become important in combining social data with CRM and other traditional enterprise data transactions. This data integration consists of three steps often abbreviated as ETL: | <urn:uuid:c6ea3943-5e33-46a4-9489-25d12caec66c> | CC-MAIN-2017-04 | http://www.channelpartnersonline.com/articles/2012/07/big-data-is-the-new-cloud.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00272-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927356 | 614 | 2.921875 | 3 |
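ETL stands for extract, transform, and load. As a rough illustration (not a recommendation of any particular tool), the sketch below shows those three steps for folding social posts into an in-house analytics store; the handles, field names, and matching rule are all invented for the example.

```python
def extract():
    """Pull raw posts, e.g. from a social media monitoring feed (stubbed here)."""
    return [{"handle": "@acme_fan", "text": "Love the new gadget!  ", "sentiment": 0.9}]

def transform(posts, crm_contacts):
    """Normalize the raw posts and match them to existing CRM records."""
    rows = []
    for post in posts:
        handle = post["handle"].lstrip("@").lower()
        customer_id = crm_contacts.get(handle)          # None if the handle is unknown
        rows.append({"customer_id": customer_id,
                     "sentiment": post["sentiment"],
                     "text": post["text"].strip()})
    return rows

def load(rows, warehouse):
    """Append the cleaned rows to the analytics store (a list stands in for it)."""
    warehouse.extend(rows)

crm = {"acme_fan": "CUST-0042"}      # handle -> customer id mapping from the CRM
warehouse = []
load(transform(extract(), crm), warehouse)
print(warehouse)
```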
The IT Infrastructure Library® (ITIL®) encompasses the following six areas:
- Problem Management
- Incident Management
- Change Management
- Configuration Management
- Service Level Management
- Release Management
To gain a further understanding of ITIL, download a Giva ITIL whitepaper.
Giva eAssetManager and Giva eSoftwareManager specifically address Configuration Management.
Configuration Management provides configuration information to all the other ITIL processes through a centralized Configuration Management Database (CMDB). As opposed to a simple asset database, the CMDB documents all the relationships between configuration items (CIs) that deliver a specific IT service.
Giva eAssetManager and Giva eSoftwareManager help you manage and control the CMDB, with all the details of relationships between Incidents, Problems, Known Errors, and RFCs.
Many ITIL processes use the CMDB. Incident Management uses the Giva CMDB to more rapidly diagnose service outages. Problem Management uses it to identify Known Errors. Change Management uses the CMDB to assess the risk of a change. Release Management uses it to configure a test environment that reflects the live environment.
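To see why storing relationships (rather than a flat asset list) matters, consider the toy model below. It records which configuration items depend on which, and walks those links the way Change Management might when assessing the impact of a change. The item names and structure are purely illustrative and are not Giva's actual data model.

```python
# Each CI records what it depends on; an RFC against one CI can then be
# assessed by walking the dependency relationships upward to the services.
cmdb = {
    "mail-service": {"type": "service",  "depends_on": ["mail-app"]},
    "mail-app":     {"type": "software", "depends_on": ["db-server-01"]},
    "db-server-01": {"type": "server",   "depends_on": ["rack-7-pdu"]},
    "rack-7-pdu":   {"type": "power",    "depends_on": []},
}

def impacted_by(ci: str) -> set:
    """Return every CI that directly or indirectly depends on `ci`."""
    impacted = set()
    for name, record in cmdb.items():
        if ci in record["depends_on"]:
            impacted |= {name} | impacted_by(name)
    return impacted

# Change Management: what is at risk if rack-7-pdu is taken down for maintenance?
print(impacted_by("rack-7-pdu"))   # {'db-server-01', 'mail-app', 'mail-service'}
```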
Since all ITIL processes use the CMDB so extensively, the CMDB accuracy is essential. Giva eAssetManager and Giva eSoftwareManager provide tools to ensure the accuracy of the CMDB through planning, identification, control, status accounting, verification, and audit. | <urn:uuid:148e5ac1-321a-4547-aee4-aaa124bf74db> | CC-MAIN-2017-04 | https://www.givainc.com/asset-management-software/itil-problem-incident-service-level-cloud-hosted.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00446-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.790291 | 290 | 2.625 | 3 |
A COBOL pointer is like a void pointer in C: it can point to any kind of object. Pointers are mainly useful for stitching together various data objects to form a data structure. Typically, data structures are built from dynamically allocated memory.
Declare a pointer as an elementary data item bearing the USAGE IS POINTER clause, with no PICTURE. E.g:
05 EXAMPLE-P USAGE IS POINTER.
EXAMPLE-P is a four-byte field which can store the address of any data item. Unless you're interfacing COBOL to Assembler or something, you really don't care what the pointer looks like internally. | <urn:uuid:14403750-d883-4eec-8496-80916e19975a> | CC-MAIN-2017-04 | http://ibmmainframes.com/about15876.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00135-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907527 | 141 | 2.984375 | 3 |
Synchronous Optical Networking is usually called SONET for short. The SONET standards were developed in the mid-1980s to take advantage of low-cost fiber optic transmission. SONET defines a hierarchy of data rates, formats for framing and multiplexing the payload data, as well as optical signal specifications (wavelength and dispersion), allowing multi-vendor interoperability.
SONET may also be referred to as "T-1 on steroids". Can you explain that? As you may know, the digital hierarchy (DS-0, DS-1, DS-2, DS-3 and so on) was created to provide cost-effective multiplexed transport for voice and data traffic from one location in a network to another.
SONET and SDH (Synchronous Digital Hierarchy) are two equivalent multiplexing protocols for transferring multiple digital bit streams over the same optical fiber using lasers or LEDs (light-emitting diodes). They were designed to replace the PDH (Plesiochronous Digital Hierarchy) system and to eliminate the synchronization issues that PDH multiplexers had. SONET is synchronous, which means that each connection achieves a constant bit rate and delay. For example, SDH or SONET might be used to allow several Internet Service Providers to share the same optical fiber without being affected by each other's traffic load, though also without being able to temporarily borrow unused capacity from one another. SONET and SDH are considered physical layer protocols since they offer permanent connections and do not involve packet mode communication. Only certain integer multiples of 64 kbit/s are possible bit rates.
SONET is TDM (time-division multiplexing) based, which makes it well suited to fixed-rate services such as telephony. Its synchronous nature is designed to accept traffic at fixed multiples of the basic rate (64 kbit/s) without requiring variable stuff bits or complex rate adaptation.
The SONET data transmission format is based on a 125 µs frame composed of 810 octets, of which 36 are overhead and 774 are payload data. The fundamental SONET signal, whose electrical and optical versions are referred to as STS-1 and OC-1, respectively, is thus a 51.84 Mb/s data stream that readily accommodates TDM channels in multiples of 8 kb/s.
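As a quick arithmetic check of those numbers (our calculation, not part of the original text), 810 octets every 125 µs works out exactly to the STS-1 rate:

octets_per_frame = 810
frames_per_second = 8000                   # one frame every 125 microseconds
total_bits = octets_per_frame * 8 * frames_per_second
print(total_bits / 1e6, "Mb/s")            # 51.84 Mb/s, the STS-1/OC-1 line rate
payload_bits = 774 * 8 * frames_per_second
print(payload_bits / 1e6, "Mb/s payload")  # 49.536 Mb/s available for payload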
Importantly for fiber optic networks, SONET can be used to encapsulate PDH and other earlier digital transmission standards. It can also directly support ATM (Asynchronous Transfer Mode) or Packet over SONET/SDH (POS) networking. SONET/SDH is thus a generic, all-purpose transport container for moving both voice and data traffic; SONET and SDH are not in themselves communications protocols.
SONET brings with it a set of benefits that differentiate it from competing technologies. These include mid-span meet; improved operations, administration, maintenance, and provisioning (OAM&P); support for multipoint circuit configurations; non-intrusive facility monitoring; and the ability to deploy a variety of new services.
Improved OAM&P is among the greatest contributions that SONET brings to the networking field. Element and network monitoring, management, and maintenance have always been something of a catch-as-catch-can effort due to the complexity and diversity of elements in a typical service provider's network. SONET overhead includes error-checking ability, bytes for network survivability, and a diverse set of clearly defined management messages.
What happens when a key expires?
In order to guard against a long-term cryptanalytic attack, every key must have an expiration date after which it is no longer valid. The time to expiration must therefore be much shorter than the expected time for cryptanalysis. That is, the key length must be long enough to make the chances of cryptanalysis before key expiration extremely small. The validity period for a key pair may also depend on the circumstances in which the key is used. The appropriate key size is determined by the validity period, together with the value of the information protected by the key and the estimated strength of an expected attacker. In a certificate, the expiration date of a key is typically the same as the expiration date of the certificate, though it need not be.
A signature verification program should check for expiration and should not accept a message signed with an expired key. This means that when one's own key expires, everything signed with it will no longer be considered valid. Of course, there will be cases in which it is important that a signed document be considered valid for a much longer period of time. Question 7.11 discusses digital timestamping as a way to achieve this.
After expiration, the old key should be destroyed to preserve the security of old messages (note, however, that an expired key may need to be retained for some period in order to decrypt messages that are still outstanding but encrypted before the key's expiration). At this point, the user should typically choose a new key, which should be longer than the old key to reflect both the performance increase of computer hardware and any recent improvements in factoring algorithms (see the question on key length recommendations).
However, if a key is sufficiently long and has not been compromised, the user can continue to use the same key. In this case, the certifying authority would issue a new certificate for the same key, and all new signatures would point to the new certificate instead of the old. However, the fact that computer hardware continues to improve makes it prudent to replace expired keys with newer, longer keys every few years. Key replacement enables one to take advantage of any hardware improvements to increase the security of the cryptosystem. Faster hardware has the effect of increasing security, perhaps vastly, but only if key lengths are increased regularly (see Question 2.3.5).
The Federal Communications Commission is working to ensure that during the transition from copper to fiber Internet infrastructure, service continues to be accessible and available during power outages.
“Technology transitions are the process of changing the wires to more modern infrastructure,” said Anita Dey, assistant chief of the Consumer and Governmental Affairs Bureau of the FCC.
When companies change the wires from copper to fiber, they don’t have to apply to the FCC but they do have to inform consumers three months in advance.
“Consumers are being forced off of copper onto fiber,” said Michele Levy Berlove, attorney adviser for the Competition Policy Division at the Wireline Competition Bureau at the FCC.
The problem is that when the power goes out, copper can still conduct the electricity needed to provide power to traditional landline phones but fiber can’t.
"Can your company involuntarily move you from copper to fiber?" Berlove said. "The short answer is yes."
However, the FCC has made rules to ensure that people with disabilities have the same access to the new system and that consumers are aware of ways that they can maintain power to their phones in the event of a power outage.
“New technology can allow everyone including people with disabilities to do things that we thought weren’t possible but it can also take away things that were possible,” said Suzanne Rosen Singleton, chief of the Disability Rights Office at the Consumer and Governmental Affairs Bureau at the FCC.
For example, she said, when movies transitioned from silent films to talkies, deaf people were excluded from enjoying films in theaters. Closed captioning allows people to enjoy TV and movies even if they can’t hear.
Another example is smartphones, which have changed from physical buttons, which blind people could use to make calls by touch, to a flat screen.
“Even though the technology made an advancement, some people with disabilities were left behind,” Singleton said.
Singleton recognized that some technologies put people with disabilities on equal footing. For example, text messaging, which was designed for people with disabilities, was adopted by the majority of the population, who preferred the new service to making phone calls.
Singleton said the FCC rules regarding technology transitions require accessibility, usability, and compatibility levels that are as effective as the legacy technology. The new equipment also must work with any equipment that makes it more accessible.
“We also encourage companies to make this transition as simple and affordable as possible,” Singleton said.
Linda Pintro, senior legal adviser of the Policy and Licensing Division of the Public Safety and Homeland Security Bureau, said that although fiber lines cannot power phones during outages, the new services could improve 911 and first responder services.
The FCC created backup power rules mandating that companies inform consumers that they can purchase backup power to keep their phones running during a power outage, and explain the implications of this service.
"The commission has thought about this new technology and how it may affect your ability to call 911 after a power outage," said Pintro.
A microchip used by the US military and manufactured in China contains a secret "backdoor" that means it can be shut off or reprogrammed without the user knowing, according to researchers at Cambridge University's Computing Laboratory.
UPDATE: However, one security consultancy has said that the implication that the backdoor might have been secretly inserted by the Chinese manufacturer is "bogus", and that malicious intent is unlikely.
In a draft paper, Cambridge University researcher Sergei Skorobogatov wrote that the chip in question is widely used in military and industrial applications. The "backdoor" means the chip is "wide open to intellectual property theft, fraud and reverse engineering of the design to allow the introduction of a backdoor or Trojan", the researchers said.
The discovery was made during testing of a new technique to extract the encryption key from chips, developed by Cambridge spin-off Quo Vadis Labs. The "bug" is in the actual chip itself, Skorobogatov wrote, rather than in the firmware installed on the devices that use it, meaning there is no way to fix it other than to replace the chip altogether.
"The discovery of a backdoor in a military grade chip raises some serious questions about hardware assurance in the semiconductor industry," wrote Skorobogatov.
However, Robert Graham, of US security consultancy Errata Security, wrote yesterday that the backdoor is unlikely to have been added maliciously. He claims that the entry route discovered by Skorobogatov is likely to be a debugging tool deliberately installed by the manufacturer.
"It's remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It's prohibitively difficult to change a chip design to add functionality of this complexity."
He also questioned the description of the chip as "military grade". "The military uses a lot of commercial, off-the-shelf products. That doesn't mean there is anything special about it."
Graham writes that the backdoor could pose a security threat, however. "It not only allows the original manufacturer to steal intellectual property, but any other secrets you tried to protect with the original [encryption] key."
See Also: Plugin API, Embedded Perl Interpreter Overview, Active Checks
Unlike many other monitoring tools, Nagios does not include any internal mechanisms for checking the status of hosts and services on your network. Instead, Nagios relies on external programs (called plugins) to do all the dirty work.
What Are Plugins?
Plugins are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network.
Nagios will execute a plugin whenever there is a need to check the status of a service or host. The plugin does something (notice the very general term) to perform the check and then simply returns the results to Nagios. Nagios will process the results that it receives from the plugin and take any necessary actions (running event handlers, sending out notifications, etc.).
Plugins As An Abstraction Layer
Plugins act as an abstraction layer between the monitoring logic present in the Nagios daemon and the actual services and hosts that are being monitored.
The upside of this type of plugin architecture is that you can monitor just about anything you can think of. If you can automate the process of checking something, you can monitor it with Nagios. There are already a lot of plugins that have been created in order to monitor basic resources such as processor load, disk usage, ping rates, etc. If you want to monitor something else, take a look at the documentation on writing plugins and roll your own. It's simple!
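As an illustration, here is a minimal sketch of a homegrown plugin in Python (our example, not one of the official plugins; the thresholds and output text are arbitrary). The only firm requirement is the plugin convention: print a line of status text and exit with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN).

#!/usr/bin/env python3
# check_disk_simple - hypothetical example plugin for the root filesystem
import shutil, sys

WARN, CRIT = 80.0, 90.0              # percent-used thresholds (arbitrary)
usage = shutil.disk_usage("/")
pct = 100.0 * usage.used / usage.total

if pct >= CRIT:
    print("DISK CRITICAL - %.1f%% used" % pct)
    sys.exit(2)                      # CRITICAL
elif pct >= WARN:
    print("DISK WARNING - %.1f%% used" % pct)
    sys.exit(1)                      # WARNING
print("DISK OK - %.1f%% used" % pct)
sys.exit(0)                          # OK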
The downside to this type of plugin architecture is the fact that Nagios has absolutely no idea what it is that you're monitoring. You could be monitoring network traffic statistics, data error rates, room temperature, CPU voltage, fan speed, processor load, disk space, or the ability of your super-fantastic toaster to properly brown your bread in the morning... Nagios doesn't understand the specifics of what's being monitored - it just tracks changes in the state of those resources. Only the plugins themselves know exactly what they're monitoring and how to perform the actual checks.
What Plugins Are Available?
There are plugins currently available to monitor many different kinds of devices and services.
Plugins are not distributed with Nagios, but you can download the official Nagios plugins, as well as many additional plugins created and maintained by Nagios users, from the Nagios website and related community sites.
How Do I Use Plugin X?
Almost all plugins will display basic usage information when you execute them using '-h' or '--help' on the command line. For example, if you want to know how the check_http plugin works or what options it accepts, you should try executing the following command:
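./check_http --help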
You can find information on the technical aspects of plugins, as well as how to go about creating your own custom plugins here.
In an effort to ease some of the heavy traffic and Wi-Fi congestion in airports, as well as other high-use hubs, the Federal Communications Commission (FCC) is attempting to open up certain portions of the airwaves. The FCC voted unanimously, 5-0, to write new rules that would open more U.S. airwaves to Wi-Fi devices.
At a monthly meeting in Washington, FCC Chairman Julius Genachowski said, “Wi-Fi congestion is a very real and growing problem. We’re at the early stage of this but it will only get worse as Wi-Fi use grows.”
The FCC's proposal would make available for public use some of the airwaves that are now largely used by government entities such as the Department of Defense and the Federal Aviation Administration. These agencies currently use the airwaves in question for navigation, surveillance and other activities.
Genachowski said, "This proposal today is based on a tremendous amount of engineering work, so we don't now see any reason why we can't put 195 new megahertz of spectrum for unlicensed use on the market and do it in a way that's compatible with other existing users."
The proposed rule would add 195 MHz of unlicensed spectrum to the 555 MHz currently available in the less congested 5 GHz radio frequency band. The proposal also seeks better technical rules for sharing the spectrum used for Wi-Fi transmission. Genachowski went on to say that the FCC will consult with federal and non-U.S. users of nearby airwaves to enable non-interfering shared use.
The idea is to move ahead as quickly as possible with no delays. Of course you should all know by now that nothing is ever that easy. Automakers and their suppliers say that the new Wi-Fi frequencies could jam car-to-car wireless communications systems. These are systems that are currently being developed to prevent accidents.
Wade Newton is a spokesperson for the Washington-based Alliance of Automobile Manufacturers. He said, "Automakers have already invested heavily in the research and development of these safety critical systems, and our successes have been based on working closely with our federal partners. It is imperative that, as we move forward, we do adequate research and testing on potential interference issues that could arise from opening up this band to unlicensed users."
The FCC said that it would take comments on the plan before voting on final passage of the proposal. The plan for Wi-Fi is part of President Obama's strategy to expand airwaves sharing, a strategy intended to cope with a shortage of frequencies that threatens to slow wireless Internet traffic.
Analysts forecast implementation would take months given the concerns that wider use of the new airwaves would risk interfering with important government programs already on those wavelengths. FCC officials said the goal was particularly to boost wireless connections at stadiums, airports, convention centers and other places where large numbers of people try to use the Internet at the same time.
Given the fact that more and more Wi-Fi devices are being activated every day by the "I want to be connected everywhere" generation, there is definitely a need to access more of the spectrum. However, this cannot come at the cost of ignoring everything else. While we can all see the need to move ahead as quickly as possible with something like this, we shouldn't lose sight of the potential problems. We do not want to take away from the automakers' ability to provide safer driving experiences simply to let someone at the airport do last-minute shopping. What is more important: having someone at a stadium let their friends know that they are at the ballgame by posting it on a social media site, or getting to the ballgame safely?
Edited by Brooke Neuman
3.1.6 Could users of the RSA system run out of distinct primes?
As Euclid proved over two thousand years ago, there are infinitely many prime numbers. Because the RSA algorithm is generally implemented with a fixed key length, however, the number of primes available to a user of the algorithm is effectively finite. Although finite, this number is nonetheless very large. The Prime Number Theorem states that the number of primes less than or equal to n is asymptotic to n/ln n. Hence, the number of prime numbers of length 512 bits or less is roughly 10^150. This is greater than the number of atoms in the known universe.
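A quick back-of-the-envelope check of that figure (our arithmetic, in Python) applies the theorem directly:

import math

n = 2.0 ** 512                     # upper bound for 512-bit numbers
pi_n = n / (512 * math.log(2))     # pi(n) ~ n / ln(n), and ln(2**512) = 512 ln 2
print(f"{pi_n:.2e}")               # ~3.8e151, i.e., on the order of 10^150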
I’m not sure if the invention of the Smartphone has been a boon for humanity, but it definitely has made us look like robots who are always glued to it.
Given the device's prevalence today, the age-old practice of "leaving your work at the office" no longer holds water. Back in the late 80s and early 90s, when I was a kid, my parents never logged on to their office network to check emails or bothered answering a call from work once they were home or traveling on vacation. Work was just restricted to the office … but those were the "good old days" and are, for better or for worse, long gone.
Times have changed. Part of the beauty of technology, though, is that it changes with time. Companies started expanding, 1 employee became 11 and kept multiplying, and there came a need to connect all the branch offices. IPSec-based remote VPN was born out of that need. Times changed again, the technology improved, and in came SSL-based VPNs.
Time continues to march on and, with it, change marches in lock step. So, what’s next?
Let's take a quick look at each of these technologies and understand the drivers back in "the day," as well as the drivers today that are changing the remote access environment.
IPSec (IP Security) VPN is a legacy VPN solution that connects corporate devices to trusted networks. IPSec was originally developed to link two wired networks, providing an infrastructure to extend a private network across the Internet to reach partners, customers, etc., and build a Virtual Private Network (VPN).
It is probably the most-adopted solution for data security in transit; however, it is not well suited for use in mobile and wireless networks. An IPSec tunnel requires that the IP addresses of the two end points remain unchanged. Some of the disadvantages of using an IPSec VPN solution are as follows:
- IPSec is also not the most reliable connection when users are mobile, moving from one network to another and/or suspending and resuming laptop connections. Users have to re-authenticate when they encounter a gap in the connection, which can lead to user frustration, loss of productivity and high call volumes to the support desk.
- IPSec also does not allow any kind of optimization of application traffic when it is delivered on an end user device like a mobile phone.
- IPSec does not allow administrators to apply granular application-level policies, and hence an IPSec VPN appliance cannot be used as a centralized policy manager.
SSL VPN is a secure way to remotely access application data. As opposed to IPSec VPNs, which connect corporate devices to trusted networks, SSL VPNs connect users using any browser-enabled device to specific applications. It is well suited for BYOD (laptops and desktops) as well as users accessing applications from home, café or any remote location.
SSL VPN meets most of the use cases for a remote worker but falls short of meeting the needs of a mobile user. Similar to IPSec VPN, SSL VPN also has some disadvantages:
- SSL VPN solutions, just like IPSec VPNs, do not handle users roaming between networks very well. In case of poor connectivity, applications crash or data is lost.
- SSL VPN operates at layer 7, mostly using a TCP connection rather than a UDP connection. This results in lower wireless-network performance.
With end users relying more and more on their smartphones and tablets to access corporate data like email and enterprise and cloud applications, enterprises are being challenged to improve security for these end user devices. When you throw in BYOD, the challenges are even more complex.
A question comes to mind: why can't enterprises use existing infrastructure like SSL VPN or IPSec to enforce secure access from mobile phones?
They just can't. Traditional VPNs are simply not built for the mobile environment. Although these technologies work great for users who connect from a stationary device like a PC over a LAN connection in the office, or a laptop over a residential broadband connection, they cannot do the same for devices that are mobile rather than stationary. A traditional VPN cannot adjust to devices in motion, whose properties, like IP address and point of network attachment, change with the motion. As a result, users get high drop rates and bad connectivity, and have to continually reconnect their devices. This results in loss of productivity and an increase in frustration.
In addition, mobile phones use cellular networks that have lower throughput and higher packet loss compared to a wired connection. The applications users access from a mobile device are typically written for stable, high-throughput wired connections. This results in poor application behavior and loss of productivity for the mobile user.
Mobile VPN is built using the same SSL VPN technology but adds features, beyond those of a traditional SSL VPN solution, that are specific to mobile users. Optimizing mobile app traffic, integrating with MDM/MAM products to provide security and compliance, and providing both a full SSL VPN tunnel and per-app tunnels on mobile devices are some of the features that are not provided with a traditional SSL VPN solution.
Now the quandary before you is: "I have multiple point products deployed in my datacenter for each of the use cases above. My users have to go to multiple URLs for accessing applications remotely and are frustrated with the user experience. Multiple solutions for remote access are driving up costs and have led to redundancy in my datacenter. What should I do?"
The answer is simple: One URL. IT needs to start consolidating and converging remote access infrastructure. We need to start looking for solutions that provide secure remote access to any application, on any device.
For more information on One URL and how it drives consolidation, stay tuned for our next post. To be continued …
Multi-Dimensional Graph Data Opens the Door to New Applications
As the use of graph databases has grown in recent years, ever more applications of this technology involve storing, searching, and reasoning about events. In fact, many companies use this technology for this purpose, and the size of these databases is rising in many cases to billions of events. Now, there is advanced technology which overcomes performance problems that emerge when searching and reasoning over event databases of such size.
The kinds of events that graph databases manage typically have at least the following elements:
- A type, such as a phone call, a text message, a bank transaction, an observation of a moving vehicle, and so on
- A start and end time, or a single instant of time (the temporal dimensions)
- Location coordinates (the geospatial dimensions)
- A set of actors, such as sender and receiver, payer and payee, vehicle and operator, and so on (the social network dimensions).
Some important advances in semantic graph technology have improved the ability to store, search, and reason about geospatial, temporal, and social network data. However, the advances in these three areas remained quite isolated from each other for some time.
The next logical step was to bring these advanced capabilities together to support searching and reasoning over records that combine all of these dimensions. Recent technical innovations continue the progress along this path by enabling highly efficient applications dealing with vast amounts of such multi-dimensional data. Diverse kinds of applications can benefit by harnessing this newly available power, such as tracking moving objects in time and space, managing weather data, detecting fraud or other criminal activity, and more.
Key Kinds of Data and Reasoning
Before we delve into unified multi-dimensional facilities, it is useful to summarize the key characteristics of geospatial, temporal, and social network data and the kinds of reasoning we do about such data.
Geospatial data is about location in space. Two-dimensional geospatial databases describe location in terms of latitude and longitude or in terms of x- and y-axes on a grid. Three-dimensional geospatial databases might add a dimension for altitude, height, or simply a z-axis in a 3D grid.
We can ask the following questions about locations and shapes that we have stored in a database:
- What are all the events that occurred within a specified radius of a given location?
- How far are two given locations from each other?
With prior technology, support for searching and reasoning about geospatial data was limited to two-dimensional coordinates. More advanced geospatial technology supports three-dimensional coordinates. As an example of the practical ramifications of this enhancement, tools based on the newer technology can search for and reason about objects moving through three-dimensional space, such as airplanes; whereas previous versions could only deal with objects moving through two-dimensional space, such as automobiles.
Temporal data is about time. Key questions we ask about time often have to do with time intervals. Given two time intervals, we can, for example, ask the following questions:
- Does one interval occur entirely before the other?
- Do the intervals meet (meaning one interval starts where the other interval leaves off)?
- Do the intervals overlap?
Temporal data technology generally supports this kind of reasoning about time intervals. However, typically when searching event databases we are simply looking for events that occurred within a specific time interval.
Social Network Data
Social network data captures connections between actors, such as the fact that one person is a friend of another. But social networks do not have to be about people; social network technology is proving useful in other fields such as life sciences, where, for example, researchers study protein interaction patterns as social networks in which the actors are proteins.
We can ask the following kinds of questions about a group of actors and the connections among them that we have stored in a database:
- How far apart in the network are two given actors, and how strong is the relation?
- What are the cliques and ego groups?
- How important is a given actor in the group?
- How cohesive is the group?
Multi-Dimensional Graph Data – Bringing it All Together
Reasoning about each of these kinds of data as described above is useful, but, as we have seen, event databases require combining these facilities.
Geospatial, Temporal, Social Network, and More
For example, a log of cellular phone calls may well include the following data for a given call:
- The latitude and longitude of the call originator’s location and receiver’s location
- Start time and finish time
- The calling and receiving phone numbers.
In this case, the call log entries in the database (event records, where a phone call is an event) clearly have both geospatial and temporal data. We can use such data to answer questions such as the following (a small code sketch appears after the list):
- What calls have an originating location within a given radius of a given location within a given time interval?
- Did a given phone number place a call within a given radius of a given location within a given time interval?
- What calls to a given area code were made by a given phone number within a given time interval?
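Questions like the first reduce to an N-dimensional proximity filter. Here is a minimal sketch in Python (ours, with illustrative names; an engine such as AllegroGraph indexes these dimensions natively rather than scanning every record):

import math
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallEvent:
    caller: str
    receiver: str
    lat: float                # originating latitude
    lon: float                # originating longitude
    start: datetime
    end: datetime

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def calls_near(events, lat, lon, radius_km, t0, t1):
    # Combined geospatial + temporal filter over call-event records.
    return [e for e in events
            if haversine_km(e.lat, e.lon, lat, lon) <= radius_km
            and t0 <= e.start <= t1]

The linear scan above is exactly what becomes prohibitive at billions of events; the indexing technology discussed below answers such proximity questions without touching most of the records.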
Figure 1 is a snapshot of a screen from a phone call application of this genre. The application displays a Google map. The user sets the radius and date/time interval and clicks on a location on the map. The application then displays the locations of callers and receivers of calls that originated within that radius within the date/time interval.
For an additional enhancement, we could construct a social network of connections between phone numbers, making it possible to pose questions such as finding the phone numbers (and, implicitly, their owners) that are the most central for phone call traffic in a given radius of a given location for a given subgroup of phone numbers.
Consider another example, where we store observations of airplanes moving through space. At regular periods we record the latitude, longitude, altitude, and heading of flying airplanes and we time stamp each observation. This data enables us to ask questions such as how many airplanes were within a given altitude range with a given heading during a given time interval.
Clearly, databases that support these scenarios are multi-dimensional. They have geospatial dimensions, temporal dimensions, and social networking dimensions. Moreover, there are strong use cases for adding additional dimensions to such databases. For example, in the case of airplane tracking, each time-stamped observation may also include weather readings such as outside air temperature, wind speed, and barometric pressure.
As mentioned earlier, previous technology supported searching and reasoning over two-dimensional geospatial data, whereas more recent technology supports three-dimensional geospatial data. But new technology, such as AllegroGraph version 5, can search and reason over an open-ended number of additional dimensions. Thus these new facilities are not merely three-dimensional, because there is no restriction to three dimensions. It is more accurate to use the term N-dimensional to describe the nature of graph databases and related applications that use these new facilities.
The idea of combining geospatial, temporal, social networking, and other dimensions in a database record is not new, but up to now implementation of this idea has been limited. The roadblock has been serious performance degradation as multi-dimensional databases grow to enormous sizes. Despite the fact that graph databases have known efficiency advantages over relational databases for dealing with geospatial, temporal, and social network data, simply using a graph database is not enough to get over the performance hurdle with gigantic multi-dimensional databases. The performance hit is most severe when search parameters are about proximity, such as searching for events that occurred within a specified radius of a given location or within a given time interval, or within a temperature range.
But performance is another area that has recently seen the addition of innovative technology that can answer complex proximity questions across multiple dimensions over billions of records in sub-second time. A key characteristic of this new technology is that, with proper database design, the time required to execute a search does not increase substantially as the size of the database increases.
Highly Scalable Applications
We are just beginning to tap the potential of this powerful technology. Here are a few examples of what we can do with the new N-dimensional search and reasoning capabilities:
- Insider Threat Detection: Quickly identify risks and the potential impact that an individual’s actions pose to the public or an organization. New semantic-based behavior models can empower companies to gain the critical knowledge necessary to predict high-risk events to prevent or aid in crisis situations.
- Precision Medicine: Integrate information from structured and unstructured data (and integrate different types of data – patient information with socio-economic and genetic information, etc.) to improve efficiencies and personalize care. Provide graphical analysis of genetic info, images, clinical trials, and public health data to help fuel discoveries, improve patient care and cut the overall cost of healthcare.
- Law Enforcement/Homeland Security: An application tied to a constantly updated database of telephone calls and text messages could use the location data, the time stamps, and the social network represented by the phone numbers to determine the focal points of a criminal enterprise and monitor the movements of the key actors in near real time as their centrality emerges from the data.
As the need to manage vast numbers of event records increases and organizations begin to understand the implications of high capacity multi-dimensional graph database technology, people will think of all sorts of applications that even the creators of the technology have not contemplated. We may one day look back on the emergence of this technology as an inflection point that took us to a new level of powerful data management.
About the Author: David S. Frankel has over 30 years of experience in the software industry as a technical strategist, architect, and programmer. He is recognized as a pioneer and international authority on the subject of model-driven systems and semantic data modeling. David has made major contributions to a number of industry standards, including XBRL, ISO 20022, BIAN, and UML. You can read more at his website: http://www.dfrankelconsulting.com
What are the similarities and differences between Data-Driven and Document-Driven DSS?
Document-Driven DSS is a relatively new category of Decision Support. There are certainly similarities to the more familiar Data-Driven DSS, but there are also major differences.
In my framework paper (Power, 2001), Document-Driven DSS are defined as integrating "a variety of storage and processing technologies to provide complete document retrieval and analysis." The Web provides access to large document databases including databases of hypertext documents, images, sounds and video. Examples of documents that would be accessed by a Document-Driven DSS are policies and procedures, product specifications, catalogs, news stories, and corporate historical documents, including minutes of meetings, corporate records, and important correspondence. A search engine is a powerful decision-aiding tool associated with a Document-Driven DSS (cf., Fedorowicz, 1993, pp. 125-136).
A defining difference is that Data-Driven DSS help managers analyze, display and manipulate large structured data sets that contain numeric and short character strings while Document-Driven DSS analyze, display and manipulate text including logical units of text called documents (cf., Sullivan, 2001).
Another defining difference is the analysis tools used for decision support. Data-Driven DSS use quantitative and statistical tools for ordering, summarizing and evaluating the specific contents of a subject-oriented data warehouse. Document-Driven DSS use natural language and statistical tools for extracting, categorizing, indexing and summarizing subject-oriented document warehouses.
What are the similarities? First, both systems use databases with very large collections of information to drive or create decision support capabilities.
Second, both types of systems require the definition of metadata and the cleaning, extraction and loading of data into an appropriate data management system using an organizing framework or model.
Third, building either type of system involves understanding the decision support needs of the targeted users. Also, user needs can and will change so rapid application development or prototyping is often desirable for either category of DSS. Neither type of system can meet all of the decision support needs of all managers in an organization. The best development approach is to try to meet a specific, well-defined need initially and then incrementally expand the structured data or documents that are captured and organized in the foundation data/document management system.
Document-Driven DSS help managers process "soft" or qualitative information and Data-Driven DSS help managers process "hard" or numeric data. Both categories of DSS come in "various shapes and sizes". Some systems support senior managers and others support functional decision makers on narrowly-defined tasks. The Web has increased the need for and the possibilities associated with Document-Driven DSS. Please check the following references for more ideas on this Ask Dan! question.
Fedorowicz, J. "A Technology Infrastructure for Document-Based Decision Support Systems", in Sprague, R. and H. J. Watson, Decision Support Systems: Putting Theory into Practice (Third Edition), Prentice-Hall, 1993, pp. 125-136.
Power, D. J., "Supporting Decision-Makers: An Expanded Framework", Informing Science eBook, June 2001.
Sullivan, Dan. Document Warehousing and Text Mining, Wiley, 2001.
The above response is from Power, D., "What are the similarities and differences between Data-Driven and Document-Driven DSS?" DSS News, Vol. 2, No. 11.
Last update: 2005-08-06 21:55
Author: Daniel Power
Researchers at Marshall University in the United States are set to receive a new GPU-powered cluster that will allow them to make further advances in bioinformatics, climate research, physics, computational chemistry and engineering.
Nicknamed "BigGreen," the cluster will boast "276 central processing unit cores, 552 gigabytes of memory and more than 10 terabytes of storage." This, coupled with the eight NVIDIA Tesla GPUs with 448 cores each, will push BigGreen into the six-teraflop range and will allow the university's researchers to explore new areas aided by simulation and parallel computation capabilities.
This new cluster comes about following a round of NSF funding under the Cyberinfrastructure for Transformational Scientific Discovery in West Virginia and Arkansas (CI-TRAIN) program. This is a project that seeks to advance the IT capabilities of the two states' institutions to build more robust nanoscience and geosciences research programs in particular.
"For example, a 3-D scan of Michelangelo's statue 'David' contains billions of raw data points. Rendering all that data into a 3-D model would be nearly impossible on a desktop computer," Dr. Jan I. Fox, Marshall's senior vice president for information technology, said in a statement this week. "Using our high-performance computing capabilities, a student or professor could run that same data and produce the model in just a fraction of the time. It will literally change the way we work and do research at Marshall University."
Fox went on to note that the new cluster is critical to assisting researchers with their diverse objectives. This addition "makes possible scholarly innovation and discoveries that were, until recently, possible only at the most prestigious research institutions," she said. "Along with our connection to Internet2, our students and faculty now have access to computing power, data and information we could only imagine just a few years ago."
Many of the latest supercomputers are based on accelerators, including the two fastest systems according to the 11/2013 TOP500 list. Accelerators are also becoming widespread in PCs and are even starting to appear in handheld devices, which will further boost the interest in accelerator programming.
This broad adoption is the result of high performance, good energy efficiency, and low price. For example, comparing a Xeon E5-2687W CPU to a GTX 680 GPU, both of which were released in March 2012, we find that the GPU started out four times cheaper, has eight times more single-precision performance and four times more main-memory bandwidth, and provides over thirty times as much performance per dollar and six times as much performance per watt. Based on these numbers, accelerators should be used everywhere and all the time. So why aren’t they?
There are two main difficulties with accelerators. First, they can only execute certain types of programs efficiently, in particular programs with sufficient parallelism, data reuse, and regularity in their control flow and memory access patterns. Second, it is harder to write effective software for accelerators than for CPUs because of architectural disparities such as very wide parallelism, exposed memory hierarchies, lockstep execution, and memory-access coalescing. Several new programming languages and extensions thereof have been proposed to hide these aspects to various degrees and thus make it easier to program accelerators.
The initial attempts to use GPUs, which are currently the most prominent type of accelerator, for speeding up non-graphics programs were cumbersome and required expressing the computation in form of shader code that only supported limited control flow and no integer operations. Gradually, these constraints were lifted, making GPUs more general-purpose computing devices and enabling non-graphics-experts to program them. The biggest step in this direction came with the release of the CUDA programming language. It extends C/C++ with additional qualifiers and keywords as well as library functions and a mechanism to launch code sections, called kernels, on a GPU.
The rapid adoption of CUDA, combined with that fact that it is proprietary and the complexity of writing good CUDA code, triggered the creation of several other programming approaches for accelerators, including OpenCL, C++ AMP, and OpenACC. OpenCL is the non-proprietary counterpart of CUDA and is backed by many large companies. It is not restricted to NVIDIA GPUs but also supports AMD GPUs, multicore CPUs, MICs (Intel’s Xeon Phi), DSPs, and FPGAs, making it very portable. However, just like CUDA, it is very low level and requires the software developer to explicitly orchestrate data movement, select where variables live in the memory hierarchy, and manually express parallelism in the code. C++ Accelerated Massive Parallelism (C++ AMP) operates at a medium level. It allows expressing data parallelism directly in C++ and hides all low-level code from the programmer. Parallel “for each” statements encapsulate parallel code. C++ AMP is tied to Windows, does not (yet) support CPUs, and suffers from startup overhead, making it impractical for accelerating short-running code sections.
OpenACC is a very high-level approach that allows programmers to annotate their code using pragmas to inform the compiler which code sections to accelerate, e.g., by offloading them to a GPU. The idea is similar to how OpenMP can be used to parallelize CPU programs. In fact, there are efforts underway to merge the two approaches. OpenACC is still maturing and currently only supported by a few compilers.
To predict how accelerator programming might develop from this point forward, it may be helpful to study how other acceleration hardware has evolved in the past. For example, some early high-end PCs contained an extra chip, called a co-processor, to accelerate floating-point (FP) calculations. Later, this co-processor was combined with the CPU on the same die and is now fully integrated with the integer processing core. Only separate FP registers and ALUs remain. The much more recently added SIMD support (including MMX, SSE, AltiVec, and AVX) did not start out on a separate chip but is now also fully integrated in the core. Just like the floating-point instructions, SIMD instructions operate on separate registers and ALUs.
Interestingly, the programmer’s view of these two types of instructions is surprisingly different. The floating-point operations and data types have been standardized long ago (IEEE 754) and are now ubiquitous. They are directly available in high-level languages through normal arithmetic operations and built-in 32-bit single-precision and 64-bit double-precision types. In contrast, no standard exists for SIMD instructions, and their existence is largely hidden from programmers. It is left to the compiler to ‘vectorize’ the code and employ these instructions. Developers wishing to use SIMD instructions explicitly have to resort to compiler-specific macros that are not portable.
Since GPUs and MICs obtain their high performance through SIMD-like execution, we believe accelerators are more likely to track the evolution of SIMD- than FP-instruction support. Another similarity to SIMD, and a key factor that made CUDA so successful, is that CUDA hides the SIMD aspect of the GPU hardware and allows the programmer to think in terms of individual threads operating on scalar data elements rather than warps operating on vectors. Hence, accelerators will undoubtedly also be moved onto the CPU chip, but we surmise that their code will not be seamlessly interwoven with CPU code nor will the accelerators’ hardware-supported data types be made explicitly available to the programmers.
Some accelerators have already been combined with conventional processing cores on the same chip, including on AMD’s APUs (as used in the Xbox One), Intel’s processors with HD Graphics, and NVIDIA’s Tegra SoC. However, accelerators will probably remain separate cores because it is difficult to fuse accelerator and conventional cores to the degree that was possible with FP and SIMD instructions, i.e., to ‘reduce’ the accelerator to just a set of separate registers and ALUs in the general-purpose core. After all, accelerators are so fast, parallel, and energy efficient because they are based on dissimilar architectural tradeoffs, such as incoherent caches, very different pipeline designs, GDDR5 memory, and an order of magnitude more registers and multithreading. Hence, the complexity of having to run separate accelerator code will remain. As even cores on the same die tend to share no more than the bottom of the memory hierarchy, data transfers between CPU and accelerator cores will possibly become faster but also remain a bottleneck.
The explicit orchestration of data exchanges between devices is a significant source of errors and a substantial burden on programmers. For short kernels, it is often the case that more code needs to be written to transfer data back and forth than to express the actual computation. Eliminating this burden is one of the primary benefits of higher-level programming approaches such as C++ AMP and OpenACC. Even low-level approaches have been addressing this problem. For example, streamlined and unified memory addressing is one of the major improvements in the latest CUDA and OpenCL releases and NVIDIA GPU hardware. Yet, to achieve good performance, some help by the programmer is generally needed, even in very high-level approaches like OpenACC. In particular, locality-aware memory allocation and data migration often have to be handled manually.
Unfortunately, any ease provided by such improvements may only turn out to be a partial solution. Based on the assumption that future microprocessors will be similar to today’s (small) supercomputers, it is likely that they will contain many more cores than can be served by a shared (NUMA) memory system. Instead, we believe there will be clusters of cores on each die where each cluster has its own memory, possibly stacked on top of the cores in a 3D design. The clusters communicate with each other via an on-chip network using a protocol akin to MPI. We do not believe this is farfetched as Intel just announced that it will include networking capabilities in their future Xeon chips, which is a step in this direction. Hence, it is likely that future chips will become more and more heterogeneous, comprising latency- and throughput-optimized cores, NICs, encryption and compression cores, FPGAs, etc.
That raises the all-important question of how to program such devices. We think the answer is surprisingly similar to how today’s multiple CPU cores, SIMD instruction set extensions, and compute accelerators are being used. Basically, there are three levels at which this is done, which we refer to as libraries, automated tools, and do-it-yourself. The library approach is the simplest and works by calling functions in a library that has been accelerated by someone else. Many of the latest math libraries belong to this category. As long as most of the computation takes place inside the library code, this approach is very successful. It allows a few expert library writers to enable the acceleration of a large number of applications.
The automated tools approach is the approach taken by C++ AMP and OpenACC, where the compiler has to do the heavy lifting. The success of this approach depends on the quality and sophistication of the available software tools and, as mentioned, often needs help from the programmer. Nevertheless, most developers can reasonably quickly achieve positive results with this approach, which is not limited to predetermined functions in a library. This is perhaps reminiscent of how a few expert teams code up the inner workings of SQL, which then allows a large number of ‘regular’ programmers to benefit from the optimizations and know-how that the experts encoded.
Finally, the do-it-yourself approach is represented by CUDA and OpenCL, which give the programmer full control over and access to almost every aspect of the accelerator. If implemented well, the resulting code can outperform the other two approaches. However, this comes at the cost of a steep learning curve, lots of extra code that needs to be written, and a slew of additional possibilities for bugs. Ever improving debugging and programming environments will help alleviate these problems, but only to a degree. Hence, this approach is primarily useful for expert programmers, such as those writing the aforementioned libraries and tools.
Since it is trivial to use, programmers will employ the library approach whenever possible. However, this hinges on the availability of appropriate library functions, which are easy to provide in well-defined and mature domains such as standard matrix operations (BLAS) but hard to do for emerging areas or unstructured computations. In the absence of adequate libraries, a programmer’s second choice will be the tools approach, assuming that the tools will be mature. Any computations that are not available in a library, do not demand the highest performance, and are supported by the compiler will likely be coded up using the tools approach. For the remaining cases, the do-it-yourself approach will have to be used. Since OpenCL incorporates the successful ideas that CUDA introduced, is non-proprietary, and supports a large range of hardware, we believe OpenCL or a derivative of it will start dominating this domain akin to how MPI has become the de facto standard for distributed-memory programming.
Taking the union of the hardware features and evolution outlined above, future processor chips might contain multiple clusters with their own memory, each cluster consists of a set of cores where not all cores necessarily have the same capabilities, each multithreaded compute core comprises a large number of processing elements (i.e., functional units or ALUs), and each processing element may be able to perform SIMD operations. Even though actual chips might not include all of the above, they all share a key similarity, namely a hierarchy of distinct parallelization levels. To effectively and portably program such a system, we propose what we call the “copious-parallelism” technique. It is a generalization of how MPI programs are typically explicitly written to adapt to the number of available compute nodes or how OpenMP code implicitly adapts to the number of available cores (or threads).
The main idea behind copious parallelism, and the reason for its name, is to provide ample and parameterizable parallelism for each level. The parameterization makes it possible to decrease the parallelization at any level to match the hardware parallelism at that level. For example, on a shared-memory system, the highest level of parallelization is not necessary and should be set to just one “cluster”. Similarly, in a core where the functional units are unable to perform SIMD instructions, the parameter determining the SIMD width should be set to one. This technique is able to exploit the common features of current multicore CPUs, GPUs, MICs, and other devices as well as likely future architectures. While it is definitely harder to write software in this manner, copious parallelism makes it possible to extract high performance from a broad range of computing devices with just a single code base.
We have tested this approach on a direct n-body simulation [ref]. We wrote a single copious-parallelism implementation in OpenCL and assessed it on four very different architectures: an NVIDIA GeForce Titan GPU, an AMD Radeon 7970 GPU, an Intel Xeon E5-2690 multicore CPU, and an Intel Xeon Phi 5110P MIC. Given our FLOP mix of 54% non-FMA operations, the copious-parallelism code achieves 75% of the theoretical peak performance on the Titan, 95% on the Radeon, 80.5% on the CPU, and 80% on the MIC. While this is only one example, the results are extremely encouraging. In fact, we believe the copious-parallelism technique may well be and remain for quite some time the only portable high-performance approach for programming current and future accelerated systems.
About the Authors
Kamil Rocki is a postdoctoral researcher at IBM Research. Prior to joining Almaden Research Center in California he has spent 5 years in Japan at the University of Tokyo where he graduated with a PhD degree in Information Science. Before that he received his M.Sc. and B.Sc. degrees in Computer Science from Warsaw University of Technology. His current research focuses on high performance parallel algorithms and hardware design. Other interests include supercomputing, AI, computer vision and robotics. He has been working in the field of GPGPU programming for the past 8 years.
Martin Burtscher is Associate Professor in the Department of Computer Science at Texas State University. He received the BS/MS degree in computer science from the Swiss Federal Institute of Technology (ETH) Zurich and the Ph.D. degree in computer science from the University of Colorado at Boulder. Martin’s research interests include efficient parallelization of programs for GPUs and multicore CPUs, automatic performance assessment and optimization, and high-speed data compression. He is a senior member of the IEEE, its Computer Society, and the ACM. Martin has co-authored over 75 peer-reviewed scientific publications, including a book chapter in NVIDIA’s GPU Computing Gems, is the recipient of an NVIDIA Academic Partnership award, and is the PI of a CUDA Teaching Center. | <urn:uuid:fd74aa43-0884-4109-b930-424efdbd58c1> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/09/future-accelerator-programming/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939012 | 3,175 | 3.5 | 4 |
Mobile telephony has had a huge positive impact in the 13 years since its introduction to Sudan, Africa's largest country. Sudan's student population is one group that has benefitted, being able to stay in touch with family and friends when away from home. One student, Hiba, went further and turned her mobile phone into an innovative business.
Like most students the world over, Hiba depended on financial support from her parents when she began her studies at Khartoum University, and like most students, she soon needed more money to "fill the gap."
Hiba quickly realized that the mobile in her pocket – the same mobile that she bought to stay in touch with her parents in the first place – could help her.
She began selling mobile credits to fellow students, which in turn allowed them to stay in touch with their families and friends.
"I would buy credit for 100 Sudan Pounds (about USD 45) and distribute it to my friends for 110 Sudan Pounds (about USD 49.5)," she says.
In other words, Hiba's friends were able to buy the amount of credit they needed and, because she was able to divide the credit into so many portions, she was able to make a profit.
Her business model proved so successful among her friends they spread the word and other students began buying into the service.
Hiba's case is just one example of how the mobile phone has made a difference in Sudan.
The story of the mobile phone in the country is also the subject of a new report compiled by African operator Zain and Ericsson, called Socio-Economic Impact of Mobile Phones in Sudan.
Jeffrey Sachs, director of the Earth Institute at Columbia University, writes in the forward of the report that mobile telephony has had a "remarkable" impact on economic development in the country.
"Mobile telephony has quickly assumed a central place in Sudan’s economy: in direct employment in the telecoms sector itself; in providing market information and logistical support in the dominant agriculture sector; and in enabling families to stay in contact in the course of conflicts, migration, and large population displacements.
"Mobile penetration has extended beyond the Khartoum region to include South Sudan and even conflict-ridden Darfur. The use of mobile phones in refugee camps to support health, education, and family reunification is also being tested. The report underscores the central fact that mobile telephony offers a remarkable, indeed, unique, tool for economic development and can even reach the poorest of the poor through creative approaches by the providers and users."
Zain and Ericsson report on economic impact of mobile communications in Sudan (pdf)
More information about the Sudan report on the Zain website
Jeffrey Sachs on fighting poverty with connectivity
Zain report on effects and meaning of the mobile phone in a (post) war economy in Karima, Khartoum and Juba, Sudan (pdf) | <urn:uuid:ad99b53c-1a41-4d14-839e-43c357aff866> | CC-MAIN-2017-04 | https://www.ericsson.com/thecompany/stories/100322_minutes_mean_money_20100322141108 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00173-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967211 | 602 | 2.859375 | 3 |
Tamm S.L.,Bell Center
Biological Bulletin | Year: 2015
Ctenophores, or comb jellies, are geotactic with a statocyst that controls the activity of the eight ciliary comb rows. If a ctenophore is tilted or displaced from a position of vertical balance, it rights itself by asymmetric frequencies of beating on the uppermost and lowermost comb rows, turning to swim up or down depending on its mood. I recently discovered that the statocyst of ctenophores has an asymmetric architecture related to the sagittal and tentacular planes along the oral-aboral axis. The four groups of pacemaker balancer cilia are arranged in a rectangle along the tentacular plane, and support a superellipsoidal statolith elongated in the tentacular plane. By controlled tilting of immobilized ctenophores in either body plane with video recording of activated comb rows, I found that higher beat frequencies occurred in the sagittal than in the tentacular plane at orthogonal orientations. Similar tilting experiments on isolated statocyst slices showed that statolith displacement due to gravity and the resulting deflection of the mechanoresponsive balancers are greater in the sagittal plane. Finally, tilting experiments on a mechanical model gave results similar to those of real statocysts, indicating that the geometric asymmetries of statolith design are sufficient to account for my findings. The asymmetric architecture of the ctenophore statocyst thus has functional consequences, but a possible adaptive value is not known. © 2015 Marine Biological Laboratory. Source
Dundon S.E.R.,Dartmouth College |
Chang S.-S.,University of California at Los Angeles |
Kumar A.,U.S. National Institutes of Health |
Occhipinti P.,Dartmouth College |
And 4 more authors.
Molecular Biology of the Cell | Year: 2016
Nuclei in syncytia found in fungi, muscles, and tumors can behave independently despite cytoplasmic translation and the homogenizing potential of diffusion. We use a dynaction mutant strain of the multinucleate fungus Ashbya gossypii with highly clustered nuclei to assess the relative contributions of nucleus and cytoplasm to nuclear autonomy. Remarkably, clustered nuclei maintain cell cycle and transcriptional autonomy; therefore some sources of nuclear independence function even with minimal cytosol insulating nuclei. In both nuclear clusters and among evenly spaced nuclei, a nucleus' transcriptional activity dictates local cytoplasmic contents, as assessed by the localization of several cyclin mRNAs. Thus nuclear activity is a central determinant of the local cytoplasm in syncytia. Of note, we found that the number of nuclei per unit cytoplasm was identical in the mutant to that in wild-type cells, despite clustered nuclei. This work demonstrates that nuclei maintain autonomy at a submicrometer scale and simultaneously maintain a normal nucleocytoplasmic ratio across a syncytium up to the centimeter scale. © 2016 Dundon et al. Source
Peinado G.,National University of Colombia |
Osorno T.,National University of Colombia |
Del Pilar Gomez M.,National University of Colombia |
Del Pilar Gomez M.,Bell Center |
And 2 more authors.
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015
Melanopsin, the photopigment of the "circadian" receptors that regulate the biological clock and the pupillary reflex in mammals, is homologous to invertebrate rhodopsins. Evidence supporting the involvement of phosphoinositides in light-signaling has been garnered, but the downstream effectors that control the light-dependent conductance remain unknown. Microvillar photoreceptors of the primitive chordate amphioxus also express melanopsin and transduce light via phospholipase-C, apparently not acting through diacylglycerol. We therefore examined the role of calcium in activating the photoconductance, using simultaneous, high time-resolution measurements of membrane current and Ca2+ fluorescence. The light-induced calcium rise precedes the onset of the photocurrent, making it a candidate in the activation chain. Moreover, photolysis of caged Ca elicits an inward current of similar size, time course and pharmacology as the physiological photoresponse, but with a much shorter latency. Internally released calcium thus emerges as a key messenger to trigger the opening of light-dependent channels in melanopsin-expressing microvillar photoreceptors of early chordates. © 2015, National Academy of Sciences. All rights reserved. Source
Stevenson J.W.,Providence College |
Stevenson J.W.,Bell Center |
Conaty E.A.,Providence College |
Conaty E.A.,Bell Center |
And 10 more authors.
PLoS ONE | Year: 2016
The amyloid precursor protein (APP) is a causal agent in the pathogenesis of Alzheimer's disease and is a transmembrane protein that associates with membrane-limited organelles. APP has been shown to co-purify through immunoprecipitation with a kinesin light chain suggesting that APP may act as a trailer hitch linking kinesin to its intercellular cargo, however this hypothesis has been challenged. Previously, we identified an mRNA transcript that encodes a squid homolog of human APP770. The human and squid isoforms share 60% sequence identity and 76% sequence similarity within the cytoplasmic domain and share 15 of the final 19 amino acids at the C-terminus establishing this highly conserved domain as a functionally import segment of the APP molecule. Here, we study the distribution of squid APP in extruded axoplasm as well as in a well-characterized reconstituted organelle/microtubule preparation from the squid giant axon in which organelles bind microtubules and move towards the microtubule plus-ends. We find that APP associates with microtubules by confocal microscopy and co-purifies with KI-washed axoplasmic organelles by sucrose density gradient fractionation. By electron microscopy, APP clusters at a single focal point on the surfaces of organelles and localizes to the organelle/microtubule interface. In addition, the association of APP-organelles with microtubules is an ATP dependent process suggesting that the APP-organelles contain a microtubule-based motor protein. Although a direct kinesin/APP association remains controversial, the distribution of APP at the organelle/microtubule interface strongly suggests that APP-organelles have an orientation and that APP like the Alzheimer's protein tau has a microtubule-based function. © 2016 Stevenson et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Source | <urn:uuid:1ebebcab-6584-472c-af98-5f03344ef4ba> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/bell-center-687170/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.883033 | 1,486 | 2.71875 | 3 |
Scientific American's March issue has an intriguing article which explores the efforts of digital activists to circumvent corporate and governmental control over the Internet. The aim of the moment is to configure and build a decentralized mesh network that cannot be blocked, filtered or turned off.
Egypt's Internet shutdown during last year's Arab Spring played a significant inspirational role.
Image: Scientific American Magazine
With a "shadow" network configured, activists would remain able to communicate, even after central hubs have gone dark.
Another fascinating addition to all of this is Scientific American's Science Talk podcast: The Coming Entanglement [MP3].
In the podcast, SA editor Fred Guterl talks with Bill Joy and Danny Hillis about the need to build an alternative, hardier network due to the ever increasing complexity of our current Internet (which makes it ever more prone to unexplained failures).
Joy and Hillis envision a simpler, more robust network as a way to shelter some of our critical infrastructure from entanglements. | <urn:uuid:b99b3a37-aaed-4bc6-bfef-245b758ba1cf> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00002319.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00018-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907336 | 201 | 2.65625 | 3 |
The real question for it organizations isn't whether open-source software is more secure than proprietary software but which type of software is fixed fastest.
The real question for it organizations isnt whether open-source software is more secure than proprietary software but which type of software is fixed fastest.
Humans code both open-source and proprietary software, which means that mistakes will be made, and the resulting holes need to be reported and closed. Right now, open source has it all over proprietary software when it comes to owning up to and resolving problems.
Recent history provides striking examples of the approaches open-source and proprietary software vendors take to fixing security problems.
Two key open-source applications were hit recently by serious security problems.
In June, a flaw was found in the popular, historically secure Apache Web server that made it possible to remotely exploit code on a vulnerable system. And just this month, the Slapper worm spread to thousands of systems by taking advantage of a hole in the OpenSSL program.
These were serious problems, and in both cases, the developers of the programs responded quickly. The Apache Software Foundation made a patch available two days after the Web server hole was announced. In the case of OpenSSL, a patch was available the day the flaw was announced.
Compare this with how security problems were handled recently in the proprietary world.
A serious flaw was found in Windows XP recently that made it possible to delete files on a system using a single URL. Microsoft Corp. quietly fixed this problem in Windows XP Service Pack 1, without notifying users of the problem.
A more direct comparison can be seen in how Microsoft and the KDE Project responded to an SSL (Secure Sockets Layer) vulnerability that made the Internet Explorer and Konqueror browsers, respectively, potential tools for stealing, among other things, credit card information.
The day the SSL vulnerability was announced, KDE provided a patch. Later that week, Microsoft posted a memo on its TechNet site basically downplaying the problem.
Of course, there have been open-source problems that werent patched quickly. And software vendors, including Microsoft, often respond quite quickly to security holes.
But, in general, open-source organizations react quickly and openly to problems while software vendors instinctively cover up, deny and delay. In addition, open-source organizations almost always fix vulnerabilities with small, focused patches, thus limiting unanticipated side effects. Vendors such as Microsoft, on the other hand, tend to provide multiple patches and fixes all rolled up into service packs, which are notorious for creating new problems while fixing old ones.
In the end, it comes down to motivation. For open-source developers, the most important thing is credibility, which means taking problems seriously. Most proprietary software vendors, on the other hand, say that features come before security and seem to believe that it is better to sweep a problem under the rug than to admit a mistake openly. | <urn:uuid:27316c2e-5867-49c7-abb2-ec9e2e1060e8> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/eWeek-Labs-Open-Source-Quicker-at-Fixing-Flaws | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957714 | 593 | 2.5625 | 3 |
Example of Tampering
An example of vote tampering would involve the voter making the selections, but with the attacker intercepting the final ballot when submitting it. The ballot could be recorded after a few items were changed, and it would be difficult to find any traces of which votes had been modified. Diebold systems are used in several states, including Georgia, Maryland, Utah, Nevada, New Jersey, Pennsylvania, Indiana and Texas.Last fall, a Washington, D.C., district system invited a team from the University of Michigan's College of Engineering to try to breach its pilot of an online voting system. It took the team only 3 hours to find a SQL injection flaw to take over the server, change ballot results, cause the site to broadcast the university's fight song when someone accessed the site, and find personal information of voters registered on the system. There have been several opportunities for cyber-attackers intent on influencing the political process in recent weeks around the world. During the Russian elections earlier this month, popular Russian media Websites such as the Moscow Echo radio station, election monitoring group Golos and the LiveJournal blogging service were knocked offline by distributed denial of service (DDoS) attacks. A botnet using a piece of malware was behind some of the DDoS attacks, according to Sebastien Duquette, a researcher at ESET. The DDoS attacks targeted Websites that were discussing election fraud and other political violations, Moscow Echo's editor in chief claimed. It's a plausible scenario as "true political activism is a strong and real motivator for Internet DDoS attack activity," Mike Paquette, chief strategy officer of Corero Network Security, told eWEEK. "It is not hard to imagine that fringe groups, loosely associated with one political party, might employ these cyber-attacks to generally, or specifically, help their party in certain elections." DDoS attacks aren't just a tool for protesters, as the establishment can use it just as effectively. In Russia, DDoS was used "as a mechanism of propaganda, censorship, information withholding and unfair political advantage," Paquette said. Three of the top seven leaders in South Korea's ruling Grand National Party quit their posts for allegedly tampering with national elections in late October, the Wall Street Journal reported earlier this month. South Korea's cyber-terrorism police arrested a legislative aide to a top ruling politician after finding evidence that he launched the DDoS attack on the National Election Commission's Website on election day. The attack prevented young voters from being able to find their polling places, and may have suppressed voter turnout among the demographic that traditionally favor opposition parties, according to the report.
"In light of the rapidly approaching 2012 U.S. Presidential Election, it seems there may be a need to give serious attention to securing our election technology," Cameron Camp, security researcher at ESET, wrote on the company blog. "Unscrupulous, well-heeled bad actors" can easily gather together a group of hackers, especially if they are politically motivated, to tamper with votes and swing elections, Camp said. | <urn:uuid:b52f035d-1b7a-4855-86e5-476aebe6ed8e> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/Hackers-Threaten-Voting-Systems-Electroal-Process-138443/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963584 | 621 | 2.703125 | 3 |
Ethernet VLANs (e) - Flash
- Course Length:
- 1 hour of eLearning
NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.
As the communications industry transitions to wireless and wireline converged networks to support voice, video, data and mobile services over IP networks, a solid understanding of Ethernet and its role in networking is essential. Ethernet is native to IP and has been adopted in various forms by the telecom industry as the Layer 1 and Layer 2 of choice. VLANs are used extensively in the end-to-end IP network and a solid foundation in IP and Ethernet has become a basic job requirement for the carrier world. Starting with a brief history, the course provides a focused basic level introduction to the fundamentals of Ethernet VLAN technology. It is a modular introductory course only on Ethernet VLAN basics as part of the overall eLearning IP fundamentals curriculum. The course includes a pre-test and a post-test.
This course is intended for those seeking a basic level introduction to Ethernet Bridging.
After completing this course, the student will be able to:
• Define Ethernet VLANs
• Identify Ethernet VLAN applications and benefits
• Summarize the key variations of the Ethernet family of standards to support VLANs
• Identify the key types of Ethernet VLANs
• Describe VLAN Trunks and their purpose
1. Virtual Local Area Networks (VLANs)
2. VLAN Application and Benefits
3. Default VLAN
4. Multiswitch VLANs: Trunks and Tagging | <urn:uuid:69cc31f8-7d05-4bf9-b64b-29e30b6efdfc> | CC-MAIN-2017-04 | https://www.awardsolutions.com/portal/elearning/ethernet-vlans-e-flash?destination=elearning-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00532-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895218 | 340 | 3.15625 | 3 |
As the amount of data being collected continues to grow, more and more companies are building big data repositories to store, aggregate and extract meaning from their data. Big data provides an enormous competitive advantage for corporations, helping businesses tailor their products to consumer needs, identify and minimize corporate inefficiencies, and share data with user groups across the enterprise. With a growth rate of 58 percent in 2013 alone, these technologies and their benefits are here to stay.
Unfortunately, legitimate organizations aren’t the only groups that are going big. Large sets of consolidated data are a tempting target for cyber attackers. Breaching an organization’s big data repository can provide criminal groups with bigger payoffs and more recognition from a single attack. And when attackers set their sights on big data repositories, the effects can be devastating for the affected organizations. Terabytes of data in these repositories may include a company’s crown jewels: customer data, employee data, and trade secrets. The recent data breach at Target is estimated to cost the company upwards of $1.1 billion, and the PlayStation breach cost Sony an estimated $171 million. A breach in a big data repository could be even more damaging at a financial institution or healthcare provider, where the value of the data is extremely high and government regulations come into play.
Securing big data comes with its own unique challenges beyond being a high-value target. It’s not that big data security is fundamentally different from traditional data security. Big data security challenges arise because of incremental differences, not fundamental ones. The differences between big data environments and traditional data environments include:
- The data collected, aggregated, and analyzed for big data analysis
- The infrastructure used to store and house big data
- The technologies applied to analyze structured and unstructured big data
The variety, velocity and volume of big data amplifies security management challenges that are addressed in traditional security management. Big data repositories will likely include information deposited by various sources across the enterprise. This variety of data makes secure access management a challenge. Each data source will likely have its own access restrictions and security policies, making it difficult to balance appropriate security for all data sources with the need to aggregate and extract meaning from the data. For example, a big data environment may include a dataset with proprietary research information, a dataset requiring regulatory compliance, and a separate dataset with personally identifiable information (PII). A researcher might want to correlate their research with a dataset including PII data, but what restrictions should be in-place to ensure adequate security? Protecting big data requires balancing analysis like this with security requirements on a case-by-case basis.
In addition, many of the repositories collect data at high volumes and velocity from a number of different data sources, and they all might have their own data transfer workflows. These connections to multiple repositories can increase the attack surface for an adversary. A big data system receiving feeds from 20 different data sources may present an attacker with 20 viable vectors to attempt to gain access to a cluster.
Another big data challenge is the distributed nature of big data environments. Compared with a single high-end database server, distributed environments are more complicated and vulnerable to attack. When big data environments are distributed geographically, physical security controls need to be standardized across all accessible locations. When data scientists across the organization want access to information, perimeter protection becomes important and complicated to ensure access to users while protecting the system from a possible attack. With a large number of servers, there is an increased possibility that the configuration of servers may not be consistent – and that certain systems may remain vulnerable.
An additional big data security challenge is that big data programming tools, including Hadoop and NoSQL databases, were not originally designed with security in mind. For example, Hadoop originally didn’t authenticate services or users, and didn’t encrypt data that’s transmitted between nodes in the environment. This creates vulnerabilities for authentication and network security. NoSQL databases lack some of the security features provided by traditional databases, such as role-based access control. The advantage of NoSQL is that it allows for the flexibility to include new data types on the fly, but defining security policies for this new data is not straightforward with these technologies.
Securing Big Data
So what can be done to help bring the security of traditional database management to big data? Several organizations describe and define different security controls. The SANS Institute provides a list of 20 security controls. The list contains several controls that I would recommend to address the security challenges presented by big data.
- Application Software Security.Use secure versions of open-source software. As described above, big data technologies weren’t originally designed with security in mind. Using open-source technologies like Apache Accumulo or the .20.20x version of Hadoop or above can help address this challenge. In addition, proprietary technologies like Cloudera Sentry or DataStax Enterprise offer enhanced security at the application layer. Specifically, Sentry and Accumulo also support role-based access control to enhance security for NoSQL databases.
- Maintenance, Monitoring, and Analysis of Audit Logs. Implement audit logging technologies to understand and monitor big data clusters. Technologies like Apache Oozie can help implement this feature. Keep in mind that security engineers in the organization need to be tasked with examining and monitoring these files. It’s important to ensure that auditing, maintaining, and analyzing logs are done consistently across the enterprise.
- Secure Configurations for Hardware and Software. Build servers based on secure images for all systems in your organization’s big data architecture. Ensure patching is up to date on these machines and that administrative privileges are limited to a small number of users. Use automation frameworks, like Puppet, to automate system configuration and ensure that all big data servers in the enterprise are uniform and secure.
- Account Monitoring and Control. Manage accounts for big data users. Require strong passwords, deactivate inactive accounts, and impose a maximum permitted number of failed log-in attempts to help stop attacks from getting access to a cluster. It’s important to note that the enemy isn’t always outside of the organization. Monitoring account access can help reduce the probability of a successful compromise from the inside.
Organizations that are serious about big data security should consider these first steps. Cyber criminals are never going to stop being on the offensive, and with such a big target to protect, it is prudent for any enterprise utilizing big data technologies to be as proactive as possible in securing its data.
Jeff Markey is a Data Scientist with ThreatTrack Security, supporting corporate data mining efforts and product development. He has 7 years of experience implementing data analytics in the cyber security field. He holds a Master of Science in Computer Science and Mathematics from Johns Hopkins University and is certified as a Global Information Assurance Security Expert (GSE). | <urn:uuid:6099f997-d4cf-4f63-a017-13bed9fcc79d> | CC-MAIN-2017-04 | http://data-informed.com/manage-big-datas-big-security-challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00164-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916684 | 1,401 | 2.53125 | 3 |
I have a basic question about coding of spanned records in assembler program.
I have worked lot on non spanned VSAM records.
Now the question is how to create a spanned record?
what are things that i should take care for as compared to non spanned VSAM records (for eg in ACB,RPL macros also in JCL and DS section).
A spanned record is a variable-length record in which the length of the record can exceed the size of a block. A record is called spanned record if it is split between two or more blocks.
Spanned records are logical records that are larger than the CI size. To have spanned records, the file must be defined with the SPANNED attribute at the time it is created. Spanned records are allowed to extend across or span control interval boundaries. The RDFs describe whether the record is spanned or not.
Spanned records are needed when the application requires very long logical records. A spanned record may be the data component of an AIX cluster. If spanned records are used for KSDS, the primary key must be within the first control interval. Refer toAlternate indexes.
For closer information plse look at IBM.COM, search for VSAM Redbooks,
Title: VSAM Demystified | <urn:uuid:07908b67-a779-405a-aae8-85a28367130c> | CC-MAIN-2017-04 | http://ibmmainframes.com/about29683.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92545 | 274 | 2.640625 | 3 |
Innovation is one of those terms that means many things to many people. Depending upon who you speak to, you'll get a number of equally valid definitions and perspectives. Perhaps the same can be said for topics such as consumerization of IT and even cloud computing, but that's perhaps a topic for another blog. Here we'll tackle why precise terminology for innovation is important and some ways to get started.
When we think about innovation, clearly everyone wants to deliver more innovation to their customers, wants their organizations to be more innovative internally, and wants to see more innovation from their strategic partners and suppliers. These are certainly noble goals and just like any other goals they need SMART objectives -- Specific, Measurable, Attainable, Relevant and Time-bound -- so that customers, partners and employees actually know why these innovation efforts are important, what they're trying to achieve, and when their efforts are moving in the right direction.
In addition, one item I personally believe is highly important for success in the innovation arena is a common agreement on terminology. Without common agreement on the precise meaning of various terms related to innovation, organizations will risk mis-aligned objectives, expectations and outcomes.
To give an example, how would you define an Innovation Workshop? Is it a highly-structured brainstorming session with online group decision-support software and a step-by-step methodology with trained facilitators, an ad-hoc whiteboarding session to explore ideas, or perhaps a day full of presentations with a customer? Should an Innovation Workshop focus purely on disruptive (i.e. transformational) innovations or incremental (i.e. tactical) innovations as well?
As another example, how would you define innovation investments? What constitutes an investment in innovation compared to one for sustaining existing products and services? How about the space in-between where you're innovating to enhance existing products and services? How do you define terms such as core investments, adjacent investments and transformational investments?
The answer is "innovation" and it's related terms can be whatever you'd like to define them as. The important point is that clear definitions are needed so that everyone is on the same page and knows what you, or others, mean when these terms are brought up in conversation. An innovation workshop then means x, and not y or z. An innovation investment means a, and not b or c, and so forth. Scoping is highly important as well, so your audience knows what's in and out of scope in your innovation agenda.
If you're managing an innovation program within your organization, these kinds of definitions are some of the first things you should spell out. If you're a stakeholder or participant in various innovation initiatives, these definitions are things you should expect to get answers to and be able to find clearly articulated and readily available.
Some of the first areas to pay attention to are your definition of innovation, your definition of innovation investments, and how you intend to measure success. Of course, the specific tactics your company employs in order to achieve your desired innovative outcomes may vary year-over-year, but the fundamental terms of how you invest in and measure innovation should be fairly constant so you can measure year-over-year progress.
As an example, in terms of innovation investments, think about clearly delineating between the innovation objective (e.g. sustaining the core business, investing in adjacenies to the core business, and investing in transformational new areas for the business) and the type of innovation (e.g. business model, process, organizational, product, or service innovation etc.). That way, you can track items such as your innovation investment mix over time as discussed in my article on "Investing in transformation for 2013".
Note that these definitions may also change, or have different underlying options by which to categorize, depending upon whether you're taking an organizational view or a broader industry or market view. For example, in a recent New York Times article, Clayton Christensen talks about three types of innovation investment as being "empowering", "sustaining" or "efficiency"-oriented.
Spending some extra time upfront in defining your terminology for innovation can pay dividends in terms of downstream efficiency and aligning all your participants so you can then focus less on the definitions and explanations and more on the actual innovative outcomes your organization intends to deliver. As you scale your program across business units and even globally, the payback becomes even more significant.
Finally, defining your terminology, and even the underlying taxonomy, for innovation is never a one-shot deal. Over time, memories fade and employees change roles, so you'll need an appropriate cadence of communications and training in order to institutionalize these learnings and best practices.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:e0dde993-de9e-4fb1-8d65-0d369e33bba4> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2473761/it-transformation/the-importance-of-precise-terminology-for-innovation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00366-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946309 | 986 | 2.65625 | 3 |
From Cabir to FakeDefend, the last decade has seen the number of mobile malware explode. In 2013, Fortinet’s FortiGuard Labs has seen more than 1,300 new malicious applications per day and is currently tracking more than 300 Android malware families and more than 400,000 malicious Android applications.
Besides the sheer growth in numbers, another important trend to note is that mobile malware has followed the same evolution as PC malware, but at a much faster pace. The widespread adoption of smartphones–and the fact that they can easily access a payment system (premium rate phone numbers)–makes them easy targets that can quickly generate money once infected.
Furthermore, they have capabilities such as geo-location, microphones, embedded GPS and cameras, all of which enable a particularly intrusive level of spying on their owners. Like PC malware, mobile malware quickly evolved into an effective and efficient way of generating a cash stream, while supporting a wide range of business models.
In the following chronology, FortiGuard Labs looks at the most significant mobile malware over the last 10 years and explains their role in the evolution of threats.
2004: The first attempt
Cabir was the world’s first mobile worm. Designed to infect the Nokia Series 60, its attack resulted in the word “Caribe” appearing on the screen of infected phones. The worm then spread itself by seeking other devices (phones, printers, game consoles…) within close proximity by using the phone’s Bluetooth capability.
“Experts believe that the worm was developed by the hacker group called 29A as “proof of concept’ due to its relatively inoffensive character,” said Axelle Apvrille, senior mobile anti-virus researcher with Fortinet’s FortiGuard Labs.
2005: Adding MMS to the mix
CommWarrior, discovered in 2005, picked up where Cabir left off by adding the ability to propagate itself using both Bluetooth and MMS. Once installed on the device, CommWarrior would access the infected phone’s contact file and send itself via the carrier’s MMS service to each contact. The use of MMS as a propagation method introduced an economic aspect; for each MMS message sent, phone owners would incur a charge from their carrier. In fact, some operators have stated that up to 3.5 percent of their traffic was sourced to CommWarrior, and eventually agreed to reimburse the victims.
The virus, which also targeted the Symbian 60 platform, has been reported in more than 18 countries across Europe, Asia and North America. Altogether, the mobile worm infected more than 115,000 mobile devices and sent more than 450,000 MMS messages without the victims’ knowledge, illuminating for the first time that a mobile worm could propagate as quickly as a PC worm.
“At the time, Symbian was the most popular smartphone platform with tens of millions users around the world,” Apvrille continued. “However, the objective behind CommWarrior was to propagate itself as widely as possible and not to profit from the charges incurred through the MMS messages.”
2006: Following the money
After the demonstrated successes of Cabir and CommWarrior, the security community detected a Trojan called RedBrowser touting several key differences from its predecessors. The first was that it was designed to infect a phone via the Java 2 Micro Edition (J2ME) platform. The Trojan would present itself as an application to make browsing Wireless Application Protocol (WAP) websites easier. By targeting the universally supported Java platform rather than the device’s operating system, the Trojan’s developers were able to target a much larger audience, regardless of the phone’s manufacturer or operating system.
The second, and perhaps more important difference, is that the Trojan was specifically designed to leverage premium rate SMS services. The phone’s owner would typically be charged approximately $5 per SMS — another step toward the use of mobile malware as a means of generating a cash stream.
Apvrille added, “Until the emergence of RedBrowser, the security community believed it was impossible that a single piece of malware could infect a wide range of mobile phones with different operating systems. The use of J2ME as an attack vector was an important milestone during this period, as was the use of SMS as a cash generating mechanism.”
2007-2008: A period of transition
Despite stagnation in the evolution of mobile threats during this two-year period, there was an increase in the number of malware that accessed premium rate services without the device owner’s knowledge.
2009: The introduction of the mobile botnet
In early 2009, Fortinet discovered Yxes (anagram of “Sexy”), a piece of malware behind the seemingly legitimate “Sexy View” application. Yxes also had the distinction of being a Symbian certified application, which took advantage of a quirk within the Symbian ecosystem that allowed developers to “sign off” applications themselves.
Once infected, the victim’s mobile phone forwards its address book to a central server. The server then forwards a SMS containing a URL to each of the contacts. Victims who click on the link in the message download and install a copy of the malware, and the process is repeated.
The spread of Yxes was largely limited to Asia, where it infected at least 100,000 devices in 2009.
“Yxes was another turning point in the evolution of mobile malware for several reasons,” Apvrille said. “First, it is considered the first malware targeting the Symbian 9 operation system. Secondly, it was the first malware to send a SMS and access the Internet without the mobile user’s knowledge, a development deemed a technological innovation in malware. Finally, and perhaps most importantly, the hybrid model that it used to self-propagate and communicate with a remote server, gave antivirus analysts a reason to fear that this was perhaps a forewarning for a new kind of virus — botnets on mobile phones. Future events would later validate that perception.”
2010: The industrial age of mobile malware
2010 marked a major milestone in the history of mobile malware: the transition from geographically localized individuals or small groups to large-scale, organized cybercriminals operating on a worldwide basis. This is the beginning of the “industrialization of mobile malware” in which attackers realized that mobile malware could easily bring them a lot of money, eliciting a decision to exploit the threats more intensely.
2010 was also the introduction of the first mobile malware derived from PC malware. Zitmo, Zeus in the Mobile, was the first known extension of Zeus, a highly virulent banking Trojan developed for the PC world. Working in conjunction with Zeus, Zitmo is leveraged by cybercriminals to bypass the use of SMS messages in online banking transactions, thus circumventing the security process.
There were other malware in the headlines well this year, most notably Geinimi. Geinimi was one of the first malware designed to attack the Android platform and use the infected phone as part of a mobile botnet. Once installed on the phone, it would communicate with a remote server and respond to a wide range of commands –such as installing or uninstalling applications–that allowed it to effectively take control of the phone.
“While the introduction of mobile malware for Android and mobile botnets were certainly significant events during 2010, they were overshadowed by the growing presence of organized cybercriminals who began to leverage the economic value of mobile malware,” Apvrille said.
2011: Android, Android and even more Android
With attacks on Android platforms intensifying, more powerful malware began to emerge in 2011. DroidKungFu, for example, emerged with several unique characteristics, and even today is considered one of the most technologically advanced viruses in existence. The malware included a well-known exploit to “root” or become an administrator of the phone – uDev or Rage Against The Cage – giving it total control of the device and the ability to contact a command server. It was also able to evade detection by anti-virus software, the first battle in the ongoing war between the cybercriminals and the anti-virus development community. Like of most the viruses before it, DroidKungFu was generally available from unofficial third party app stores and forums in China.
Plankton also arrived on the scene in 2011 and is still one of the most widespread Android malware. Even on Google Play, the official Android apps store, Plankton appears in a large number of apps as an aggressive version of adware, downloading unwanted ads to the phone, changing the homepage of the mobile browser or adding news shortcuts and bookmarks to the user’s mobile phone.
“With Plankton, we’re now playing in the big leagues! Plankton is one of the top 10 most common viruses across all categories, putting it in the same league as the top PC viruses,” Apvrille added. “The days of mobile malware that lag behind their PC counterparts are over. Currently there are more than 5 million devices infected with Plankton alone.”
2013: Game on – new modes of attack
2013 marked the arrival of FakeDefend, the first ransomware for Android mobile phones. Disguised as an antivirus, this malware works in a similar way to the fake antivirus on PCs. It locks the phone and requires the victim to pay a ransom (in the form of an exorbitantly high antivirus subscription fee, in this case) in order to retrieve the contents of the device. However, paying the ransom does nothing to repair the phone, which must be reset to factory settings in order to restore functionality.
It was also in 2013 that Chuli first appeared. Chuli malware was considered the first targeted attack on the Android platform. Cybercriminals behind the attack leveraged the email account of an activist at the World Uyghur Conference, held March 11-13, 2013 in Geneva, to target the accounts of other Tibetan Human Rights activists and advocates. The emails sent from the hacked account included Chuli as an attachment, a piece of malware designed to collect data such as incoming SMS, SIM card and phone contacts, location information, and recordings of victims’ phone calls. The captured information was then sent to a remote server.
“2013 can be considered the year mobile attacks “turned pro,” said Apvrille. “Increasingly targeted and sophisticated, malware like FakeDefend or Chuli are examples of attacks comparable to those we know of today in the PC world.
Moreover, it’s perfectly reasonable to ask whether an attack like Chuli is ushering us into an era of mobile cyber-warfare and the beginning of the potential involvements with governments and other national organizations.”
With cybercrime, it is always difficult to predict what will happen next year and even more so over the next 10 years. The landscape of mobile threats has changed dramatically over the past decade and the cybercriminal community continues to find new and increasingly ingenious ways of using these attacks for one sole purpose – making money.
However, with the explosion of smartphones and other mobile technologies, a reasonable prediction is the convergence of mobile and PC malware. As everything becomes “mobile,” all malware will then be “mobile.”
Beyond mobile devices, the most likely future target for cybercriminals is the Internet of Things (IoT). While extremely difficult to forecast the number of connected objects on the market in the next five years, Gartner estimates 30 billion objects will be connected in 2020, while IDC estimates that market to be 212 billion. More and more manufacturers and service providers are capitalizing on the business opportunity presented by these objects, but it’s reasonable to assume that security has not yet been taken into account in the development process of these new products. Will the IoT be “The Next Big Thing” for the cybercriminal? | <urn:uuid:1cffa328-84d0-46fb-b299-24ef8cd0948c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/01/24/looking-back-at-10-years-of-mobile-malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00394-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947421 | 2,485 | 2.703125 | 3 |
Relational database systems, such as IBM DB2 and Oracle Database, have undergone over a quarter century of development. During that time they have managed to successfully fight off competing database technologies for supporting mainstream database management. Do you remember the object/relational wars of the eighties?
MapReduce, a software framework introduced by Google for supporting parallel processing over large petabyte files has garnered significant attention of late. IBM is experimenting with this in conjunction with Google, and GreenPlum recently announced support.
The significant interest in MapReduce, and related technologies such as Hadoop and HDFS, has led to a backlash from the relational camp. David DeWitt and Michael Stonebraker have been especially outspoken (see www.databasecolumn.com/2008/01/mapreduce-a-major-step-back.html and www.databasecolumn.com/2008/01/mapreduce-continued.html).
Here is a small quote from their thoughts on the topic:
"As both educators and researchers, we are amazed at the hype that the MapReduce proponents have spread about how it represents a paradigm shift in the development of scalable, data-intensive applications. MapReduce may be a good idea for writing certain types of general-purpose computations, but to the database community, it is:
1. A giant step backward in the programming paradigm for large-scale data intensive applications
2. A sub-optimal implementation, in that it uses brute force instead of indexing
3. Not novel at all -- it represents a specific implementation of well known techniques developed nearly 25 years ago
4. Missing most of the features that are routinely included in current DBMS
5. Incompatible with all of the tools DBMS users have come to depend on"
Does this mean the database wars are starting up again?
My opinion is that MapReduce is not intended for general purpose commercial database processing and is therefore not a major threat to relational systems. However, it does have its uses (as Google has demonstrated) for certain types of high volume processing. It also demonstrates that as data volumes get bigger, and the complexity of data and data structures increases, other types of database technology may start to gain traction in certain niche marketplaces. The use by IBM of the SPADE language, instead of StreamSQL, in its InfoSphere Streams product (System S) also demonstrates the changes going on in the database market.
What do you think?
Posted November 25, 2008 4:20 PM
Permalink | No Comments | | <urn:uuid:1ab966d1-3653-4e8c-9068-cdee3c71ec0a> | CC-MAIN-2017-04 | http://www.b-eye-network.com/blogs/white/archives/2008/11/will_mapreduce.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932062 | 525 | 2.5625 | 3 |
Data Center Electricity Usage
It is widely acknowledged that data centers are large consumers of energy; in 2010, data centers accounted for about 1.3% of all electricity used worldwide, and about 2% of the electricity used in the U.S. Furthermore, their energy usage increases every year. Between 2005 and 2010, data center electricity usage increased by about 56% worldwide; in the U.S., it increased by about 36% over the same period. This energy usage pattern has raised questions about the future of data centers and how they relate to sustainability and pollution. Additionally, it has been estimated that energy expenditures account for 12% of data center expenses. From both a public relations and an economic perspective, therefore, data center owners may wish to explore ways of reducing their electricity consumption from the electric grid.
Solar Photovoltaic Systems: A Possible Solution?
If you have a data center, one way to reduce grid electricity consumption is to install a solar photovoltaic (PV) system on the facility. These systems utilize sunlight to generate electricity, which can then be used to power your data center. Solar PV costs have fallen over the past decade, making them cost-competitive solutions in more and more areas.
There are many potential benefits to solar PV installations. Such systems can:
- Reduce energy costs
- Act as a price hedge against rising energy costs
- Reduce the amount of pollution-rich energy consumed from the grid
- Reduce carbon emissions
- Strengthen public relations
- Capitalize on under-utilized roof or ground space
Is Solar PV Right for Your Data Center?
While solar PV systems are beneficial for many facilities, some sites are not suitable for solar PV. It is important to ask the right questions before moving forward with a solar project.
Generally speaking, it is best to install solar PV systems in states with net metering policies. Net metering allows end users to consume the electricity generated at the site if there is sufficient demand. If more electricity is generated than the facility can use, that electricity is fed into the electrical grid and the project owner is given a credit. That credit can be applied to electricity consumed from the grid. In other words, under net metering a customer’s electricity meter can “spin backwards.”
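To make the billing mechanics concrete, here is a minimal sketch of one month of net-metered billing for a hypothetical data center. Every figure in it (monthly load, solar output, retail rate) is an illustrative assumption rather than data from any real facility or tariff, and actual net metering rules vary by state and utility.

```python
# One month of net-metered billing for a hypothetical facility.
# All numbers are illustrative assumptions.

facility_load_kwh = 400_000      # electricity the data center consumed this month
solar_generation_kwh = 120_000   # electricity the PV system produced this month
retail_rate = 0.12               # assumed utility rate in $/kWh

# Solar output first offsets on-site load; any excess is exported to the grid.
self_consumed_kwh = min(solar_generation_kwh, facility_load_kwh)
exported_kwh = solar_generation_kwh - self_consumed_kwh

# Under full retail-rate net metering, exported energy earns a credit at the
# retail rate, so the bill is effectively based on net consumption.
net_consumption_kwh = max(facility_load_kwh - solar_generation_kwh, 0)
bill_without_solar = facility_load_kwh * retail_rate
bill_with_solar = net_consumption_kwh * retail_rate

print(f"Bill without solar: ${bill_without_solar:,.2f}")
print(f"Bill with solar:    ${bill_with_solar:,.2f}")
print(f"Monthly savings:    ${bill_without_solar - bill_with_solar:,.2f}")
```

For a large data center the load will usually exceed the solar output, so the export credit matters most in low-load months or when the system is generously sized.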
The facility itself must have the right physical characteristics. First, there must be adequate space for a sizable solar PV system; systems that are too small do not allow for economies of scale. Generally speaking, it is best if there is at least 100,000 square feet of available roof space, preferably in a centralized location, or five acres of open ground space. This space cannot be significantly shaded by trees, nearby buildings, or even rooftop structures or HVAC units. For a roof-mounted solar PV system, the roof should be in good condition with a remaining lifetime of at least ten years; twenty is preferred. The roof should also have the structural strength to support the solar system and be composed of eligible materials; this excludes materials like clay tile, metal, or slate. Ideally, the surface on which the solar system is installed would be flat: roof pitch should not exceed 5° for roof-mounted systems, and ground pitch should not exceed 10° for ground-mounted systems.
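These rules of thumb can be folded into a quick pre-screening check. The thresholds in the sketch below simply restate the guidelines above; it is a rough filter under stated assumptions, not a substitute for a structural and shading study.

```python
def screen_site(roof_area_sqft=0, ground_acres=0, roof_life_years=0,
                roof_pitch_deg=0.0, ground_pitch_deg=0.0, heavily_shaded=False):
    """Return a list of potential issues; an empty list means the site passes the pre-screen."""
    issues = []
    if roof_area_sqft < 100_000 and ground_acres < 5:
        issues.append("less than 100,000 sq ft of roof or 5 acres of ground available")
    if heavily_shaded:
        issues.append("significant shading from trees, buildings, or rooftop equipment")
    if roof_area_sqft >= 100_000:
        if roof_life_years < 10:
            issues.append("roof has less than 10 years of remaining life")
        if roof_pitch_deg > 5:
            issues.append("roof pitch exceeds 5 degrees")
    elif ground_acres >= 5 and ground_pitch_deg > 10:
        issues.append("ground pitch exceeds 10 degrees")
    return issues

# Example: a 120,000 sq ft roof with 15 years of remaining life and a 2-degree pitch.
print(screen_site(roof_area_sqft=120_000, roof_life_years=15, roof_pitch_deg=2))
```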
The cost structure must also be considered. Solar PV systems tend to make economic sense in areas with high electricity rates and incentives. One of the most important incentives is the 30% Federal Investment Tax Credit (ITC); this is available nationwide. Other incentives are state-based, such as sales tax exemptions, property tax exemptions, grants, and rebates.
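To see how the ITC shapes project economics, consider a back-of-the-envelope payback calculation. Every number below is an assumption chosen for illustration, not a quote or a forecast:

```python
# Rough simple-payback sketch; all figures are illustrative assumptions.
system_cost = 2_000_000          # assumed installed cost of a ~1 MW system ($)
itc_credit = 0.30 * system_cost  # 30% Federal Investment Tax Credit
net_cost = system_cost - itc_credit          # $1,400,000 after the ITC

annual_kwh = 1_300_000           # assumed annual generation (kWh)
utility_rate = 0.12              # assumed displaced utility rate ($/kWh)
annual_savings = annual_kwh * utility_rate   # $156,000 per year

simple_payback_years = net_cost / annual_savings   # roughly 9 years
```

State incentives and SREC sales, discussed next, would shorten this payback further.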
Another factor that can affect project economics is the salability of solar renewable energy credits (SRECs). SRECs represent the value of the environmental attributes associated with solar electricity. SRECs are often sold separately from the electricity generated by a solar PV system, and SREC values vary as a function of generation timeframe and location. As state budgets have tightened, some states have moved away from grants or rebates and implemented SREC-driven markets.
Project Investment vs. Risk: Financing Structures
If a solar PV system makes technical and economic sense for your data center, the next question to consider is how the system will be financed. Data center owners can certainly purchase solar photovoltaic systems. Generally speaking, this approach leads to the greatest financial return, assuming that your company has tax appetite to take the 30% tax credit. It does, however, carry certain risks with it. As the system owner, you are responsible for ensuring that the system is properly designed, installed, and that it continues to operate properly. Contractors and consultants can be hired to support this work; for example, an independent engineering firm can review the equipment selection and design, a consultant can oversee the project construction, and an operation and maintenance (O&M) firm can be hired to ensure ongoing system operations. Locating, vetting, engaging, and supervising these firms does require an investment of time, of course, and should the system have unexpected operational issues, you would bear the cost of repair.
A popular financing option that minimizes risk is a power purchase agreement (PPA) structure. Under a PPA structure, you do not own the solar PV system. Rather, a third party financier owns the system and sells you, the project host, the electricity generated by the solar system at a predetermined rate. This rate is often lower than the rate charged by the local utility. The financier takes on the responsibility for the system design, installation, and ongoing operations; you are only responsible for purchasing the electricity produced. The financier also takes responsibility for obtaining any incentives available, and typically passes through the benefits of the 30% tax credit and any other incentives to the site host via a lower PPA electricity rate. The financier is also responsible for the sale of any SRECs; the project host can choose to purchase these, but typically they are sold to a third party or utility.
Procuring a Solar PV System
Once you know how you want to finance your solar system, the next step is to find the right partner. Often, a solar development firm can either sell you a system directly or help you to find a financier that would own the system for you. It is important to review the background, qualifications, and experience of prospective development partners and to get proposals from several knowledgeable, reputable firms. Proposal evaluation should include more than just a comparison of the developers’ costs; a number of factors should be taken into account, including the experience of the developer, materials selected, the warranties and guarantees, the validity of the system size and output, the installation schedule, the subcontractor selection, and the proposed O&M. It is especially important not to presume that the proposal has presented the savings to the data center accurately; you should perform your own financial analysis on the value of the system. This analysis should be based upon the actual savings that you would enjoy, calculated by considering the electricity rate that you currently pay to an electricity utility and/or a direct access supplier.
Before selecting a developer to work with, it may also be advisable to consider draft or sample contracts from the developers; some developers will have unreasonable or unduly unfavorable terms in their contracts. Once a vendor has been selected, it will be vitally important to negotiate the business terms of the contract appropriately. The wrong terms can leave you unprotected and open to liabilities; the right terms can provide significant protection and ensure that the solar installation is an asset to the data center. If you do have a direct access electricity supplier, it is extremely important to review that contract to ensure that it does not preclude you from engaging in on-site generation projects.
The amount of attention required during the construction phase depends upon the type of installation. For systems that are purchased, more careful attention should be paid to the installation and system commissioning; the owner may want to consider hiring a third party firm to review the commissioning and verify proper system installation and operations. Indeed, for many firms that have little to no experience with solar PV systems, it is helpful to engage with an energy consulting firm to help provide guidance throughout the assessment and procurement process. Such a firm can verify that your site is suitable for solar, advise on the financing methods, identify reputable development partners, thoroughly evaluate proposals, negotiate advantageous contracts, and oversee construction.
As data center electricity usage continues to rise and electricity prices increase, we expect more and more data centers to procure solar PV systems to both reduce their financial exposure and improve their image.
Mark Crowdis is the president of Reznick Think Energy, a Bethesda, Maryland-based renewable energy consulting firm that is a subsidiary of tax and accounting firm CohnReznick LLP. Elyse Rhodin is a senior analyst at Reznick Think Energy.
eDirectory uses FLAIM as its database. FLAIM (Flexible Adaptable Information Manager) is used for traditional, volatile, and complex information. It is a very scalable database engine that supports multiple readers and a single-writer concurrency model. Readers do not block writers and writers do not block readers.
Physically, FLAIM organizes data in blocks. Some of the blocks are typically held in memory. They represent the block cache. The entry cache (sometimes called a record cache) caches logical entries from the database. Entries are constructed from the items in the block cache. FLAIM maintains hash tables for both caches. The hash bucket size is periodically adjusted based on the number of items.
By default eDirectory uses a block size of 4 KB. The block cache size for caching the complete DIB is equal to the DIB size, and the size required for the entry cache is about two to four times the DIB size.
While retrieving an entry, FLAIM first checks for the entry in the entry cache. If the entry exists, reading from the block cache isn't necessary. While retrieving a block from the disk, FLAIM first checks for the block in the cache. If the block exists, a disk read operation isn't necessary.
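That lookup order can be sketched in a few lines. This is an illustration of the flow just described, not FLAIM's actual code; locate_block, read_block, and build_entry are hypothetical helpers passed in to stand for FLAIM internals:

```python
def get_entry(entry_id, entry_cache, block_cache,
              locate_block, read_block, build_entry):
    """Illustrative sketch of the entry-cache / block-cache / disk lookup order."""
    if entry_id in entry_cache:                  # 1. logical entry already cached?
        return entry_cache[entry_id]
    block_id = locate_block(entry_id)            # 2. which block holds the entry?
    if block_id not in block_cache:              # 3. physical block cached?
        block_cache[block_id] = read_block(block_id)  # 4. otherwise read from disk
    entry = build_entry(block_cache[block_id], entry_id)
    entry_cache[entry_id] = entry                # construct and cache the entry
    return entry
```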
When an entry is added or modified, the corresponding blocks for that entry are not directly committed to the disk, so the disk and memory might not be in sync. However, the updates made to the entry are logged to the roll-forward log (RFL). An RFL is used to recover transactions after a system failure.
Least Recently Used (LRU) is the replacement algorithm used for replacing items in the cache.
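As a reminder of how LRU behaves, here is a minimal, generic LRU cache in Python. It is not FLAIM's implementation, just the policy in miniature: every access moves an item to the "recent" end, and eviction removes the least recently used item:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used item is evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)       # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)       # newest item is most recently used
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used
```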
A checkpoint brings the on-disk version of the database to the same coherent state as the in-memory (cached) database. FLAIM can perform a checkpoint during periods of minimal update activity on the database. The checkpoint thread runs every second and writes the dirty blocks (the "dirty cache") to the disk; blocks that are modified in the cache but not yet written to the disk are called "dirty blocks". FLAIM acquires a lock on the database and performs the maximum amount of possible work until either the checkpoint completes or another thread is waiting to update the database. To prevent the on-disk database from getting too far out of sync, there are conditions under which a checkpoint is forced even if threads are waiting to update the database:
If the checkpoint thread cannot complete a checkpoint within a specified time interval (the default is 3 minutes), it is forced and the dirty cache is cleaned.
If the size of the dirty cache is larger than the maxdirtycache (if set), a checkpoint is forced to bring down the dirty cache size to mindirtycache (if set) or to zero.
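Taken together, the forcing rules amount to a small decision function. The sketch below restates them; the parameter names mirror the settings mentioned above, but the code itself is illustrative, not FLAIM source:

```python
def should_force_checkpoint(seconds_since_last, dirty_bytes,
                            max_dirty_cache=None, interval_seconds=180):
    """Illustrative restatement of the checkpoint-forcing rules (3-minute default)."""
    if seconds_since_last >= interval_seconds:
        return True        # checkpoint could not complete within the interval
    if max_dirty_cache is not None and dirty_bytes > max_dirty_cache:
        return True        # dirty cache grew past maxdirtycache
    return False
```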
An index is a set of keys arranged in a way that significantly speeds up the task of finding any particular key within the index. Index keys are constructed by extracting the contents of one or more fields (attributes) from the entries. Indexes are maintained in the block cache. Any changes to the indexed attributes requires changes in the index blocks.
eDirectory defines a default set of indexes for system attributes (fields). System attributes such as parentID and ancestorID are used for one-level and subtree searches. These indexes cannot be suspended or deleted; the directory uses them internally. Default indexes are also defined for attributes such as CN, Surname, Given Name, and so on. Indexes can be of presence, value, or substring type. These indexes can be suspended; on deletion, they are automatically re-created.
You can use iManager or the ndsindex Lightweight Directory Access Protocol (LDAP) utility to create indexes. Indexes are server-specific.
By enabling the Storage Manager (StrMan) tag in DSTrace (ndstrace), you can view the index chosen for the search queries.
The following example shows a DSTrace log for a subtree search using “cn=admin”, where the CN index is chosen.
3019918240 StrMan: Iter #b239c18 query ((Flags&1)==1) && ((CN$217A$.Flags&8=="admin") && (AncestorID==32821))
3019918240 StrMan: Iter #b239c18 index = CN$IX$220
The following example shows a DSTrace log for a subtree search using “Description=This is for testing”, where the AncestorID index is chosen.
2902035360 StrMan: Iter #83075b0 query ((Flags&1)==1) && ((Description$225A$.Flags&8=="This is for testing") && (AncestorID==32821))
2902035360 StrMan: Iter #83075b0 index = AncestorID_IX
FLAIM logs the operations of each update transaction in a roll-forward log (RFL) file. The RFL is used to recover transactions after a system failure or when restoring from a backup. The RFL file is truncated after each checkpoint completes unless keeping RFL files is turned on (rflkeepfiles), as is done for hot continuous backup.
We used to have simple web sites. The web server sent HTML to the browser which displayed it. This was a “brochureware” site; designed for marketing or advertising. There was no business data anywhere near the web site.
Now we no longer have web sites, we have web applications; and soon, web services. Web applications reside on multiple systems in distributed architectures, using sophisticated programming languages. Corporate and customer data has been moved to the computing edge. The edge has been extended to mobile phones, PDAs, mobile sales force systems, inventory management systems, etc.
Web applications invite public access to an organisation’s most sensitive data. Customer information, transaction information and even proprietary corporate data can be accessed through web applications.
Access to the application must be allowed by firewalls and access control lists, otherwise the application won’t work. This inherent trust is precisely what hackers attempt to exploit.
We secure our web sites by hardening and protecting the servers and restricting access from the outside. However, the web application has to be accessible to the public. The web application itself contains many vulnerabilities that may be exploited. Traditional perimeter security cannot help secure the application, since application vulnerabilities are exploited over HTTP.
Web applications breach the perimeter and provide direct access to customer and business data on backend databases.
Recent incidents include a hacker gaining access to more than five million Visa and Mastercard accounts in February 2003; the Recording Industry Association of America hacked seven times in six months; a vulnerability at Tower Records allowing anyone to view the customer orders database in December 2002; and Ziff Davis paying $500 to its customers after lax security exposed personal data of thousands of subscribers.
The problem is that, by and large, security professionals don’t understand web applications, whilst frequently application developers don’t know security. It’s an old story in new clothes. Even when web application audits are required, the lack of effective tools has made them impractical. Frequently security is entirely absent from the application development cycle. Security departments scrutinise the desktop, the network, even the web servers, but the web application escapes examination.
Web application vulnerabilities occur for many reasons. Of primary concern is the focus on functionality at the expense of security. The lack of security awareness and any form of audit during the development cycle is a serious handicap. Even in development teams that are security-aware, there has been a lack of effective testing tools, as well as severe resource limitations prohibiting code reviews.
The bottom line is that development creates functionality and QA tests that functionality. Security is missing entirely.
Web application vulnerabilities occur in several different areas of the application. The web server itself is subject to a variety of known (published) vulnerabilities, all of which must be patched. The administration of the server and its contents is important. A misconfigured server or poorly managed content can permit system file and source code exposure.
The application itself is of the utmost importance. It can inadvertently reveal source code and system files too, and even allow full system access. It can mistakenly permit replay attacks against customers, or customer impersonation exploits. In addition, the web application interacts with the database to manage and track customer information, and store business and transaction information. One mistake in the application can expose the entire system and database, right through a web browser, right over port 80.
Known (published) vulnerabilities in web servers are obviously a great source of risk, but perhaps the most easily defended against by patching. The difficulty comes from having to install patches on many servers. Streamlined patching procedures are essential, as are server inventories. If a patch is missed, a hacker will let you know!
Administrative issues are less easily corrected than published vulnerabilities. This requires a security awareness in those who manage the web site and its content on a daily basis. Clearly directory browsing should not be enabled anywhere, and the correct access control lists (ACLs) applied to every directory and file. This is more than just configuration; the implications of content are critical too. For instance, remnant files such as “readme.txt” or sample applications can reveal the applications and versions in use. Of course, commercial applications have known vulnerabilities too, just like web servers and operating systems. Backup files or improper application mapping can reveal source code, including the information necessary to connect to the database.
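A crude illustration of auditing for such slips is to probe a site for well-known remnant files. The list and code below are hypothetical and minimal; real scanners check thousands of paths, and this should only ever be run against systems you are authorized to test:

```python
import urllib.request

# Hypothetical sample of remnant/backup files that should never be public.
REMNANT_PATHS = ["readme.txt", "test.php", "backup.zip", "web.config.bak"]

def find_remnants(base_url):
    """Return the remnant paths that respond with HTTP 200."""
    exposed = []
    for path in REMNANT_PATHS:
        try:
            with urllib.request.urlopen(f"{base_url}/{path}", timeout=5) as resp:
                if resp.status == 200:
                    exposed.append(path)
        except Exception:
            pass  # 404s and connection errors mean the file is not exposed
    return exposed
```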
The management of web application vulnerabilities must occur in several different areas.
Security must be brought to the web development team. Create and enforce secure coding practices. Assess the code while its being developed, to identify insecure techniques before they are replicated. Ensure that QA test for security as well as functionality. Think security during change control procedures – don’t consider just the functional and performance impact of changes, consider the security impact as well.
The security department must learn application security. Create and promote internal awareness campaigns. Work with the development team to develop and publish best practices, and enforce those best practices. Create procedures to work with development to remediate vulnerabilities. Audit production systems frequently and with each change. Baseline and trend the results to gain a historical perspective of application security. Implement web application security assessments into Certification and Assessment programmes. Most importantly, assess applications in depth. Ensure the security is ‘baked-in’, not ‘brushed-on’.
Automated assessment tools can help introduce and maintain security throughout the application life cycle. Best of breed tools include: WebInspect from SPI Dynamics, AppScan from Sanctum and ScanDo from KaVaDo.
First Base Technologies are exhibiting at Infosecurity Europe 2004, Europe’s number one IT Security Exhibition. The event brings together professionals interested in IT Security from around the globe with suppliers of security hardware, software and consultancy services. Now in its 9th year, the show features Europe’s most comprehensive FREE education programme, and over 200 exhibitors at the Grand Hall at Olympia from 27th to the 29th April 2004. www.infosec.co.uk.
The evolution of the computer -- from 1613 to 2013
My first computer was the Sinclair ZX81 which, unsurprisingly, came out in 1981. It had 1kB of memory (but this could be expanded with the addition of a 16kB RAM pack) and a monochrome display. Compare that machine with today’s computers and tablets (and smartphones for that matter), and the advancement is clearly staggering.
The history of the computer is littered with milestones. In 1822 Charles Babbage began work on the Difference Engine, the first automatic computing engine. In 1936 Alan Turing submitted a paper describing a device that could be programmed using symbols on tape. In 1953 IBM released the first mass-produced commercial computer, and in 1976 Steve Jobs and Steve Wozniak created the Apple I.
Ebuyer has put together an infographic detailing the history of the computer from the origins of the word to the introduction of modern-day tablets. While most of the major firsts and important moments are included, some are missing (such as any reference to the home micro boom of the 1980s).
The infographic offers plenty of interesting nuggets of information, including that IBM’s first PC, the IBM 5150, cost $1,565 at launch, equivalent to approximately $4,010 in today's money.
It's a very interesting read that makes you appreciate just how far things have come, particularly in recent years.
What was your very first computer?
News Analysis: The popularity of virtualized storage highlights the need for better management tools.
Virtualizing spinning-disk storage systems is no longer simply a trend; it's now the norm and considered vital to backing up, archiving and protecting data for a growing number of businesses large and small.
The days of simply adding storage hardware to an IT system and pouring in e-mail, word documents, spreadsheets, photos and everything else willy-nilly are essentially gone.
However, as the use of virtualization in storage environments increases, so does the need for tools to manage these virtualized systems. An increasing number of vendors-from large organizations such as Hewlett-Packard and VMware to smaller companies such as Scalent Systems-are readying products for release later this year that are designed to ease the management crunch created by storage virtualization.
Virtualization software allows a single desktop PC or server to be carved up to behave as though it were many different, separate computing systems; each virtualized node behaves almost identically to an independent physical machine. As a result, capacity that would normally go unused can be put to work doing different storage duties at different levels of availability.
By the end of the decade, virtualization will be more the norm and less the exception. Virtualization transforms physical hardware-such as servers, hard drives and networks-into a flexible pool of computing resources that businesses can expand, reallocate and use at will.
Over the past few years, virtualization has started to move from test environments to production scenarios, according to industry observers. Analysts say they expect the pace of adoption to increase and say IT managers are searching for better tools, demanding more power and flexibility, and finding new ways to apply virtualization techniques.
Data storage use percentages in general are low, analysts say. It is common to find that companies write only 10 to 15 percent of their business data to a storage apparatus, leaving 85 to 90 percent of capacity in machines that constantly draw in power for availability and cooling. With the current emphasis on power conservation and eliminating so-called greenhouse gases from the atmosphere, having servers-or portions of servers-sit idle is not the most efficient use of expensive capital goods.
The problems of efficiency can be seen in the sheer numbers of servers that currently are being used-30 million installed in the United States alone, according to research company IDC.
In most cases, analysts report, companies are persuaded by sales representatives from storage companies to purchase much more capacity than they actually need.
Click here to read how IBM and Intel are working to ease virtualization in the data center.
"The server-any server-doesn't care what information is on it," said Patrick Eitenbichler, marketing manager for HP's StorageWorks business, in Cupertino, Calif. "It needs to use the same power draw whether it's empty or full. And there's really not much difference between regular app/Web/database servers and storage servers here. The main thing is that storage servers are always going to need more capacity, while regular servers use the load balancing in virtualization to utilize the finite capacity they have."
Successful virtualization deployment also means that the number of storage servers can almost always be trimmed way down-sometimes as much as tenfold-with better utilization of each machine, analysts say. This isn't particularly good news for storage hardware makers, but the increasing number of first-time buyers in the marketplace has more than offset the consolidation effect of virtualization, at least up to now.
Using virtualization software, a roomful of servers can be consolidated into a single physical box, provided that it is powerful enough.
"Pundits claim this trend is cyclical because its returning us to the old days of a single large, powerful computer-??Ã la the mainframe-running all of the tasks in an organization," said James Bottomley, chief technology officer at business continuity and disaster recovery provider SteelEye Technology, in Palo Alto, Calif.
"Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine virtualization in light of this mainframe comparison to see if there are any lessons to be learned," Bottomley said.
We have provided a reference for the most essential records you will come across and what DNS means. With this information you will have a better understanding of what each record does for your domain.
The Domain Name System, more commonly known as DNS, is a naming system that associates domain names, and the information tied to them, with each computer, service, or resource on a network.
A – A records are what tie host names to IP addresses. These records are critical to have since they are responsible for listing what IP address your domain is hosted on. If the A record is incorrect, you will not be able to access your domain, FTP, or email accounts. Essentially this is the internet equivalent of an entry in a telephone book that has your name, phone number, and address.
CNAME – Identifies one domain name as an alias of another. For example, to point www at yourdomain.com, you would add www in the name field and @ in the alias field.
NS (Name Server) – Identifies the specific host name used to look up a domain. Think of an apartment complex: the individual apartments are the domains, and the complex itself is the host name “hosting” them.
MX – MX records are used to tell the internet where to deliver mail for your domain. You can have multiple MX records that will be tried in order of priority, with 0 being the highest priority.
TXT- This record is used for various services that need to read the information it contains. SPF records, for example, are actually TXT records that contain SPF information.
SRV – Provides information about services that are available on specific ports on your server.
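The record types above all end up as lines in a DNS zone. A BIND-style zone file fragment makes the shapes concrete; the names and addresses below are placeholders for illustration only:

```
example.com.   IN  A      203.0.113.10        ; host name -> IPv4 address
www            IN  CNAME  example.com.        ; alias pointing www at the domain
example.com.   IN  NS     ns1.example.com.    ; name server for the domain
example.com.   IN  MX  0  mail.example.com.   ; mail server, priority 0 (highest)
example.com.   IN  TXT    "v=spf1 mx -all"    ; SPF policy carried in a TXT record
_sip._tcp      IN  SRV 10 60 5060 sip.example.com.  ; service record, port 5060
```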
Records that can be created using Edit DNS Zone:
To remove a Record:
Clear the space for the record name and set the record to “Select” in the drop down menu associated with that record. Then click on the blue Save button at the bottom of the screen to save any changes made.
There has been a lot of talk about how the DNS can provide network-based security, and how DNS is in the best position to detect malware traffic before it does any harm. But what does this mean for end users? How does it make their online lives easier and more secure?
DNS servers that are aware of sites that host malware, perform phishing activities (harvesting bank details, for instance) and other nefarious misbehaviors, can prevent end users from ever going to those sites. Remember, most of the time end users don't deliberately visit dubious sites; they do so accidentally. Either because they mistyped the name of a site or they clicked on a malicious link on a web page. In all these cases, an intelligent DNS server can simply redirect end users to pages that inform them that the site they tried to visit is potentially harmful.
Why is this better than using one of the traditional security software packages? Well, first off, end users didn't have to download and install anything. They don't have to worry about keeping the software and site lists up to date, and there's nothing slowing their PCs down. Even better, in most cases all the devices in a home use the same "Security Aware" DNS server, so they're all protected — even the games console in the teenager's bedroom. Traditional security software packages don't reach many of these things.
However, there are other ways malware can creep into the home — a laptop gets infected while on the road, someone is a bit incautious with a USB stick, and so on.
The purpose of malware is either to intercept data and observe the activities on PCs where it's installed, or to use a PC's resources to spread and provide a "botnet" for attacks on other parties on the Internet. In all cases, malware needs to communicate with a central point at some stage (called "command and control") to upload captured data, spread itself, or get instructions for the next attack. It uses the DNS to do this, so the DNS server will know where it intends to go before it actually goes there!
This means DNS servers can do several things to help: for known malware, they can block access to botnet command and control systems, thereby preventing the malware from doing any work. If the malware spreads itself by email (or if its job in life is to generate spam), the DNS server can detect the high rate of DNS "MX" (mail) queries, and in many cases recognize a pattern, and even prevent emails from being sent.
When a PC is discovered to be infected with malware, the DNS server can redirect all queries from the infected PC to a warning web page with links to disinfection software and other services.
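At its core, this kind of "security aware" resolution is just a policy check in front of normal DNS resolution. A minimal sketch, with hypothetical domain names and a documentation-range sinkhole address:

```python
# Illustrative blocklist-based DNS policy; names and addresses are placeholders.
BLOCKLIST = {"botnet-cc.example", "phish-bank.example"}
SINKHOLE_IP = "192.0.2.10"   # serves the warning / disinfection page

def resolve_with_policy(qname, upstream_resolve):
    """Redirect blocked names to the sinkhole; resolve everything else normally."""
    name = qname.rstrip(".").lower()
    if name in BLOCKLIST:
        return SINKHOLE_IP           # user lands on the warning page instead
    return upstream_resolve(qname)   # ordinary resolution for everything else
```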
DNS servers with fine-grained reporting capabilities can even be used to create web-based reports showing end users, for instance, the "bad" sites they've been protected from. These kinds of systems can be extended to allow individual users to add their own entries to the lists of "bad" sites, basically giving them their own personalized security service — the DNS server responds to their queries (and only theirs) according to their security lists and preferences.
All of this means ISPs can improve the user's experience, customer relations, potentially generate extra revenue and reduce churn. With a platform-based approach, it can be done incrementally aligned with other business initiatives.
By Keith Oborn, Sr. Infrastructure Architect at Nominum
In February 2013 the president of the United States issued Executive Order 13636, directing the National Institute of Standards and Technology (NIST) to take the best-known practices from industry and come up with a common Cybersecurity Framework for companies and government institutions. Understanding the basics of this framework can help IT organizations begin to develop their own cybersecurity plans. Working with people, process, and technology is required to successfully implement your new cybersecurity plan.
First, let’s look at the Cybersecurity Framework. The framework consists of five security functions: Identify, Protect, Detect, Respond, and Recover. Each of these functions is broken down into several categories and subcategories.
Here is a quick overview of the five Cybersecurity Framework functions.
- Identify the assets in your data center, how they are used in your business, the resources (human and physical) used in business context, and the risks to those assets. These can be documented in several different ways, such as an asset inventory, a business-environment description, governance plans, or risk mitigation plans.
- Protect the assets in your data center. Design, develop, and deploy processes and technology to ensure delivery of safeguards that deliver critical infrastructure services. The Protect function should limit or contain the impact of a security event. The results of the Protect function can include access control tools, security training, information protection plans, and other protective technologies.
- Detect cybersecurity events in your data center, holes in infrastructure security, and process/procedure inadequacies. The results of this function can include things like anomaly reports, security monitoring, detection processes, and audit processes.
- Respond to events from the Detect function. The goal of this function is to have an appropriate response to the threats detected during the Detect function. The results of this function can include response plans, communications, escalation plans, mitigation, and improvement plans.
- Recover from cybersecurity events detected during the Detect function. The goal of the Recover function is to bring your infrastructure back to a normal secure state. The results of this function can include recover plans, continuous improvement plans, and communication.
Implementing a Cybersecurity Framework
The first part of implementing a good security plan is to understand the key elements of security. The Cybersecurity Framework is a good start, but it does not cover everything that needs to be done. You also need to understand the assets at your disposal including people, process, and technology. I will leave the people and process part for another blogger. Let’s focus on technology. Specifically let’s talk about Software-Defined Infrastructure (SDI) and how it can help you implement a Cybersecurity Framework.
SDI Architecture overview
Here is a quick overview of the SDI Architecture.
- Orchestration and Control – orchestrates compute, storage, and network together in secured domains in response to user requests
- Telemetry – brings raw data from the infrastructure and applications to analytics for analysis
- Analytics – takes raw data and analyzes it so actions can be taken
- Policy Framework – analysis from the analytics is combined with the policy engine so the orchestration and control can request changes to the infrastructure
- Software-Defined Storage – control of storage resources through a software API
- Software-Defined Network – control of network resources through a software API
- Software-Defined Compute – control of compute resources through a software API
- Software-Defined Security – creation of security domains with resources and software tools
SDI and Cybersecurity Framework
Let’s map the Cybersecurity Framework to the different parts of the SDI architecture.
- Identify – Infrastructure gives you a list of all of the infrastructure resources in your private cloud
- Protect – The Policy Framework gives the ability to implement access control
- Detect – Telemetry and the Analytics components give the ability to detect anomalies and intrusions into the data center infrastructure
- Respond – Policy and Orchestration allows you to implement how to respond to specific cybersecurity events
- Recover – Policy and Infrastructure allows you to change policy to cover newly detected cybersecurity events
These are just a few examples of how these functions can be implemented using elements of SDI. The lesson here is to begin to understand the possibilities. Coming up with your own mappings will be key to your success in implementing a good Cybersecurity Framework for your business.
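One simple way to operationalize such a mapping is to keep it as data so that coverage can be audited as your plan evolves. The component names below are illustrative, not product features:

```python
# Hypothetical mapping of Cybersecurity Framework functions to SDI components.
CSF_TO_SDI = {
    "Identify": ["orchestration inventory"],
    "Protect":  ["policy framework (access control)"],
    "Detect":   ["telemetry", "analytics"],
    "Respond":  ["policy framework", "orchestration"],
    "Recover":  ["policy framework", "orchestration"],
}

def uncovered_functions(mapping):
    """Functions with no SDI component assigned -- gaps in the plan."""
    return [fn for fn, components in mapping.items() if not components]
```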
A fiber optic modem, also known as a single-port optic multiplexer, is a point-to-point terminal device that uses a pair of optical fibers to transmit E1, V.35, or 10Base-T traffic. A fiber modem performs modulation and demodulation. It is local-network relay transmission equipment, suitable as fiber terminal equipment for base-station transmission and for leased lines.
A fiber modem is similar to a baseband (digital) MODEM; the only difference is that it connects to a fiber line and works with optical signals. A multi-port optic transceiver is generally called a multiplexer. While the multi-port optical multiplexer is normally just called a “multiplexer,” the single-port version is generally used on the client side, much as baseband MODEMs are commonly used on WAN lines (circuits), and so it is also named “fiber modem” or “optical modem.”
Fiber Media Converter is a simple networking device making the connection between two dissimilar media types become possible. Media converter types range from small standalone devices and PC card converters to high port-density chassis systems that offer many advanced features for network management.
Fiber media converters can connect different local area network (LAN) media, modifying duplex and speed settings. Switching media converters can connect legacy 10BASE-T network segments to more recent 100BASE-TX or 100BASE-FX Fast Ethernet infrastructure. For example, existing half-duplex hubs can be connected to 100BASE-TX Fast Ethernet network segments over 100BASE-FX fiber.
When expanding the reach of the LAN to span multiple locations, media converters are useful in connecting multiple LANs to form one large campus area network that spans over a limited geographic area. As premises networks are primarily copper-based, media converters can extend the reach of the LAN over single-mode fiber up to 160 kilometers with 1550 nm optics.
Wavelength-division multiplexing (WDM) technology in the LAN is especially beneficial in situations where fiber is in limited supply or expensive to provision. As well as conventional dual strand fiber converters, with separate receive and transmit ports, there are also single strand fiber converters, which can extend full-duplex data transmission up to 120 kilometers over one optical fiber.
Other benefits of media conversion include providing a gradual migration path from copper to fiber. Fiber connections can reduce electromagnetic interference. Fiber media converters are also a low-cost option: organizations that want fiber connectivity but cannot afford fiber-capable switches can buy ordinary switches and pair them with media converters to connect to their fiber network.
The difference between the media converter and the optical modem is that the media converter converts the optical signal within the LAN; it is simply a signal conversion, with no interface protocol conversion. A fiber modem for the WAN, by contrast, performs both optical signal conversion and interface protocol conversion; protocol converters come in two types, E1 to V.35 and E1 to Ethernet.
In fact, as network technology develops, the concepts of media converter and fiber modem have become increasingly blurred, and the two can basically be treated as the same equipment. "Media converter" has become the formal name for the fiber modem.
A recent report by Pingdom looks at the booming growth of the Internet's DNS infrastructure. From the article: "Five years ago there were 123 DNS root server sites (the "backend" of DNS) spread out on the Internet. Today there are more than twice as many, over 300. Five years ago, 46 countries had root servers. Today, 76 have them. In other words, not only has the number of root servers grown tremendously, but their geographical spread has increased as well. This is good news for the overall stability and performance of DNS worldwide."
The report also notes that Europe has overtaken North America as the world region with the most root server sites.
In what ways does cloud computing empower individuals and businesses?
The impact of social communities like Facebook and Twitter and web-enabled devices and applications that leverage the power of the cloud is already apparent.
Watch as Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, discusses how cloud computing is democratizing computing power. Just as the PC revolution gave each of us access to a computer, cloud computing gives each of us access to a data center. How will you tap into the power of the utility computing grid?
- Innovation – Beyond the Infrastructure (3:30)
- Democratization of Computing Power (1:55)
- The Explosion of Apps (2:08)
- The Convergence of Media & Entertainment and Software (3:08)
- Big Data vs. Right Data (2:19)
The ubiquity of computing devices has never been greater. New technologies like the Internet of Things (IoT) and wearables are making technology a core component of our lives. But with access to apps, email, contacts, calendars, GPS, and personal information, isn’t smart watch technology the same as smart phones? Let’s dive deeper.
The tick of the clock, the beat of the heart
There is one core difference between smart phones and smart watches that’s important to understand. Smart phones only collect external data about our environment, whereas smart watches—like the Apple Watch—have the ability to collect data about our internal environment.
Wearables have built-in pulse oximeters to measure our pulse rates. Why is this so significant? By measuring pulse rate, we can tell things about health, mood, activity level, and stress level — all of which can provide information about thoughts and emotions. When aggregating data about pulse rate over a period of time, one could derive an informed and potentially predictable understanding of mental state.
What does this mean for technology professionals?
A smart watch comes along with all of the inventory, deployment, and security implications of a smart phone, with the addition of new privacy considerations. Technology professionals should start thinking about how to manage these devices in new ways. It won't be as simple as using existing mobile management solutions and workflows — you can't fit a round peg into a square hole.
As the technology community considers managing smart watches and other wearables, technologists must remain acutely aware of the impact on the users — more so now than ever before.
Professor Stephen Hawking today demonstrated his new open source communication system designed for him by Intel.
The Silicon Valley chip giant claims the system, dubbed ACAT (Assistive Context Aware Toolkit), can be adapted for the three million people worldwide who suffer from quadriplegia and motor neurone disease.
The system was jointly designed by Intel and Hawking over the last three years to replace the scientist's existing communication system, with Hawking providing ongoing feedback throughout the product's development.
"We are pushing the boundaries of what is possible with technology, without it I would not be able to speak to you today," said Hawking speaking at a press conference in London today.
"The development of this system has the the potential to greatly improve the lives of disabled people all over the world," he added.
"My old system was over 20 years old and I was finding it very difficult to communicate effectively. This new system is life-changing for me and I hope will serve me well for the next 20 years."
Hawking has been able to double his typing rate and improve common tasks by a factor of 10. Intel said the system makes it easier for Hawking to browse the web, make edits, open a new document and switch between applications.
Meanwhile, the integration of SwiftKey's typing technology means that the renowned Cambridge professor has to type 20 percent fewer characters overall.
"Professor Hawking uniquely used technology to master communicating with the world for decades, but his old system could be likened to trying to use today's modern apps and websites with a computer without a keyboard or mouse," said Wen-Hann Wang, Intel vice president and Intel Labs managing director. "Together we've delivered a holistically better communication experience that contributes to his continued independence and can help open the door to increased independence for others."
The system can also be controlled by touch, eye blinks, eyebrow movements or other user inputs for communication.
Intel said ACAT will be available to researchers from January next year.
This story, "Stephen Hawking Unveils Intel's Open Source Speech System" was originally published by Techworld.com. | <urn:uuid:ba9d9312-59aa-45c8-8f67-22a42dfa8d3a> | CC-MAIN-2017-04 | http://www.cio.com/article/2854684/software/stephen-hawking-unveils-intels-open-source-speech-system.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965079 | 429 | 2.53125 | 3 |
“If you’ve got nothing to hide, let’s actually see if you’ve got nothing to hide,” said artist Mark Farid; he plans to “hack” smartphones and harvest real-time data that is normally invisible to phones’ owners and then broadcast that personal data as “data shadows.” He hopes his interactive art installation makes people feel angry, annoyed and frustrated. He hopes “people have an issue with us. I hope some people will say ‘it’s not cool of you to have done that’.”
It’s not a question of ‘to tweet or not to tweet,’ but more a question of who owns the contents of our personal messages? What exactly did you agree to share with companies when you started using an app, cloud storage service or social media? Although personal messages sent over the Internet or via texts might seem private, you are the product and your data is controlled by businesses as a commodity. That data might be “invisible” to you, but not to hackers or to companies. “Everything you're doing in virtual space is under the illusion of privacy, but it's completely public. Just as it is in the real world,” Farid stated during an interview with The Memo.
“It's as if we are leaving not only our private diaries and wallets open, but also who our friends are, what we're thinking, and our biggest insecurities – the insecurities we wouldn't even speak to our loved ones about.” Farid told the Cambridge Network, “We're more honest with Google than we are with anyone else. We believe companies gathering personal data leave us anonymous, but in fact we may be sharing more than we realize.”
“I did a test and realized our phones send out around 350,000 messages a day to connect to WiFi, or websites sending data of your activity to multiple other websites, websites I've never heard of from around the world,” Farid said. “I only have basic programming skills but it's worrying how quickly I could gain access to people's phones.”
Techopedia defines a data shadow as “a slang term that refers to the sum of all small traces of information that an individual leaves behind through everyday activities. It is a minute piece of data created when an individual sends an email, updates a social media profile, swipes a credit card, uses an ATM and so on.” Farid’s “Data Shadow” is an art installation which is meant to explore the issue of data mining and raise “questions about the apparent lack of security of our mobile phone data.”
His system will harvest “a limited amount of the personal data” from volunteers’ phones, filter the data for privacy, and then “project it onto the hard surfaces around them, creating a ‘shadow’ of their movements as they walk down the street.”
Farid told The Memo:
We are trying to make it so it’s actually the most embarrassing pictures that we can find; in term of text messages it’s likely to be the most recent. For example, you’re told to make a phone call to your other half, or ideally your mum…It’s about bringing you face-to-face with your information, because ethically you have given us permission to see it but it isn’t clear at all what’s going to happen.
If Farid’s name sounds familiar, it might be due to “Seeing I,” a social-artistic experiment during which Farid planned to spend 28 days alone in a room wearing an Oculus Rift virtual reality headset in order to experience “every waking moment through the eyes of another human being.” This time around, his art will combine data privacy and mobile security issues as Farid makes “the public aware of the privacy they are potentially sacrificing through regularly using a mobile phone and the Internet.”
He will meld data security, privacy and art, claiming “identity and/or anonymity through digital mediators” are core themes of his art. “The Internet was supposed to be the truest form of democracy,” Farid said. “In reality, it’s become a capitalist utopia.” He asked:
Do we realize how easily our phones can be hacked into in a matter of minutes? That our income bracket, bank passwords, movements, and even home address can be found by anyone with basic coding skills, let alone by data mining companies? Are we controlling technology sufficiently to create the utopia it has the potential for, or is it tipping us into dystopia?
Farid was selected for a five-month artist-in-residency program at Collusion, "a creative agency working at the intersection of arts, technology and human interaction," and is an awardee of the 2015 Real Time Commission. Data Shadow, which will be held from October 26 to November 1 at the Cambridge Festival of Ideas, is described as "an individual experience (one participant at a time), taking place in an 8x2m shipping container in central Cambridge and lasting approximately four minutes. During their journey through the container, the participant will come face to face with their own, personal data shadow."
The Data Shadow website will go live on October 26, the same day as the panel discussion for "Data shadow: Anonymity is our only right, and that is why it must be destroyed;" Farid and "a panel of academics and technologists" will discuss "how our personal data is collected and used, and whether this level of data-mining is morally right and should continue."
There has never been a search engine that accurately reflects the Internet.
In the 1990s and 2000s, the limitation was technical. The so-called "deep web" and "dark Internet" -- which sound shady and mysterious, but simply refer to web sites inaccessible by conventional means -- have always existed.
Many parts of the Internet are hard to index, or are blocked from being indexed by their owners.
Companies like Google have worked hard to surface and bring light to the "deep, dark" recesses of the global web on a technical level.
But in the past few years, a disturbing trend has emerged where governments -- either through law or technical means or by the control of the companies that provide access -- have forced inaccuracy, omissions and misleading results on the world's major search engines.
Until recently, search engine censorship was not on the list of first-world problems. But in the last few years, governments in the United States, Europe and elsewhere in the industrialized world have discovered that, although they're prevented by free-speech laws from actually blocking or banning content where it lives, censoring search engine results is a kind of "loophole" they can get away with. In an increasingly digitized, search-engine discoverable world of content, censoring search results is a way to censor without technically violating free speech protections.
Starting in 2011, companies like Google started reporting a disturbing rise in government requests for search engine results to lie -- to essentially tell users that existing pages and content on the Internet do not exist when in fact they do. Requests for such removals by the U.S. government, for example, rose 718% from the first half of 2011 to the second half. And they've continued to rise since.
And such requests weren't just coming from the U.S., but from "Western democracies not typically associated with censorship," according to the Google policy analyst who reported the trend on behalf of the company and talked about Google's Transparency Report.
The reasons for these requests vary, and often sound reasonable -- national security, law and order, national pride, religious sensitivity, social order, suppression of hate speech, privacy, protection of children -- you name it. But when you add them up and allow them to grow in number over time, the cumulative effect is that increasingly, search results don't reflect the real Internet.
Many of these cases start out with the best intentions but result in serious problems. Let's start with a disturbing recent case in Canada.
A Supreme Court of British Columbia ruling on an intellectual property dispute between two small industrial equipment companies ordered Google to not only delete all search results referring to one of the companies, but all future such results as well -- not only in Canada, but worldwide. (Yet another unsavory dimension to the case was that the ruling applied only to Google. Bing and other search engines were not required to comply.)
The particulars of the case are irrelevant and the data involved unimportant. The precedent that a government in one country could censor information in other countries has bad implications if allowed to stand. Imagine if China were allowed to censor information about the Dalai Lama within the US, or if Pakistan were allowed to censor images offensive to Muslims in Denmark.
Even more recently, the European Court of Justice brought into existence Europe's "right to be forgotten" ruling. In a nutshell, Europe wanted to protect citizens from the fact that the Internet never forgets.
The particular case heard by the court involved a Spanish man who was in the press for serious debt problems, but who later climbed out of debt. Rather than ruling that the actual information about his money problems be removed or censored, the court invoked the search engine loophole for censorship and ordered Google, Bing and other search engines to remove his name as a search query that returned the outdated information about his finances.
Worse, the ruling required search engines to offer a process by which any European could request similar treatment, and ordered Google, Microsoft and other search engine companies to judge whether those requests were valid and to take action on the valid ones.
At last count, Google had received some 70,000 requests for changes to search results under the ruling in the past month. Microsoft only this week launched its process for censoring results. | <urn:uuid:aa271258-5f56-45ea-8bef-beedfd87b407> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2489869/vertical-it/why-we-need-an-underground-google.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954482 | 870 | 2.546875 | 3 |
If you search around online, you might find different technical documents describing deduplication and how it works, but most of these documents are fairly technical. I thought it might be helpful to describe deduplication in a way that would make sense to anyone who wants to understand what it does.
NetApp’s deduplication (also referred to as A-SIS) is a storage efficiency feature. Storage efficiency simply means that NetApp uses this offering to help you maximize the amount of available free space on your storage system, which in turn means that you spend less money on disk drives.
Without going into a huge amount of technical detail, I will give you an example. Let’s say you have a version-controlled document of 10 pages, and there are 10 versions of that document on your storage system. If each page is 1 MB in size, each document is 10 MB in total. Multiply 10 MB by 10 documents and that’s 100 MB of total space used to store multiple versions of the same document.
If only one page is different between each version of the document, and you only saved the changes for each version and not the entire document, then the first document would be 10 MB and each revision would be 1 MB, making your total storage needs 19 MB instead of 100 MB. The process of reducing the total storage space required for these documents from 100 MB to 19 MB is deduplication.
Deduplication looks at each version of the document, saves only the unique content from each revision, and uses metadata to point to the original content that these documents have in common. So when you retrieve a unique version of the file, the file system returns the shared data from the original file along with the unique content from the version of the file you requested.
All of this is really done at the block level, not the file level, and there is a lot of additional technical detail as to exactly what happens, but in layperson’s terms that is how deduplication can help you maximize your available storage space. Keep an eye out for our future blog posts as we explain each of NetApp’s advertised storage efficiencies.
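To make the block-and-pointer idea concrete, here is a minimal, hypothetical sketch of block-level deduplication in Python. The block size, the SHA-256 fingerprints and the in-memory "store" are illustrative assumptions for the sketch only, not a description of how NetApp actually implements A-SIS.

```python
import hashlib

BLOCK_SIZE = 4096   # illustrative block size, not NetApp's actual value
block_store = {}    # fingerprint -> block bytes (the single shared copy)

def dedup_write(data):
    """Split data into blocks, store each unique block once, return a recipe of fingerprints."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        block_store.setdefault(fp, block)   # only previously unseen blocks consume space
        recipe.append(fp)
    return recipe

def dedup_read(recipe):
    """Rebuild a file by following the recipe's pointers into the shared block store."""
    return b"".join(block_store[fp] for fp in recipe)

# Ten "versions" of the same document, each with a tiny difference at the end
versions = [b"A" * 40_000 + str(n).encode() for n in range(10)]
recipes = [dedup_write(v) for v in versions]

logical = sum(len(v) for v in versions)
physical = sum(len(b) for b in block_store.values())
print(f"logical bytes: {logical:,}  physical bytes after dedup: {physical:,}")
assert dedup_read(recipes[3]) == versions[3]   # any version can still be rebuilt exactly
```

The shared blocks are stored once, and every version's "recipe" simply points back to them, which is the same idea the real feature applies at the storage-system level.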
Check out the de-duplication calculator at http://www.dedupecalc.com/ to see the potential cost and space savings you can achieve by using deduplication in your environment. | <urn:uuid:423056db-e851-42bf-b87b-a4d09118a77f> | CC-MAIN-2017-04 | http://www.fastlaneus.com/blog/2010/04/14/a-non-technical-explanation-of-netapp-deduplication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00065-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9342 | 488 | 2.515625 | 3 |
When Division Fire Chief Larry Anderson was dispatched from Fremont, Calif., to Idaho as a member of the statewide incident command team last summer, he got a first-hand look at how GIS was used to manage resources and personnel during a major fire.
His experiences in Idaho made Anderson realize how underutilized GIS was for emergency preparedness in his own city. Up to that time, he had given little thought to Fremont's GIS unit in terms of developing emergency applications. "They do a variety of work, although not typically for emergency services," Anderson said. "We could have them producing maps ahead of time that would enable us to better respond to major disasters."
The first opportunity to use GIS in managing a potential crisis wasn't long in coming; in 2000, Fremont was hit with the first of several rolling blackouts, many of which came without warning. A city of nearly 100 square miles on the southeast side of San Francisco Bay, Fremont lies across major north-south highways running between San Jose and Oakland. Power outages at traffic signals along these arteries can result in accidents, pile-ups and blocked emergency vehicles. Also at risk are critical facilities, including kidney dialysis centers, water district pumping stations, schools, office buildings with elevators, hazardous materials companies and pipelines transporting explosive liquids.
Since Pacific Gas & Electric schedules blackouts sequentially by circuit block, knowing the next block to go down and the section of the city it covered could enable Fremont's Emergency Operations Center (EOC) to move resources and personnel into the area ahead of time to direct traffic, put up stop signs and barriers and set up portable generators to power traffic lights at critical intersections. Knowing the next area to go down could also assist the EOC in establishing alternate routes for emergency vehicles, providing backup power for critical facilities and freeing people trapped in elevators.
To more accurately predict where the next blackouts were going to be, the EOC needed maps of the city sectioned by each PG&E circuit block. According to Anderson and other members of the EOC, however, PG&E declined to release specific circuit block information. "They cited reasons of national security for withholding the information," Anderson said. "I [could] cite some local security reasons why we should have the information. So [we were] at odds."
Building a Plan
Rather than give up, the EOC developed a plan to use the city's PG&E bills to determine which circuit block each traffic signal was in. "Since we couldn't get the block information directly from PG&E, we went through a back door I don't think many people are aware of -- every traffic light has an intersection location and a PG&E bill indicating the circuit block that light is in," said Christine Frost, Fremont's GIS manager.
After collecting and manually tabulating the data on intersections and traffic signals from PG&E bills, the EOC requested the GIS group translate the data into maps. The GIS group created a point file of traffic-signal intersections and their respective PG&E blocks, then loaded the data into ArcView. Using the signal locations as the foundation, the group developed rough outlines for each of the blocks overlaid on top of a base map of the city.
The EOC also provided the GIS group with a list of critical facilities and infrastructure likely to be affected during blackouts -- schools, hospitals, nursing homes, parks and dialysis treatment centers. Hazardous materials locations and pipelines will be added later.
In addition to providing paper maps requested by the EOC, Frost saw that an interactive map application could be included in Fremont's already extensive GIS database and made available via the city's fiber-optic intranet. Using Autodesk's MapGuide, the GIS unit developed a Web-based application that could be viewed with a plug-in downloaded from the Internet.
"Within 24 hours we had the paper maps, and within 48, the Web application," Frost said. "Police, fire and maintenance can go to the MapGuide site, search and zoom in on a particular location and print the maps they need to have in the field. And they can do that on their own very quickly."
Maintenance Director Jack Rogers explained that when California's Independent System Operator announces a Stage 3 power alert, indicating the state is running extremely low on power reserves, the city gets ready for rotating blackouts. "When the alert comes in, we form the EOC group and develop response plans for the next two hours to 48 hours, depending on how long we think the blackouts are going to last," he said.
PG&E announces the order in which areas of the city will be hit with rolling blackouts. Using the maps, the EOC can immediately see if a major thoroughfare will be involved. If so, the city can direct additional police officers to the area. Maintenance workers can be sent to put up stop signs in major intersections or portable generators in areas with traffic signals that can be run by them.
"The big advantage of these maps is that they allow us to actually see which parts of the city were going to be affected," said Rogers.
Anderson said the maps enable the fire department to identify potential public safety issues and plan alternate response routes. "We can choose routes other than the main north/south thoroughfare, broadcast them to all of our stations, and have them respond accordingly. We can also click on the map, bring up critical facilities in the next block to go down and alert those people and ours so we can be better prepared," he said.
The maps also enable the fire department's hazardous materials team to locate and alert chemical companies. "If they're mixing certain chemicals and the power goes down, leaving them unable to stabilize the process, the results can be a real issue for us," Anderson said. "But we can locate those companies now by PG&E block, so we can alert them and our HAZMAT team ahead of time to be prepared to handle problems."
Anderson said the EOC and the police department have had enough time with the maps to assess the intersections, determine the ones they need to cover and the number of officers it will take. When the department gets a Stage 3 alert, depending on the time of day and the next block or blocks to be hit, the department can hold a shift over and have extra staffing to cover intersections without jeopardizing public safety by tying up all the officers directing traffic.
Blackouts, however, do not always come with warning. At times the EOC receives no warning from PG&E; other times they will get 10 minutes, sometimes an hour. "The last one was four minutes," Anderson said, "but we had the maps and map data layers and a game plan, so we already knew how to identify key locations we needed to cover. It was just a matter of getting our resources out there."
Anderson said that even with brief warnings, the maps have enabled the EOC to be better prepared when blackouts occur. "When we know the last block that went down, we can determine what sections of our jurisdiction are going to be affected next and pre-stage equipment in those areas. The more we know ahead of time, the more effectively we will be able to manage the incident, whether it is a rolling blackout, earthquake or flood."
Ed. Note -- PG&E has now released circuit block information to Fremont and other cities. | <urn:uuid:806ac4a6-6241-4373-86ae-cf3aea55ee4c> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Chasing-Californias-Rolling-Blackouts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00275-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959067 | 1,525 | 2.765625 | 3 |
MISSION, KS--(Marketwired - Apr 28, 2014) - (Family Features) Lazy summer days may sound refreshing to parents; however, they may be detrimental to their children's educational advancement. A study by Dr. Harris Cooper, a professor of psychology at the University of Missouri-Columbia, reveals that students can lose an average of one to three months of what they learned upon returning to school after summer break.
Parents can help their children avoid this "summer slide" by reinvigorating creativity, innovation and education during the summer. When you provide your kids with brain-stimulating experiences during the summer, you can help them to retain what they spent all year learning. This could help them begin the new school year with higher aptitude and give them a competitive educational edge. After all, knowledge is power.
When looking for activities for your kids during their break, think beyond the pool. There are many ways to get those brain juices flowing throughout the warmer weather months. Here are several engaging activities your kids will think are so fun they won't even know they're learning.
Use books for family bonding
A family book club is a great way to get in more bonding time while also encouraging a love of reading. The children's section of the local library or bookstore is a great place to find books that also tie in scientific lessons. Kids will love digging into tales about dinosaurs, exploring new galaxies in space and reading about the biology of deep-sea creatures. Discuss any characters, plot and theme ideas in an interactive fashion that allows every family member to take part in a stimulating literary discussion.
Celebrate the curious mind
Does your child have a curious mind? Encourage inquisitiveness by enrolling them in a specialized summer camp, such as those offered by Camp Invention, which is supported by the United States Patent and Trademark Office with curriculum developed by inductees of the National Inventors Hall of Fame. Led by local educators, this weeklong experience immerses elementary school children in engaging real-world challenges where they can turn wonder into discoveries. Each themed module uses connections between science, technology, engineering and math to inspire innovation.
Use your community's resources
Check your local museums, libraries and other community centers for classes, workshops and other great learning opportunities for your kids. Give them a journal to help them keep track of all the things that they are learning.
Talk to their teachers
Figure out what kind of lessons they will be covering in the upcoming school year and incorporate it into your summer schedule. For example, plan local field trips to historic monuments that they may be learning about in next year's history class.
Give them a journal
Every child loves having a special spot to keep a record of their wonderful summer trips, times with friends and even drawings. Encourage them to keep a journal where they can tap into their scientific side by jotting down different discoveries -- from tracking plant growth in the garden to drawing bugs in the backyard.
Questions to Consider When Finding a Camp
Many parents fondly look back on spending their own childhood summer days at camp. And because today's camps offer a much larger spectrum of specialty programs, while also featuring a more individualized experience for youngsters, Camp Invention, a premier summer enrichment day camp program, suggests asking these questions to help select the perfect summertime program:
- Does your child have special interests or talents that they would like to build on or develop?
- Is your child willing to try or learn new things?
- What goals do you have for your child while they attend summer camp?
- How much can you afford for a camp program?
Building Science Skills at Home
Because science is everywhere, it's easy to make every day a learning experience that inspires curiosity for your little one. Here are a few ways to incorporate this important subject into your family's daily summer routine:
Vacations are a great way to expand scientific knowledge through exploration. Point out the rock formations while visiting a national park, discuss animal tracks while taking a hike or check out the natural history museum in the town you are visiting.
Use current newsworthy topics to start a science-related discussion with your kids. From weather patterns to erupting volcanoes, the news is full of curious discoveries for their expanding minds.
Stock up on books, newspaper articles, puzzles, games, videos and other valuable learning tools that inspire science-related discoveries. Keep them in a centralized spot so your kids can access them at any time.
It's easy to break up the boredom of summer break with a few engaging activities that will get your kids off to a great start in the coming school year. For more information, visit www.campinvention.org or www.facebook.com/campinvention.
About Family Features Editorial Syndicate
This and other food and lifestyle content can be found at www.editors.familyfeatures.com. Family Features is a leading provider of free food and lifestyle content for use in print and online publications. Register with no obligation to access a variety of formatted and unformatted features, accompanying photos, and automatically updating Web content solutions. | <urn:uuid:7c3a5aa1-30e1-407b-a9f2-d292757a900e> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/learning-all-summer-long-1903721.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00001-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940607 | 1,048 | 2.875 | 3 |
One of IBM’s best-kept secrets is that information technology buyers can save more than a million dollars if they deploy workloads with heavy Input/Output (I/O) on a mainframe as opposed to deploying these same types of applications on a group of x86-based, multi-core blade servers. Benchmark data shows that large-scale architecture (such as a mainframe) doesn’t require as much headroom, or spare capacity, as smaller systems to execute heavy I/O workloads. So, a mainframe can run 240 virtual machines as compared to about 10 virtual machines per blade on a typical Intel 8 core system. (In this comparison, both systems run the same workload at the same service level.)
The price of running a heavy I/O workload on 240 virtual machines running on a mainframe (at 70 percent CPU utilization with a high-reliability, service-level profile) should be approximately $3.3 million. The cost of running the same environment (240 virtualized machines, 24 blades, on 192 CPUs) on a group of Nehalem EP-based, Intel Xeon blade servers is approximately $4.8 million (see Figure 1). The cost of the IBM hardware is significantly more than the cost of the Intel hardware, but when software licenses (usually charged per CPU core) are rolled in, the numbers change radically to favor Linux on the mainframe. Accordingly, choosing an IBM System z as a Linux/cloud consolidation server has the potential to save IT buyers more than a million dollars!
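As a back-of-the-envelope check on that claim, the small Python sketch below simply divides the scenario's published figures by the number of virtual machines. The dollar amounts and VM counts are taken from the comparison above and should be treated as illustrative assumptions, not as quotes for any real configuration.

```python
# Scenario figures from the comparison above (heavy-I/O workload, same service level)
vms = 240
mainframe_cost = 3_300_000   # System z environment at ~70% CPU utilization
x86_cost = 4_800_000         # 24 Nehalem EP blades, 192 cores in total

print(f"Cost per VM on the mainframe: ${mainframe_cost / vms:,.0f}")
print(f"Cost per VM on x86 blades:    ${x86_cost / vms:,.0f}")
print(f"Projected savings:            ${x86_cost - mainframe_cost:,.0f}")
```

The difference, roughly $1.5 million across the 240 virtual machines, is where the "more than a million dollars" figure comes from once per-core software licensing is rolled into the totals.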
How Is This Possible?
The whole design point of large-scale mainframe architecture is based on sharing resources. The mainframe is known as a “shared everything” architecture. Mainframes share memory, a large internal communications bus, central processing units, disk, and more. In a scale-up, shared environment, all of these resources can be made available to a common pool—all within a single chassis, or self-contained mainframe architecture.
Demand for resources in this pool is constantly fluctuating (usage peaks and valleys) just like in a distributed environment; however, it’s interesting to note that the peaks and valleys tend to balance out better in a mainframe, large-scale resource pool.
Smaller-scale servers such as Intel Xeon multi-cores don’t have as much headroom. Resources such as CPU, memory, and I/O are bound within each server, so sharing resources means hopping across a network. Keeping track of where resources are is difficult enough, but when you add network congestion and latency problems, it’s easy to see why headroom issues occur. So, smaller-scale servers must be over-provisioned. More headroom needs to be allocated to handle usage peaks and valleys and to deal with network issues in a given, small-server environment. This makes smaller servers (such as blades) less efficient.
Where’s the Proof?
A benchmark study conducted by IBM’s Software Group Project Office (accessible at ftp://public.dhe.ibm.com/common/ssi/ecm/en/zsw03125usen/ZSW03125USEN.PDF) reveals the advantages of the mainframe. With regard to the report’s credibility, consider that:
- IBM sells many x86 Xeon multi-core systems. It doesn’t help IBM to disparage x86 server platforms.
- This study was done in 2009, before Nehalem EP (Intel’s first real Xeon multi-core server architecture) was released. So, you could argue that the report compares a mainframe to older Xeon architecture. However, to remedy this, we’ve supplied an updated graph (see Figure 2) based on more current Xeon architecture.
- IT executives who use both architectures can and do verify the core principle of the report—that scale-up mainframe architecture manages headroom and capacity better than x86 servers.
It’s also important to understand the spirit of this benchmark. IBM engineers were looking for a way to explain why mainframes can host more virtual machines than smaller x86 multi-core environments. So they constructed a model that: | <urn:uuid:7cc0831e-82b6-4b86-b4b9-09f2c5cb0986> | CC-MAIN-2017-04 | http://enterprisesystemsmedia.com/article/mainframe-linux-how-to-save-a-million-dollars1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00395-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914451 | 874 | 2.671875 | 3 |
Network News Transfer Protocol (NNTP) is the set of rules used by both client and server software to manage the articles posted to Usenet newsgroups. NNTP was originally introduced as a replacement for the older UUCP-based method of distributing news, and today NNTP servers manage the collective, international network of Usenet newsgroups. Such servers are typically operated by your ISP (Internet service provider), while an NNTP client is built into browsers and Internet suites such as Netscape and Opera, or can be added as a separate newsreader program.
The NNTP protocol is well suited to exchanging Usenet news either between servers or between a news server and a newsreader client. It is a fairly simple protocol, broadly similar to POP3 and SMTP.
NNTP is currently documented in RFC 3977, which was published in October 2006. That RFC is the product of the IETF NNTP working group and supersedes RFC 977 (issued in 1986). RFC 3977 also established a registry of capability labels to be used for future extensions of the protocol; so far, RFC 3977 itself defines the only extensions. To register a new extension, it must be published either as a standards-track or as an experimental RFC, and extension names beginning with X are reserved for private use.
Below are the commands recognized, and the responses returned, by an NNTP server. The commands are: ARTICLE, BODY, HEAD, HELP, LAST, SLAVE, LIST, NEWNEWS, NEXT, POST, QUIT, STAT and GROUP.
NNTP reply codes are grouped as follows:
Code: 1yz Description: Informative message.
Code: 2yz Description: Command ok.
Code: 3yz Description: Command ok so far; send the rest of it.
Code: 4yz Description: Command was correct, but could not be performed for some reason.
Code: 5yz Description: Command unimplemented, or incorrect, or a serious program error occurred.
Code: x9z Description: Debugging output.
In addition, specific NNTP reply codes are defined for particular tasks; these include the following (several of them appear in the example session after this list):
Code: 100 Description: Help text follows.
Code: 199 Description: Debug output.
Code: 200 Description: Server ready – posting allowed.
Code: 201 Description: Server ready – no posting allowed.
Code: 202 Description: Slave status noted.
Code: 240 Description: Article posted ok.
Code: 400 Description: Service discontinued.
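To see the commands and reply codes above in a live exchange, the short Python sketch below opens a raw TCP connection to a news server on the standard NNTP port, reads the greeting, selects a group and quits. The host and group names are placeholders, and only commands with single-line responses are used to keep the sketch simple.

```python
import socket

HOST = "news.example.com"   # placeholder NNTP server name
PORT = 119                  # standard NNTP port

def recv_line(sock):
    """Read one CRLF-terminated response line from the server."""
    data = b""
    while not data.endswith(b"\r\n"):
        chunk = sock.recv(1)
        if not chunk:          # connection closed by the server
            break
        data += chunk
    return data.decode("ascii", errors="replace").rstrip()

def command(sock, line):
    """Send one NNTP command and return its (single-line) status response."""
    sock.sendall((line + "\r\n").encode("ascii"))
    return recv_line(sock)

with socket.create_connection((HOST, PORT), timeout=10) as s:
    print(recv_line(s))                    # greeting: 200 (posting allowed) or 201 (read-only)
    print(command(s, "GROUP misc.test"))   # 211 count first last group, or 411 if no such group
    print(command(s, "QUIT"))              # 205 closing connection
```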
In any case, NNTP specifies a protocol for the distribution, inquiry, retrieval, and posting of news articles using a reliable stream (such as TCP) in a server-client model. NNTP is designed so that news articles need only be stored on one (presumably central) host, and subscribers on other participating hosts attached to the local area network (LAN) can read news articles over stream connections to the news host. | <urn:uuid:77131f8e-2641-493c-9660-162d81253fd9> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2012/nntp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922522 | 697 | 2.515625 | 3 |
I was asked to analyze some production COBOL programs.
As per the inputs provided, the program is a batch program. The client has given the JCL name, but the given program is not executed anywhere in that JCL. From the comments I understood that the given program is being executed by some other program named OCCP in the JCL.
In the COBOL program, the LINKAGE SECTION has some 10 copybooks. In the first copybook, which is coded immediately after LINKAGE SECTION, the variable LK-WORK is declared as follows: 01 LK-WORK REDEFINES DFHCOMAREA. I believe this program is not being called by CICS because it uses DISPLAY statements.
The program is reading and writing some files using some other subprogram. It passes the DD card names and record variables to the subprogram to read or write a file.
Please clarify the following for me if you have any idea:
1. What is OCCP? Is there a utility with the name OCCP, or is it an internal program?
2. Can we use REDEFINES on the first variable declared in a section?
3. How can we use DFHCOMAREA in a batch program?
4. I did not understand where the dataset names are specified to read or write the files. | <urn:uuid:363637a8-a018-4555-9420-76c32df99ba9> | CC-MAIN-2017-04 | http://ibmmainframes.com/about14779.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00286-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942618 | 271 | 2.546875 | 3 |
The epigenetics market was valued at USD 3.12 billion in 2016 and is expected to show very high growth. It is forecast to become a USD 10.03 billion market by 2021, representing a CAGR as high as 26.85%.
Epigenetics is the study of changes in gene expression caused by certain base pairs in DNA, or RNA, being "turned off" or "turned on" again, through chemical reactions. In biology, and specifically genetics, epigenetics is mostly the study of heritable changes that are not caused by changes in the DNA sequence; to a lesser extent, epigenetics also describes the study of stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable. The term also refers to the changes themselves: functionally relevant changes to the genome that do not involve a change in the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence.
Global Epigenetics Market- Market Dynamics
The report details several driving and restraining factors of the global epigenetics market. Some of them are listed below.
The epigenetics market is segmented mainly on the basis of technique and geography. By mechanism/technology, the market falls into three categories: DNA Methylation, RNA Interference, and Histone Modifications. On the basis of geography, the market is divided into North America, Europe, APAC and the Rest of the World.
Some of the key players in the market are:
Key Deliverables in the Study | <urn:uuid:97507c5f-ef95-4c47-8023-a6249dee6031> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/global-epigenetics-market-growth-trends-and-forecasts-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00102-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953028 | 333 | 2.625 | 3 |
Huang C., Xinjiang Institute of Ecology and Geography
Huang C., Cele National Field Science Observation and Research Station for Desert Grassland Ecology
Zeng F., Xinjiang Institute of Ecology and Geography
Zeng F., Cele National Field Science Observation and Research Station for Desert Grassland Ecology
And 6 more authors.
Shengtai Xuebao/ Acta Ecologica Sinica | Year: 2011
Soil organic carbon storage and total nitrogen contents are not only important indicators of soil quality and sustainable crop production, but are also an option for offsetting increasing atmospheric CO2 and N2O concentrations. Cultivation often causes deterioration of physical soil conditions and reduces nutrient status and humus content, and therefore is considered the main cause of changes in soil organic carbon and nitrogen. Most studies show a decline in soil carbon after cultivation, averaging about 30%. However, some research has suggested that organic carbon contents significantly increase after soils with low natural organic matter levels are converted to cropland. Therefore, soil organic carbon storage and the dynamics of carbon change in cropland have become important issues in evaluating the impact of agricultural management. However, many researchers pay more attention to changes in soil carbon stocks in the plough layer than to changes in deep soil layers. Twenty cropland sites in the Cele oasis, which have been cultivated for up to 100 years, were selected to study the effects of cultivation on changes in the vertical distribution of soil organic carbon, total nitrogen, and available nitrogen by using the method of trading space with time. Based on differences in soil organic carbon and total nitrogen accumulation, five sites representing 100, 80, 30, 15 and 10 years of cultivation were chosen to investigate relationships between crop yield and soil organic carbon or total nitrogen. Soil organic carbon and total nitrogen density in the surface soil layers increased significantly with longtime cultivation. Soil organic carbon densities (0-20 cm) in croplands cultivated for 100, 80, 30, 15 and 10 years were, respectively, 231.7%, 302.9%, 146.3%, 116.6%, and 130.5% higher than those in an uncultivated desert soil. Corresponding values for total nitrogen density were, respectively, 160.1%, 217.6%, 123.6%, 106.5%, and 125.1%. The organic carbon density in deep soil layers (40-200 cm) was also influenced by longtime cultivation, being 36.4% lower after 30 years' cultivation than that in the desert soil. However, in the 100-year cropland it increased by 52.0%. Similar results were not found for total nitrogen density. The C/N ratio in the 0-40 cm soil layers of the sites cultivated for 100, 80, 30, 15, and 10 years increased by 28.3%, 23.0%, 15.7%, 10.4%, and 6.5%, respectively, compared with that in the desert soil. However, the C/N ratio decreased in deep soil layers of the sites cultivated for 0 to 80 years. Significant negative correlations between C/N ratios and soil available nitrogen in the different soil layers were present only in the desert and 10-year cropland soils. There were significant differences in maize yield in the different croplands. In addition, maize yield was significantly positively correlated with soil organic carbon and total nitrogen density in the 0-200 cm layers, but a corresponding correlation was not found for cotton yield. This suggests that increases in soil organic carbon and total nitrogen were very important for improving maize yield at the Cele oasis, but this was not the case for cotton yield.
Source | <urn:uuid:e0d6f6c1-b987-46ee-9522-0cc391a994bb> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/cele-national-field-science-observation-and-research-station-for-desert-grassland-ecology-992145/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00314-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941857 | 747 | 2.984375 | 3 |
More people perished from the influenza pandemic of 1918 than were killed during World War I. In 18 months, the deadly flu seized 50 million to 100 million lives.
Nearly 90 years later, researchers may know why this strain of flu was so lethal. As reported in the Jan. 18, 2007, issue of Nature, an international team of researchers discovered the virus triggered an autoimmune response in an infected person - a response that attacked the lungs rather than the viral infection, eventually filling the lungs with fluid and drowning the victims.
Yoshihiro Kawaoka, professor of pathobiological sciences at the University of Wisconsin-Madison and an expert on the influenza virus, teamed with Canadian, American and Japanese researchers to introduce a genetically engineered version of the 1918 influenza virus into seven monkeys. They also infected three other monkeys with a "control" human influenza. To guarantee infection, each monkey received several million units of flu - either the 1918 version or the control version.
The monkeys infected with the conventional flu showed few clinical signs of respiratory infection, all of which were mild. However, the seven, 1918 virus-infected animals became ill within 24 hours, and their condition worsened dramatically as hours passed. Ethical guidelines forced the researchers to euthanize them within eight days of the initial infection to analyze how the two flu strains affected their tissues and organs. Their lungs were bloated, bloody and filled with fluid - similar to the pathology reports of 1918 flu victims.
Some of the damage is similar to the Southeast Asia avian influenza in that both flu strains ravage the upper and lower respiratory tracts, unlike the conventional flu, which affects the upper respiratory tract.
Based on these similarities, the researchers hope to develop medicines should another lethal influenza pandemic occur. | <urn:uuid:bbf291fe-6dc1-4976-9e06-dfa6d3812dcb> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Up-Close-Influenza.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00130-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948765 | 354 | 3.71875 | 4 |
“As previous grid failures, including the multiday Northeast blackout of 2003, have shown, any event that causes prolonged power outages over a large area would not only be extremely costly, it would wreak havoc on millions of people’s daily lives and could profoundly disrupt the delivery of essential services, including communications, food, water, health care and emergency response,” explained a report from the Bipartisan Policy Center’s (BPC) Electric Grid Cybersecurity Initiative, which was launched as a collaboration of BPC’s Energy and Homeland Security Projects in May 2013. Its goal is to develop policies – aimed at government agencies as well as private companies – for protecting the North American electric grid from cyber-attacks.
“Moreover, cyber threats, unlike traditional threats to electric grid reliability such as extreme weather, are less predictable in their timing and more difficult to anticipate and address,” it added. “A cyber-attack could come from many sources and—given the size and complexity of the North American electric grid—could target many potential vulnerabilities. For this reason, experts agree that the risk of a successful attack is significant, and that the system and its operators must be prepared to contain and minimize the consequences.”
To put the scope of the issue into perspective, the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) reported responding to 198 cyber incidents in fiscal year 2012 across all critical infrastructure sectors. A full 41% of these incidents involved the energy sector, particularly electricity.
Current efforts to provide for electric grid cybersecurity are dispersed and involve numerous federal, state and local agencies, BPC noted. These include mandatory federal standards that apply to the bulk power system and nuclear power plants, and mechanisms to facilitate relevant information-sharing between the public and private sectors, and within the power sector itself.
“But given the complexity, fast-changing nature, and magnitude of potential cyber threats, it is also clear that more must be done to improve grid cybersecurity,” BPC said.
Urgent priorities include strengthening existing protections, for the distribution system as well as the bulk power system; enhancing coordination at all levels; and accelerating the development of robust protocols for response and recovery in the event of a successful attack.
One key policy challenge is that current “economic and institutional factors” are keeping power sector investments in cybersecurity – including investments in research and development – below where they should be.
“First, given the interconnected nature of the grid, the benefits of these investments are likely to extend beyond the footprint of an individual company,” BPC said. “Because the company making the investment is unlikely to be able to capture these spillover benefits, many companies may limit their investments to a level that is suboptimal from the perspective of the grid as a whole. Second, since the risks and consequences of a cyber-attack are difficult to estimate and quantify, individual companies may have a difficult time determining which investments to make beyond the minimum required for compliance with mandatory standards.”
While there’s no magic bullet given the nature of the evolving threat and barriers to sufficient investment, BPC is advocating a couple of new approaches. One is the establishment of an industry-wide organization, modeled on the Institute for Nuclear Power Operations (INPO), to advance cybersecurity practices across the industry.
“We expect that such an organization—coupled with appropriate incentives for participation such as insurance policies and liability protection—could do much to improve cybersecurity across the industry.”
Other approaches that it recommends rely on public-private partnerships that would mobilize the respective assets and expertise of industry and government agencies, and improve the flow of information between government and industry and across different companies. This echoes the federally developed Cybersecurity Framework recently released by the National Institute of Standards and Technology (NIST).
There is always work to do, and BPC laid out a roadmap for its efforts going forward. “In the coming months, BPC staff and Initiative co-chairs will reach out to policymakers and stakeholders to advance these and other recommendations,” said the group. “At the same time, BPC will work to address challenges that would remain even if all the recommendations in this report were adopted. For example, because privacy concerns continue to present a stumbling block for efforts to enhance information sharing between industry and government, additional ideas and compromises will be needed to break the current legislative logjam in this area.” | <urn:uuid:e13203f9-482c-4bdc-8a0a-d0aa27ec1fb2> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/national-electric-grid-remains-at/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00130-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951555 | 911 | 2.71875 | 3 |
Gogu C., Ecole Nationale Superieure des Mines de Saint-Etienne CMP
Gogu C., CNRS Clement Ader Institute
Haftka R., University of Florida
Riche R.L., Ecole Nationale Superieure des Mines de Saint-Etienne CMP
And 5 more authors.
AIAA Journal | Year: 2010
The basic formulation of the least-squares method, based on the L2 norm of the residuals, is still widely used today for identifying elastic constants of aerospace materials from experimental data. While this method often works well, methods that can benefit from statistical information, such as the Bayesian method, may sometimes be more accurate. We seek situations with significant difference between the material properties identified by the two methods. For a simple three-bar truss example we illustrate three situations in which the Bayesian approach systematically leads to more accurate results: different sensitivity of the measured response to the parameters to be identified, different uncertainty in the measurements, and correlation among response components. When all three effects add up, the Bayesian approach can be much more accurate. Furthermore, the Bayesian approach has the additional advantage of providing the uncertainty in the identified parameters. We also compare the two methods for a more realistic problem of identification of elastic constants from natural frequencies of a composite plate. Copyright © 2010 by the American Institute of Aeronautics and Astronautics, Inc. | <urn:uuid:27d8ab40-9ab1-4809-9b43-2c9c9a7e9094> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/center-science-des-materiaux-et-des-structures-260931/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879416 | 300 | 2.796875 | 3 |
Open Source Multimedia Framework – FFmpeg
introduction to the topic
Multimedia Framework is a software framework that provides functionalities to perform media processing, and an interface for applications to use this framework. 'Open source' implies that it is a free software. It gives developers a significant advantage in developing applications on the multimedia framework. The popular multimedia frameworks available in the open source community are:
- VideoLAN etc.
In this session, the speaker will cover a widely used multimedia framework called FFmpeg. Multimedia applications can use FFmpeg to leverage the following functionalities:
Transcoding: Transcoding is the conversion of a multimedia file from one format to another. It is divided into the following:
- Decoding: Decoding involves parsing and demuxing a multimedia file and then decoding it into raw audio and video streams.
- Encoding: Encoding involves encoding the raw audio and video streams into the desired formats and then multiplexing them into the desired container format (a transcode example follows this list).
- Streaming: Streaming is the transmission of multimedia streams over the network. FFmpeg can be used to provide transcoding on-the-fly.
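As a small illustration of transcoding, the following Python sketch shells out to the ffmpeg command-line tool to decode a source file and re-encode it as H.264 video and AAC audio in an MP4 container. The file names are placeholders, and it assumes an ffmpeg build with libx264 and AAC support is installed and on the PATH.

```python
import subprocess

INPUT = "input.avi"    # placeholder source file
OUTPUT = "output.mp4"  # placeholder destination file

cmd = [
    "ffmpeg",
    "-y",               # overwrite the output file if it already exists
    "-i", INPUT,        # demux and decode the source container
    "-c:v", "libx264",  # re-encode video with the H.264 encoder
    "-c:a", "aac",      # re-encode audio with the AAC encoder
    OUTPUT,             # mux the new streams into an MP4 container
]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    print("Transcode failed:\n", result.stderr)
else:
    print("Wrote", OUTPUT)
```

Applications that need tighter control (progress callbacks, custom demuxers, hardware codecs) typically link against the FFmpeg libraries directly instead of invoking the command-line tool.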
During this session the speaker will provide an understanding of:
- The FFmpeg components
- The various use cases of FFmpeg
- The build and installation
- The Integration of hardware codecs into FFmpeg
- The most common issues faced during integration and their remedies.
About the speaker
Technical Manager, ERS-OEM-CE-Mobility
Apoorv has 9 years of industry experience in the Embedded Systems Domain. He has extensive experience in Board Bring Up, Firmware Development (Boot Loader Customization and in porting the Linux OS), Device Drivers, Multimedia Framework and User Application Development in Embedded Systems. | <urn:uuid:554597e6-67c2-4c92-ab76-8459874af87e> | CC-MAIN-2017-04 | https://www.hcltech.com/webinars/engineering-services/open-source-multimedia-framework-ffmpeg | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.847072 | 373 | 2.671875 | 3 |
PandaLabs revealed that almost six percent (5.77 percent) of the two million computers they scanned showed an infection by the malicious Conficker worm. The worm, which originated in China, has now extended across 83 countries, and is particularly virulent in the United States, Spain, Taiwan, Brazil and Mexico. In the U.S. alone, PandaLabs has identified at least 18,000 infected computers, although the real figure could be much higher.
On Jan. 12, PandaLabs issued an orange alert, cautioning users to be wary of this worm that propagates itself through USB memory devices such as USB Drives or MP3 players. In investigating Conficker further, PandaLabs’ researchers have also discovered that some variants are launching brute force attacks to extract passwords from infected computers and from internal networks in companies. The frequency of weak passwords (common words, own names, etc.) has aided the distribution of this worm. By harvesting passwords, cyber-crooks can access computers and use them maliciously.
This worm also uses an innovative system of social engineering to spread via USB devices: in the Windows options menu that appears when inserting a USB device, it has disguised the option to run the program (activating the malware) as the option to open the folder to see the files, so when users simply want to see the contents of a memory stick, they will actually be running the worm and infecting their computers. | <urn:uuid:86323be5-e128-4e35-a839-62ba37d8879f> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2009/01/22/almost-6-percent-of-computers-infected-with-the-conficker-worm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943736 | 295 | 2.5625 | 3 |
Researchers at the Massachusetts Institute of Technology (MIT) have unveiled a new network design that they claim could make big data analytics operations cheaper and less power intensive, without compromising on performance.
The technique involves integrating flash memory into big data applications in such a way as to overcome the speed deficit that the technology has when compared to traditional RAM-based in-memory computing.
MIT observed that flash memory is typically about a tenth as expensive as RAM and consumes around a tenth as much power. But the trade-off of this is that it is only around a tenth as fast.
However, researchers at the university presented a new system at the International Symposium on Computer Architecture in June that should make servers using flash memory as efficient as those using conventional RAM for several common big-data applications, while preserving their power and cost savings.
The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving some of the computational power off the servers and onto the chips that control the flash drives. "By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient," MIT stated.
Arvind, the Johnson Professor of Computer Science and Engineering at MIT, said that while the process – called BlueDBM – is not a replacement for solutions such as dynamic RAM, it offers up many new opportunities for managing large volumes of information.
"There may be many applications that can take advantage of this new style of architecture which companies recognise," he said "Everybody's experimenting with different aspects of flash. We're just trying to establish another point in the design space."
Jihong Kim, a professor of computer science and engineering at Seoul National University, added that the architecture may be particularly appealing for big data applications that require very fast or real-time responses.
"The main advantage of BlueDBM might be that it can easily scale up to a lot bigger storage system with specialised accelerated supports," Mr Kim continued. | <urn:uuid:ec438dd7-140d-445e-b80b-55f732699bd5> | CC-MAIN-2017-04 | http://kognitio.com/mit-demonstrates-more-efficient-big-data-solution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00002-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96614 | 412 | 2.96875 | 3 |
Why make a distinction between IT security and the security of information? Just ask anyone whose top-notch IT security program has been tarnished by a data security breach.
Some of the most high-profile victims of data security exploits have maintained IT security programs among the most well reputed anywhere. The lesson hammered home by such incidents is simply this: Securing IT resources and access to the information they handle and communicate does not necessarily guarantee that information will be used in a secure, trustworthy manner.
The stakes have been raised considerably by sophisticated threats focused on the theft and exploitation of tangible assets, as well as by the finesse with which such threats are increasingly honed.
Regulators have also shown they consider the distinction of information security to be far from trivial. The U.S. Federal Trade Commission (FTC) has imposed penalties as high as $15 million in some cases of data security breach. The enforcement of an information security program has also factored into regulatory settlements, subject in some cases to audit every other year for 20 years.
No matter how well IT may be secured, a number of questions must be answered in order to protect and defend information, such as:
Simple questions which may be extraordinarily difficult to answer. For one thing, sensitive information may appear in any number of forms that do not lend themselves to ready identification. Some information formats have structure that simplifies their recognition, such as Social Security or credit card numbers.
Databases lend structure to information that can be leveraged to classify its sensitivity. Other formats, however, do not exhibit such structure, which substantially raises the challenge, because this by far represents the lion's share of sensitive information in most organizations.
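For the structured formats mentioned above, recognition can be as simple as pattern matching plus a checksum, as in the minimal Python sketch below. The patterns and the sample text are purely illustrative assumptions; production classifiers combine far broader rule sets with contextual analysis.

```python
import re

# Hypothetical patterns for two of the structured formats mentioned above
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number):
    """Checksum used by payment card numbers; weeds out most false matches."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify(text):
    hits = []
    if SSN_RE.search(text):
        hits.append("possible SSN")
    hits += ["possible card number" for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
    return hits

print(classify("Order 4111 1111 1111 1111 was shipped; applicant SSN 078-05-1120."))
```

The unstructured content described above (contracts, designs, source code) is exactly what such simple rules cannot catch, which is why the newer technologies described below exist.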
What, for example, constitutes intellectual property? How can sensitive information be recognized in any format, without engaging human judgment in each case? Once recognized, how can its security be effectively enforced?
These are questions to which solutions addressing the security of information itself have arisen to answer. New technologies such as information classification and structure management, content monitoring and filtering, information leak prevention, enterprise information rights management, application and database security; and new approaches to encryption are merging with domains such as message, Web and Internet security, content and information lifecycle management, and even networks, systems and applications themselves, as businesses have become increasingly sensitive to their information risks. | <urn:uuid:c7a6b400-da64-41d0-8263-a04bd942c1e9> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/3653776/IT-Security-Doesn146t-Mean-Information-Security.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00396-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943746 | 516 | 2.515625 | 3 |
Visualizing 3D and 4D environmental data is necessary for greater understanding and prediction of environmental events.
Researchers from the Center for Intelligent Spatial Computing and the University of Denver are trying to better grasp how they can harness both CPUs and GPUs together to speed a sample geovisualization process using dust storms as the subject.
By visualizing these storms, the team was able to develop a 3D/4D framework for geovisualization that includes everything from preprocessing, reprojection, interpolation and rendering.
While the CPU was an important component of their initial project, especially in terms of preprocessing the data that couldn’t be held in the GPU’s on-board memory, GPUs presented a higher performance and more efficient solution than CPUs.
They were then able to compare the performance differences between GPUs and CPUs. Their findings revealed that multicore CPUs and manycore GPUs can improve the efficiency of calculations and rendering using multithreading techniques. They also found that, given the same amount of data, when increasing the size of blocks on the GPUs for a coordinate transformation, the execution time of the interpolation and rendering is consistently reduced after hitting a peak.
The team also concluded that the best performance results obtained by GPU implementations in all the three major processes are usually faster than CPU-based implementations, although the best performance with the rendering component is similar between GPUs and CPUs.
On the memory front, they note that the on-board memory of the GPU limits the capabilities of processing large volume data, thus they needed to do preprocessing on the CPU. Still the efficiency of their project was hit by the relatively high latency of the data flow between GPU and CPU. | <urn:uuid:ecf88fd3-cb8a-48d9-80e4-37bbd79efdb8> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/06/06/dust_storms_put_gpu_cpu_performance_to_the_test/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941656 | 342 | 3.015625 | 3 |
How Secure are Password-Protected Files?
We recently discussed email security for accountants and mentioned that the use of password-protected files is not usually a very good solution for meeting data privacy needs. After writing this and getting some feedback, we thought that the issue of password-protected files really deserves some further discussion. Many people are under the assumption that if they use the “password protection” features of whatever software they are using, their data is safe and secure. However, this is not necessarily the case. Why?
Using password-protected files to secure data is fast and easy and built into many applications. Why not use it? Certainly, password protecting files is much better than not doing so. However, there are several things that determine how secure these “protected” files really are.
First, let’s assume that the file has fallen into the malicious hands of someone (a hacker) trying to steal the data from within it. If the file is not accessible to unauthorized people in the first place, encryption doesn’t even come into the picture. The hacker needs to figure out how to access the protected data. How can s/he do this?
Unlocking password-protected files?
How can someone access the content of a password-protected file? Well, that depends:
- If the file is not encrypted, but not openable in the normal program that is used to read it (i.e. like Microsoft Word), then the hacker just needs to remove the block on opening the file by editing the file.
- If the file is encrypted, but with a weak/poor form of security, the hacker may be able to use well known techniques to break into the security in a relatively short amount of time, no matter what password is used.
- If the file is encrypted with strong encryption, such as AES, the hacker needs to guess the password used.
Case 1 used to be prevalent many years ago when password-protection was first becoming popular. Various file formats could include codes that the reader programs would detect and cause them to ask for a password before letting the file be viewed. In these cases, the raw data was not actually encrypted, and the security relied upon the assumption that (a) the user can’t/won’t look at the raw file and see what the data actually is and (b) the user can’t/won’t be able to figure out how to edit the raw file to remove the “don’t open me” instructions. Of course, both of these assumptions are invalid. No mainstream program released in the last few years with password protection is so insecure as to use these kind of assumptions. So, unless you are using old legacy software, you don’t really have to worry about this extreme form of password-protected insecurity.
As a case in point, as recently as 2004, it was discovered that Microsoft Word’s (version 2000 and 2003 in backwards-compatibility mode) password-to-modify protection can be subverted easily to gain access to the full contents. Microsoft responded to this discovery by stating that
“(When) you use the Password to Modify feature, the feature is functioning as intended even when a user with malicious intent bypasses the feature … The behavior occurs because the feature was never designed to protect your document or file from a user with malicious intent.”
Admittedly, this is not exactly password protection from viewing, but password protection from editing. But the point is the same: even widely used software from companies like Microsoft sometimes does not have any kind of real inherent security in places where a naive user would assume it does.
Case 2 is prevalent even today. This involves using old encryption methods that have long ago proven to be easily broken. For example, Word and Excel 95, 97, and 2000 files with password protection can be opened by a hacker within 10 seconds because the encryption methods used contain known problems. For versions 2002 and 2003, the default encryption methods were made to be compatible with version 2000 and are thus susceptible to the same kind of easy access by any hacker. Versions 2002 and 2003 can use 128-bit RC4 for better (though not super) encryption; however, you need to manually enable this.
Many people still use versions of Microsoft Office older than 2007, and password-protected files generated by these versions are likely to be completely insecure. Many other commonly used programs also rely on old, vulnerable encryption methods and offer no real protection.
Case 3 is what you want if you need to use password-protected files. In this scenario, the file is actually encrypted using a highly secure encryption algorithm such as 128- or 256-bit AES. The only way to access the original data is to know or guess the password used. Microsoft Office 2007 uses 128-bit AES encryption for password protection and places those encrypted documents squarely in this case. Encrypted ZIP files (via WinZIP) use 128- or 256-bit AES encryption as well.
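What "strong encryption" means in practice is roughly the following pattern: derive a key from the password with a slow key-derivation function, then encrypt the data with AES. The sketch below uses Python's cryptography package and is only an illustration of the general approach; it is not how Office, Acrobat, or WinZIP implement their file formats, and the iteration count is an assumed value.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_with_password(password: str, plaintext: bytes) -> bytes:
    salt = os.urandom(16)                        # random salt, stored with the file
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = kdf.derive(password.encode())          # 256-bit AES key derived from the password
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext             # everything except the password travels with the file
```

Note that the only secret is the password; the salt and nonce are stored alongside the ciphertext, which is exactly why the password's strength ends up being everything.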
- Adobe Acrobat v9 (for making PDFs) uses 256-bit AES encryption, but its implementation is actually weaker against password guessing than that of previous versions of Acrobat. It is still viable as long as your password is chosen well.
- Adobe Acrobat v8 uses 128-bit AES encryption; it is implemented in a way that is stronger and takes longer to break than that in v9. This is currently the best version of Acrobat to use for encryption.
- WinZIP and PkZIP use 128-bit or 256-bit AES encryption. Both are good as long as you have a good password. Note, however, that the file names inside a password-protected ZIP file are visible to anyone without decrypting the file (see the listing sketch after this list)! If your file names are sensitive, put your password-protected ZIP file inside another password-protected ZIP file.
- Office 2007 products (Word, Excel, PowerPoint, OneNote) use 128-bit AES encryption. This is good as long as you have a good password.
- Office 2002 and 2003 products can use 128-bit RC4, but they are not configured to do so by default. This default is bad; don't use password encryption in these versions!
- Older versions of Office (as well as the default configurations of Office 2002 and 2003) use an older encryption scheme that is completely broken. Never use password encryption in these versions.
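The file-name leak mentioned in the WinZIP/PkZIP note above is easy to demonstrate: ZIP encryption protects the file data but not the central directory of entry names, so a listing needs no password at all. The archive name below is a placeholder.

```python
import zipfile

# No password is supplied, yet every entry name (and size) is visible.
with zipfile.ZipFile("confidential_reports.zip") as archive:
    for name in archive.namelist():
        print(name)
```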
Breaking Strong Encryption
Password-protected files using strong encryption can only be accessed by knowing or guessing the password. If you are careful and use a very good password (i.e., one that cannot be easily guessed), then this form of password protection is indeed very secure.
However, it is exceedingly common for people, especially those with no security training, to use very simple passwords on such files: words found in the dictionary, like "green", people's names, or simple variations on these themes. Such passwords can be "guessed" easily by simply trying all dictionary words, all names, and all common variations on them. For English, this means a few million possibilities (give or take; dictionaries vary). Computers are fast enough that checking a few million candidate passwords against an encrypted file can be done very quickly. So, any file protected with an easily guessed password can be reliably opened in short order. The problem is not the strength of the encryption; it is the strength of the key, i.e., the password.
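A rough back-of-the-envelope calculation shows why a dictionary password is hopeless. The guess rate below is an assumed, conservative figure for one ordinary machine; dedicated cracking hardware is far faster, while formats with expensive key derivation are slower.

```python
dictionary_size = 5_000_000      # words, names, and common variations (rough figure)
guesses_per_second = 100_000     # assumed rate for a single modern machine

seconds = dictionary_size / guesses_per_second
print(f"Entire dictionary exhausted in about {seconds:.0f} seconds")  # ~50 seconds
```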
In fact, the demand for opening password-protected Office and PDF files is so great that there are many commercial programs available that can do this for you for a few dollars. These are “password recovery” programs, but are equally useful to people trying to gain unauthorized access to such files. They will do all the guessing and testing and can open most files with poorly chosen passwords. For example, a quick Google search found:
- How to Open a Password-Protected PDF
- Office 2007 Password Recovery
- Office Password Pro
- PDF Password Recovery
- ZIP File Password Recovery
With all of these utilities readily available, it is within anyone’s reach to open common password-protected files.
Other Problems with Password-Protected Files
Unauthorized access to the content of a file is not the only potential problem. Anyone who can get access to the file content and its password can also alter the content and re-protect it with the same password in a way that is, for all intents and purposes, undetectable. So, you could have an encrypted file holding important information that has been broken into and changed, and you would never know it. Using regular password-protected files as "vaults" where the stored data is assumed safe and immutable is not a good decision.
So, What Can Be Done?
If you need to use encrypted files, you should:
- Make sure that the files are encrypted using strong encryption
- Use good passwords: ones with uppercase and lowercase characters, numbers, spaces, and symbols, and which would never appear in any dictionary of common words and names.
- If you are using password protection for sending files to multiple people, do not use the same password for everyone! Use a different password for each of your correspondents. This ensures that one person's loose lips do not compromise the security of someone else.
We have time and again seen or heard of organizations that use really poor passwords, like a dictionary word, and use that same password for all encrypted documents. This is often done to make things easy for the staff or users, but effectively renders the attempt at encryption laughable.
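Here is a hedged sketch of what "good passwords" and "one per correspondent" can look like in practice, using Python's secrets module; the length and recipient names are arbitrary choices for illustration.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def new_password(length: int = 20) -> str:
    """Generate a random password drawn from letters, digits, symbols, and spaces."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A distinct password for each correspondent (names are placeholders).
passwords = {recipient: new_password() for recipient in ("alice", "bob", "carol")}
```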
To protect the content of the file against unauthorized change, you will have to use a digital signature, like that available in PGP and S/MIME. The digital signature allows you to verify (a) when the content was signed, (b) who signed it, and (c) if it has been altered at all since then.
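The sketch below illustrates the detached-signature idea with a raw Ed25519 key pair rather than PGP or S/MIME themselves; real deployments would use those systems (or equivalent tooling) so that the signer's identity and the signing time are certified as well.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

document = b"Contract terms: ..."
signature = signing_key.sign(document)        # produced by the sender

# Any later change to the document makes verification fail.
try:
    verify_key.verify(signature, document)
    print("Content is intact and was signed by the key holder.")
except InvalidSignature:
    print("Content was altered after signing.")
```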
Mitigate Brute Force and Dictionary Attacks
The key to guessing the password of an encrypted file is the hacker's ability to try as many passwords as desired, as fast as possible. If this is not an option, then "guessing" the password becomes essentially impossible, even if the password in use is poor.
How Can This Be Accomplished?
If the encrypted file is stored on a server and is accessible only through a website where you have to enter the password, then:
- No one has access to the raw encrypted file, so none of the readily available password-cracking tools can be run against the file itself.
- The website can lock out access after a few password failures. For example, after 5 incorrect passwords, the hacker would not be permitted to try again for a few minutes from the same location. This makes automated testing of large numbers of candidate passwords impractical (a sketch of such a lockout follows this list).
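A minimal sketch of such a lockout policy on the server side follows; the thresholds are assumed values, and a production system would also persist this state and account for proxies and shared addresses.

```python
import time
from collections import defaultdict

MAX_FAILURES = 5         # assumed policy: 5 bad passwords ...
LOCKOUT_SECONDS = 300    # ... then a 5-minute lockout for that client

_failures: dict[str, list[float]] = defaultdict(list)

def attempt_allowed(client_addr: str) -> bool:
    """Return False while the client is locked out."""
    now = time.time()
    recent = [t for t in _failures[client_addr] if now - t < LOCKOUT_SECONDS]
    _failures[client_addr] = recent
    return len(recent) < MAX_FAILURES

def record_failure(client_addr: str) -> None:
    _failures[client_addr].append(time.time())
```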
As a case in point, LuxSci's SecureLine Escrow service allows LuxSci users to email files to anyone on the Internet who has an email address. It digitally signs and then encrypts the files using strong encryption and stores them on a secure server. It never emails the encrypted files themselves, which keeps them out of reach of direct offline attacks. It uses a long random password and makes access available only via a secure (SSL) website that automatically locks out access after several failed password guesses. This kind of communication is uniformly more secure than emailing password-protected files.
Of course, communications security assumes that the sender or recipient is using a computer that is not compromised. But that is the subject of a future article.
Material & Resource Use
WATER USE & MANAGEMENT
Although EMC has a relatively small water footprint throughout our operations, we take a conscientious approach to conserving this important global resource today and for future generations. We are guided by our focus on minimizing water consumption and managing wastewater in our owned and operated facilities to help protect local water quality. Our owned global manufacturing facilities produce no industrial wastewater.
EMC’s approach includes the use of various water efficiency and conservation features in our facilities worldwide, such as low-flow plumbing fixtures, rainwater capture systems, and free air cooling. We also consider water conservation and efficiency elements when designing and constructing new facilities.
At our headquarters in Hopkinton, Massachusetts, wastewater is reclaimed at an onsite treatment plant which filters wastewater through three treatment and disinfection processes, resulting in treated “gray” water. In 2012, we reused more than 13,196 cubic meters of gray water for cooling, sanitation, and irrigation. Unused gray water is returned to the ground through infiltration systems to replenish local watersheds.
At our Massachusetts campus facilities, which account for more than 30 percent of our corporate physical footprint, we have implemented a stringent Stormwater Management System to help protect and maintain the integrity of the surrounding resources. At these facilities, we have also implemented an Integrated Pest Management program to minimize and eliminate the use of chemical herbicides, insecticides, and pesticides where possible. Through diligent management efforts, we ensure a high quality of storm water runoff from our facilities. This minimizes the impact of our operations on natural resources, including groundwater and surface water, and helps ensure that these resources are protected in the future.
Since 2007, we have tracked water consumption data for all of our owned facilities and most of the larger facilities that we lease. We use the World Business Council for Sustainable Development's Global Water Tool to analyze our operations and calculate our water footprint in water-stressed areas.
Our total 2012 global water withdrawal was 796,610 cubic meters. Seventy-nine percent of the water withdrawal data were compiled from reliable water bills and water meter readings. The remaining annual corporate water consumption was estimated using a water intensity factor calculated by benchmarking consumption at metered EMC facilities.
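As an illustration of how such an intensity-factor estimate works, the sketch below walks through the arithmetic with invented floor-area figures; only the total withdrawal and the 79 percent metered share come from the figures above, and the result is illustrative rather than EMC's actual calculation.

```python
total_withdrawal_m3 = 796_610
metered_share = 0.79

metered_use_m3 = total_withdrawal_m3 * metered_share          # from bills and meters
metered_area_sqft = 4_000_000                                  # assumed floor area (illustrative)
unmetered_area_sqft = 1_100_000                                # assumed floor area (illustrative)

intensity = metered_use_m3 / metered_area_sqft                 # cubic meters per square foot
estimated_unmetered_use_m3 = intensity * unmetered_area_sqft   # applied to unmetered sites
print(round(estimated_unmetered_use_m3), "cubic meters estimated for unmetered facilities")
```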
We recognize that water, energy, and carbon emissions are interconnected. Water is required to generate and transmit the energy we consume, and energy is used to supply the water we use. Our suppliers also use water in their operations to produce the material components in our products. Thoughtful water conservation and efficiency practices help save energy and reduce the carbon emissions generated from these activities.
We also understand that there can be trade-offs between water and carbon emissions. Water and energy are needed to power and cool our own data centers as well as those of our customers, and our wastewater treatment plant consumes energy while reducing our water footprint.
We take a holistic view of energy and water use and the resulting carbon emissions, and thus focus on driving efficiencies in our products and operations. For example, applying free air cooling technology has allowed us to reduce the amount of energy and water consumed in our data centers and labs.
Looking forward, we have started to conduct a deeper analysis to further understand the links and trade-offs between water and carbon emissions. We plan to use the findings to develop strategies to help minimize our overall impact on the environment.