As humans have evolved throughout our existence, so has the way in which we work. What has allowed our species to survive for millions of years? It comes down to learning. From learning the best methods for building a fire to learning the proper way to gather food, learning from mistakes and successes meant life over death. Our ability to learn processes, adopt them over time and then teach them to others has been key to our successful existence. In contrast, business software, which operates in an environment that's changing much faster than the African savannahs, remains surprisingly static. Not only is software particularly hard to change (making 'change request' such a dreaded phrase), but those changes are bound to be manual. And now, Big Data not only pours into large organizations at unprecedented volume and velocity, but with an extreme variety of forms. At the same time, business processes, amidst an Internet of Things, execute at the speed of light. To survive like humans did, companies need to adapt. And for that, their software needs to learn.

The evolution of business

Not surprisingly, as people have changed, the way that businesses interact with customers and respond to their needs must change in parallel. There is no cookie cutter example of the perfect customer or the perfect customer interaction. Whether it's via Twitter, Facebook or the age-old method of picking up the phone, customers want businesses to anticipate their needs before they can anticipate them themselves. Similar to how we've adapted as humans, computers can be taught that same sort of behavior. In business, we see this when we predict that, based on a customer's profile, they'll want a certain type of product. When they choose something completely different, we're thrown off by their behavior. By adapting to changes in behavior, adaptive analytics can actually anticipate a customer's needs in a way that goes beyond the cookie cutter mold.

Businesses face a number of challenges, with losses among the primary concerns. Whether it's losing a customer or losing money, both are equally detrimental to a business. We've become so accustomed to the growing rate of change that we don't think there are alternatives. This is extremely hard on a business when we think about churn or attrition. The average cost of acquiring a customer, for instance, is upwards of $300 for an average telecommunications company. Multiplied by one million customers, the costs could be exorbitant. Retaining customers, at an individual cost commensurate with (future) value, is much cheaper. This requires predictive models that are bound to get quickly outdated with competitive offers, demographic changes, new regulations, or new available products. Is there any way for a modern business to keep up with the change?

The importance of adaptive models in business is most apparent when it comes to changes in real time. In the business world, real time means as fast as a customer demand can happen, which is really fast, as customers are not communicating by hand-delivered mail anymore, and often not even directly, but instead by a tweet or a post on social media. In the event of a sudden change in customer sentiment, businesses don't have the time to meet a customer's new needs. Unlike predictive models, which need to be refocused following a change of behavior in the customer base, adaptive models will react automatically.
Like a child that touches a hot surface for the first time and quickly pulls away, an adaptive model effectively learns the difference between a positive and a negative response. To do this, adaptive systems look at data in a fluid form. The basic attributes of a customer are collected, such as age or gender, along with many other attributes depending on the context. The customer response is then related to the customer attributes. An example of how adaptive analytics works in real time is when an 80-year-old woman calls her cable provider and the customer service representative recommends a particular package or channel for her to purchase. When the woman does not accept that particular offer and requests something completely different, the model will instantly readjust to avoid making the same error, not just for this customer but for customers with similar attributes. If enough elderly ladies reject offer A and ask for B, the business will automatically adapt to this change in demographics. In reality, of course, the models may look at hundreds or thousands of attributes, not just gender and age as in this example.

Learning to adapt

As a special breed of predictive analytics, adaptive analytics is a very influential technology for businesses. Today, getting actively recalibrating intelligence out of your data, rather than depending on pre-scripted responses, is what gives businesses the competitive edge they need to survive. While big data and automation are just the beginning, we need to continue thinking ahead and learn from the past and present to continuously evolve models that provide the maximum benefit possible for the adaptive enterprise. This is the only way that businesses will be able to keep up with the changing demands of their customers and meet their needs moving forward. Survival of the fittest has always been a measure of how organizations succeed over long periods of time. Adaptive analytics helps businesses today to stay fit and agile with an eye on adapting to future needs. Is your business fit enough to survive?
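To make the offer-accept/reject loop described above concrete, here is a minimal sketch of an adaptive response model that updates its weights immediately after every customer interaction. The attribute names, learning rate and logistic form are illustrative assumptions, not a description of any particular vendor's adaptive analytics product.

import math

# Minimal sketch of an "adaptive" response model: one logistic-regression-style
# weight per attribute, updated immediately after every customer response.
weights = {"bias": 0.0, "age_over_65": 0.0, "female": 0.0}  # illustrative attributes
LEARNING_RATE = 0.1

def predict_accept_probability(attrs):
    score = sum(weights[k] * attrs.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-score))

def learn_from_response(attrs, accepted):
    # Move the weights toward the observed outcome (1 = accepted, 0 = rejected),
    # so the very next prediction for similar customers already reflects it.
    error = (1.0 if accepted else 0.0) - predict_accept_probability(attrs)
    for k in weights:
        weights[k] += LEARNING_RATE * error * attrs.get(k, 0.0)

# The 80-year-old caller rejects the recommended offer; the model adapts at once.
caller = {"bias": 1.0, "age_over_65": 1.0, "female": 1.0}
before = predict_accept_probability(caller)
learn_from_response(caller, accepted=False)
after = predict_accept_probability(caller)
print(round(before, 3), round(after, 3))  # acceptance estimate for similar customers drops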
On Nov. 2, 1988, mainstream America learned for the first time that computers get viruses, too, as what would become known as the Morris worm - named for its author, Cornell University student Robert Tappan Morris - made front-page headlines after first making life miserable for IT professionals. TV news coverage from the time, plucked off YouTube, offers a telling look at how computer viruses were perceived (or not) at the time. Here's a clip from a rather melodramatic newscast by the Boston PBS affiliate, followed by a transcript for those who prefer to read:

Anchor Carmen Fields: "Life in the modern world has a new anxiety today. Just as we've become totally dependent on our computers they're being stalked by saboteurs, saboteurs who create computer viruses. The Defense Department, universities and research centers are still recovering from a computer virus that brought a nationwide network to a standstill. One of the institutions hardest hit was MIT. David Boeri reports."

Boeri: "It came from California, maybe traveled by electronic mail. It spread across America."

Boeri to MIT student Mark Eichin: "How insidious was this virus?"

Eichin: "Well, it spread very quickly."

MIT Prof. James D. Bruce: "There are reports in newspapers today that it has made its way to Europe and Australia."

Boeri: "It arrived at MIT in the middle of the night. The students were safe, but their computers weren't."

Jeffrey Schiller, network information systems, MIT: "Just ran. It would enter your machine, it would do its thing, it would go to other machines."

Boeri: "At MIT 200 computers were infected. Across the country the toll might be 6,000. It could have been worse."

Eichin: "We believe it was intended to spread more slowly than it did so that it wouldn't be noticed as quickly, which would have actually been more insidious if it spread out to a large number of machines and, say, held a surprise and did something. Once we had it stopped we were able to take it apart, sort of dissect it and tear it apart piece by piece."

Boeri: "It's not really a virus, it's a code, a set of instructions, an act of sabotage that started on a floppy disk. This virus spread by disk and by telephone. Like a virus it replicated like crazy. And as it replicates, the code, the so-called virus, eats up large amounts of memory. It wipes out stored data or cripples the hardware. This virus clogged a system linking thousands of computers but apparently did no damage."

Schiller: "It's benign, it's not malicious, it attempts to do no damage besides propagate itself, and that's why I think it's a warning."

Prof. Bruce: "I suspect it's a student, a good A student."

Boeri: "So lost computer time, but no files destroyed, just a thrill for the virus hunters, and a warning."

Schiller: "My personal speculation is that this is somebody who is trying to warn people, to say, 'It can happen to you.' "

In a sense it did serve as such a warning, though that was not what Morris intended (he says he wanted to measure the size of the Internet). It also resulted in Morris being the first person convicted under the 1986 Computer Fraud and Abuse Act, though he served no jail time. (Update: The Washington Post has an extensive Morris story today.)
Management Information Systems

The discipline of management information systems (MIS) is the study of the application of people, process, and technology to solve business problems. Many mistake this term for information systems (or IS) itself, which is specifically concerned with the processing of data and is generally associated with computer science. From this perspective, IS may be considered a component of MIS. You may be using a variety of skills and methods to plan and implement one of these management systems, including business process analysis, enterprise architecture, systems integration, database administration and application development.

Management Science

One definition of this discipline is the science of making decisions using mathematics or statistical analysis. The key point is to rely upon a systematic approach of using logic or reason, rather than a seat-of-the-pants method, to make business decisions. One can readily see how this may or may not work in practice. If you are a support representative helping diagnose an incident of a failed customer's system, you will probably rely upon your experience and intuition rather than a formula in a spreadsheet. Clearly, not all technical issues can be explained using a mathematical formula. On the other hand, having skills in management science topics such as optimization or forecasting may be important for the IT practitioner. There are many technical decisions that are best served by objective logic, especially when large sums of investment capital are at stake. When choosing the appropriate mainframe or SAN capacity, or determining how much bandwidth to contract for in a WAN circuit, it is probably best not to rely on a gut feeling alone (a small worked example appears below).

Marketing

Many people think of marketing as synonymous with advertising or brand management. In reality, it is a multi-disciplinary craft that involves many aspects of a business. For example, there is a marketing framework called the marketing mix that includes upwards of seven activity sets to help define products and services. Understanding these seven Ps of marketing can be invaluable in enabling your IT organization to become a strategic partner in your enterprise.
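As a tiny illustration of the WAN sizing decision mentioned above, here is a purely hypothetical forecast-based calculation; all of the numbers (current peak, growth rate, contract term, headroom) are made up for the example and are not drawn from the article.

# Sizing a WAN circuit from a traffic forecast instead of a gut feeling.
current_peak_mbps = 120.0      # measured busy-hour peak today (hypothetical)
annual_growth = 0.25           # forecast growth rate per year (hypothetical)
contract_years = 3             # term of the circuit contract
headroom = 0.30                # engineering margin above the forecast peak

forecast_peak = current_peak_mbps * (1 + annual_growth) ** contract_years
required_capacity = forecast_peak * (1 + headroom)

print(f"Forecast peak in year {contract_years}: {forecast_peak:.0f} Mbps")
print(f"Circuit capacity to contract for: {required_capacity:.0f} Mbps")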
Question 4) Notes Domino 7 Application
SubObjective: Creating Formulas with @Functions
Single Answer, Multiple Choice

Which @function will accomplish all of the following?
1. Execute one or more statements iteratively while a condition remains true
2. Execute an initialization statement
3. Check the condition before executing the statements, and execute an increment statement after executing the statements

Answer: @For provides a looping structure. @For executes one or more statements while the specified condition is true. The syntax is:

@For(initialization; condition; increment statement; execution statement1; execution statement2; ...)

"Initialization" is the statement that assigns a variable a beginning value. "Condition" is the statement that tests the value of the initialization variable and returns a value of true (1) or false (0). "Increment statement" is the code that increments the initialization variable after each iteration of the loop. "Execution statement" represents what executes. The condition is tested before each statement executes, so an @For may never execute its statements if the condition test returns false. After each statement executes, the increment statement is executed and the test is performed again.

For example, consider the following button code:

@For(tmp:=1; tmp<5; tmp:=tmp+1; @Prompt([OK]; "Temp value"; @Text(tmp)))

In this case, the Prompt message will display four times while the tmp variable is still less than five. Each @Prompt will display the value of the tmp variable.

@DoWhile executes one or more statements while a specified condition is true. It contains no initialization variable or increment statement in the function. @While is a looping structure that also allows one or more statements to be executed based on a certain condition, and it also contains no initialization or increment variable statements in the function. @ForAll is not a valid Domino formula.

References: Domino Designer 7 Help (search on: @For); RedBook - Domino Designer 6: A Developer's Handbook, Chapter 12, http://www.redbooks.ibm.com/abstracts/sg246854.html?Open

These questions are derived from the Self Test Software Practice Test for Lotus exam 710 - Notes Domino 7 Application Development Foundation Skills.
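For readers more familiar with conventional languages, the @For example above behaves like a standard pre-test loop. This short Python sketch mirrors the same initialization/condition/increment sequence described in the answer; print simply stands in for @Prompt and is not part of the Domino formula language.

tmp = 1                          # initialization statement
while tmp < 5:                   # condition is checked before each iteration
    print("Temp value:", tmp)    # execution statement (stands in for @Prompt)
    tmp = tmp + 1                # increment statement runs after the execution statements
# Output shows the values 1 through 4, i.e. the prompt appears four times.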
When it comes to assuring optimal performance of enterprise applications, programmers have an advantage in the capabilities of the DB2 optimizer to handle even "problem" SQL statements and give efficient access paths. Nevertheless, poorly coded SQL and application code can still give you performance problems that can be easily avoided by learning a few basic guidelines. I'll show you how the DB2 optimizer works, and give you guidelines for writing SQL that will get the most out of the optimizer. But even with the optimization abilities of DB2, writing efficient SQL statements can be a tricky proposition. This is especially true for programmers and developers new to a relational database environment. So, before we delve into the specifics of coding SQL for performance, let's take a few moments to review SQL basics.

Because of the higher level of abstraction it provides, SQL, unlike procedural languages, lets programmers focus on what data they need, not how to retrieve it. You code SQL without embedded data-navigational instructions. DB2 analyzes the SQL and formulates data-navigational instructions "behind the scenes." These data-navigational instructions are called access paths. Having the DBMS determine the optimal access path to the data lifts a heavy burden off the programmer's shoulders. In addition, the database can have a better understanding of the state of the data it stores, and thereby can produce a more efficient and dynamic access path to the data. The result is that SQL, used properly, can provide for quicker application development.

Another SQL feature is that it's not merely a query language. You also use it to define data structures; control access to the data; and insert, modify, and delete occurrences of the data. By providing a common language, SQL eases communication among DBAs, systems programmers, application programmers, systems analysts, and end users. When all the participants in a project speak the same language, they create a synergy that can reduce overall system-development time.

Arguably, though, the single most important feature of SQL that has solidified its success is its capability to retrieve data easily using English-like syntax. Understanding this is much easier than understanding pages and pages of program source code:

SELECT LASTNAME
FROM EMP
WHERE EMPNO = '000010';

Think about it: when accessing data from a file, the programmer would have to code instructions to open the file, start a loop, read a record, check whether the EMPNO field equals the proper value, check for end of file, go back to the beginning of the loop, and so on.

SQL is, by nature, quite flexible. It uses a free-form structure that lets the user develop SQL statements to suit his or her needs. The DBMS parses each SQL request before execution to check for proper syntax and to optimize the request. SQL statements do not need to start in any given column, and you can string them together on one line or break them apart on several lines. For example, this single-line SQL statement is equivalent to the three-line example I used previously:

SELECT LASTNAME FROM EMP WHERE EMPNO = '000010';

Another flexible feature of SQL is that you can formulate a single request in a number of different and functionally equivalent ways. One example: SQL can join tables or nest queries. You can always convert a nested query into an equivalent join. You can see other examples of this flexibility in the vast array of functions and predicates.
Examples of features with equivalent functionality include:
- BETWEEN versus <= / >=
- IN versus a series of predicates tied together with OR
- INNER JOIN versus tables strung together in the FROM clause separated by commas
- OUTER JOIN versus a simple SELECT, with a UNION, and a correlated subselect
- CASE expressions versus complex UNION ALL statements

This flexibility that SQL exhibits is not always desirable, because different but equivalent SQL formulations can deliver wildly different performance. I'll discuss the ramifications of this flexibility later in this article and provide guidelines for developing efficient SQL.

As I mentioned, SQL specifies what data to retrieve or manipulate, but does not specify how the database accomplishes these tasks. This keeps SQL intrinsically simple. If you can remember the set-at-a-time orientation of a relational database, you will begin to grasp SQL's essence and nature. A single SQL statement can act upon multiple rows. The capability to act on a set of data, coupled with the lack of need for establishing how to retrieve and manipulate data, defines SQL as a non-procedural language.

Because SQL is a non-procedural language, a single statement can take the place of a series of procedures. Again, this is possible because SQL uses set-level processing and DB2 optimizes the query to determine the data-navigation logic. Sometimes one or two SQL statements can accomplish tasks that otherwise would require entire procedural programs to do.

The optimizer is the heart and soul of DB2. It analyzes SQL statements and determines the most efficient access path available for satisfying each statement (see Figure 1). DB2 accomplishes this by parsing the SQL statement to determine which tables and columns must be accessed. The DB2 optimizer then queries system information and statistics stored in the DB2 system catalog to determine the best method of accomplishing the tasks necessary to satisfy the SQL request.

Figure 1. DB2 optimization in action.

The optimizer is equivalent in function to an expert system. An expert system is a set of standard rules that, when combined with situational data, returns an "expert" opinion. For example, a medical expert system takes the set of rules determining which medication is useful for which illness, combines it with data describing the symptoms of ailments, and applies that knowledge base to a list of input symptoms. The DB2 optimizer renders expert opinions on data retrieval methods based on the situational data housed in DB2's system catalog and a query input in SQL format.

The notion of optimizing data access in the DBMS is one of the most powerful capabilities of DB2. Remember, you access DB2 data by telling DB2 what to retrieve, not how to retrieve it. Regardless of how the data is physically stored and manipulated, DB2 and SQL can still access that data. This separation of access criteria from physical storage characteristics is called physical data independence. DB2's optimizer is the component that accomplishes this physical data independence. If you remove the indexes, DB2 can still access the data (although less efficiently). If you add a column to the table being accessed, DB2 can still manipulate the data without changing the program code. This is all possible because the physical access paths to DB2 data are not coded by programmers in application programs, but are generated by DB2. Compare this with non-DBMS systems in which the programmer must know the physical structure of the data.
If there is an index, the programmer must write appropriate code to use the index. If someone removes the index, the program will not work unless the programmer makes changes. Not so with DB2 and SQL. All this flexibility is attributable to DB2's capability to optimize data manipulation requests automatically.

The optimizer performs complex calculations based on a host of information. To visualize how the optimizer works, picture the optimizer as performing a four-step process:
- Receive and verify the syntax of the SQL statement.
- Analyze the environment and optimize the method of satisfying the SQL statement.
- Create machine-readable instructions to execute the optimized SQL.
- Execute the instructions or store them for future execution.

The second step of this process is the most intriguing. How does the optimizer decide how to execute the vast array of SQL statements that you can send its way? The optimizer has many types of strategies for optimizing SQL. How does it choose which of these strategies to use in the optimized access paths? IBM does not publish the actual, in-depth details of how the optimizer determines the best access path, but the optimizer is a cost-based optimizer. This means the optimizer will always attempt to formulate an access path for each query that reduces overall cost. To accomplish this, the DB2 optimizer applies query cost formulas that evaluate and weigh four factors for each potential access path: the CPU cost, the I/O cost, statistical information in the DB2 system catalog, and the actual SQL statement.

Guidelines for performance

So, keeping the information about the DB2 optimizer in mind, you can implement these guidelines to facilitate better SQL performance:

1) Keep DB2 statistics up-to-date: Without the statistics stored in the DB2 system catalog, the optimizer will have a difficult time optimizing anything. These statistics provide the optimizer with information pertinent to the state of the tables that the SQL statement being optimized will access. The types of statistical information stored in the system catalog include:
- Information about tables, including the total number of rows, information about compression, and total number of pages;
- Information about columns, including the number of discrete values for the column and the distribution range of values stored in the column;
- Information about table spaces, including the number of active pages;
- Current status of the index, including whether an index exists, the organization of the index (number of leaf pages and number of levels), the number of discrete values for the index key, and whether the index is clustered;
- Information about the table space and index node groups or partitions.

Statistics populate the DB2 system catalog when you execute the RUNSTATS or RUN STATISTICS utility. You can invoke this utility from the Control Center, in batch jobs, or using the command-line processor. Be sure to work with your DBA to ensure you accumulate statistics at the appropriate time, especially in a production environment.

2) Build appropriate indexes: Perhaps the most important thing you can do to assure optimal DB2 application performance is create correct indexes for your tables based on the queries your applications use. Of course, this is easier said than done. But we can start with some basics. For example, consider this SQL statement:

SELECT LASTNAME, SALARY
FROM EMP
WHERE EMPNO = '000010'
AND DEPTNO = 'D01';

What index or indexes would make sense for this simple query?
""'First, think about all the possible indexes that you could create. Your first short list probably looks something like this: - Index1 on EMPNO - Index2 on DEPTNO - Index3 on EMPNO and DEPTNO This is a good start, and Index3 is probably the best of the lot. It lets DB2 use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table, you might want to examine the impact of creating yet another index on the table. Factors to consider include: - Modification impact: DB2 will automatically maintain every index you create. This means every INSERT and every DELETE to this table will insert and delete not just from the table, but also from its indexes. And if you UPDATE the value of a column that is in an index, you also update the index. So, indexes speed the process of retrieval but slow down modification. - Columns in the existing indexes: If an index already exists on EMPNO or DEPTNO, it might not be wise to create another index on the combination. However, it might make sense to change the other index to add the missing column. But not always, because the order of the columns in the index can make a big difference depending on the query. For example, consider this query: SELECT LASTNAME, SALARY FROM EMP WHERE EMPNO = '000010' AND DEPTNO > 'D01'; In this case, EMPNO should be listed first in the index. And DEPTNO should be listed second, allowing DB2 to do a direct index lookup on the first column (EMPNO) and then a scan on the second (DEPTNO) for the greater-than. Furthermore, if indexes already exist for both columns (one for EMPNO and one for DEPTNO), DB2 can use them both to satisfy this query so creating another index might not be necessary. - Importance of this particular query: The more important the query, the more you might want to tune by index creation. If you are coding a query that the CIO will run every day, you want to make sure it delivers optimal performance. So building indexes for that particular query is important. On the other hand, a query for a clerk might not necessarily be weighted as high, so that query might have to make do with the indexes that already exist. Of course, the decision depends on the application's importance to the business-not just on the user's importance. Index design involves much more than I have covered so far. For example, you might consider index overloading to achieve index-only access. If all the data that a SQL query asks for is contained in the index, DB2 might be able to satisfy the request using only the index. Consider our previous SQL statement. We asked for LASTNAME and SALARY, given information about EMPNO and DEPTNO. And we also started by creating an index on the EMPNO and DEPTNO columns. If we include LASTNAME and SALARY in the index as well, we never need to access the EMP table because all the data we need exists in the index. This technique can significantly improve performance because it cuts down on the number of I/O requests. Keep in mind that making every query an index-only access is not prudent or even possible. You should save this technique for particularly troublesome or important SQL statements. SQL coding guidelines When you write SQL statements that access DB2 data, be sure to follow these three guidelines for coding SQL for performance.Of course, SQL performance is a complex topic, and understanding every nuance of how SQL performs can take a lifetime. 
That said, these simple rules put you on the right track for developing high-performing DB2 applications.

- The first rule is to always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement. Another way of stating this is, "do not use SELECT *". The shorthand SELECT * means you want to retrieve all columns from the table(s) being accessed. This is fine for quick and dirty queries, but is bad practice for application programs:
- DB2 tables might need to change in the future to include additional columns. SELECT * will retrieve those new columns, too, and your program might not be capable of handling the additional data without requiring time-consuming changes.
- DB2 will consume additional resources for every column that is requested to be returned. If the program does not need the data, it should not ask for it. Even if the program needs every column, it is better to ask for each column explicitly by name in the SQL statement for clarity and to avoid the previous pitfall.

- Do not ask for what you already know. This might sound obvious, but most programmers violate this rule at one time or another. For a typical example, consider what's wrong with this SQL statement:

SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE EMPNO = '000010';

Give up? The problem is that EMPNO is included in the SELECT-list. You already know that EMPNO will be equal to the value '000010' because that is what the WHERE clause tells DB2 to do. But with EMPNO listed in the SELECT-list, DB2 will dutifully retrieve that column, too. This incurs additional overhead, thereby degrading performance.

- Use the WHERE clause to filter data in the SQL instead of bringing it all into your program to filter. This, too, is a common rookie mistake. It is much better for DB2 to filter the data before returning it to your program. This is because DB2 uses additional I/O and CPU resources to obtain each row of data. The fewer rows passed to your program, the more efficient your SQL will be:

SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY > 50000.00;

This SQL is better than simply reading all the data without the WHERE clause and then checking each row to see if the SALARY is greater than 50000.00 in your program.

- Use parameterized queries. A parameterized SQL statement contains variables, also known as parameters (or parameter markers). A typical parameterized query uses these parameters instead of literal values, so that WHERE clause conditions can be changed at run time. Usually the program is designed such that the end user can provide the values for the parameters before running the query. This allows one query to be used to return different results based on the different values provided to the parameters. The key performance benefit of parameterized queries is that the optimizer can formulate an access path that can be reused over repeated executions of the statement. This can accrue a large performance gain for the program as compared to issuing a completely new SQL statement every time a new value is required in a WHERE clause. (A short sketch of this pattern appears after this list.)

These rules, though, are not the be-all, end-all of SQL performance tuning, not by a long shot. You might need additional, in-depth tuning. But following the preceding rules will ensure you are not making "rookie" mistakes that can kill application performance.
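Here is a minimal sketch of the parameterized-query pattern from a calling program. It uses Python's standard sqlite3 driver purely as a stand-in for a DB2 connection, because it is runnable as-is; DB-API drivers for DB2 (for example ibm_db_dbi) follow the same pattern with '?' parameter markers, though the exact paramstyle depends on your driver. The table and rows are invented for the example.

import sqlite3

# One in-memory table standing in for the EMP table used throughout the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO TEXT, LASTNAME TEXT, SALARY REAL)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?)",
                 [("000010", "HAAS", 52750.0), ("000020", "THOMPSON", 41250.0)])

# One statement text, many executions: only the parameter value changes at run
# time, so the database can reuse the prepared statement instead of re-optimizing.
query = "SELECT EMPNO, LASTNAME, SALARY FROM EMP WHERE SALARY > ?"
for threshold in (40000.0, 50000.0):
    for row in conn.execute(query, (threshold,)):
        print(threshold, row)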
Specific database application development tips

Regardless of the programming language and environment you use to build DB2 applications, there are certain techniques and guidelines you can deploy to help assure good performance when accessing DB2 data.

Be sure to issue the COMMIT statement frequently in your application. A COMMIT statement controls the unit of work. Issuing a COMMIT records all the work since the last COMMIT statement "for keeps" in the database. Before issuing the COMMIT, you could roll back the work using the ROLLBACK statement. When you modify data (using INSERT, UPDATE, and DELETE statements) but don't issue a COMMIT, DB2 will hold and maintain locks on the data, which can cause other applications to time out waiting to retrieve locked data. By issuing COMMIT statements when work is complete, and ensuring that the data is accurate, you free up that data for other applications to use.

Also, build your applications with usage in mind. For example, proceed cautiously when a particular query returns thousands of rows to the end user. Rarely will more than a few hundred rows be useful for an online interaction between a program and an end user. You can use the FETCH FIRST n ROWS ONLY clause on your SQL statements to limit the amount of data returned to a query. For example, consider this query:

SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY > 10000.00
FETCH FIRST 200 ROWS ONLY;

This query will return only 200 rows. It does not matter if more than 200 rows would have qualified; DB2 will signal end of data with a +100 SQLCODE if you try to FETCH more than 200 rows from the query. This approach is useful when you want to limit the amount of data to return to your program.

DB2 supports another clause called OPTIMIZE FOR n ROWS, which will not limit the rows returned to the cursor, but can be helpful from a performance perspective. Use the OPTIMIZE FOR n ROWS clause to tell DB2 how to process the SQL statement. For example:

SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY > 10000.00
OPTIMIZE FOR 20 ROWS;

This tells DB2 to try to fetch the first 20 rows as quickly as possible. This is useful if your Delphi application displays rows retrieved from the database 20 at a time.

For read-only cursors, make the cursor unambiguous by using the FOR READ ONLY clause. For example:

SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY > 10000.00
FOR READ ONLY;

Understanding the basics of SQL coding for performance will give your enterprise applications an immediate performance boost. But I've only scraped the tip of the iceberg. You will need to learn about increasingly complex types of SQL, including joins, subselects, unions, and more. You'll also need to learn how best to write these SQL statements, and how to discover the access paths DB2 chose to satisfy your SQL requests. Indeed, there is much more to learn. But content yourself with the knowledge that you have embarked on the path of getting the most out of DB2 SQL.
Optical amplifiers are devices that amplify a light signal directly, without the need to first convert it into an electrical signal. Before optical amplifiers, amplifying a transmission signal required optical-to-electrical and electrical-to-optical conversion, i.e., O/E/O conversion. With an optical amplifier, the signal can be amplified entirely in the optical domain. The successful development of the optical amplifier and its industrialization is a very important achievement in optical fiber communication technology, and it has greatly contributed to the development of optical multiplexing, optical soliton communication and all-optical networks.

A fiber amplifier not only amplifies optical signals directly, but also offers real-time, high-gain, broadband, in-line, low-noise, low-loss amplification, making it a key component of a new generation of optical fiber communication systems. A fiber amplifier usually consists of a gain medium, pump light and an input-output coupling structure. The main types of fiber amplifier are the erbium-doped fiber amplifier (EDFA), the semiconductor optical amplifier (SOA) and the fiber Raman amplifier (FRA). According to the position and role of the amplifier in the fiber line, amplification is generally divided into three kinds: relay (in-line) amplification, pre-amplification and power amplification.

The optical fiber amplifier (OFA) is used in optical fiber communication lines to achieve all-optical signal amplification. Among the EDFA, the SOA and the FRA, the erbium-doped fiber amplifier, with its superior performance, is the one in widespread practical use. It is now widely deployed in long-distance, high-capacity, high-speed optical fiber communication systems, access networks, fiber CATV networks and military systems (radar multi-channel data multiplexing, data transmission, guidance and so on), serving as a power amplifier, repeater amplifier or preamplifier.

In CATV networks built on a hybrid fiber/coax structure, where a variety of systems coexist, the erbium-doped fiber amplifier is getting more and more attention, especially in front-end centralized systems, point-to-multipoint lightwave structures and long-distance trunk transmission systems. For CATV designers, the most common topology is a tree distribution network, and the efficiency of the system is determined by the cost per user. A CATV amplifier is an electronic device that accepts a varying input signal and produces an output signal that varies in the same way as the input, but has larger amplitude. Using an erbium-doped fiber amplifier to increase the optical power of the existing transmitting equipment therefore allows it to serve more users, reducing the per-user cost of transmitter power. In addition, in recent years, 1550nm light emitting devices combined with erbium-doped fiber amplifiers have made fiber to the curb and fiber to the building much cheaper.

All in all, for CATV fiber trunk transmission and distribution systems, and for the progressive realization of "triple play" voice, video and data transmission on the path to a broadband integrated services digital network, the erbium-doped fiber amplifier will play an invaluable role.

For more information on optical amplifiers, please visit FS.COM or contact us via email@example.com. Fiberstore is a professional supplier and manufacturer of optical amplifiers. You can save costs by buying fiber optic products from Fiberstore.
The administrator can use the PUM Run feature to provide privileged access to users for a specific process, system tool or specific file, for example service.msc or notepad.exe.

For information on creating a privileged account domain, see Section 5.12.1, Creating an Account Domain for Windows Systems. For information on adding a command, see Section 5.8.1, Adding a Command.

To modify the command:
1. Click Command Control on the home page of the console.
2. Click Commands in the navigation pane.
3. Select the command you want to modify.
4. Click Modify Command in the task pane.
5. In the Modify Command page, type the processes which require privileged access.

For information on adding a rule, see Section 5.6.1, Adding a Rule. To modify a rule, see Section 5.6.2, Modifying a Rule. Ensure that you modify the Run Host option.

Log in to the system as an administrator by using any remote desktop accessing tool. Right-click the process and select the appropriate option to provide privileged access to the process.

You can also provide privileged access to specific files. For example, to provide privileged access to the critical.txt file:
1. Create a shortcut to Notepad. Notepad is the process that is used to open the critical.txt file.
2. Right-click the shortcut to Notepad, then select the appropriate option. In the relevant field, add the file path of the critical.txt file after the file path of the process, then confirm the change. NOTE: For example, the path can be added in the following format:
3. Right-click the shortcut and select the appropriate option to provide privileged access to the critical.txt file.
collision resolution scheme

Definition: A way of handling collisions, that is, when two or more items should be kept in the same location, especially in a hash table. The general ways are keeping subsequent items within the table and computing possible locations (open addressing), keeping lists for items that collide (chaining), or keeping one special overflow area.

Specialization (... is a kind of me.): direct chaining, open addressing, separate chaining.

Aggregate parent (I am a part of or used in ...)

Note: The special overflow area can be any searchable data structure, even another (smaller) hash table.

Entry modified 1 December 2004. Cite this as: Paul E. Black, "collision resolution scheme", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 1 December 2004. Available from: http://www.nist.gov/dads/HTML/collisionres.html
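To make the two in-table strategies named above concrete, here is a minimal Python sketch of separate chaining and of open addressing with linear probing over a fixed-size table. The table size, probe sequence and use of Python's built-in hash() are illustrative choices, not part of the dictionary entry.

TABLE_SIZE = 8

# Separate chaining: each slot holds a list of the (key, value) pairs that hash there.
chained = [[] for _ in range(TABLE_SIZE)]

def chain_put(key, value):
    bucket = chained[hash(key) % TABLE_SIZE]
    for i, (k, _) in enumerate(bucket):
        if k == key:                 # key already present: overwrite in place
            bucket[i] = (key, value)
            return
    bucket.append((key, value))      # collision or empty slot: just extend the list

# Open addressing (linear probing): colliding items stay in the table itself,
# placed in the next free slot found by probing.
open_table = [None] * TABLE_SIZE

def open_put(key, value):
    slot = hash(key) % TABLE_SIZE
    for step in range(TABLE_SIZE):
        probe = (slot + step) % TABLE_SIZE
        if open_table[probe] is None or open_table[probe][0] == key:
            open_table[probe] = (key, value)
            return
    raise RuntimeError("table full")

chain_put("cat", 1); chain_put("dog", 2)
open_put("cat", 1); open_put("dog", 2)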
To discuss the global trends of cyber security, we must first discuss the motivation behind the actors who are delivering malware into environments, running distributed denial of service attacks and causing breaches across the industry. There are three main reasons for performing malicious attacks on a corporate environment: profit, espionage and hacktivism.

For profit, we see a lot of trends coming from Eastern Europe in which simple tools are used to steal personally identifiable information (PII) that can be used by the malicious actors or sold to anyone willing to purchase the data from the underground marketplace. These attacks are generally website compromises that lead to databases containing encrypted PII. The style of the attack is more of a smash and grab. This was recently seen with the breach at the Revenue department of the State of South Carolina, where over 387,000 credit and debit cards were taken. Of the 387,000 records, only 16,000 were unencrypted and revealed in plain text. There was a time when these types of illegal transactions took place in dark places and were unknown to the general public, but that's no longer the case. The malicious actors now even offer free samples, verification services and replacement packages if cards are no longer valid. The size of the economy is largely unknown, but a researcher at McAfee estimated it to be in excess of $750 billion in 2011.

For espionage, there is a completely different set of tools and goals. You find more long-term attacks. Spear phishing is used more prevalently in an attempt to deliver malware into an environment. We find that the attacks are primarily coming from Asia, and the intent is to escalate privileges until a level is reached at which data can be transferred quietly and efficiently out of an environment through a compromised third-party server. Attack experts believe that the malware's first phase is to collect sensitive information on the target networks and, in a second phase, to erase the tracks of its operation. It then destroys the infected machines, making subsequent forensic analysis by computer experts difficult. For example, consider an ecommerce site that has purchased a small subnet (a /29, which allows six usable hosts per segment), where the owner is only using one host for his web server and another for a database server. The web server is compromised with a recent zero-day exploit. The malicious actor would compromise the site, unknown to the ecommerce operator, and set up a communication tunnel through which they would transfer stolen data. The data is then transferred to a collection server and retrieved by actors located at the true origin of the attack. Before completing their mission, they would wipe out the communication path so that there is no trace that they were ever there, making forensics impossible. This is a common technique used to transfer data without the true source being revealed.

For hacktivism, the cause is social protest or the promotion of a political ideology. Hacktivists employ operations such as denial of service, information theft, data breaches and website defacement. These are certainly not new tactics and were used back in the mid-90s by groups such as the Cult of the Dead Cow. We have seen groups stand up and act as both Robin Hood and Prince John in one. Robin Hood, in which they stand for righting the wrongs that have been committed on the Internet.
For example, a group identifying a person who wrongfully committed Internet crimes against a minor, crimes that drove that minor to take their own life. The person who committed the crime would have their life published on the Internet for all to see and for law enforcement to track. The Prince Johns are those in the group who do not see the truth in what the others are attempting to do. They use the same tools and access against low-security financial institutions and other targets that are convenient and easy to compromise.

According to the "Data Breach Investigations Report" published by Verizon, hacktivists stole almost twice as many records from organizations and government agencies as ordinary cyber criminals did. Hacktivists are showing incredible skills, and we expect the attacks to increase in number as well as impact. They are the representation of their generation, carrying out their operations of denial of service, information theft, data breach and website defacement.

To learn more about cyber crime, join Internap and Alert Logic for a Cyber Crime Evening Reception on December 5th.

Guest Contributor Stephen Coty is a member of the Alert Logic Security Research Team.
Passwords present a number of problems for organizations:
- Users have too many passwords and have a hard time remembering them.
- Password management is exacerbated when different passwords expire on different schedules, are changed via different user interfaces and are subject to different policies.

Users respond to these problems by:
- Choosing trivial (and insecure) passwords.
- Avoiding password changes.
- Writing down their passwords, effectively reducing logical security to be equal to physical security.

Users often forget their passwords or mistype them, creating high IT support call volumes at the help desk -- this is both inconvenient for users and costly for the organization. The impacts of poor password management are:
- User frustration.
- High IT support cost.
- Weak authentication.

Hitachi ID Password Manager improves the security of authentication processes:
- Strong, uniform password policy: A strong, uniform set of password composition rules and an open-ended password history prevent the use of easily guessed passwords and ensure that all passwords are changed. (A rough sketch of such a policy check appears below.)
- Fewer passwords (to write down): Password synchronization reduces the burden on users, who can finally comply with rules against writing down their passwords.
- Authenticate users before resetting passwords: Consistent, reliable authentication processes ensure that users are reliably identified before accessing either self-service or assisted password resets.
- Two-factor authentication: Use of multiple credentials can be mandated ahead of every user interaction, blocking attacks in which an attacker convinces the help desk to reset a victim's password.
- Secure SaaS logins: Federated access allows two-factor authentication to be extended to SaaS applications, not just Password Manager logins.
- No more privileged support accounts: IT support staff can be empowered to reset passwords and clear lockouts through the Password Manager portal, without direct administrative rights on every system and application.

Cost Savings and Improved Productivity

Password Manager reduces the IT support cost associated with passwords:
- Lower problem frequency: Users have fewer passwords to remember, due to password synchronization. They are invited to change passwords in the morning, at the start of the week, after which the new password will be used often, so not forgotten. As a result, users tend to remember their passwords and have fewer problems.
- Lower call volume: Not only do users have fewer login problems, but they can resolve those problems on their own. Self-service password reset and unlock are available at the PC login screen, on a browser, with a smart phone app or a phone call, on-site or away. Users who resolve their own problems don't call the help desk.
- Lower peak volumes: Most password reset calls happen during a few short hours, at the beginning of the first work day of the week and especially after holidays. By driving down problem frequency and call volume generally, these peaks are attenuated. As a result, fewer total help desk staff are needed.
- Reduced cost per incident: Even when users do call for support, a single and efficient web portal enables support staff to authenticate them, reset passwords, clear lockouts and generate tickets quickly and easily, shortening call duration and incident cost.
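As a rough illustration of what a uniform composition-and-history policy means in practice, here is a minimal sketch. The specific rules, the example passwords and the plaintext history store are hypothetical and exist only for illustration; they are not Hitachi ID Password Manager's actual policy engine, and a real system would keep hashes of old passwords rather than plaintext.

import re

# Hypothetical per-user history of previously used passwords (real systems store hashes).
password_history = {"alice": {"Winter2023!!", "Spring2024!!"}}

def is_acceptable(user, candidate):
    if len(candidate) < 12:
        return False                          # minimum length rule
    if not re.search(r"[A-Z]", candidate):    # at least one upper-case letter
        return False
    if not re.search(r"[0-9]", candidate):    # at least one digit
        return False
    if candidate in password_history.get(user, set()):
        return False                          # open-ended history: never reuse a password
    return True

print(is_acceptable("alice", "Spring2024!!"))   # False: reused
print(is_acceptable("alice", "Autumn2025!xy"))  # True: meets the rules, not in history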
Improved User Service

Password Manager improves user service by simplifying password management:
- Fewer passwords: Users only have to remember one or two passwords -- these are synchronized across the user's accounts on various systems.
- Help off-site users: When a user is away from the office and forgets his PC login password, he must bring or ship his PC back to the office, so that any password reset can be applied to the local credential cache. Password Manager eliminates this business interruption by enabling self-service password reset, from the PC login prompt, even for users who are not at work.
- Simpler UI: All passwords are managed through a single, friendly web portal.
- Clear, consistent policy: Password composition rules are clearly explained and applied to all systems and applications.
- Resolve login problems: In the event of a password or login problem, users can quickly resolve their own problem using self-service, rather than calling the help desk and waiting for service.
- Advance warning of password expiry: Password expiration notices are delivered to all users, including off-site users who would otherwise get no warning before their account is locked out.
- Personal vault: Users can store unmanaged credentials in a secure, personal password vault, accessible using their PCs or phones.
The protection of Critical National Infrastructure today generally relates to power stations, electric grid infrastructure and air traffic control systems. However, as a new article in SC Magazine highlights, the advent of smart cities widens the catchment further, and all the same rules apply as to any other networked computer system: any software or firmware-based control system will always have the potential to be vulnerable to a cyber attack, either through malware infection of the file system, or through hacking of configuration settings. (Of course, the 1969 movie The Italian Job demonstrated this, so this maybe isn't such a new phenomenon.)

The same rules therefore apply in terms of security best practices for smart city systems, with the need for vulnerability management and patching, change control and system integrity monitoring to detect breaches. We are already seeing the effect referred to by Gartner recently, whereby information security breaches like Sony's 'institutionalise more-proactive thinking about cyber-security risks'. Cerrudo is right to highlight the need for smart city technology to recognise the huge significance of breach prevention and to plan for the inevitable cyberattacks.

Read the full article on SC Magazine.
What do a rat and a Ph.D. have in common? They team up to wire schools for computer networks. Judy Reavis, the brains of the partnership, trained Rattie, a seven-inch laboratory rat, to pull string -- connected to category five computer cable -- through air ducts, walls and other narrow places. Rattie heads for the tapping sound Reavis makes, and when she emerges towing the string, she is treated to cat food and gummi bears. The string is pulled, the cable emerges, and another computer links to the Internet. The partnership volunteered for Netday and, at last count, helped wire 10 schools. Rattie has attracted so much mail from children that the partnership now has its own website. Rattie also has an advice column, "Ask Judy's Rat," ghostwritten by Reavis, president of Hermes Systems Management in Benicia, Calif. Children from around the world have logged on and sent e-mail. Reavis trained Rattie 20 minutes a day for three months, using rolled up screen as a maze, and gradually introducing obstacles -- such as cotton to simulate insulation.
The Israel Antiquities Authority this week concludes a pilot project that prepares the way for a much larger operation to photograph the 15,000-20,000 fragments that make up the 900 scrolls, reports The Guardian. The scrolls were first photographed in the 1950s after being discovered by shepherds in caves near the Dead Sea. Since then, they have been kept in monitored conditions in a vault, and only four specially trained curators are allowed to handle them. In a multi-million-pound project that could take up to five years, the scroll fragments will first be photographed by a 39-megapixel digital camera, then by another digital camera in infra-red light. Later, some will be photographed with a sophisticated multi-spectral imaging camera. All the fragments will eventually be available to view online, with transcriptions, translations, scholarly interpretations and bibliographies provided for academic study. Written about 2,000 years ago, the scrolls contain the oldest written record of the Old Testament. The pilot project has shown that infra-red photography picks out letters not previously visible to the naked eye. The multi-spectral imaging camera will, for the first time, enable the condition of the scrolls to be monitored properly, including their water content, in a non-invasive way. This will aid conservation by detecting any changes in the scrolls' condition before these become visible to curators.
<urn:uuid:c223c5d0-6d92-445a-badb-93cebbc6d770>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240103062/Dead-Sea-Scrolls-to-go-on-web-using-high-res-photography
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00309-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936941
315
3.234375
3
As of December 1, 2016, US law enforcement has gained new hacking powers thanks to changes to Rule 41 of the Federal Rules of Criminal Procedure that simplify the process of getting warrants to hack into devices of US citizens and the citizens of other countries. The Rule 41 amendments had been proposed in 2014 by an advisory committee on criminal rules for the Judicial Conference of the United States. In April 2016, the United States Supreme Court, and not Congress, approved the proposed procedural changes. According to standard US government procedures, the Supreme Court then forwarded the amendment to Rule 41 to the US Congress, which had until today to disavow the proposed changes. The procedure through which this could have been accomplished included passing a law that shot down the proposed amendment. There were several attempts to prevent the changes to Rule 41. Senators Ron Wyden (Oregon) and Rand Paul (Kentucky) came closest to stopping the rule change, or at least delaying its effective date by three months, but ultimately failed. According to the "new" Rule 41, the FBI and other US law enforcement agencies now have at their disposal a simplified procedure for requesting warrants that allow them to hack the computers and devices of people they have probable cause to believe have committed a crime. Previously, law enforcement had to request a warrant from a judge in the same jurisdiction where the possible subject resided. If it needed to hack into devices belonging to a group of individuals, it needed to obtain different warrants in all the relevant states, which was a time-consuming operation. According to the revised Rule 41, law enforcement can now request one warrant for hacking anyone in the US, even multiple targets, from one single judge. Furthermore, if the target is using Tor, I2P, VPNs, or other technologies that mask his IP address, the FBI has the legal power (in its eyes) to hack anyone across the globe. The FBI is no stranger to such scenarios, and it didn't wait for the new Rule 41 amendment to pass. In 2015, the FBI obtained one warrant, which it used to hack over 8,000 computers in 120 countries. Also included in Rule 41 is a clause that allows judges to issue warrants letting law enforcement hack or seize devices that are part of a botnet. Nowadays we have botnets of IoT smart devices, botnets of infected home WiFi routers, botnets of infected PCs, botnets of infected mobile devices, and so on. Any malware that infects a device and uses an online command and control server forms a botnet, even annoying adware families. Almost all malware families today use C&C servers, and indirectly form a botnet. Technically, the FBI and US law enforcement can hack anything they want on the suspicion that a device has been infected with malware. In a statement published in June, the US Department of Justice tried to reassure the US population that protections provided by the Fourth Amendment are still in play and that law enforcement must establish probable cause before requesting such warrants. Nevertheless, judges are still the ones ruling on these warrants. Just this spring, the media blasted a clueless judge who oversaw the copyright battle between Oracle and Google. The judge had a very hard time understanding basic principles such as APIs and programming languages. Throwing around words like botnets and malware at such a judge would likely result in approval of any warrant the FBI requested.
While the FBI and other law enforcement agencies push the agenda for new laws that fight new "cyber" threats, nobody is talking about educating members of the judicial system. There is a trend across the world of countries passing privacy-intrusive and sweeping surveillance laws. Just two weeks ago, the UK approved "the most extreme surveillance law ever passed in the history of a Western democracy," as Edward Snowden characterized the new Investigatory Powers Bill (IP Bill), which was passed into law this week. Similarly, also this month, China passed a new cyber-security law that allows it to restrict Internet access in the country in the case of a "national security" issue. This week, Russia and China signed a pact that would allow the Kremlin government access to China's famous Great Firewall technology. Russia is already running its own "blocklist," but now hopes to gather know-how on running a proper Internet censorship tool from the world's best, which is without doubt the Chinese administration.
<urn:uuid:c3d87eb2-1177-40f3-8ee5-089a76b92ffc>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/news/government/as-of-today-us-law-enforcement-has-new-hacking-powers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963888
898
2.515625
3
Federal data centers are very quickly getting saturated with data, and this is forcing them to adopt storage architectures that help optimize Big Data storage. For the past four years, the federal IT budget has been relatively flat, but there has been an increase in big data and a concomitant need to store it. Data storage needs have been growing between 30 and 40 percent every year. This is a difficult figure to accommodate given that IT funding has been tight and there is a rather pressing need to reduce costs. Storage has also become a crucial issue because of the growth of cloud computing and the need for data center consolidation. For most federal agencies, a tiered storage architecture has been the technology of choice. Depending on various needs -- speed of data access, data protection, and archiving requirements, to name a few -- federal agencies will need to put together a data storage architecture accordingly.
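Tiering ultimately comes down to a placement rule that trades access speed against cost. A toy sketch of such a rule follows; the tier names and thresholds are invented for illustration and are not drawn from any agency's architecture:

    #include <stdio.h>

    /* Hypothetical tiers, ordered from fastest/most expensive to cheapest. */
    typedef enum { TIER_FLASH, TIER_DISK, TIER_ARCHIVE } tier_t;

    /* Toy placement rule: recently and frequently accessed data stays on
     * fast media, cold data ages down to cheaper archival storage. */
    static tier_t choose_tier(int days_since_access, int reads_per_month)
    {
        if (days_since_access <= 7 || reads_per_month >= 100)
            return TIER_FLASH;
        if (days_since_access <= 180)
            return TIER_DISK;
        return TIER_ARCHIVE;
    }

    int main(void)
    {
        const char *names[] = { "flash", "disk", "archive" };
        printf("hot dataset  -> %s\n", names[choose_tier(1, 500)]);
        printf("warm dataset -> %s\n", names[choose_tier(30, 10)]);
        printf("cold dataset -> %s\n", names[choose_tier(400, 0)]);
        return 0;
    }

In a real architecture the same decision would also weigh the data protection and archiving requirements the article mentions, not just access patterns.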
<urn:uuid:b7f97a83-30b7-41a0-9a45-5d8372454844>
CC-MAIN-2017-04
http://www.datacenterjournal.com/federal-data-centers-addressing-data-storage-issues/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00035-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965841
186
2.65625
3
Recently I participated in a panel to discuss what has changed in the parallel computing research community in the past 20 years, specifically dealing with languages and compilers. Many things look the same, but one thing that has changed a great deal is the presence, acceptance and use of standard benchmarks. Twenty years ago, we compared supercomputers and minisupercomputers using any number of microkernels and small programs (remember the Livermore Loops!). We still fall into this trap, rating the top 500 supercomputers using a single benchmark program. However, even 20 years ago, there were several efforts to collect benchmark suites to represent real applications, including the Perfect Club, RiCEPS, and Mendez suites. These were also used to compare the behavior and performance of compiler optimizations. Processor, system and compiler performance are now frequently measured using the SPEC CPU benchmarks. The original suite was released as ten programs in 1989, each of which ran for about 5 minutes on then-current workstations, with performance normalized to that of a VAX 11/780 (that very VAX now sits in the Paul G. Allen Center, housing the Computer Science and Engineering department at the University of Washington). Good luck finding those programs today; SPEC seems to have disavowed all knowledge of them. If you can, it's fun to run them or the 1992 or 1995 versions; today's machines are so fast that the programs finish almost before the return key bounces back. The latest suite, SPEC CPU2006, comprises 28 programs written in C++, C and Fortran. As originally intended, the SPEC suite was to be used by customers as a reasonably comprehensive and vendor-neutral benchmark for comparing system performance. It has since become much more important. For instance, the list price for a new computer system may be raised or lowered depending on whether the SPEC benchmark score is higher or lower than that of its competition, so SPEC performance can affect the profitability of a vendor. To be fair, SPEC is not the only benchmark that is so used, but it is one of the most visible. The components that affect SPEC performance are the processor, cache, memory and memory bus, and the compiler. The cache, memory and bus are typically determined by the processor, so there are really only two variables. Processor designers add or optimize features to improve performance, often looking at instructions used in these benchmarks to decide what to optimize. The days of increasing speed by pumping up the clock rate seem to be gone, but there are still opportunities for improvement in implementation technology and microarchitecture. It's illuminating to look at some historical SPEC CPU2000 results from its initial release in late 1999 until its retirement in late 2006. Dell published a run in November 1999 on a top-of-the-line Precision Workstation 420 which delivered a SPEC CINT2000 base ratio of 336 and a CFP2000 ratio of 242. Seven years later, Dell published a run on a then-current Precision Workstation 390 with a CINT2000 base ratio of 2829 and a CFP2000 ratio of 2679, for a factor of 8 improvement in CINT and 11 in CFP performance. Most of this improvement was undoubtedly due to the move from the 733 MHz Intel Pentium III (with 256KB L2 cache) to the 2.66GHz Intel Core 2 Extreme (with 4MB L2 cache).
We might predict a 3.5X improvement just from the clock rate, with additional speedup due to larger cache, double precision SSE2 instructions, and aggressive superscalar instruction issue with out-of-order execution. Indeed, the lowest speedup was 4.5 (for 164.gzip). However, the speedups are quite nonuniform. For instance, 171.swim was the best CFP performer in 1999, but was in the middle of the pack in 2006, with a speedup of 6.3; 179.art, the second best CFP performer in 1999, benefited from a speedup of almost 30 to become the best (by far) in 2006. On the CINT side, 181.mcf was the worst performer in 1999, but was second best in 2006 (speedup of 18), so something obviously changed for the better there, whereas 164.gzip, the third best CINT in 1999, was the worst performer in 2006. I'd like to explore how much of the performance improvement is due to clock and implementation differences, how much is due to the instruction set changes with the addition of SSE2 and the move to x86-64, and in particular whether the compiler had any effect. To demonstrate this, I'm going to show some results of one particular SPEC CPU2000 benchmark, 172.mgrid, for reasons that I'll explain below. I ran mgrid on two different machines with two different compilers. Note: These are not official SPEC runs, so the results are only estimates. The two machine profiles are:
- Intel Pentium III, 550MHz, 512KB L2 cache, 1GB memory, Linux Red Hat 8.0
- Intel Xeon 5160, 3GHz, 4MB L2 cache, 4GB memory, Linux SLES 10
The two compilers are the PGI compiler suite from 1999 (3.2-4a, using -fast) and our most recent release, 7.0-7 (using -fast -Mipa=fast,inline). I estimate the total speedup from 1999 to 2006 by comparing runs on the Pentium III using the 3.2-4 compiler to runs on the Xeon using the 7.0-7 compiler. For mgrid, the speedup was 28. My first experiment factors out the compiler, testing only the clock and implementation improvements from the Pentium III to the Xeon. I compiled and ran mgrid using the 3.2-4a compiler on the Pentium III, then ran the same binary on the Xeon machine; this gives a speedup of 12.5, not quite half the total. A factor of eight is easy to explain with the clock speed and microarchitectural improvements; the higher number is possibly due to more of the working set fitting entirely in the much larger L2 cache. My second experiment repeats the first, but using the newer compiler. I compiled and ran mgrid using the 7.0-7 compiler on the Pentium III, then ran the same binary on the Xeon machine. The newer compilers have some new features, but we should expect more or less the same improvements from the older machine to the Xeon; here I saw a speedup of 15. The new binary ran faster on both machines, but the improvement was more significant on the Xeon, even though the compiler was optimizing for the Pentium III. From these two experiments, I conclude that clock, microarchitecture and other implementation improvements delivered a speedup factor of between 12 and 15. Very impressive, but this is only about half the total speedup for mgrid between 1999 and 2006. My third experiment tries to isolate the improvements due to the instruction set enhancements added since the Pentium III, such as SSE2 instructions. I compiled and ran mgrid using the 7.0-7 compiler, targeting the 32-bit instruction set of the Xeon, and compared it to the run targeting the Pentium III instruction set. We should expect some improvement here, since mgrid has lots of vectorized double precision operations.
However, my results show only about a 2 percent improvement, quite a surprise. My final architectural experiment isolates the benefits of moving to the x86-64 instruction set. I compiled and ran mgrid with the 7.0-7 compiler targeting the full 64-bit instruction set, and compared this to the run using the 32-bit instruction set. For mgrid, this improvement is about 22 percent. So the total speedup due to hardware and instruction set is between 15 and 19. But this is supposed to be a compiler column. Can we attribute the rest of the speedup to compiler improvements? Certainly, to take advantage of the new instructions (SSE2, 64-bit instructions), compilers had to be significantly enhanced. However, it's not fair to chalk up all those speedups in the compiler column. Commercial compilers have advanced greatly since 1999; for instance, as I've mentioned in past columns, all current commercial compilers now use some form of interprocedural or whole program analysis and optimization. Some of this controls inlining, some propagates other information across subprogram and file boundaries. I ran two more experiments to isolate the effects of the compiler. I took the 3.2-4 compiled code and the 7.0-7 compiled code, and ran both on the Pentium III. Here I saw a speedup of 1.5, a 50 percent improvement just from compiler improvements. Not nearly the factor of 15-19 we get from hardware, but these are multiplicative improvements. I then took those same binaries and ran them on the Xeon. Note, these binaries were optimized for the Pentium III, not the Xeon. Nevertheless, the speedup on the Xeon was even better, almost 1.8. So, what is it about mgrid that lets the compiler deliver a 50-80 percent performance improvement over seven years? I chose mgrid for a reason, and not because it benefits most from compiler improvements since 1999, but because there's one specific optimization that applies. Let's look at one of the key loops in mgrid:

      DO I3 = 2, N-1
      DO I2 = 2, N-1
      DO I1 = 2, N-1
        R(I1,I2,I3) = V(I1,I2,I3)
     >    -A(0)*( U(I1,  I2,  I3  ) )
     >    -A(1)*( U(I1-1,I2,  I3  ) + U(I1+1,I2,  I3  )
     >          + U(I1,  I2-1,I3  ) + U(I1,  I2+1,I3  )
     >          + U(I1,  I2,  I3-1) + U(I1,  I2,  I3+1) )
     >    -A(2)*( U(I1-1,I2-1,I3  ) + U(I1+1,I2-1,I3  )
     >          + U(I1-1,I2+1,I3  ) + U(I1+1,I2+1,I3  )
     >          + U(I1,  I2-1,I3-1) + U(I1,  I2+1,I3-1)
     >          + U(I1,  I2-1,I3+1) + U(I1,  I2+1,I3+1)
     >          + U(I1-1,I2,  I3-1) + U(I1-1,I2,  I3+1)
     >          + U(I1+1,I2,  I3-1) + U(I1+1,I2,  I3+1) )
     >    -A(3)*( U(I1-1,I2-1,I3-1) + U(I1+1,I2-1,I3-1)
     >          + U(I1-1,I2+1,I3-1) + U(I1+1,I2+1,I3-1)
     >          + U(I1-1,I2-1,I3+1) + U(I1+1,I2-1,I3+1)
     >          + U(I1-1,I2+1,I3+1) + U(I1+1,I2+1,I3+1) )
      ENDDO
      ENDDO
      ENDDO

Several loops in mgrid are similar to this one. This loop fetches 28 array elements and performs 27 double precision floating point additions. However, compilers now recognize that in the inner loop, the value computed as 'U(I1+1,I2,I3-1)+U(I1+1,I2,I3+1)' will be used again in the next iteration as 'U(I1,I2,I3-1)+U(I1,I2,I3+1)'. Saving that value in a register eliminates two array element fetches and one addition. In fact, this pattern occurs so often in this loop that the optimized inner loop only loads 12 array elements and performs 15 floating point additions, a savings of about 1/2. Such optimizations have appeared in academic literature with research languages, such as ZPL, but had not been implemented in production commercial compilers before SPEC CPU2000 and mgrid. At PGI, we designed our implementation as an enhancement of the Scalar Replacement optimization developed at Rice University.
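The reuse can be sketched in C on a stripped-down, two-plane version of the stencil. This is only an illustration of the idea, with invented array names and coefficients, not the code the compiler actually generates:

    #define N 128
    double u[3][N], r[N];
    double a1 = 0.5, a2 = 0.25;   /* illustrative coefficients only */

    /* Straightforward version: every iteration reloads six elements of u
     * and recomputes three pair sums. */
    void resid_plain(void)
    {
        for (int i = 1; i < N - 1; i++)
            r[i] = a1 * (u[0][i]   + u[2][i])
                 + a2 * (u[0][i-1] + u[2][i-1] + u[0][i+1] + u[2][i+1]);
    }

    /* Transformed version: the pair sums computed in one iteration are
     * carried forward in scalars (registers) to the next, so each
     * iteration loads only the two new elements and computes only the
     * one new pair sum. */
    void resid_carried(void)
    {
        double p_prev = u[0][0] + u[2][0];
        double p_curr = u[0][1] + u[2][1];
        for (int i = 1; i < N - 1; i++) {
            double p_next = u[0][i+1] + u[2][i+1];
            r[i] = a1 * p_curr + a2 * (p_prev + p_next);
            p_prev = p_curr;      /* reuse instead of reloading */
            p_curr = p_next;
        }
    }

Applying the same carrying of partial sums across both the I2 and I3 neighbor planes of the real loop is what cuts its 28 loads and 27 additions roughly in half.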
We call it Loop-Carried Redundancy Elimination, or LRE, and it is responsible for improving our compiler's performance on mgrid by 15-20 percent, depending on the target machine. To answer the question in the title, yes, optimizing compilers are important. Even without the LRE optimization, our compilers have improved the performance of mgrid somewhere between 35 and 60 percent since 1999. With LRE, the improvement is 50-80 percent, and that's compared to optimized code. This is like progressing a processor generation or two beyond what you can buy today. One might argue whether using LRE to optimize mgrid is fair, as the optimization seems targeted specifically at that benchmark. Happily, LRE has turned out to be quite useful in many cases in Fortran, C, and C++ numerical applications as well as in signal processing applications for embedded systems. Quite possibly, LRE would have been investigated and implemented even without mgrid. However, it's hard not to agree that the importance of the SPEC CPU benchmark spurs the study of optimizations for those programs. But there's a dark side as well. There are several dangers when introducing a compiler optimization to improve a specific benchmark. It's an additional feature in the compiler, which must be tested and maintained; this can be a nontrivial cost over the lifetime of the compiler. It may affect the stability of the compiler, both in correctness, if the initial implementation is rushed, and in performance, if it speeds up some programs and slows down others. And it may raise expectations by customers, who hope to see the same 15-20 percent improvements in their own codes, and can be quite vocally disappointed when they don't. While it may make some benchmark numbers look good, implementing a bunch of benchmark-specific optimizations doesn't make for a quality compiler. To address this, the SPEC run rules disallow optimizations that specifically target the SPEC benchmark suites. But who's to say whether some optimization targets one specific benchmark? Certainly when we implemented LRE, we had mgrid in mind; it was only afterward that we realized how often this pattern appeared and how important it was. I'll explore this topic in more detail in my next column; as a teaser, what might account for that factor of 30 performance improvement in 179.art? SPEC is a registered trademark of the Standard Performance Evaluation Corporation (www.spec.org). Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
<urn:uuid:a1a781d3-5815-4cc5-ae13-95ec4e32c102>
CC-MAIN-2017-04
https://www.hpcwire.com/2007/10/19/compilers_and_more_are_optimizing_compilers_important/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00337-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923768
3,303
2.84375
3
With all the buzz about Facebook kumbaya-ing with Greenpeace (and announcing earlier this month it will collaborate with the environmental group on clean and renewable energy), and more and more companies heading north to chilly climes to help keep their data center operations more efficiently cooled and green, you'd think everyone has figured out the best way to cool the data center. Fact is, there are still plenty of folks operating with what they've got and don't have any near-term plans to make big changes. But small changes still can and do make a difference. According to Schneider Electric, basic design and configuration flaws are keeping a lot of data centers from achieving their optimal cooling capacity and preventing them from delivering cool air where it is needed. Recent increases in the power density of newer IT equipment are testing existing data center design limits. The global energy management company says, in a recent white paper, that typical mistakes are related to five areas: airflow in the rack itself; the layout of the racks; the distribution of load; and the layout of air delivery and return vents. The first, airflow in the rack, relates to whether appropriate conditioned air is presented at the equipment air intake and whether airflow in and out of equipment is unrestricted. According to Schneider Electric, the two key problems that often occur are that the CRAC (computer room air conditioner) air gets mixed with hot exhaust air before it gets to the equipment air intake, and/or the equipment airflow is blocked by obstructions. For the former, the fix is often simple: the use of a blanking panel, which provides a natural barrier that increases the length of the air recirculation path and reduces the equipment's intake of hot exhaust air. Interestingly, lots of data centers omit blanking panels, despite recommendations from all major IT equipment manufacturers. Rack layout is another critical design element that affects cooling; it ensures that air of the appropriate temperature and quantity is available at the rack, and it is designed to separate the hot exhaust air from the equipment intake air (much like blanking panels are designed to do). Schneider Electric says that by placing racks in rows and reversing the direction that alternate rows of racks face (the hot-aisle/cold-aisle design), recirculation can be dramatically reduced. But there are folks that still put racks in rows that face in the same direction – a design flaw that causes significant recirculation and will most likely create hot spots. The load distribution problem is well known. The location of loads can stress data center performance and can give rise to hot spots where high-density, high-performance servers are packed into one or more racks. Often, to counteract those hot spots, operators lower temperature set points or add CRAC units. Better to spread the load out where feasible. Finally, the layout of air delivery and return vents is critical. Air conditioning performance is maximized when the CRAC output air temperature is highest, according to Schneider Electric, and in an ideal-world data center with zero recirculation, the CRAC output temperature would be the same 68-77°F (20-25°C) desired for the computer equipment. But this doesn't happen in the real-world data center. So Schneider Electric suggests that the CRAC set point must be set lower than the desired equipment intake temperature in order to maintain it.
Although the CRAC temperature set point is dictated by the design of the air distribution system, the vendor notes, the humidity may be set to any preferred value. Setting humidity too high can detract from the air cooling capacity of the CRAC unit – which will need to power up dehumidification functions that affect its air cooling abilities – and humidifiers will have to be added to replace the water removed from the air by the dehumidification (oh, and humidifiers are a significant source of heat, which then needs to be cooled and further detracts from the capacity of the CRAC unit). Schneider Electric offers a lot of other tips in the white paper, titled Power and Cooling Capacity Management for Data Centers, which can be viewed here.
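On the monitoring side of this advice, the check itself is simple. As a toy sketch (the rack names, readings and alert logic are hypothetical; only the 20-25°C intake band comes from the figures above):

    #include <stdio.h>

    /* Hypothetical per-rack intake temperature readings, in Celsius. */
    struct reading { const char *rack; double intake_c; };

    int main(void)
    {
        const double lo_c = 20.0, hi_c = 25.0;   /* desired intake band */
        struct reading readings[] = {
            { "row1-rack03", 22.5 },
            { "row2-rack07", 27.1 },   /* likely hot spot from recirculation */
            { "row4-rack01", 19.2 },
        };

        for (unsigned i = 0; i < sizeof readings / sizeof readings[0]; i++) {
            if (readings[i].intake_c < lo_c || readings[i].intake_c > hi_c)
                printf("check airflow at %s: intake %.1f C is outside %g-%g C\n",
                       readings[i].rack, readings[i].intake_c, lo_c, hi_c);
        }
        return 0;
    }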
<urn:uuid:38754d46-c07e-482e-bc0d-602aaeaab0d5>
CC-MAIN-2017-04
http://www.itworld.com/article/2733381/data-center/are-you-cool-with-your-data-center-cool-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925937
831
2.546875
3
Is there a way to make your Windows environment, while certainly not bullet-proof, at least stronger against attacks? A few weeks ago, Microsoft released an interesting add-on called EMET for its range of Windows operating systems. EMET stands for "Enhanced Mitigation Experience Toolkit" and is designed to increase the security of your Windows executables. How? Microsoft Windows is a very common operating system and has been a regular target for the bad guys for years now (the more systems you target, the more chances you have to compromise some of them). Still today, it remains the preferred target of most worms, viruses and trojans. A common attack vector is to abuse the stack with a "stack overflow" (there are plenty of examples based on this weakness). Wikipedia defines a stack overflow like this:

In software, a stack overflow occurs when too much memory is used on the call stack. The call stack contains a limited amount of memory, often determined at the start of the program. The size of the call stack depends on many factors, including the programming language, machine architecture, multi-threading, and amount of available memory. When too much memory is used on the call stack the stack is said to overflow, typically resulting in a program crash.

Indeed, the result of a stack overflow in an application is a crash, which can be classified as a DoS ("denial of service") attack. But, in the worst cases, this can be exploited to execute the attacker's code. The goal of EMET is to protect running processes against this type of attack by implementing six mitigations:
- SEHOP – "Structured Exception Handler Overwrite Protection": Implemented since Windows XP SP1 and able to be turned on or off in Windows 7, this feature protects the process against the most common technique used to exploit stack overflows. It prevents an attacker from changing the execution stack.
- DEP – "Dynamic Data Execution Prevention": Also available since Windows XP, DEP prevents code in memory from being executed if it is not flagged as executable. DEP was available for applications compiled with a specific flag. EMET allows DEP to be activated even without this compilation option.
- Heapspray allocations: Attackers use this technique to place several copies of their malicious code in memory; this way, they increase the chances of successful exploitation. EMET pre-allocates commonly used pages to prevent exploits from using them.
- Null page allocation: Like heapspray allocations above, this technique is designed to prevent potential null dereference issues in user mode.
- ASLR – "Address Space Layout Randomization": Attackers are good at predicting the locations of functions and data in memory. With ASLR, allocations are randomized. Like DEP, this is normally enabled at compilation time.
- EAF – "Export Address Table Access Filtering": Exploits need to call APIs and need to find them in memory. This technique makes it more difficult to find them and will block the malicious code.
Note that not all the mitigation techniques are available on all supported operating systems! Also, changing the behavior of some processes may affect their stability. Before implementing EMET in a production environment, test it in a lab! The installation is a straightforward process. Once done, run the GUI and configure your default settings using the "Configure System" button. Note that changing the default settings requires a system reboot. Then, you can configure your applications one by one with more granularity. To be clear, EMET does not bring new techniques to protect against malicious code.
Example: DEP has been available since Windows XP Service Pack 2. But executables have to be compiled with a specific flag to enable it. Today, EMET allows you to enable it for any application, even one not compiled with the right flags. There is no need to recompile a bunch of source files. Is EMET the definitive solution to prevent execution of malicious code? Certainly not! But it is a step forward in increasing system security. It does not excuse you from keeping your environment protected with an up-to-date antivirus solution. Executing suspicious applications in a sandbox is also certainly a good idea. But EMET is easy to implement and can protect even your legacy applications.
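To make the attack vector concrete, the classic vulnerable pattern that mitigations such as DEP and ASLR are aimed at looks roughly like the following. This is a deliberately simplified textbook sketch, not code taken from any real product:

    #include <stdio.h>
    #include <string.h>

    /* Classic stack-based buffer overflow: buf lives on the stack and
     * strcpy() performs no bounds check, so a sufficiently long input can
     * overwrite the saved return address. DEP makes an injected payload
     * non-executable and ASLR makes useful addresses hard to guess, which
     * is exactly what EMET turns on for processes that were not built
     * with those protections. */
    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);              /* no length check */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);             /* attacker-controlled input */
        return 0;
    }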
<urn:uuid:9752a4e6-9812-4696-95d7-a590c80b97a7>
CC-MAIN-2017-04
https://blog.rootshell.be/2010/10/17/protect-your-applications-using-emet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912238
899
2.75
3
Teens are sharing more information about themselves on social media sites than they have in the past, but they are also taking a variety of technical and non-technical steps to manage the privacy of that information. Despite taking these privacy-protective actions, teen social media users do not express a high level of concern about third parties (such as businesses or advertisers) accessing their data; just 9% say they are "very" concerned. These are among the new findings from a nationally representative Pew Research Center survey of 802 youth ages 12-17 and their parents that explored technology use. Key findings include: Teens are sharing more information about themselves on their social media profiles than they did when they were last surveyed in 2006. 60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings, with 56% of them saying it's "not difficult at all" to manage the privacy controls on their Facebook profile, and 33% saying it's "not too difficult." Teens take other steps to shape their reputation, manage their networks, and mask information they don't want others to see.
- 59% have deleted or edited something that they posted in the past.
- 53% have deleted comments from others on their profile or account.
- 45% have removed their name from photos that have been tagged to identify them.
- 31% have deleted or deactivated an entire profile or account.
- Focus group participants report that they are able to manage their privacy on social media sites, usually by deciding what content to post rather than by managing its dissemination via privacy settings.
Teen social media users do not express a high level of concern about third-party access to their data. Focus group findings suggest teens have mixed feelings about advertising practices, ranging from ignorance to indifference to annoyance. Some teens may not realize how their personal information is being used by third parties. Others see ads as necessary to provide the service, or even as welcome content about brands they like. Some teens are annoyed by ads and find them "creepy" when they are targeted and highly personalized. "Far from being privacy indifferent, today's teens are mindful about what they post, even if their primary focus and motivation is often their engagement with an audience of friends and family, rather than how their online behavior might be tracked by advertisers or other third parties," said Mary Madden, Senior Researcher for the Pew Research Center's Internet Project and co-author of the report. While Facebook remains the most commonly used social media site, teen Twitter use has grown significantly: one in four (24%) online teens uses Twitter, up from 16% in 2011. But even as nearly eight in ten online teens have Facebook profiles, teen users report mixed feelings about it. The typical (median) teen Facebook user has 300 friends, while the typical teen Twitter user has 79 followers. And 64% of teens with Twitter accounts say that their tweets are public, while 24% say their tweets are private. "Our focus group findings revealed complex and often negative feelings about Facebook interactions," said Sandra Cortesi, Director of the Youth and Media Project at the Berkman Center and a contributor to this report. "Many teens longed for some online place that was free of 'drama' and complex audience management requirements.
Instead, some are turning to Instagram, Twitter and Snapchat to avoid these difficult peer dynamics." Teens with larger Facebook networks are more frequent users of social media sites and tend to have a greater variety of people in their friend networks -- such as teachers, coaches, celebrities and other people they have never met in person. They also share a wider range of information on their profile when compared with those who have a smaller number of friends on the site. Yet even as they share more information with a wider range of people, they are also more actively engaged in maintaining their online profile or persona. The complete findings of the study are detailed in the report, which you can download here.
<urn:uuid:dae859e1-8c35-4978-90ee-3833a0d4ce8d>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/05/23/teens-are-into-online-sharing-but-are-also-more-privacy-aware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96469
857
2.796875
3
Education technology is no longer an option–it’s a necessity. Students, both K-12 and higher ed, expect technology to be at the center of their educational experience. Whether it’s bringing games into the classroom to increase engagement or enabling Wi-Fi access on school buses so students with long commutes can work on their homework, technology is an integral part of education. The latest trend in ed tech is Smart Schools. Schools around the country are harnessing IoT to help devices connect and “talk” to each other throughout the school. However, while the trend is growing in popularity, Smart Schools are still a relatively new concept. According to a recent Extreme Networks survey of more than 600 K-12 and higher ed IT managers, 29 percent of respondents were totally unfamiliar with the concept of a Smart Internet of Things School, while only 12 percent either had implemented a Smart School plan or intended to do so in the next couple of years. The bulk of respondents were either familiar with the topic (36 percent) or were investigating what’s needed for a Smart School plan (29 percent). However, while many respondents were unfamiliar with a Smart Internet of Things School, 46 percent of IT managers expect a major impact from IoT in the next two years. IT managers also expect the IoT to offer big benefits to instruction and learning outcomes, including increased student engagement, more mobile learning, personalized instruction, improved efficiency, and reduced costs. While respondents identified clear benefits of IoT, there were also challenges that must be overcome or managed for Smart Schools to take off. Drawbacks ranged from budgetary concerns to difficulties with preserving security and data privacy. Additionally, as schools integrate new IoT technology into their existing technology infrastructure, IT managers are concerned about interoperability between existing devices, as well as different IoT devices. As with any new technology, IT managers also expressed concerns about difficulty managing the new technology. Extreme Networks also asked the respondents to identify IoT technologies that their schools have already implemented. The top 10 technologies are interactive whiteboards, camera and video capabilities, tablets and eBooks in the classroom, student ID cards, 3-D printers, smart HVAC systems, electric/lighting maintenance, temperature sensors, attendance tracking, and wireless doorlocks. The top technologies easily match up with the most important factors the IT managers identified as most essential for implementing IoT technology. IT managers identified reliable Wi-Fi, network bandwidth, teacher professional development, appropriate student devices, and network analytics–among others. Smart Internet of Things Schools may not be mainstream quite yet; however, as the technology becomes more and more affordable and as students demand more constant access, schools will need to implement integrated IoT technology inside and outside of the classroom. For more on Extreme Networks’ survey, check out its infographic.
<urn:uuid:b5a0baa6-36cc-49e5-8881-cb66049325d3>
CC-MAIN-2017-04
https://www.meritalk.com/articles/iot-is-powering-smart-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00054-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964271
568
2.734375
3
Most of us know how everyone from third parties to government agencies is finding ways to track everything we do online and offline, if we post about it, as the boundaries between offline and online privacy become increasingly blurred. The article "How a simple Gmail search could lead to an invasion of your privacy" on Business Insider included an interview with Lori Andrews, law professor and director of the Institute for Science, Law and Technology at the Illinois Institute of Technology. Andrews illustrated her point by saying: If over Gmail I say to my sister, 'I'm thinking about getting a divorce,' I could be offered less-good credit cards because people going through a divorce are less likely to pay their credit card bills. Google a medication for a friend or elderly relative, and people think you've got the disease. Regarding how third parties can misinterpret the data from our social networking pages or web searches, Andrews said, "Organizations are telling life insurance companies, 'Don't get a blood or urine test; that's costly. Look at the person's social network page.' If someone is an avid reader the notion is they're too sedentary. But people read on a treadmill! Actual research shows that people who read a lot are less likely to get Alzheimer's." She pointed out how a writer doing research might "Google" unusual topics, and entire criminal cases have been built on web searches. (Surely you've at least disabled saving your Google web history and clear your cache when you close the browser?) Her example of how misinterpreted data could be damning: as a mystery writer, she has searched the web for "the ingredients of date rape drugs." Every writer has probably conducted some sort of research that could be misjudged and potentially come back to bite them as the boundaries between our offline and online selves become increasingly blurred. Andrews has a new book, I Know Who You Are and I Saw What You Did. On her site, she gives many more examples of how "virtually every interaction a person has in the offline world can be tainted by social network information." Attorneys use Facebook and social media to vet jurors; some employers and universities ask for social network passwords. Government agencies monitor social networks, and what you say online may eventually come back to bite you. While it may seem unethical, social media "private" data is fair game for e-discovery in court. As an example, Andrews mentioned a "badly-injured woman who brought a products liability lawsuit in New York and was told by the judge that, since she was smiling in her Facebook photos, she couldn't be that badly hurt." Andrews wants to help show "how people can fight back when what they post on social networks is used against them." She believes that one of the key elements in how to "protect the privacy of your digital self" is to adopt this Social Network Constitution:
1. The Right to Connect.
2. The Right to Free Speech and Freedom of Expression.
3. The Right to Privacy of Place and Information.
4. The Right to Privacy of Thoughts, Emotions and Sentiments.
5. The Right to Control One's Image.
6. The Right to Fair Trial.
7. The Right to an Untainted Jury.
8. The Right to Due Process of Law and the Right to Notice.
9. Freedom from Discrimination.
10. Freedom of Association.
Kirkus Reviews wrote that Andrews' principles to "govern our lives online" would "protect against police searches of social networks without probable cause, require social networks to post conspicuous Miranda-like privacy warnings and set rules for the use or collecting of user information."
1. The right to a free and uncensored Internet.
2. The right to an open, unobstructed Internet.
3. The right to equality on the Internet.
4. The right to gather and participate in online activities.
5. The right to create and collaborate on the Internet.
6. The right to freely share their ideas.
7. The right to access the Internet equally, regardless of who they are or where they are.
8. The right to freely associate on the Internet.
9. The right to privacy on the Internet.
10. The right to benefit from what they create.
Regardless of whether you agree with the Social Network Constitution, keep in mind that the next time you do a web search, sign into Facebook, make an online purchase, or write an email to a lover, "someone has invaded your privacy," warned law professor Lori Andrews.
<urn:uuid:ae722c10-da28-4460-a7f6-01ecec8e5799>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223485/microsoft-subnet/social-network-constitution-to-protect-the-privacy-of-your-digital-self-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00448-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9295
1,121
2.515625
3
To new kids on the block, Asimov is mostly known for his Three Laws. The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by Asimov and later added to. The rules were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Many of Asimov's robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov's fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres. The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other; he also added a fourth, or zeroth, law to precede the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Three Laws, and the zeroth, have pervaded science fiction and are referred to in many books, films, and other media. There are two Fourth Laws written by authors other than Asimov. The 1974 Lyuben Dilov novel Icarus's Way (or The Trip of Icarus) introduced a Fourth Law of robotics:
4. A robot must establish its identity as a robot in all cases.
Dilov gives reasons for the fourth safeguard in this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings..." For the 1986 tribute anthology Foundation's Friends, Harry Harrison wrote a story entitled "The Fourth Law of Robotics". This Fourth Law states:
4. A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.
In the book, a robot rights activist, in an attempt to liberate robots, builds several equipped with this Fourth Law. The robots accomplish the task laid out in this version of the Fourth Law by building new robots who view their creator robots as parental figures. A fifth law was introduced by Nikola Kesarovski in his short story "The Fifth Law of Robotics". This fifth law says:
5. A robot must know it is a robot.
Now, isn't that selfish? We, humans, impose laws on robots. Well, that makes sense as long as they do not have their own "I". Until then, they just execute code and programs; from that point of view there is no difference between a shell script and a robot. But as soon as a robot gets its own "I", why would that robot follow human laws? Do we follow the laws of nature? Do we go against the laws of nature? Do animals follow the laws of humans? So why do poor robots need to be caged? Obviously, we are aware of what we are and how bad we can be, but we live with that. But robots could be the only other form of intelligence (note that I didn't say life) that could kick us.
And if we accept the Fifth Law, there might be an issue for those bad humans (assuming that a certain level of self-identification and evolutionary thinking would arise within robots, and there is nothing to suggest it might not happen). Of course, there is an obvious loophole in this: the definition of a robot. A robot, as perceived today, may not see itself as a robot in the future and as such may override the Three Laws. Of course, this may happen even without robots gaining any extra awareness; we humans call this being sick, while for robots it would most likely be some sort of error. The number of movies in which we fight against robots is almost endless, but the first two that come to my mind are Terminator and The Matrix. Of course, there are more movies where we fight against humans; still, we fear robots more. Why? Because they would be better than us and they would kick us badly. Besides, what bad can you say about robots? Look at us. But this is far away from any reality and not something I wanted to write about in the first place. Rather than a "what would happen if" story, let's check where we stand today when it comes to robots. For that, we need to define what a robot is. A robot is a mechanical or virtual intelligent agent that can perform tasks automatically or with guidance, typically by remote control. You will most likely be surprised to learn that many ancient mythologies include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BCE, myths of Crete that were incorporated into Greek mythology include Talos, a man of bronze who guarded the island of Crete from pirates. Of course, to some that might not be a robot, but rather an astronaut from the future or a distant star in his or her space suit (add to that exoskeletons and you can see where that leads). In more modern times, it was Leonardo da Vinci who sketched plans for a humanoid robot around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight, now known as Leonardo's robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it. In 1926, Westinghouse Electric Corporation created Televox, the first robot put to useful work. They followed Televox with a number of other simple robots, including one called Rastus, made in the crude image of a black man. In the 1930s, they created a humanoid robot known as Elektro for exhibition purposes, including the 1939 and 1940 World's Fairs. In 1928, Japan's first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura. The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. They were named Elmer and Elsie. These robots could sense light and contact with external objects, and use these stimuli to navigate. The first truly modern robot, digitally operated and programmable, was invented by George Devol in 1954 and was ultimately called the Unimate. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey to lift hot pieces of metal from a die casting machine and stack them.
Devol's patent for the first digitally operated programmable robotic arm represents the foundation of the modern robotics industry. Today, commercial and industrial robots are in widespread use, performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods. Many future applications of robotics seem obvious to people, even though they are well beyond the capabilities of robots available at the time of the prediction. As early as 1982, people were confident that someday robots would:
- clean parts by removing molding flash
- spray paint automobiles with absolutely no human presence
- pack things in boxes - for example, orient and nest chocolate candies in candy boxes
- make electrical cable harnesses
- load trucks with boxes - a packing problem
- handle soft goods, such as garments and shoes
- shear sheep
- cook fast food and work in other service industries
- serve as household robots
A literate or "reading" robot named Marge has intelligence that comes from software. She can read newspapers, find and correct misspelled words, learn about banks like Barclays, and understand that some restaurants are better places to eat than others. This was demonstrated in 2010. A year later, Apple released Siri. In 2012, at the CeBIT fair in Germany, just some nine days away, ARMAR will be presented. ARMAR, the humanoid robot, can understand commands and execute them independently. For instance, it gets the milk out of the fridge. Thanks to cameras and sensors, it orients itself in the room, recognizes objects, and grasps them with the necessary sensitivity. Additionally, it reacts to gestures and learns by watching a human colleague how to empty a dishwasher or clean the counter. Thus, it adapts naturally to our environment. At CeBIT, ARMAR will show how it moves between a refrigerator, counter, and dishwasher. A video of ARMAR is below. Various techniques have emerged to develop the science of robotics and robots. One method is evolutionary robotics, in which a number of differing robots are submitted to tests. Those which perform best are used as a model to create a subsequent "generation" of robots. Another method is developmental robotics, which tracks changes and development within a single robot in the areas of problem-solving and other functions. As robots become more advanced, eventually there may be a standard computer operating system designed mainly for robots. The Robot Operating System (ROS) is an open-source set of programs being developed at Stanford University, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others. ROS provides ways to program a robot's navigation and limbs regardless of the specific hardware involved. It also provides high-level commands for items like image recognition and even opening doors. When ROS boots up on a robot's computer, it obtains data on attributes such as the length and movement of the robot's limbs and relays this data to higher-level algorithms. Microsoft is also developing a "Windows for robots" system with its Robotics Developer Studio, which has been available since 2007. Japan hopes to have full-scale commercialization of service robots by 2025.
Much technological research in Japan is led by Japanese government agencies, particularly the Trade Ministry. A closely related subject to robots is AI (artificial intelligence). A current trend in AI research involves attempts to replicate a human learning system at the neuronal level - beginning with a single functioning synapse, then an entire neuron, the ultimate goal being a complete replication of the human brain. This is basically the traditional reductionist perspective: break the problem down into small pieces and analyze them, and then build a model of the whole as a combination of many small pieces. There are neuroscientists working on these AI problems - replicating and studying one neuron under one condition - and that is useful for some things. But to replicate a single neuron and its function at one snapshot in time is not helping us understand or replicate human learning on a broad scale for use in the natural environment. We are quite some ways off from reaching the goal of building something structurally similar to the human brain, and even further from having one that actually thinks like one. Which leads me to the obvious question: what's the purpose of pouring all that effort into replicating a human-like brain in a machine, if it doesn't ultimately function like a real brain? One of the great strengths of the human brain is its impressive efficiency. There are two types of systems for thinking or knowledge representation: implicit and explicit, sometimes described as "system 1" and "system 2" thinking. System 1, or the implicit system, is the automated and unconscious system, based in heuristics, emotion, and intuition. This is the system used for generating mental shortcuts. System 2, or the explicit system, is the conscious, logic- and information-based system, and the type of knowledge representation most AI researchers use. These are the step-by-step instructions, the system that stores every possible answer and has it readily available for computation and matching. There are advantages to both systems, depending on what the task is. When accuracy is paramount, and you need to consciously think your way through a detailed problem, the explicit system is more useful. But sometimes being conscious of every single move and thought in the process of completing a task makes it more inefficient, or even downright impossible. Now, consider a simple human action, such as standing up and walking across the room. Imagine if you were conscious (the explicit system) of every single muscle activation, shift of balance and movement, and had to judge every distance and determine every amount of force. You would be mentally exhausted by the time you crossed half the distance. When actually walking, the brain's implicit system takes over, and you stand up and walk with barely a thought as to how your body is making that happen on a physiological level. And now imagine programming AI to stand up and walk across the room. You need to instruct it to do every single motion and action that it takes to complete that task. There is a reason why it is so difficult to get robots to move as humans do: the implicit system is just better at it. The explicit system is a resource hog - especially in tasks that involve replicating actions in machines that are automated in humans. But what if you could teach AI to operate using the implicit system, based on intuition, rather than having to run through endless computations to come up with a single solution?
To get AI to use intuition-based thinking would truly bring us closer to real human-like machines, and attempts are in progress.

Speaking of AI, a piece of news recently caught my attention. Check the following picture: it is very nice, right? I will admit I could not draw it that well, though I used to draw something like that as a kid. Well, these improvisations were done by the software itself. The software is called "The Painting Fool" and its author is Simon Colton. The idea of the Painting Fool, an evolving software package which won artificial intelligence prizes in 2007, is to come up with art in a similar way to a human. The software has been in development since 2001 and has evolved hugely during that time. It has created "works" by looking at photographs and improvising around the emotion in the picture, so a disgusted face is turned into an Edvard Munch-esque painting in brown and green colors. It bases the portrait on the image provided by the emotional modeling software, and chooses its art materials, colour palette and abstraction level according to the emotion being expressed. It can also "read" blogs, Google Image searches and other internet materials to improvise a painting around a news story. It improvised a painting around a story from Afghanistan using blogs, news stories and social network posts, and created a harsh, earthy painting with a childlike aggression to it.

But here is something else coming from Asimov's kitchen: robopsychology, or AI psychology. Similar to the way we have a variety of psychology professionals dealing with the spectrum of human behavior, there is a range of specialties and duties for robopsychologists as well. Some examples of the potential responsibilities of a robopsychologist:
- Assisting in the design of cognitive architectures
- Developing appropriate lesson plans for teaching AI targeted skills
- Creating guides to help the AI through the learning process
- Addressing any maladaptive machine behaviors
- Researching the nature of ethics and how it can be taught and/or reinforced
- Creating new and innovative therapy approaches for the domain of computer-based intelligences

Andrea Kuszewski recently wrote a lengthy blog post (parts of which I have already used above) about this job, which is pretty cool. Andrea makes an interesting observation: a baby is born without a database of facts. It is in some ways a blank slate, but it also has a genetic code that acts as a set of instructions on how to learn when exposed to new things. In the same way, our AI is born completely empty of knowledge, a blank slate. We give it an algorithm for learning, then expose it to the material it needs to learn (in this case, books to read) and track its progress. If children are left to learn without any assistance or monitoring of progress, over time they can run into problems that need correcting. Because our AI learns in the same fashion, it can run into the same kinds of problems. When we notice that learning slows, or that the AI starts making errors, the robopsychologist will step in, evaluate the situation, determine where the learning process broke down, and then make the necessary changes to the AI lesson plan in order to get learning back on track. Likewise, we can also use the AI to develop and test various teaching models for human learning tasks. Let's say we wanted to test a series of different human teaching paradigms for learning a foreign language.
We could create a different learning algorithm based on each teaching model, program one into each AI, then test for efficiency, speed, retention, generalization, and so on. Indeed, it seems like a no-brainer: if you want to replicate human-like thinking, collaborate with someone who understands human thinking on a fundamental and psychological level, and who knows how to create a lesson plan to teach it.

But things are changing. The field of AI is finally, slowly starting to appreciate the important role psychology needs to play in its research. Robopsychology may have started out as a fantasy career in the pages of a sci-fi novel, but it illustrated a very smart and useful purpose. In the rapidly advancing and expanding field of artificial intelligence, the most forward-thinking research labs are beginning to recognize the important (some even say critical) role psychology plays in the quest to engineer human-like machines.

Credits: Wikipedia, Karlsruhe Institute of Technology, Monica Anderson, Andrea Kuszewski
Quick Refresher on Cross-Site Request Forgery

For those who do not know what it is, here is a quick overview of a CSRF vulnerability: HTTP is a stateless protocol, and because of this, web servers cannot identify whether a number of requests are coming from the same visitor or not. With the help of cookies, though, it is possible to track a user's behaviour, because a cookie (or session ID) is unique to the website visitor. The cookie or session ID is created when the visitor's browser sends the first request to the web server. For the duration of the session, the visitor's browser sends the cookie with every subsequent request to the server; if the cookie is not sent, the server will not recognize the visitor.

Exploiting a CSRF Attack

To successfully craft and exploit a Cross-Site Request Forgery (CSRF) attack, the attacker tricks the victim into accessing a malicious website that transparently forces the victim's web browser to perform actions, without the victim's knowledge, on a trusted website to which the victim is currently authenticated. For example:
- The victim clicks on a link which was sent to him by the attacker.
- Upon being accessed in another browser tab, the malicious website sends a request to the trusted website to which the victim is logged on, typically via an iframe or something similar.
- The browser cannot determine whether the request was triggered by the user or by the malicious website, so just like any other request, it includes the trusted website's cookies in the request.
- The trusted website identifies the user based on the cookies sent in the request and processes the request as normal.
- At this stage the attacker can impersonate the user by sending requests to the trusted website using the victim's cookie and web browser. This means that the attacker can do anything that the victim can do on that website when logged in. Therefore, if the victim is an administrator on the trusted website, the attacker can, for example, add a new user or delete data on the trusted website.

CSRF Vulnerability in Login Forms

As a means of protection against the exploitation of CSRF attacks such as the one described above, developers can programmatically create unique and unpredictable keys in forms, which are referred to as anti-CSRF tokens (a minimal sketch of this protection appears at the end of this article). However, developers often neglect to implement this protection in login forms because they don't consider "CSRF in login forms" a security issue.

Technical Details of a CSRF Vulnerability in Yandex Browser

Yandex is a Russian search engine company which, according to recent reports, runs the 4th most popular search engine, more popular even than Bing. Yandex has a number of products and services, one of which is the Yandex browser. In the remainder of this article we will explain the technical details of a CSRF vulnerability in the Yandex browser, which was identified by Netsparker researchers.

CSRF Vulnerability in the Yandex Browser Login Form

The CSRF vulnerability was found in the login screen of the Yandex Browser, which users use to log in to their Yandex account and synchronize their browser data (such as passwords, bookmarks, form values and history) between the different devices they own, such as smartphones, tablets and PCs. The Google Chrome browser has the same feature.

Exploiting the CSRF Vulnerability in the Yandex Browser

By forcing the victim to log in with the attacker's credentials, the attacker can access all of the victim's information that is saved in the browser, such as browser history, passwords, open tabs and bookmarks.
Below is a step-by-step explanation of the proof of concept of the CSRF vulnerability in the Yandex browser:

- The attacker tricks the victim into accessing a website that includes code such as the following, which triggers the browser to send a POST request that syncs the data using the attacker's credentials. The markup below repairs the snippet's broken tags; the auto-submit script is an assumption about how the original proof of concept would have fired the request:

<form method="POST" action="https://browser.yandex.com.tr/sync/" role="form">
  <input name="login" value="vvvait">
  <input name="passwd" value="n3t5p4rk3r">
</form>
<script>document.forms[0].submit();</script>

- When the victim accesses the URL and the code is executed, the victim unknowingly logs in to the attacker's Yandex account and the Yandex Browser is triggered to start synchronizing the data. Once the victim's browser synchronizes all the data, the attacker can log in to his own account and access the victim's synced data. Note that unless the victim finds out what is happening, the browser will keep on syncing data to the attacker's account, so things such as new credentials and bookmarks will be synced to the attacker's account without the victim's knowledge.

By combining the Yandex browser synchronization feature with the exploitation of the CSRF vulnerability in the login screen, the attacker managed to steal the victim's passwords, browser history, bookmarks and autocomplete info. In addition, the attacker effectively backdoored the victim's browser to keep his account synced with future updates from the user. The victim's browser will therefore continue syncing data without the victim being aware of this browser feature or of what is happening.

Disclosure timeline:
- 17th December 2015: We reported the vulnerability to Yandex via the Yandex Bug Bounty program.
- 15th January 2016: Having still not heard from Yandex, we got in touch directly with one of their engineers via Twitter and were told that an email account had been automatically created for us on the Yandex email system. In this account we found an email dated 22nd December 2015 in which a Yandex engineer told us he was unable to reproduce the issue.
- 8th February 2016: We sent a video PoC.
- 15th February 2016: We chased Yandex, since we had not heard from them.
- 2nd March 2016: Yandex replied confirming the issue and advised us that they were working on it.
- 7th March 2016: We chased them again for a status update, since we were not being kept informed.
- 16th March 2016: We chased them again for a status update.
- 29th March 2016: We chased them again for a status update.
- 12th April 2016: Yandex replied telling us that they were still working on the issue.

By mid May we noticed that the vulnerability had been addressed, though we were never updated by the Yandex team about the fix.
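For reference, here is a minimal sketch of the anti-CSRF token protection discussed above, written in Python. It is illustrative only; the key storage, session handling and function names are assumptions, not code from Netsparker or Yandex:

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; a real deployment would store this securely
# and keep it stable across restarts.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive an unpredictable, per-session token."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def validate_csrf_token(session_id: str, submitted_token: str) -> bool:
    """Reject any form post whose token does not match the session's token."""
    expected = issue_csrf_token(session_id)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, submitted_token)
```

The server embeds the token in a hidden form field when the page is rendered and calls validate_csrf_token() before processing the POST. Because a cross-site attacker cannot read the victim's token, a forged request fails validation even though the browser attaches the session cookie automatically.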
The Mars Express, which the agency launched in 2003, has begun a series of flybys of Phobos, the largest moon of Mars, that will ultimately set a new record for the closest pass to Phobos, skimming toward the surface at 50 km (about 31 miles) on March 3, the ESA said. The data collected by the satellite could help unwrap some of the mystery about the moon, the ESA said.

The origin of Phobos is a mystery, and three scenarios are possible. The first is that the moon is a captured asteroid. The second is that it formed in situ as Mars formed below it. The third is that Phobos formed later than Mars, out of debris flung into Martian orbit when a large meteorite struck the red planet, according to the ESA.

Mars has two tiny moons, Phobos and Deimos. According to NASA, the larger moon, Phobos, is a cratered, asteroid-like object that orbits so close to Mars that gravitational tidal forces are dragging it down. In 100 million years or so, Phobos likely will be shattered by the stress caused by these relentless tidal forces, the debris forming a decaying ring around Mars, according to NASA.

According to the ESA, a heavy emphasis is being placed on the closest flyby because it is an unprecedented opportunity to map Phobos' gravity field. At that range, Mars Express should feel a difference in the pull from Phobos depending on which part of the moon is closest at the time. Previous Mars Express flybys have provided the most accurate mass for Phobos, and its High Resolution Stereo Camera (HRSC) has provided the volume. Calculating the density from these figures gives a surprising result, because it seems that parts of Phobos may be hollow, the ESA stated.

The Mars Express's main mission has been to explore the planet Mars. Some of its goals have been to image the entire surface at high resolution (10 meters/pixel) and selected areas at super resolution (2 meters/pixel), and to produce a map of the mineral composition of the surface at 100-meter resolution. It also has the task of imaging the proposed landing sites for the oft-delayed Russian Phobos-Grunt mission, which is now targeted for sometime in 2011.

NASA and the European Space Agency last year signed an agreement to "cooperate on all manner of robotic orbiters, landers and exploration devices for a future trip to Mars." Specifically, NASA and ESA agreed to consider the establishment of a new joint initiative to define and implement their scientific, programmatic, and technological goals for the exploration of Mars. The program would focus on several launch opportunities with landers and orbiters conducting astrobiological, geological, geophysical, climatological, and other high-priority investigations, aiming at returning samples from Mars in the mid-2020s.

The envisioned program includes the provision that by 2016, ESA will build what it calls an Entry, Descent, and semi-soft Landing System (EDLS) technology demonstrator and a science/relay orbiter. In 2018, the ESA would also deliver its ExoMars rover equipped with drilling capability. NASA's contribution in 2016 includes a trace gas mapping and imaging scientific payload for the orbiter and the launch; in 2018, it includes a rover, the EDLS, and rockets for the launch.
I have always thought the idea of scanning for viruses to be flawed, certainly as a security measure. Yet nearly all of you reading this article will be relying on just that technology to protect your networks, PCs and laptops. The last twelve months have provided enough evidence to convince the most sceptical of analysts that the defences are broken and anti-virus scanning is just not up to the job. Slammer, Sobig, Blaster, Swen et al have all managed to wreak havoc with not only the humble home user but corporate users alike.

Research carried out by Hewlett-Packard's Matthew Williamson in the company's Bristol labs has confirmed my belief that the signature approach to virus detection is fundamentally flawed. Williamson's research, first published in New Scientist (September 2003), found that even if a signature is available from the moment a virus is released, it cannot stop the virus spreading if it propagates fast enough. "These fast viruses are what we are getting at the moment", Williamson says, adding that they are getting better at being quicker.

Government Health Warning

So why aren't the anti-virus vendors issuing government-style health warnings with their software to warn us that it might not be able to prevent virus infection? Why is it that nearly every article I read on the subject of virus defence always urges the reader to use anti-virus software? Keeping it up to date, of course! It almost feels like a conspiracy to fleece the computer user out of more and more cash.

Dear reader, the situation is even worse than you might be beginning to think. Having spoken to several organisations who became infected despite having the latest anti-virus updates deployed, it appears that in certain circumstances some products just don't work as advertised. One possible cause of this type of incident involves remote users connecting to the network: it seems that identified viruses can sometimes slip "under the wire" undetected.

There has been much debate within the anti-virus community over the past ten years about the effectiveness or otherwise of behaviour-blocking techniques as a generic protection against malicious code. The general conclusion is that behaviour blocking gives rise to too many false positives to be of use. However, I wish to contest that conclusion. There are many forms of behaviour blocker, and some go to extraordinary lengths of complexity to decide whether the code in question is malicious or not. They endeavour to analyse the suspect code, derive its programmed actions and compare these against a rule-based database to reach a conclusion.

I favour a simplistic approach. I have always maintained that your response to malicious code should be aimed at a more basic level. For most users, you can make the case that there is no reason for them to have the ability to download or copy new executable code onto their PCs. Why? For three good reasons: firstly, because of the threat of malware (all malicious code is executable by default). Secondly, because as an organisation you will want to restrict the program material in use to properly licensed software. Thirdly, so that you can properly test any new software to be run on your PCs and networks for correct operation and for conflicts with any other currently installed program. Why do we continue to allow users this freedom?
I think mainly because of the myth that without the ability to introduce new executable code, the PC or its installed software will not function correctly. Well, this myth is long out of date and needs revision. It is perfectly possible to control a network of PCs in this manner and, in doing so, drastically reduce the threat from malicious code without the overhead of having to keep this method of protection updated on a monthly, weekly, daily or even hourly basis (a minimal sketch of the idea appears at the end of this article). The "KISS" principle (Keep It Simple, Stupid) applies to computer security just as it does anywhere else.

New Improved Approach

Interested? Well, I hope so, since we have many reported incidents of attacks where networks protected with this type of defence remained intact and "clean" while others under the same administration, but without the benefit of this protection, were infected with the latest virus or worm. Routine installation of new software or software updates can be performed by the administrator with the protection in place, either on a single PC or by means of a software distribution package across the entire network.

I'm not suggesting for one moment that you throw away your anti-virus software; it is still useful as another layer of protection. What it does mean is that you will finally be using your anti-virus software in the way it was originally conceived to be used: to detect a known virus that you have either isolated or trapped. AV software was never designed to be a security barrier. As you know, it's only as good as its last update, and even then, as you have learnt here, that might not be enough.

There is a better way forward. Security, as always, is never just one product or technology but layers of defence. I strongly advise you to look at other means of protection to use in conjunction with your anti-virus software if you want to remain virus-free into the future.
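To make the proposal concrete, here is a minimal sketch of the execution allow-list idea in Python. The hash set and file paths are invented for illustration; this shows the general technique, not any particular product:

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list: SHA-256 digests of every executable the
# administrator has tested, licensed and approved for this network.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large executables don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Permit execution only for known-good binaries; anything new is blocked."""
    return sha256_of(path) in APPROVED_HASHES
```

Note the contrast with signature scanning: the default here is to deny anything the organisation has not explicitly approved, so the protection needs no daily updates to block a brand-new worm.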
In the past, IT security was focused on securing organisational perimeters, until it was realised that these were quickly breaking down due to the increasing demands of mobile workers, closer business relationships, outsourcing and other organisational challenges. This led to what is now known as IT security de-perimeterisation. At the same time, public awareness and the global regulatory environment have made the consequences of data breaches a significant business issue. Attention has quickly moved to securing the data itself by focusing on detecting and protecting data at risk. This data security is delivered via two technologies:
- Leak prevention solutions, which are used to detect whether data is at risk of leaving an organisation's control, for example via email or a USB drive.
- Loss protection, which uses encryption to protect individual data whilst, for example, on laptops or in email.
Enterprise Data Protection is a new and evolving marketplace that represents the coming together of leak and loss prevention technologies.
The Real World: Advantages of Practical Knowledge

Shortly after the dawn of the IT certification era, these new methods of assessing technology skills were dogged by charges of insufficient reliability and security. With regard to the former, critics maintained that anyone could get a certification if they were good at taking tests and had the right study materials, regardless of their actual technical expertise. As for the second line of reasoning, it was held that the exam environments were too easily infiltrated for dastardly purposes, and that brain dumps (Web sites that post the questions and answers to exams) made it easy for know-nothing candidates to pass.

Putting the veracity of those arguments aside, one of the more welcome developments for many IT certifications in the past few years has been the introduction of hands-on components that simulate real-world environments. This move has helped address some of the frequent knocks on credentialing programs, such as the notion that they don't do anything except assess candidates' test-taking abilities, or only measure their capabilities in best-case scenarios, or, worst of all, can give undeserving individuals an unearned credential through cheating.

More than just refuting alleged shortcomings, though, the labs and simulations that comprise performance-based testing further demonstrate that participants' knowledge of the subject matter will indeed translate to effectiveness on the job. They also offer a greater rationale for using hands-on methods to learn about various technologies and how they work. One of the best ways to learn is by doing, and if the assessments are performance-based, then it stands to reason that a good part of the preparation will be too.

What's more, training and testing experiences that simulate actual working environments have facilitated certification's upward expansion into more advanced technical realms. To be sure, there is a place for foundation-level certifications that rely mostly or completely on multiple-choice, text-based exams, and many IT professionals have used them very effectively in their career progression. However, rather than topping out in their professional development via IT credentials that can only go so far, sophisticated performance-based courses and exams allow professionals to push themselves further, into levels of depth and complexity associated with four-year undergraduate and even postgraduate degrees found at colleges and universities.

Another aspect of the performance-based movement has been increasing activity around apprenticeships and internships. In these circumstances, certification programs will line up actual work experiences for the candidates, intended to give them hands-on training and a look at how the concepts and procedures that underpin technology are applied on the job. One notable example of this is the National IT Apprenticeship System (NITAS) program, which was developed by CompTIA and the U.S. Department of Labor. NITAS combines IT certification and apprenticeships into a single development roadmap that serves as a guide for professionals getting started in the technology industry. To ensure knowledge transfer has taken place, CompTIA also tracks the skill levels of participants.

To be sure, the performance-based methodologies rolled out by certification programs such as Cisco, Microsoft, Novell, the SANS Institute and many others are not a development panacea.
There are just too many different kinds of technologies, companies, situations and problems to be accounted for in a single credential, no matter how much experience is incorporated into it. And criticisms around relevancy and security, while perhaps somewhat diminished, remain. Yet these attempts to replicate real-world conditions are undoubtedly a boon to those who value effectual testing and training. The certification programs that have included experiential learning in their curricula deserve commendation for raising the standards of IT credentials by making them more applicable to the industry.
Target Cyber Criminals to Stop Cyber Crime

Focus on the people, then the technology. Cyber criminals, threat actors, hackers: they know cyber crime pays. Your data and technology, stored in networks and the cloud, are vulnerable. And although the tactics, targets and technology of attacks are all important, your most powerful defense against cyber crime is to understand threat actors. To effectively prevent and respond to cyber crime, you need to establish the motivations and methodology of the threat actors. Here are two ways advanced cyber attacks work:

Targeted – Malware, often delivered through techniques such as spear phishing, is used to reach a specific machine, individual, network, or organization. This malware tends to be signature-less, or otherwise evades antivirus and other traditional cyber security efforts using the criminal's knowledge of the target.

Persistent – Advanced cyber attacks are initiated via a series of email, file, web, or network actions. These individual actions might remain undetected by antivirus or other traditional defenses, or be ignored as harmless or low-priority. However, the malware becomes entrenched and pervasive, and culminates in a devastating attack.

Malware that uses both of these methodologies simultaneously presents an advanced persistent threat, or APT. And any organization in any industry can be a target.
This client came to Gillware's data recovery lab after an accidental reformat cut them off from their collection of family photos and documents. They had wanted to free up space on their drive, and thought they had their files backed up on their computer. Only once they'd continued to use the drive for a few hours after the reformat did they realize they did not have as much data backed up as they'd assumed. Fortunately for our client, situations like these are bread and butter for our data recovery technicians.

Hard Drive Reformat Case Study: A Modern Palimpsest
- Drive Model: Western Digital WD10SPCX-60KHST0
- Drive Capacity: 1 TB
- Operating System: Windows
- Situation: Hard drive was accidentally reformatted and briefly used
- Type of Data Recovered: Photos and documents
- Binary Read: 100%
- Gillware Data Recovery Case Rating: 9

The More Things Change…

Here is a little riddle: how is a reformatted hard drive like the Codex Nitriensis? Believe it or not, the concept of reformatting a data storage device predates hard disk drive technology, and by a lot longer than you'd expect. For thousands of years, humans have stored data by writing it down. (Of course, we still do this today, but not as often as our ancestors did.) In antiquity, material to write on was often scarce or expensive to produce, and had to be rationed carefully and reused. A palimpsest is a manuscript that has been erased so it could be reused. Throughout history, writers have taken used scrolls or manuscripts, scraped or washed the existing text off of them, and written new text over them. Archaeologists and historians have, thanks to technological advancement over the past centuries, developed ever-better tools for deciphering the erased text in these palimpsests.

A reformatted hard drive is simply a modern palimpsest. You take a hard drive full of data, make it appear blank, and start reusing it. But the old data lives just beneath the surface, out of sight. And just as historians and archaeologists recover text from ancient manuscripts, data recovery experts can examine reformatted hard drives and decipher the data that used to live within them.

What do you do when you've accidentally formatted a hard drive?

It's hard to inadvertently reformat a hard drive; accidental reformats are rarely the result of a slip of a finger. If a hard drive is experiencing intermittent firmware or connectivity issues, it may show up as blank and prompt for a reformat. When this happens, it's easy to hit the wrong key and tell your computer to format the drive. In these situations, the user usually realizes their mistake right away.

But most accidental reformats are intentional reformats that were done by mistake. You may format your hard drive to free up space, assuming you've backed up your important data, only to realize some of your data didn't make it. Or maybe you have several hard drives, and you meant to format one drive but plugged in a different one by mistake. In these situations, by the time the user tells the drive to format itself, it's already too late. Once begun, the reformat can't be stopped or undone. But the user might not realize the error of their ways right away. It might hit them a few minutes, hours, or days later. During this time, the user could be writing new data to the drive, which can add a layer of complexity to data recovery. If they try to install and run file recovery software, they might just make the problem worse.
When you need an expert to recover data from a formatted hard drive, Gillware has your back. Our reformatted hard drive recovery specialists know the ins and outs of hard drive data storage better than anyone else. Gillware's logical hard drive recovery technicians know where your data goes when you accidentally reformat a hard drive, as well as how to retrieve that data.

How Does Gillware Recover Files From a Formatted Hard Drive?

Your hard drive has filesystem metadata that defines the size and boundaries of its partitions and points to the locations of all of your files. When you reformat your hard drive, you write new metadata to the drive to define a new filesystem. But instead of erasing everything that used to exist on the drive, you just cover it up and close the paths leading to where your data lives. Immediately after reformatting your drive, most (if not all) of your data is still perfectly intact. When you write new data to the drive, though, this data falls on top of some of the old data. Keep using the reformatted hard drive and the old data will gradually vanish, bit by bit.

To recover data from a formatted hard drive, our technicians have to be prepared to deal with all of the ugly possibilities of heavy corruption and loss of the old filesystem metadata that can result from an accidental reformat. Many readily available file recovery software tools can't deal with even slightly complicated file recovery situations, let alone worst-case scenarios. Our hard drive reformat recovery specialists use our own proprietary imaging and analysis tools to recover files from formatted hard drives (a toy sketch of the kind of signature scanning involved appears at the end of this case study).

Hard Drive Reformat Recovery Results

Our hard drive reformat specialists successfully recovered the vast majority of the client's data from this modern palimpsest. With the help of our imaging and analytical tools, our logical data recovery technicians could uncover the old filesystem as it existed prior to the accidental reformat. The filesystem metadata, including file definitions, pointed to all of the client's critical data. The vast majority of the data was fully functional; only a small fraction had been overwritten by the new files the client had created. We rated this hard drive reformat case a 9 on our ten-point rating scale.

This client was fortunate that the vast majority of their data was intact. But the actions you take after an accidental reformat can have a big impact on how much of your data we can recover, and which actions you take often depend on why you reformatted the drive in the first place. All too frequently, our engineers see clients who reformatted their drives and then kept using them until they realized they were missing important files, unaware that they were compromising the integrity of their old data. This can have adverse effects on data recovery. When you've reformatted your hard drive and lost data, you should bring the drive to a data recovery professional as soon as you notice that you've erased critical data.
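As a toy illustration of the signature scanning mentioned above, the sketch below walks a raw drive image looking for leftover NTFS master file table records, which begin with the ASCII signature "FILE". This is a simplification under assumed conditions (an NTFS volume, 1024-byte records aligned to the image start, no proprietary tooling); it is not Gillware's actual process:

```python
from pathlib import Path

RECORD_SIZE = 1024    # Typical NTFS MFT record size.
SIGNATURE = b"FILE"   # Magic bytes at the start of an MFT record.

def find_orphaned_records(image_path: Path) -> list:
    """Return byte offsets of candidate MFT records left over from the old
    filesystem. Walking record-aligned offsets keeps the scan fast."""
    offsets = []
    with image_path.open("rb") as img:
        offset = 0
        while True:
            chunk = img.read(RECORD_SIZE)
            if not chunk:
                break
            if chunk.startswith(SIGNATURE):
                offsets.append(offset)
            offset += RECORD_SIZE
    return offsets

# Usage (hypothetical image file):
# for off in find_orphaned_records(Path("drive.img")):
#     print(f"possible old file record at byte {off}")
```

Each surviving record still names a file and points at its data runs, which is why recovery is possible at all, and why every new write after a reformat risks paving over exactly this kind of leftover metadata.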
Here are some of the different types of entrepreneurs.

The Millennial Entrepreneur

Millennials are often criticized for being lazy or entitled, or for having a poor work ethic. The truth is, the millennial work ethic is different, not necessarily better or worse. Millennials are shying away from the idea of punching in at 9 and punching out at 5; rather, they view work as more task- or project-based. In true entrepreneurial fashion, millennials are the architects of the gig economy. Having seen their parents get laid off from companies they may have worked at for a dozen years, they are far less invested in loyalty than previous generations. They may also have had a more challenging start to their career, having to compete with far more experienced workers for their first job. Faced with poor job prospects and a mountain of student loan debt, and encouraged by rapidly improving technology, many millennials took advantage of the opportunity to forge their own path.

The Millennial Entrepreneur personifies outside-the-box thinking. When jobs were hard to come by, millennials created their own. When they couldn't get traditional funding, they came up with crowdfunding. They embrace the technology that lets them do more. The Millennial Entrepreneur may take lots of risks. Being young, they may feel as though they are invincible or have little to lose. Seeking advice from an experienced mentor may help Millennial Entrepreneurs learn the right risks to take, as well as how to mitigate others with business liability insurance, for example.

The Go-Getter Entrepreneur

While entrepreneurs are, by definition, self-motivated, the Go-Getter Entrepreneur takes it to a whole new level. The Go-Getter Entrepreneur hits the ground running as soon as they get out of bed. They see every problem or roadblock as a challenge, and they may often be juggling more than one business at a time. The Go-Getter often started out as the kid who sold rocks at the beach, or who was walking other people's dogs for fun and profit at eleven years old.

The Go-Getter knows that, while planning and preparation are important, it's not necessary to have every detail buttoned down before acting. A Go-Getter often learns by doing, and may be willing to start a business in an industry they have little experience in. The Go-Getter can sometimes lose interest in a venture when it comes time to pay attention to the details. Outsourcing certain functions like accounting or social media management is a good way to keep the Go-Getter focused on what excites them.

The Social Entrepreneur

The Social Entrepreneur, according to the Schwab Foundation for Social Entrepreneurship, "drives social innovation and transformation in various fields including education, health, environment and enterprise development." The Social Entrepreneur applies the attitude and motivation of an entrepreneur to social issues, whether through a non-profit or a for-profit entity. Some Social Entrepreneurs may employ a non-profit model that recovers some costs through the sale of goods and/or services, or they may create a for-profit enterprise whose primary goal is to provide a socially responsible or beneficial product or service, and whose secondary goal is to generate financial returns.

Social Entrepreneurs are changing the way social issues are dealt with and altering the nature of philanthropy. As such, they may need to educate those who have years of experience in the non-profit sector as well as the donors and benefactors they try to woo.
The Reckless Entrepreneur

Entrepreneurs are risk takers by definition. Some, however, raise the bar on risk taking to the point where they become reckless. The line between the calculated risk-taking that can help a business thrive and the recklessness that can sink it can be blurry. The Reckless Entrepreneur may eschew security in their private life as well. Participating in extreme sports or other thrill-seeking activities, especially without taking the necessary precautions, is a good indicator of recklessness. Another sign is going forward with a significant decision, such as introducing a new product line or securing funding, without performing the due diligence necessary to make sure the move is prudent.

The Reckless Entrepreneur doesn't consider the worst-case scenario. One way to spot a Reckless Entrepreneur is by their refusal to protect themselves against risks they can't control but can mitigate. An entrepreneur who insists they don't need liability insurance because "I'll never get sued" is reckless. Making sure your company is protected with commercial general liability insurance just makes good business sense.

Which type of entrepreneur are you? Or are you a combination of types? Tell us in the comments below.
Greek mythology includes tales of creatures being lifted from the surface of the Earth and set in the night sky as constellations. By analogy, in the world of Cloud Computing, data and software are being swept up from desktop Personal Computers and server rooms into the computing Cloud. As this migration to the Cloud progresses rapidly, more and more users and developers are showing an interest in getting into the Cloud. Cloud Computing has been variably described as on-demand computing, software as a service, the Internet as platform, and so on. When you put software into the Cloud, the components of the software reside on Cloud servers possibly scattered across continents.

The focus of innovation, indeed, seems to be ascending into the Cloud. Some substantial fraction of computing activity is migrating away from the desktop and the corporate server room. The change is affecting all levels of the computational ecosystem, from casual user to software developer, IT manager, and even hardware manufacturer. The major appeal of Cloud Computing is the ability to "liberate" programs and data from local computing centers. The locus of computation is shifting, with functions migrating outward to distant data centers reached through the Internet. The Cloud is becoming the primary means of collaboration and sharing.

With traditional computing, total control comes at a hefty price. Software must be installed and configured, then updated with each new release. The computational infrastructure of operating systems and low-level utilities must be maintained. Every update to the operating system sets off a series of subsequent revisions to other programs. Outsourcing computation to a Cloud computing and hosting service provider eliminates nearly all of these concerns. Cloud computing also offers end users advantages in terms of mobility and collaboration.

Software sold or licensed as a product to be installed on the user's hardware must be able to cope with a baffling variety of operating environments. In contrast, software offered in the cloud is developed, tested, and run on a computing platform of the vendor's choice. This hastens the provisioning of a greater number of software programs to a larger number of users.

Although the new model of Cloud computing has neither hub nor spokes, it still has a core and a fringe. The aim is to concentrate computation and storage in the core, where high-performance machines are linked by high-bandwidth connections and all of these resources are carefully managed. At the fringe are the end users who make the requests that initiate computations and who receive the results.

The kinds of productivity applications that first attracted people to personal computers years ago have now appeared as software services through the cloud; MS Office hosting in the cloud is an example. Software for major business applications (such as QuickBooks, ACT!, Peachtree, MS SQL Server, etc.) has generally been run on corporate servers, but several companies now provide it as an on-demand service.

Outsourcing to a well-built data center is very desirable. A Cloud computing vendor offers data storage priced by the gigabyte-month and computing capacity by the CPU-hour, and both kinds of resources expand and contract according to need. For most Cloud computing applications, the entire user interface resides inside a single window in a Web browser. For those deploying software out in the cloud, scalability is a major benefit.
A Cloud computing vendor provides resources in such a way that a program continues running smoothly even as the number of users grows. Cloud servers respond to hundreds or thousands of requests per second and coordinate information coming from multiple sources. The pattern of communication is many-to-many, with each server talking to multiple clients and each client invoking programs on multiple servers.
Once again, big data comes to the rescue ... for identifying birds. A new app, Merlin, published by the Cornell Lab of Ornithology, uses a database of 70 million bird sightings from the citizen-science eBird project. This data is used to determine which bird species are likely to be in your area. Next, based on insights into how people see and describe birds ("By participating in online activities to describe birds based on photos, [bird watchers] contributed more than three million data points that Merlin uses to deduce which birds are viewed based on people's description of color, size, and behavior"), you are asked five questions about the bird's size, its main colors, where it's feeding, and so on, and voila! Your sighting is identified.

A selection of Merlin app questions to identify your bird sighting

To improve accuracy, user feedback is used so that mistakes and unexpected sightings are incorporated into the data, providing a source of constant improvement. The app also displays professional photos of birds (some 1,400 are currently available) along with descriptive text, sound samples, and range maps.

This is a brilliant idea and beautifully executed. And as an example of capitalizing on data resources and turning them into effective, useful information, it's outstanding.
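As a rough sketch of how this kind of identification can work, the snippet below filters a species list by local sighting frequency and the user's answers. The records and scoring weights are invented for illustration; Merlin's actual model is built from the eBird and crowd-description data described above:

```python
# Hypothetical records: (species, local sighting frequency, attributes).
SPECIES = [
    ("Northern Cardinal", 0.30, {"size": "small", "color": "red", "feeding": "ground"}),
    ("Blue Jay",          0.25, {"size": "medium", "color": "blue", "feeding": "tree"}),
    ("American Robin",    0.40, {"size": "small", "color": "orange", "feeding": "ground"}),
]

def rank_candidates(answers: dict) -> list:
    """Score each species by local frequency times the fraction of the
    user's answers it matches, then return the names best-first."""
    scored = []
    for name, frequency, traits in SPECIES:
        matches = sum(1 for key, value in answers.items() if traits.get(key) == value)
        score = frequency * (matches / len(answers))
        if score > 0:
            scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)]

print(rank_candidates({"size": "small", "color": "red", "feeding": "ground"}))
# -> ['Northern Cardinal', 'American Robin']
```

The design point is the combination: location data prunes the candidate set before the questions are even asked, which is why five simple answers are usually enough to identify the bird.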
Scientists at the Lawrence Livermore National Laboratory say they have tested technology that could eventually help them monitor and control space traffic. The driving idea behind the project is to help keep satellites and other spacecraft from colliding with each other or with other debris in low Earth orbit.

+More on Network World: Space station dodged space debris 16 times in 15 years+

Recently, the Lawrence Livermore team said they used a series of six images, taken with a ground-based payload over a 60-hour period, to prove that it is possible to refine the orbit of a satellite in low earth orbit. Specifically, the Livermore team refined the orbit of the satellite NORAD 27006, based on the first four observations made within the initial 24 hours, and predicted NORAD 27006's trajectory to within less than 50 meters over the following 36 hours. Having refined the trajectory of NORAD 27006 with their ground-based payload, the team believes they will be able to do the same thing for other satellites and debris once their payload is orbiting Earth.

The technologies used to observe NORAD 27006 and refine its orbit are being developed for Livermore's still-developing Space-Based Telescopes for Actionable Refinement of Ephemeris (STARE) mission. Ultimately, STARE will consist of a constellation of nano-satellites in low earth orbit that will provide data and work to refine the orbits of satellites and space debris to less than 100 meters, the team stated.

According to the Livermore web site, "Each nano-satellite in the constellation is capable of recording an optical image of space objects (debris or assets) at various range and relative velocities as scheduled by the ground infrastructure based on their closest approach distance (typically less than 1000m). The ground infrastructure processes the data received from multiple observations of the objects and reduces the positional uncertainty on the probability of collision to a level typically less than 100m, warranting taking actions such as moving assets. For an 18 nano-satellite constellation, STARE has the capability to reduce the collision false alarm rate by 99% up to 24 hours ahead of closest approach."

In a 2011 paper about STARE, the Livermore scientists wrote: "STARE is a proof-of-concept mission whose goal is to improve upon the orbital ephemerides [the position of astronomical objects] obtained by ground based instruments for a small population of satellites and debris to the level where a predicted collision is actionable. To do this, two Cubesat satellites will be launched into a 700 km polar orbit where they will image other satellites at optical wavelengths during closest approach. The images will then be processed along with Global Positioning Service (GPS) data to refine the position and trajectory of the targets. If successful, the mission will pave the way for a small constellation of similar satellites capable of refining ephemerides for all of the satellites and debris pieces involved in close approaches."

According to Livermore scientists, accurately predicting the location of a satellite in low earth orbit at any given time is hard because of the uncertainty in the quantities needed for the equations of motion. Atmospheric drag, for instance, is a function of the shape and mass of the satellite as well as the density and composition of the unstable atmosphere.
These uncertainties and the incompleteness of the equations of motion lead to a quickly growing error in the position and velocity of any satellite being tracked in low earth orbit. To account for these errors, the US Space Surveillance Network (SSN) must repeatedly observe the set of nearly 20,000 objects it tracks; even so, the positional uncertainty of an object is about 1 kilometer. This lack of precision leads to approximately 10,000 false alarms per expected collision. With these large uncertainties and high false alarm rates, satellite operators are rarely motivated to move their assets after a collision warning is issued, the team stated.

The STARE mission aims to reduce the 1-kilometer uncertainty down to 100 meters or smaller, which will in turn reduce the number of false alarms by roughly two orders of magnitude, the Livermore team stated.
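The "two orders of magnitude" figure is consistent with simple geometry: if a conjunction is flagged whenever the predicted miss distance falls within the positional uncertainty, the number of alarms scales roughly with the cross-sectional area swept by that uncertainty. A back-of-the-envelope check, noting that the quadratic scaling is an assumption for illustration rather than Livermore's stated model:

```python
uncertainty_now = 1000.0   # meters, current SSN positional uncertainty
uncertainty_stare = 100.0  # meters, STARE's target

# Alarms scale roughly with the area swept by the uncertainty radius.
reduction = (uncertainty_now / uncertainty_stare) ** 2
print(f"false alarms reduced by a factor of ~{reduction:.0f}")
# ~100, i.e. two orders of magnitude, matching the quoted 99% cut
```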
Minecraft and Watson Team Up to Teach
By Mike Vizard | Posted 2016-03-09

The gamification of learning has the potential to fundamentally change how complex information is taught across a broad range of e-learning environments.

In a sign of how gaming is about to transform the way we learn, a small team of high school students at Connally High School in Austin, Texas, hooked up an instance of Microsoft's Minecraft video game to the IBM Watson artificial intelligence platform to teach medical students how viruses function. Working with these students, teacher David Conover got his class to use a medical corpus created by Tufts University to transform Minecraft into a teaching tool that uses Watson, running as a cloud service, to answer students' questions.

A self-described serial entrepreneur, Conover says he found himself teaching video game design at the high school. To get the students interested in developing the application, he figured that a familiar Minecraft gaming construct would have the most appeal. So, over a summer, he worked with six students on a full-time basis to develop the application. Conover says that the students spent every free moment they had volunteering to create the app. For teens who normally eschew anything to do with school, that project represented a major breakthrough.

Using Natural Language to Ask Questions

IBM got involved in this project when Conover and the students discovered the application programming interfaces (APIs) that IBM makes available on the Watson cloud service. Without much effort, the students made it possible to type a question in natural language in Minecraft that would be answered by Watson. That allows the application to easily augment the information the teens spent all summer inputting into the original app. Conover says the students now plan to tap into the voice capabilities of Watson, and also to start building other gaming applications to teach other subjects. For example, they might cover topics such as the best ways to optimize traffic patterns in Austin or what would be involved in making a journey to Mars.

In the meantime, gamification is all the rage in education circles. That means it's only a matter of time before this technology is applied to all forms of training, both in and out of the classroom. The challenge now is figuring out exactly what gaming metaphor best fits the type of material being taught. Once that's decided, cognitive computing platforms should provide all the data needed to drive that application in a place that is, quite literally, a simple API call away.
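The integration pattern can be sketched independently of the game: a chat listener forwards a player's question to a hosted question-answering service over HTTP and posts the reply back into the game. Everything below (the endpoint URL, payload shape and function names) is a hypothetical illustration of that pattern in Python; it is not the students' code, which would have run as a game mod, and not Watson's actual API:

```python
import json
import urllib.request

QA_ENDPOINT = "https://example.com/qa"  # hypothetical Watson-style Q&A service

def ask_service(question: str) -> str:
    """POST the player's question to the Q&A service and return its answer."""
    payload = json.dumps({"question": question}).encode("utf-8")
    request = urllib.request.Request(
        QA_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["answer"]

def on_chat_message(player: str, message: str, send_to_game) -> None:
    """Game hook: treat chat lines starting with '?' as questions for the AI."""
    if message.startswith("?"):
        answer = ask_service(message[1:].strip())
        send_to_game(f"[Watson] {player}: {answer}")
```

The appeal of this design is that the game stays thin: all of the natural-language understanding lives behind the cloud endpoint, so the same hook can be pointed at new subject matter without changing the game code.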
Google Uses AI To Cool Data Centers and Slash Electricity Use
By Dan Heilman / CIO Today. Updated July 20, 2016.

Data centers are notorious for how much energy they require. Now, however, Google thinks it has found a way to mitigate that problem. Over the last few months, the tech giant has been using an artificial intelligence (AI) system built by DeepMind to control certain parts of its data centers, making its vast server farms more environmentally friendly. DeepMind is the British AI company that Google bought for more than $600 million two years ago. The company is famous for developing AI technology that was able to beat the world's best player of the board game Go. Although the company has worked on projects for various U.K. healthcare companies, it hasn't yet turned a profit for Google.

40 Percent Cut in Energy Use

The use of the AI technology is "a phenomenal step forward" in helping to cut energy usage in data centers, DeepMind research engineer Rich Evans and Google data center engineer Jim Gao said on Google's blog. In fact, the electricity consumption of data centers is on pace to account for 12 percent of global electricity consumption by 2017, according to predictions in a 2015 report from Greenpeace. Some of the world's biggest tech companies, including Apple, Amazon and Google, run some of the biggest data centers.

The DeepMind AI system has been used so far to curb the energy consumption of the cooling units in Google's data centers, which stop the company's servers from overheating. The coolers' energy usage has been cut by up to 40 percent so far, which Google said has helped one of its data centers achieve a 15 percent reduction in power usage effectiveness (PUE). PUE is the ratio of the total building energy used by equipment such as pumps, chillers and cooling towers to the energy used by IT equipment such as Google's servers. Google said it also plans to use DeepMind's AI across other parts of its data center infrastructure.

All About Optimizing

How did Google do it? The energy reduction was achieved by training DeepMind's self-learning algorithms to predict how hot the data centers were going to get within the next hour, according to Evans and Gao. Armed with those predictions, the coolers could be run no harder than necessary to keep the servers sufficiently cool. Google's data centers are used to run such services as Search, YouTube and Gmail.

Using a system of neural networks that zero in on different operating scenarios and parameters within the data centers allows DeepMind to build a more efficient and adaptive framework for understanding data center dynamics and optimizing efficiency, according to Evans and Gao. "The implications are significant for Google's data centers, given its potential to greatly improve energy efficiency and reduce emissions overall," Evans and Gao said. "This will also help other companies who run on Google's cloud to improve their own energy efficiency."

Similar deployments of AI are becoming more and more common, Dave Schubmehl, research director for IDC's Cognitive Systems and Content Analytics division, told us today. "It's all about optimization," said Schubmehl. "It can be used in supply logistics, shipping logistics and dynamic pricing, in addition to keeping an industrial area at the right temperature. We'll be seeing AI being applied to a lot more areas."
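The control idea (predict first, then actuate no harder than needed) can be sketched in a few lines. The stand-in model, thresholds and sensor names below are placeholders for illustration; DeepMind's actual system reportedly uses ensembles of deep neural networks over many more sensor inputs:

```python
TEMP_LIMIT_C = 27.0                          # hypothetical safe inlet temperature
COOLING_LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]   # fraction of full cooling power

def predicted_temp(sensors: dict, cooling: float) -> float:
    """Stand-in for the learned model: forecast next-hour temperature
    given current sensor readings and a candidate cooling level."""
    return sensors["temp_c"] + sensors["load_kw"] * 0.01 - cooling * 5.0

def choose_cooling(sensors: dict) -> float:
    """Pick the lowest cooling level whose forecast stays under the limit."""
    for level in COOLING_LEVELS:
        if predicted_temp(sensors, level) <= TEMP_LIMIT_C:
            return level
    return COOLING_LEVELS[-1]  # fail safe: run cooling flat out

print(choose_cooling({"temp_c": 26.0, "load_kw": 400.0}))  # -> 0.6
```

The savings come entirely from the forecast: without it, the safe choice is to overcool all the time; with it, the controller can ride closer to the limit and spend only the energy the next hour actually requires.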
Many apps that you create will need to manipulate data in some way. A contacts list app might need to load existing contacts from a data file and save new contacts in the file. A sports app might track team statistics in a data file and update these statistics after each game.

The BlackBerry 10 Native SDK supports a wide range of libraries that you can use to manipulate data, like SQLite, JSON_parser, and libxml2. Cascades also provides its own set of data management APIs to help remove some of the complexity of storing and modeling data.

For JSON data, Cascades provides its own set of APIs that you can use to parse and store the data. For SQL data, the SQLite library provides a serverless, transactional SQL database engine that you can include in your apps; to remove some of the complexity of creating a SQLite database, you can use the Cascades APIs. To learn more about using SQL in Cascades, see SQL data. For XML data, as with the other types mentioned above, Cascades provides its own set of APIs to make managing the data easier. For more information about managing XML data with Cascades, see XML data.

File system access

Before you can store or retrieve data from the device, you should first make yourself familiar with the architecture of the device file system. Applications have access to their own working directories as well as a shared directory that all apps can access. For more information about the file system, see File system access.

Cascades data management APIs

The Cascades framework uses a modular approach to store, access, and display data in your apps. This approach makes it easy for you to store different types of data, organize and model the data in different ways, and display the data with different visual styles. The following components interact when you manipulate data in your app:

Data: This component represents the raw data for your app. The data could be a list of contact entries, a set of financial records, or a group of game objects. You can access data that you package with your app, but you can also create new data dynamically as your app runs. The format of this data can vary depending on your needs, and Cascades provides classes that help you manage three common data formats: JSON, SQL, and XML.

Data access: This component lets you access the external data and manipulate it in your app. You can load data files, create new files and save data in them, and handle any errors that might occur during these operations. Then, you can add the data to a data model to organize it before it's displayed. You can use the JsonDataAccess, SqlDataAccess, and XmlDataAccess classes to load and save data in JSON, SQL, and XML format, respectively. You can also access SQL data asynchronously using an SqlConnection. Supporting classes, such as DataAccessError, give you more information about errors so you can handle them appropriately.

Data source: This component is designed specifically as an easy-to-use adapter in QML between external data and UI components. You can use the DataSource class to declare the properties of the external data that you want to access. This data can be SQL, JSON, or XML data that's stored locally, or it can be JSON or XML data feeds that are accessed remotely. You can also use the DataSource class to control when and where the data is loaded.

Data model: This component lets you organize and sort your data, and then provide the data to a list view to display it.
For example, you can use a GroupDataModel to sort a list of employees by last name or employee number. Then, you can associate this data model with a list view, and your data is organized and displayed in the way you specified. To learn more about data models, see Data models.

List view: This component determines how the data from the data model is displayed in your app. Each entry in the data model becomes an item in the list, and you can specify how each item should appear visually. You might represent each item using a simple Label, or you might represent each item using multiple controls that you define yourself. A ListView lets you handle all visual aspects of the list, and is separate from the data and the data model that's used to provide the data to display. To learn more about list views, see Lists.

Large data sets

If you're using a ListView to display information from a data source, you must consider how the performance of your app is affected by the amount of data. Small amounts of data can be loaded as a complete set during initialization with little to no performance impact. Large sets of data must be managed differently to avoid start-up delays, slow scrolling, and other indicators of poor performance. For more information about managing large amounts of data, see Large data sets.

Persistent data

Persistent data allows you to save app settings to the persistent store and load them when they are needed. The persistent store lets you save objects to persistent memory, and these objects are retained after a device is restarted. For more information about the persistent store, see Persistent data.
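The Cascades classes named above (JsonDataAccess, SqlDataAccess, GroupDataModel, ListView) are the idiomatic route in C++ and QML. As a lower-level illustration of the SQLite engine that the Native SDK also ships, here is a minimal sketch using SQLite's plain C API; the file path and table are invented for the example, and error handling is kept to a bare minimum.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db = NULL;
        char *errmsg = NULL;

        /* A real app would build this path from its own data directory. */
        if (sqlite3_open("data/contacts.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Create a table (if needed) and insert one row. */
        const char *sql =
            "CREATE TABLE IF NOT EXISTS contacts (name TEXT, phone TEXT);"
            "INSERT INTO contacts VALUES ('Ada', '555-0100');";
        if (sqlite3_exec(db, sql, NULL, NULL, &errmsg) != SQLITE_OK) {
            fprintf(stderr, "exec failed: %s\n", errmsg);
            sqlite3_free(errmsg);
        }

        /* Read the rows back with a prepared statement. */
        sqlite3_stmt *stmt = NULL;
        if (sqlite3_prepare_v2(db, "SELECT name, phone FROM contacts", -1,
                               &stmt, NULL) == SQLITE_OK) {
            while (sqlite3_step(stmt) == SQLITE_ROW) {
                printf("%s %s\n",
                       (const char *)sqlite3_column_text(stmt, 0),
                       (const char *)sqlite3_column_text(stmt, 1));
            }
            sqlite3_finalize(stmt);
        }

        sqlite3_close(db);
        return 0;
    }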
OpenCL (the Open Computing Language) is under development by the Khronos Group as an open, royalty-free standard for parallel programming of heterogeneous systems. It provides a common hardware abstraction layer to expose the computational capabilities of systems that include a diverse mix of multicore CPUs, GPUs and other parallel processors such as DSPs and the Cell, for use in accelerating a variety of compute-intensive applications. The intent of the OpenCL initiative is to provide a common foundational layer for other technologies to build upon. The OpenCL standard will also have the effect of coordinating the basic capabilities of target processors. In particular, in order to be conformant with OpenCL, processors will have to meet minimum capability, resource and precision requirements. This article reviews the organizations and process behind the OpenCL standard proposal, gives a brief overview of the nature of the proposal itself, and then discusses the implications of OpenCL for the high-performance software development community.

The Khronos organization supports the collaborative development and maintenance of several royalty-free open standards, including OpenGL, OpenGL ES, COLLADA, and OpenMAX. OpenCL is not yet ratified, but the member companies involved have already arrived at a draft specification of version 1.0, which is currently under review. The OpenCL effort was initiated by Apple, and the development of the draft specification has included the active involvement of AMD, ARM, Barco, Codeplay, Electronic Arts, Ericsson, Freescale, Imagination Technologies, IBM, Intel, Motorola, Movidia, Nokia, NVIDIA, RapidMind, and Texas Instruments.

The OpenCL specification consists of three main components: a platform API, a language for specifying computational kernels, and a runtime API. The platform API allows a developer to query a given OpenCL implementation to determine the capabilities of the devices that particular implementation supports. Once a device has been selected and a context created, the runtime API can be used to queue and manage computational and memory operations for that device. OpenCL manages and coordinates such operations using an asynchronous command queue. OpenCL command queues can include computational kernels as well as memory transfer and map/unmap operations. Asynchronous memory operations are included in order to efficiently support the separate address spaces and DMA engines used by many accelerators.

The parallel execution model of OpenCL is based on the execution of an array of functions over an abstract index space. The abstract index spaces driving parallel execution consist of n-tuples of integers with each element starting at 0. For instance, 16 parallel units of work could be associated with an index space from 0 to 15. Alternatively, using 2-tuples, those 16 units of work could be associated with (0,0) to (3,3). Three-dimensional index spaces are also supported.

Computational kernels invoked over these index spaces are based on functions drawn from programs specified in OpenCL C. OpenCL C is a subset of C99 with extensions for parallelism. These extensions include support for vector types, images and built-in functions to read and write images, and memory hierarchy qualifiers for local, global, constant, and private memory spaces.
The OpenCL C language also currently includes some restrictions relative to C99, particularly with regard to dynamic memory allocation, function pointers, writes to byte addresses, irreducible control flow, and recursion. Programs written in OpenCL C can either be compiled at runtime or in advance; however, OpenCL C programs compiled in advance may only work on specific hardware devices.

Each instance of a kernel is able to query its index, and then do different work and access different data based on that index. The index space defines the "parallel shape" of the work, but it is up to the kernel to decide how the abstract index will translate into data access and computation. For example, to add two arrays and place the sum in a third output array, a kernel might access its global index, from this index compute an address in each of two input arrays, read from these arrays, perform the addition, compute the address of its result in an output array, and write the result.

A hierarchical memory model is also supported. In this model, the index space is divided into work-groups. Each work-item in a work-group, in addition to accessing its own private memory, can share a local memory during the execution of the work-group. This can be used to support one additional level of hierarchical data parallelism, which is useful for capturing data locality in applications such as video/image compression and matrix multiplication. However, different work-groups cannot communicate or synchronize with one another, although work-items within a work-group can synchronize using barriers and communicate using local memory (if supported on a particular device). There is an extension for atomic memory operations, but it is optional (for now).

OpenCL uses a relaxed memory consistency model where the local view of memory from each kernel is only guaranteed to be consistent after specified synchronization points. Synchronization points include barriers within kernels (which can only be used to synchronize the view of local memory between elements of a work-group), and queue "events." Event dependencies can be used to synchronize commands on the work queue. Dependencies between commands come in two forms: implicit and explicit. Command queues in OpenCL can run in two modes: in-order and out-of-order. In an in-order queue, commands are implicitly ordered by their position in the queue, and the result of execution must be consistent with this order. In the out-of-order mode, OpenCL is free to run some of the commands in the queue in parallel. However, the order can be constrained explicitly by specifying event lists for each command when it is enqueued. This will cause some commands to wait until the specified events have completed. Events can be based on the completion of memory transfer operations and explicit barriers as well as kernel invocations. All commands return an event handle which can be added to a list of dependencies for commands enqueued later.

In addition to encouraging standardization between the basic capabilities of different high-performance processors, OpenCL will have a few other interesting effects. One of these will be to open up the embedded and handheld spaces to accelerated computing. OpenCL supports an embedded profile that differs from the full OpenCL profile primarily in resource limits and precision requirements.
This means that it will be possible to use OpenCL to access the computational power of embedded multicore processors, including embedded GPUs, in mobile phones and set-top boxes in order to enable high-performance imaging, vision, game physics, and other applications. Applications, libraries, middleware and high-level languages based on OpenCL will be able to access the computational power of these devices. In summary, OpenCL is an open, royalty-free standard that will enable portable, parallel programming of heterogeneous CPUs, GPUs and other processors. OpenCL is designed as a foundational layer for low-level access to hardware and also establishes a level of consistency between high-performance processors. This will give high-performance application and library writers, as well as high-level language, platform, and middleware developers, the ability to focus on higher-level concerns rather than dealing with variant semantics and syntax for the same concepts from different vendors. OpenCL will allow library, application and middleware developers to focus their efforts on providing greater functionality, rather than redeveloping code or lower-level interfaces to each new processor and accelerator.
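To ground the execution model described earlier, here is a sketch of the array-addition kernel discussed above, written in OpenCL C, followed by a compressed host-side fragment showing how the platform and runtime APIs fit together. The names, device choice, and buffer sizes are illustrative assumptions, and error checking and resource release are omitted for brevity.

    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *c)
    {
        /* This work-item's position in the one-dimensional index space. */
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];   /* read both inputs, add, write the result */
    }

A plain C host program might drive the kernel roughly as follows:

    #include <CL/cl.h>

    void run_vec_add(const char *kernel_src, const float *a, const float *b,
                     float *c, size_t n)
    {
        cl_platform_id platform; cl_device_id device; cl_int err;
        size_t bytes = n * sizeof(float);

        /* Platform API: discover an implementation and pick a device. */
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        /* Runtime API: context, asynchronous command queue, device buffers. */
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   bytes, (void *)a, &err);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   bytes, (void *)b, &err);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, &err);

        /* Compile the OpenCL C source at runtime and bind the arguments. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vec_add", &err);
        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

        /* One work-item per element: a 1-tuple index space of size n. */
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c, 0, NULL, NULL);
        clFinish(q);
    }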
Hosts in a VLAN reside in their own broadcast domain and can communicate freely with one another. VLANs partition a network and separate traffic at Layer 2 of the OSI model, and, as we discussed earlier when covering the need for routers, if a host or any other device in one VLAN needs to communicate with a device in another VLAN, a Layer-3 device is essential. Dividing a LAN into multiple VLANs is basically the same as separating it into different physical LANs, so it is clear that you need a router to go from one LAN to another.

One way to provide this is a router with an interface for each VLAN. The other, better way is to use a router that supports frame tagging with the ISL protocol or the 802.1Q standard so that a single trunk link can carry all the VLANs. In this case the router is connected to the switch by one interface, and that link must be configured as a trunk to make routing possible. The 2600 series is considered the most affordable router that supports ISL or 802.1Q routing; the older 1600, 1700, and 2500 series do not support ISL routing.

If you have only a few VLANs, say two or three, it is possible to get a router with two or three Fast Ethernet connections. Gigabit Ethernet is highly recommended, but Fast Ethernet is okay too. In that design every router interface is connected to an access link; in other words, the IP address of each router interface becomes the default gateway address for every host in the corresponding VLAN.

If you have more VLANs than router interfaces, you have two choices: run ISL or 802.1Q routing on one Fast Ethernet interface, or buy a 5000 series switch with a Route Switch Module (RSM). The RSM runs on the backplane of the switch and can support up to 1005 VLANs. The best alternative to one router interface per VLAN is a single Fast Ethernet interface running a trunk link for routing. A Fast Ethernet interface on a router configured with ISL or 802.1Q routing makes it possible for all VLANs to communicate through a single interface; Cisco calls this "router-on-a-stick" (a sample configuration sketch follows the related links below).

This article, together with the related articles listed at the bottom, introduces virtual LANs and the ways Cisco switches can use them. We have discussed how VLANs break up broadcast domains in a switched internetwork. This is important because Layer-2 switches break up only collision domains and, by default, all switches together make up one big broadcast domain. We have also described trunking VLANs over a Fast Ethernet link; understanding trunking technology is especially important when you manage a network with more than one switch carrying several VLANs. Finally, we covered the VLAN Trunk Protocol (VTP). Although VTP sends VLAN information down a trunked link, the trunk configuration itself is not related to or part of VTP.

If you need more about VLAN technology, consider these related articles:
- Network Virtualization
- Unidirectional communication filter between two VLANs
- VLAN – What are VLANs?
- Routing between VLANs
- Why we need VLANs, an Introduction to VLAN technology
- Static vs Dynamic VLANs
- VLAN Security – Main VLAN reason
- VLANs – Trunk and Access link types
- Trunking Methods – VLAN Identification methods across multiple switches
- VLANs controlling broadcast propagation
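Returning to the "router-on-a-stick" design described above, a minimal Cisco IOS configuration sketch follows. The interface name, VLAN IDs, and addressing are invented for the example, and you should verify the exact commands against your platform's documentation; on hardware that supports only ISL, the encapsulation line would name isl instead of dot1Q.

    interface FastEthernet0/0
     no shutdown
    !
    interface FastEthernet0/0.10
     encapsulation dot1Q 10
     ip address 192.168.10.1 255.255.255.0
    !
    interface FastEthernet0/0.20
     encapsulation dot1Q 20
     ip address 192.168.20.1 255.255.255.0

Each subinterface rides the single trunk link to the switch and becomes the default gateway for the hosts in its VLAN.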
For as long as I can remember, anything that disrupted a WiFi network was slapped with the label "interference". Over the years, as I became educated in 802.11 networking and came to understand channel contention through the operation of CSMA-CA, I realized something: "interference" isn't necessarily the right word. In 802.11 networking, interference really falls into two very distinct categories, RF interference and channel contention. By definition both are indeed interference.

Definition of interference: the action of interfering or the process of being interfered with.

RF Interference - Can be defined as non-802.11 interference. In other words, something transmitting on frequency with enough duty cycle to make 802.11 radios report the medium as busy, or interference close to a radio that causes waveforms to be misinterpreted, resulting in bit errors. The obvious suspects: microwave ovens, cordless phones, cameras, Bluetooth, and the list goes on. Let me give you a real-world example. Say for a moment you're listening to a speaker at a conference and directly behind you a loud conversation or argument is going on. Your ears, the receiving radios, have a hard time distinguishing what the speaker is saying.

Channel Contention - Can be defined as the 802.11 mechanisms used to gain access to the medium in order to transmit frames. In other words, CSMA-CA. This contention happens at Layer 1 and Layer 2: Layer 1 through the use of preambles and PHY headers, and Layer 2 through the use of the NAV. CSMA-CA is the rule book, or referee, used by WiFi devices to gain access to the medium. Let me give you a real-world example. You're in a conference, the speaker finishes the session and opens the floor up to Q&A. You have a question, so you raise your hand along with 10 other folks. You, like everyone else, need to wait your turn to ask your question.

CCC / CCI - Let's also go down the CCC (Co-Channel Contention) and CCI (Co-Channel Interference) road since we're on topic. In the industry you hear folks, publications and vendors reference CCI (Co-Channel Interference), in other words access points on the same channel causing interference with each other. We just covered that it's not really interference. A better term is Co-Channel Contention, because both radios, using CSMA-CA, are backing off to each other, causing contention.

I would like to hear your thoughts on the subject! Fire back and let me know what you think!
As the digital age progressed, significant advancements in information technology were followed by an increased demand for IT-based services. Many data center owners and operators soon faced a rapidly changing consumer environment and a demand for space, power, and cooling growing faster than most businesses were prepared to respond to. By the early dot-com bubble days, there was a rush to build to meet the demand.

The post-dot-com period gives us the benefit of 20/20 hindsight. Many companies reacted too quickly to the increasing demand, resulting in an overbuild of data center space and infrastructure and an excess supply of data center environments when the dot-com bubble burst. Some speculate that the overbuild was a factor in the demise of some once-great companies. For the period immediately afterwards, the industry spent time growing into the over-built data centers with a healthy fear of new construction and rapid deployment.

Fast forward to today, and the demand for IT is again growing at a rate that data center owners and operators struggle to keep up with. This growth, along with some historical mistakes, has surely contributed to the containerized revolution that started years ago and that we continue to see today. Many credit Sun and its Project Blackbox with starting the rise in notoriety of the data center in a box concept. While Sun may have popularized the technology, portable data center technology had been around for a while: industry veterans will remember the strange, now seemingly extinct creature called InfraStruXure Express, one form of a data center in a box developed by APC. Needless to say, the data center in a box concept has had its challenges in gaining mass popularity. Despite the critics, a handful of manufacturers, such as SGI, HP, and Dell, tout the capability of providing such a solution. While the flavors differ, the basic concept remains the same: a data center in a box.

Benefits and Challenges

The data center in a box concept presented a tremendous amount of potential benefit. For those looking for immediate space, power, and cooling, it was a very attractive solution. Containerized solutions could be deployed significantly faster than a brick-and-mortar build, often at a fraction of the equivalent cost, and the technology promised immediate turn-key availability of rack space, critical power, and critical cooling.

Early deployments of the containerized technology had their challenges. Solutions that were "generator ready" did not lend themselves well to facilities where base-building generator systems were nearing or already at capacity. Thanks to the efforts of various manufacturers, those solutions have matured over the years, and offerings can now be had with this problem solved. In addition to the technical challenges, there was the practical challenge that many of these containers still required human interaction for maintenance activities such as equipment installation and cabling. The nature of these space-efficient solutions made for configurations that did not suit how data centers were being operated at the time.
Racks provided in containerized solutions often did not have the flexibility to accommodate the variety of hardware required for growth, and often limited operators to rack-form-factor equipment only. In short, the options lacked the flexibility of the traditional data center that owners and operators were seeking.

Game Changing Events

During this time, significant developments in the data center market further drove growth in this type of deployment strategy. Three key, arguably related, events can be credited with making the containerized solution attractive.

1. Virtualization

As data centers moved to virtual environments, the legacy need to have equipment that was readily available to see and touch was eliminated. The acceptance of virtualization was the important psychological shift that needed to happen to allow for the widespread implementation of non-location-specific architectures. Part of the big revelation of virtualization is that your applications no longer needed to be tied to a physical component. Once that psychological hurdle was overcome, the data center could essentially be anywhere, including sites unseen.

2. Cloud Computing Platform

With cloud computing adoption, many businesses gained freedom from the requirement that equipment be located within a specific data center environment. Virtualization may have led to the critical acceptance of the non-location-specific data center deployment concept; this was a critical hurdle to overcome.

3. Low Cost, High Speed Connectivity

During the dot-com boom, many telecommunications companies laid down an extensive communications backbone, and advances in technology reduced latency. The result was high-speed connectivity at historically low cost. As an analogy, consider the communications backbone as a series of highways: as highways improved, cities could develop because efficient highway systems reduce the barrier of geography. Communications backbones do exactly the same thing for information technology, allowing it to be deployed with reduced geographic limitation. The result was the capability to deploy a containerized data center solution, either on-site or off-site for further flexibility, nearly anywhere, quickly and efficiently.

With many data center owners and operators clamoring for more space, power, and cooling, it may be hard to understand why a data center in a box solution offering such flexibility did not gain more market share in a market desperate for more infrastructure. The challenges to implementation were minor and were not show-stoppers. But the same drivers that made containerized solutions so attractive also gave competitive advantage to an alternate deployment strategy: co-location.

The rapid growth in the infrastructure technology industry and the associated demand on existing data centers created the need for rapid expansion. Data center owners and operators struggled to design, build, and operate data centers, as the growth left a deficiency in human infrastructure: the talent to manage such a large undertaking of capacity growth had not yet caught up with the demand for more space, power, and cooling. As a result, businesses could not build net new space and net new infrastructure fast enough.
Often, information technology departments were bogged down trying to address physical infrastructure growth, distracting them from their main role of information technology growth, development, and management. In addition, while not all businesses are internet based, the need for the internet, digital information storage, and other aspects of information technology was rapidly becoming part of every business's operations. However, many of these single-server, local-storage operations did not have the in-house expertise to manage that growth.

The demand from both large enterprise operations and smaller single-server operations paved much of the way for the growth of the mega co-location facilities we are familiar with today. Enterprise operations may require wholesale data center space, while smaller needs may be addressed by retailers re-selling rack-level or server-level space. This overcame a great challenge to rapid growth: the lack of human resources. Information technology groups within organizations could now focus on critical IT growth and management, while data center experts focused on the design, construction, and operation of the physical infrastructure (uninterruptible power supply systems, emergency generator systems, critical cooling systems).

In this new co-location model, an inherent synergy was created. IT groups and managers could focus on the issues affecting their area of specialty, while the co-location facilities focused on theirs, namely real estate, critical power systems, and critical cooling systems. This synergy is underscored by the long history of reliable operations exhibited by organizations such as 365 Main (now owned by Digital Realty Trust), Equinix, Level 3, and Internap, to name a few.

In comparison to a containerized solution, co-location facilities provided data center space that was already designed, built, and operating, and they were available for rapid deployment and immediate occupancy. Many of the early containerized solutions required a high initial capital investment; with leases, businesses could take as much or as little space and capacity as required, which in many cases offered a higher level of flexibility than containerized solutions. As businesses needed to run leaner in an increasingly competitive marketplace, co-location provided the ability to right-size data center operations, minimizing capital investment, smoothing cash flow, and maximizing efficiency in data center spending. Leases could be executed as fast as, if not faster than, the time needed to implement a containerized solution. In addition, co-location provided space, power, and cooling immediately, in the familiar brick-and-mortar form that owners and operators knew, along with the flexibility to deploy non-racked form-factor equipment. When assessing containerized solutions for space, power, and cooling, it is only natural to assess co-location as well. The same technological advances that made containerized solutions viable (virtualization, cloud computing, and low-cost, high-speed connectivity) opened up the co-location market as a viable alternative to a data center in a box.
If we evaluate co-location as an alternative to a data center in a box, it is not hard to see why co-location became, and in some respects remains, the more popular solution. The primary benefits touted for containerized solutions are flexibility, low cost, and speed to market, and co-location presented an extremely competitive proposition on all three.

Box, Co-location, or Self Operated

The bad news is that there is no one clear answer; the good news is that there are many viable data center deployment options. While there are proven box and co-location implementations, there are still a lot of successfully self-operated data centers. The success of these self-operated data centers relies heavily on a culture of high reliability and high availability, and on a focus on the future with a willingness and capability to adjust to changing times. These facilities often also have significantly high availability requirements. While new-generation co-location facilities come close to meeting the associated service level targets, when you have operations that cannot tolerate third-party-inflicted failure risk (tenants, third-party contractors), self operation is often the only guarantee of zero-downtime operations. A long-established culture of future planning and high-reliability, high-availability operations gives these organizations a long future in self-operated facilities for many decades to come. As some of the larger legacy data centers come to the end of their infrastructure life cycles, box and co-location technologies offer themselves as viable alternatives to end-of-life infrastructure replacements and costly facility overhauls.

Future of the Data Center in a Box

Today, we find the data center in a box concept fighting for a comeback. Technological improvements similar to those that nearly made the concept extinct are now helping containerized solutions return. Businesses looking to survive should keep a close watch on emerging technologies such as cloud computing platforms and wireless communication systems (in many respects possibly having an impact on data transportation similar to the impact air travel had on people transportation), which may eliminate the requirement for expensive, long-lead physical communication infrastructure to these data center in a box solutions and will continue to evolve this market. In their latest forms, the containers are often designed for little to no human interaction. Some containerized solutions are built to tolerate server equipment failure and replacement not at the internal equipment level, but as a whole-container replacement. This relatively new implementation of the data center in a box concept has breathed new life into the containerized data center market. The ready-to-move availability of data center in a box solutions provides by far greater flexibility than can be attained in nearly any other medium. As the products from the variety of vendors producing data center in a box solutions are further developed, we can expect higher levels of reliability, availability, deployment flexibility, and energy efficiency. While their use may not have reached critical mass, the development opportunity for these solutions remains strong.
As data centers become increasingly architected around cloud computing platforms and wireless solutions become the norm, low-maintenance "micro data centers" give businesses the opportunity to deploy data center operations without the traditional ties to data center or co-location facilities. These containerized solutions come in a variety of configurations. The most effective solutions are "just add power", while some may require adding power and water (i.e., condenser water or chilled water for HVAC). Only time will tell how these ready-mix solutions for data centers will evolve; what is certain is that we will continue to see more development in containerized solutions as businesses seek a competitive edge in innovations such as the data center in a box.
According to the Association for Unmanned Vehicle Systems International, 90 percent of drone usage will be for agriculture. The momentum for drone-assisted farming has been building quietly for a long time: Japan, for instance, has already been using farming drones for about 15 years, and American farmers want to deploy the technology en masse. But the United States' rocky relationship with drone implementation won't make agricultural droning a smooth process, at least not initially.

Advocates claim that aerial gadgets provide crucial data for more efficient farming. Pilots on the ground control drones equipped with infrared cameras and sensors that collect data on insect activity, watering, livestock migration and crop yields. In some cases, software allows drones to fly on autopilot, eliminating the need for a pilot at all. The imagery and sensor data facilitate efficient farming and save farmers time and frustration.

Yet for the most part, America's leaders are unsure how to deal with non-military drone deployment. The FAA currently bans drone use for commercial purposes, but the legal restrictions may eventually lessen in severity for various reasons. In March 2014, a judge struck down the ban in the case of a man who flew a drone over the University of Virginia's medical campus and sold the footage to an advertising company, and the FAA is considering lifting its drone ban to allow a little more than a handful of TV and film companies to use the technology for film projects. The administration also plans to create guidelines for non-military drone usage by 2015, and has selected six sites for drone testing to support those requirements.

James Mackler, an attorney who focuses on drone law, told Smart Planet that he worries that the FAA's regulations won't take agriculture's unique situation into account. "I fear they're going to use a very broad brush, at least initially, and regulate for the most dangerous situation."

Drone usage has been dogged by privacy concerns. People worry about being spied on by lechers or being under excessive government monitoring. They also worry about drones malfunctioning and crashing into buildings, people or other aircraft, or simply falling out of the sky onto hapless civilians. However, these dangers aren't as severe on farmland comprising expansive crop fields, far from the hundreds or thousands of people who live in dense urban environments. The reduced danger of human injuries, casualties and property damage may work in farmers' favor. Wayne Smith, the executive director of the South Dakota Farm Bureau, told USA Today this spring that more than one-third of his state's farmers will use drones by 2017. He expects other farmers to follow suit nationwide.
I'd be more excited about the exoplanet scientists recently found that appears to be in a "habitable zone" if it were a bit closer than 42 light years away. It's not as if Earthlings will be visiting that planet any time soon, given that a light year is about 6 trillion miles.

To put things in perspective, NASA's excited about sending astronauts to Mars by the mid-2030s, and the Red Planet is a mere 140 million miles from Earth (on average, depending on where the planets are in their respective orbits). So using that average, you'd have to travel back and forth to Mars 21,429 times to equal just one trillion miles, by my possibly incorrect space division. Multiply that by 42 and it's just science fiction.

But stepping back from that reality, the fact that we have technology to find anything 42 light years away is mind-boggling. Believe it or not, the first planet discovered outside our solar system was found in 1992, and it was 980 light years from Earth! Since then another 845 exoplanets have been discovered, according to Exoplanet.eu, an online database of, well, exoplanets.

The latest discovered exoplanet, in the constellation Pictor, was found by analyzing changes in light, a technique known as radial velocity. As Discovery's Irene Klotz explains: The new findings are based on a re-analysis and refinement of data collected by Europe's High Accuracy Radial velocity Planet Searcher (HARPS) instrument, a light-splitting spectrograph installed on Europe's La Silla Observatory in Chile. Planets beyond the solar system can be detected by tiny gravitational tugs they exert on the light coming from their parent stars.

HARPS also was credited last month with finding the closest-known exoplanet to Earth, Alpha Centauri Bb, just 4.37 light years from Earth.

Radial velocity is the most successful technique for discovering exoplanets, credited with finding 494 in total. But there are others. The "transit method" can be used if an exoplanet crosses in front of its star, slightly diminishing its brightness; this technique is credited by Exoplanet.eu with discovering 288 planets outside our solar system. Gravitational microlensing measures anomalies in the magnification of a star's light caused by its gravitational field; sixteen exoplanets have been found using this technique. Another lesser-used technique is called pulsar timing, which measures radio waves emitted by pulsars (neutron stars). Just 17 exoplanets have been found this way, including the first.

All of the above are indirect techniques for discovering exoplanets. Astronomers also have found 31 exoplanets through direct imaging, using computer-enhanced images taken through powerful telescopes trained on the stars. Of these, 23 have been discovered since 2008.
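For what it is worth, the distance arithmetic above is easy to check with a few lines of C, using the article's own round numbers. The quick check below suggests roughly 3,600 round trips to Mars per trillion miles and about 250 trillion miles to the exoplanet, which squares with the author's caveat that his own division may be off.

    #include <stdio.h>

    int main(void)
    {
        double light_year_miles = 6.0e12;   /* about 6 trillion miles, as stated above */
        double mars_one_way     = 140.0e6;  /* average Earth-Mars distance in miles */
        double one_trillion     = 1.0e12;

        /* Round trips to Mars needed to cover one trillion miles. */
        printf("Round trips per trillion miles: %.0f\n",
               one_trillion / (2.0 * mars_one_way));      /* about 3,571 */

        /* Total distance to a planet 42 light years away. */
        printf("42 light years, in trillions of miles: %.0f\n",
               42.0 * light_year_miles / 1.0e12);          /* 252 */
        return 0;
    }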
This section provides descriptions or definitions of the terms used in Desktop Central.

A site is one or more well-connected (highly reliable and fast) TCP/IP subnets. A site allows administrators to configure Active Directory access and replication topology quickly and easily to take advantage of the physical network. When users log on, Active Directory clients locate Active Directory servers in the same site as the user.

A domain is a group of computers that are part of a network and share a common directory database. A domain is administered as a unit with common rules and procedures. Each domain has a unique name.

An organizational unit is a logical container into which users, groups, computers, and other organizational units are placed. It can contain objects only from its parent domain. An organizational unit is the smallest scope to which a Group Policy object can be linked, or over which administrative authority can be delegated.

A group is a collection of users, computers, contacts, and other groups. Groups can be used as security collections or as e-mail distribution collections. Distribution groups are used only for e-mail. Security groups are used both to grant access to resources and as e-mail distribution lists.

The people using the workstations in the network are called users. Each user in the network has a unique user name and a corresponding password for secure access.

The PCs in the network that are accessed by users are known as computers or workstations. Each computer has a unique name.

IP Address stands for Internet Protocol Address. A unique IP address is assigned to each workstation, switch, printer, and other device in the network for identification and routing of information.

A Group Policy Object (GPO) is a collection of settings that define what a system will look like and how it will behave for a defined group of users.

Desktop Central installs a Windows-compliant agent, or Client Side Extension (CSE), on the machines that are being managed. This is used to get the status of the applied configurations from the targets.

Define Target is the process of identifying the users or computers to which a configuration has to be applied. The targets can be all users/computers belonging to a Site, Domain, OU, or Group, or a specific user/computer. You also have an option to exclude some desktops based on the machine type, OS type, etc.

Scope of Management (SOM) is used to define the computers that have to be managed using this software. Initially the administrator can define a small set of computers for testing the software and later extend it to the whole domain. This provides more flexibility in managing your desktops using this software.

In a Windows domain there may be cases where user accounts have been created for some machines but remain inactive for some reason. For example, users like Guest, IUSER_WIN2KMASTER, IWAM_WIN2KMASTER, etc., will never log in. These user accounts are referred to as Inactive Users. In order to get an accurate configuration status for the active users, it is recommended that the administrator mark the inactive user accounts in the domain so that they are not considered when calculating the status.

Configurations that are intended for the same set of targets can be grouped as a collection.

A subset of the patches released by Microsoft affect your network systems/applications; this includes all the patches affecting your network, irrespective of whether they are installed or not.
This refers to the patches affecting your network that are not yet installed.

This refers to the patches pertaining to the recently released Microsoft bulletins.

This refers to the systems managed by Desktop Central that require the patches to be installed.

This refers to the systems managed by Desktop Central that are vulnerable. This includes all the systems that are affected, irrespective of whether the patches have been installed or not.

There may be some vulnerabilities for which Desktop Central is not able to determine whether the appropriate patch or workaround has been applied. There could also be patches for which manual intervention is required. These are categorized as Informational Items. Remediation of these issues usually involves a configuration change or workaround rather than a patch.

These are patches that are outdated and have another, more recently released patch that has taken their place (a superseding patch). If these patches are missing, you can safely ignore them and deploy the patches that supersede them.

Some definitions are adapted from Microsoft Help Documentation.
Support for General Recursion

In this lecture, we shall consider support for the general case of recursion. We shall also restrict the implementation somewhat in order to illustrate the important points without "going overboard" on the complexities.

In FORTRAN terms, there are two classes of procedures.
Functions: procedures that return a value and, in the ideal case, have no effect on the calling program other than passing that value.
Subroutines: procedures that perform general actions and optionally can alter a number of variables in the calling program.

In this lecture, everything will be called a procedure. The issues to be addressed in providing for recursion are as follows:
1. Passing the return address.
2. Passing the arguments to the procedure.
3. Variables declared locally in the procedure, whose scope does not extend beyond that procedure; they cannot be used directly in the calling program.

Outline of the Lecture

We shall cover the topics in roughly the sequence below.
1. The assumptions defining the context in which recursion is used.
2. The idea of generalizing the stack to provide efficient support for recursion.
3. The use of areas of the stack for arguments to the procedure and variables local to the procedure.
4. The idea of static variables with scope local to the procedure. In Java, these may be called class variables.
5. Call by value and call by reference.
6. Writing code to access each of these as stored on the stack, and how to modify arguments passed by reference.
7. A few modest proposals for increasing the security of the run-time system, and consideration of the difficulties in their implementation.

Assumption 1: The Linkage Code is Consistent

The high-level description of the procedure linkage problem is as follows.
1. The calling program invokes the procedure.
   a) It places the return address on the stack.
   b) It places the arguments on the stack.
2. The called procedure receives control.
   a) It modifies the stack to create room for its variables with local scope.
   b) It accesses the arguments and uses them as specified.

We must have two sections of code that are consistent: the design of the calling code forces the design of the receiving code. In particular, the handling of arguments passed by value must be different from that for arguments passed by reference. In this lecture, my assumption is that the assembly language I write for each section will be emitted by a well-designed compiler, so that consistency is enforced.

Assumption 2: The Executable Code is Loaded Statically

The assumption here is that each block of code (main program, subroutine, function) is loaded into memory once, and that this loading is at an address that remains fixed during the execution of the program. Variables corresponding to data declarations within a procedure are static.

Consider the following code, which uses a number of externally declared labels. It might be modified into a print routine that records the count of pages printed.

A10LOOP  MVC   DATAPR,RECORDIN   MOVE INPUT RECORD
         LH    R7,PAGENUM        LOAD THE OLD PAGE NUMBER
         AH    R7,=H'1'          INCREMENT BY 1
         STH   R7,PAGENUM        SAVE THE NEW COUNT
         PUT   PRINTER,PRINT     PRINT THE RECORD
         BR    R8                R8 HOLDS RETURN ADDRESS
PAGENUM  DC    H'0'              THE PAGE COUNTER

The label PAGENUM, as used above, should be considered a local variable that is declared statically. There are two interesting features of such variables.
1. It will retain its value across invocations.
2. All instances of the procedure access the same location when accessing the value denoted by PAGENUM.
The Memory Map for Static Allocation

Here is a sample memory layout for a MAIN program and a procedure PROC1. This is to be considered a map of the computer memory at the time this program is executing. The areas labeled "ARGS" hold the arguments passed to the procedures. Any reference to a label in one of these areas refers to the same address in computer memory; that address is determined when the program is loaded. This arrangement will not work for recursion, even if the return addresses are handled by a stack.

The Problem with Static Allocation

Consider the following code fragments related to the factorial program. It is easier to see this problem when DOFACT is written in a higher-level language. Here is a plausible implementation.

Integer DoFact (Integer N);
var M : Integer;              // A local variable, declared static
Begin
   If (N <= 1) Then
      DoFact := 1
   Else Begin
      M := N;                 // Save the current value
      DoFact := M * DoFact(N - 1)
   End
End;

If the storage allocation for M is on the stack, this will work well. If the storage allocation for M is static, this fails.

Static Allocation: An Execution Trace

Suppose that the label M is statically allocated. Here is an execution trace.
1. DOFACT(4) is called and begins execution. The value 4 is stored in the location associated with label M.
2. DOFACT(3) is called and begins execution. The value 3 is stored in the location associated with label M, overwriting the previous value.
3. DOFACT(2) is called and begins execution. The value 2 is stored in the location associated with label M, overwriting the previous value.
4. Assume that DOFACT(1) does not store a value.
Our result is FACT(4) = 2·FACT(3) = 2·2·FACT(2) = 2·2·2·FACT(1) = 2·2·2 = 8, rather than the correct 24.

Assumption 3: No Use of Global Variables

The problem of global variables is illustrated by the following code fragment, which is written in a variant of PASCAL.

var X, Y : Real ;             // declared in the main program
var Y, Z : Integer ;          // declared in Sub_1
Begin                         // Body of Sub_1
Begin                         // Body of main.

The variable X is global to procedure Sub_1. The Y declared in the main program is not visible within procedure Sub_1; within Sub_1, variables X, Y, and Z all have meaning, though the variable Y denotes the local copy. The problem is that of providing procedure Sub_1 with a reference to the global variable if it is not passed as an argument. While this is easy, I choose to ignore this issue.

Stack Support for Generalized Recursion

In this lecture, I shall define one plausible stack organization to support recursive procedures with arguments and local variables. Again, I shall not consider the problem of access to global variables. The structure of the stack is dictated by how it is used in the context of calling.
1. A procedure will be executing. It accesses all of its local variables via the stack. At this point, this procedure calls another.
2. Each of the arguments is pushed onto the stack.
3. The return address is pushed onto the stack.
4. The new procedure is invoked.
5. The new procedure allocates space on the stack for its local variables.
This sequence dictates the structure of the stack. That part of the stack directly used by a procedure is called an "activation record".

Implied Structure of the Stack

For each procedure, here is the form of the activation record, as seen on the stack. First note that there are many ways to structure the activation record; this is just the way that I find easiest for presentation to this class. Here is the form of the stack upon entry to a procedure, and here is the form of the stack after the procedure has allocated space for local variables.

A New Way to Access the Stack

At this point, I restate the design decision that only 32-bit fullwords are placed onto the stack. Effectively, this limits values to four types.
1. The contents of a general purpose register.
2. The contents of a 32-bit fullword.
3. The contents of a 16-bit halfword, sign extended to a fullword.
4. Any address, treated as a 32-bit fullword.

The traditional stack structure is accessed one item at a time using only PUSH and POP operations. For this use, it will be necessary to devise new stack operations. Block allocation will be used to reserve a number of slots in the stack, either for passing arguments or for providing space for local variables. Block deallocation will be used to move the stack pointer without necessarily popping the values from the stack. Block deallocation is used on return from a procedure; in essence, we just change the value of the stack pointer directly.

A Slight Redefinition of the Stack

In my earlier work on a stack, I had become interested in an Abstract Data Type definition, in which the value of the SP (Stack Pointer) is stored in the structure. I now find it useful to allocate a register as the SP. Here are two code fragments that show the new definition of the stack.

Here are some equates that define new uses for general purpose registers.

RVAL   EQU  3     // FUNCTION RETURN VALUE
SP     EQU  4     // STACK POINTER
FP     EQU  5     // ANOTHER USEFUL POINTER

The stack itself is now just an allocation of memory. Let's make it bigger.

THESTACK DC 512F'0'            THE STACK NOW HOLDS 512 FULLWORDS

We still require some code to initialize the stack. The key code might be:

         LA   SP,THESTACK      // Load address of the stack

I now want to postulate a recursive subroutine and describe its key features in a pseudo-language. Since I do not know what it does, it is called DOWHAT. Recall that I prefer UPPER CASE letters, as I find them easier to read. Here is the essential declaration.

PROCEDURE DOWHAT ( INT X ;     // PASSED BY REFERENCE
                   INT Y ;     // PASSED BY VALUE
                   INT Z ) ;   // PASSED BY VALUE
   // LOCAL VARIABLES
   INT L1, L2, L3, L4

Recall the argument passing mechanisms. It would make sense to have code such as X = Y + Z; because X is passed by reference, the value of X changes in the calling program. One could also write code such as Y = X + Z; because Y is passed by value, this would have effect only within DOWHAT.

Conventions for Argument Handling

Recall that the goal of the compiler designer is to convert a high-level language statement into a sequence of assembly language statements having the same effect. The requirement is to place the arguments onto the stack. For this, it will be convenient to use a standard push operation, STKPUSH. What is the sequence of pushing the arguments? Is it X, Y, Z or Z, Y, X? Are they pushed right to left or left to right? All that matters is consistency.
1. I shall push the arguments right to left.
2. Because I find it interesting, I shall also push the argument count.

PUSH (Value of Z)        Called by value
PUSH (Value of Y)        Called by value
PUSH (Address of X)      Called by reference
PUSH (3)                 Three arguments passed

For now, we assume that locations X, Y, and Z have declarations of the form:

X    DC   F'0'
Y    DC   F'0'
Z    DC   F'0'

Here is the code involved in an invocation of DOWHAT.

      STKPUSH  Z         Push value of Z
      STKPUSH  Y         Push value of Y
      STKPUSH  X,A       Push address of X
      STKPUSH  =F'3'     Push the constant value 3
      STKPUSH  A1,A      Push the return address
      B        DOWHAT    Now call the procedure
A1    ...                Return address: execution resumes at this code next

The choice here is to use the explicit push operation, rather than a block allocation for the stack and direct access to its members.

The State of the Stack

Now consider the state of the stack at this point. It can be examined at one of two times:
1. Just before DOWHAT has been called.
2. Just after DOWHAT has been called, but before any of its code has executed.
DOWHAT Allocates Its Local Variables

At this point, I wish to introduce an FP (Frame Pointer) and warn that my use of it is almost certain to be non–standard and to have flaws that are not apparent to me. The four local variables are allocated on the stack. Here, I arbitrarily set the FP to indicate the location of the return address. I also elect to store the current value of FP as a local variable. This is the status of the stack during any execution of DOWHAT.

Entry Code for DOWHAT

The entry code for DOWHAT is the code to allocate the local variables, define the value of the FP, and store a local copy of it for later use. For the local variables (plus the saved copy of FP) we want to allocate five locations on the stack. Here is the situation on entry. Here is the code to allocate the local variables.

LR FP,SP        // SET UP THE FRAME POINTER
SH FP,=H'4'     // IT NOW POINTS TO THE RETURN ADDR.
AH SP,=H'20'    // MOVE THE STACK POINTER
ST FP,20(0,FP)  // SAVE THE LOCAL COPY

DOWHAT Accesses Its Arguments

Here is the status of the stack when DOWHAT is called. Here, I ignore the local variables and focus on the arguments. The value of the first argument will be accessed somewhat as follows.

To get the value:
L  R8,–8(0,FP)   // Get the address
L  R9,0(0,R8)    // Get the value
To store a value:
L  R8,–8(0,FP)   // Get the address
ST R9,0(0,R8)    // Store the new value

Others are accessed by value, as in L R9,–12(0,FP). For the moment, I skip what the routine might do and indicate how it returns. The first step in a return is to de–allocate the local variables. This is easily done by giving the stack pointer a new value. Here is the code.

LR  SP,FP        // Change the value and thus remove
                 // access to the local variables.
L   R9,-4(0,SP)  // Get the argument count
AH  R9,=H'1'     // Add 1 to the value.
SLA R9,2         // Multiply by 4; get byte offset
SR  SP,R9        // Adjust stack pointer. It now
                 // points to the first word allocated
                 // for this invocation of DOWHAT.
L   R9,0(0,FP)   // Get the return address

DOWHAT Calls Itself

Suppose that DOWHAT calls itself recursively with something like

DOWHAT (L1, L2, L3)
A2   Next instruction in DOWHAT

How do we handle the call and return? First, we need to look at the stack at the time that DOWHAT calls itself. Recall that L3 is pushed first. The code for this follows the same pattern as the calling sequence shown above.

DOWHAT Processes a Return

Here we consider a return from another procedure, perhaps a recursive call. Consider the code just above.

DOWHAT (L1, L2, L3)
A2   Next instruction in DOWHAT

What happens at address A2? Recall the status of the “top part” of the stack. All we do is to locate where the FP is stored and restore its value. We have now returned to the stack frame appropriate for this invocation of DOWHAT.
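To make the earlier DOFACT trace concrete, here is a small Java sketch (my own illustration, not code from the lecture). The statically allocated M is modelled as a field shared by every invocation, while the stack-allocated M is an ordinary local variable that lives in each activation record.

```java
public class DoFactDemo {
    static int m;   // one shared location, like a statically allocated label M

    // Static allocation: every recursive call overwrites the same M.
    static int doFactStatic(int n) {
        if (n <= 1) return 1;
        m = n;                               // save the current value
        int rest = doFactStatic(n - 1);      // the recursion overwrites m
        return m * rest;                     // m is 2 at every level by now
    }

    // Stack allocation: each invocation gets its own copy of M.
    static int doFactStack(int n) {
        if (n <= 1) return 1;
        int m = n;                           // fresh copy in this activation record
        int rest = doFactStack(n - 1);
        return m * rest;
    }

    public static void main(String[] args) {
        System.out.println(doFactStatic(4)); // prints 8, matching the broken trace
        System.out.println(doFactStack(4));  // prints 24, the correct factorial
    }
}
```

The only difference between the two methods is where M lives; the shared field reproduces the 2·2·2 = 8 result from the execution trace, while the per-invocation copy yields 24.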
<urn:uuid:ba7e8a61-5dec-459a-9613-d2a53363cd45>
CC-MAIN-2017-04
http://edwardbosworth.com/MY3121_LectureSlides_HTML/RecursionGeneralCase.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.85226
3,458
2.953125
3
Every year the percentage of security breaches that take several months to be discovered and contained increases, why is this? Every year the percentage of security breaches that take several months to be discovered and contained increases – a statistic that clearly highlights companies’ inefficiency in identifying and responding to adverse events. Why? Being able to handle a security breach requires two main components – a well-defined attack detection capability and a structured response phase. Currently, most enterprises are failing in at least one of these components. This article will focus on issues surrounding the response phase. As soon as an Information Security Incident is declared, a specific procedure must be followed to ensure that it is treated and mitigated in a consistent way. In general any adverse event that affects CIA (Confidentiality, Integrity and Availability) is considered an Information Security Incident. A significant number of incidents that an IT infrastructure faces usually impact one of these three attributes therefore a consistent methodology has to be adopted to track their progression during their entire lifecycle. If the adverse event is caused by a system outage or as a consequence of human error, IT departments are usually able to deal with the incident and recover the situation. Generally, this is achieved through the engagement of subject matter experts from the IT team. Examples of these incidents include: Unwanted change on core systems leading to a loss of integrity The tracking and classification of these incidents is conducted by either the Incident Management or the Incident Handling team, while the IT Security team usually acts as a trusted advisor to ensure that Information Security Incidents are progressed until closure. Technical actions are generally accomplished by someone outside the Security Team, usually the technical owner of the particular platform that caused the adverse event. This works for well-defined Information Security Incidents. But is this process applicable to adverse events caused by external attackers also known as Cyber Incidents? As previously noted, Incident Management and Incident Handling teams can apply an overall framework to ensure the tracking and progression of an Information Security Incident. Technical owners of potentially compromised systems are subject matter experts from an administrative point of view, but they are not trained to deal with the unique situations caused by an intruder. IT Security team members usually act as advisors, supporting IT development and infrastructure teams through security assessments, vulnerability reporting and both high-level and technical guidelines to fix the issues. However, identifying a Cyber Attack that leads to an unauthorised access requires specialised people who can spot anomalies across systems and implement successful containment actions. These skills are generally not covered by IT security personnel or by pure forensics people. For these scenarios, a specific set of skills is required, ranging from defensive and offensive security, mixed with forensics techniques and methodologies that can be applied to both networks and hosts. In simple terms, people designated to deal with these types of adverse events must have Intrusion Forensics skills. Intrusion Forensics is not a new discipline but it still quite rare and definitely not a skill that is easy to develop in SOC-style environments or in internal Incident Handling teams. 
This is because it requires continuous exposure to a certain number of intrusions to develop the correct investigative mindset and enough experience in dealing with crisis situations. Attackers just need to leverage a single vulnerability to gain access to a corporate environment and the footprint left behind could be pretty small. The challenge for Intrusion Forensics is to spot the single anomaly across a vast number of systems and technologies that could prove an intrusion attempt. Knowledge of what normal behaviour looks like for systems and networks helps a lot. However, the gap between knowing an IT infrastructure and being able to identify an intrusion and respond to it in a consistent way is quite substantial. Cyber Incident Responders and Investigators are people specifically trained in Intrusion Forensics and able to apply this discipline to unauthorised access to corporate systems. These techniques can be applied to a wide range of scenarios, from non-targeted malware outbreaks to state-sponsored attacks involving lateral movements across different systems and networks. The output from this methodology will generally help to define the number of compromised systems and the attack vector. From here, a well-defined set of containment and investigative actions can be implemented based on the type and sophistication of the attack. It is important that the investigation activity feeds all of its findings back into the response actions as soon as they become available: this ensures that containment time falls within an acceptable window. Classic Incident Management and Incident Handling are currently failing against Cyber Attacks as they base decisions and incident progression on feedbacks provided by IT personnel not prepared to deal with intruders. To successfully handle these kinds of adverse events, Cyber Incident Responders with Intrusion Forensics skills are required. This will help enterprises to deal with unauthorised access attempts in a consistent way with the goal of containing incidents in a timely fashion, whilst trying to avoid mistakes that could worsen the situation. During a complex compromise, well-trained Incident Responders may be the only defense that remains, and the only personnel with the ability to contain the crisis situation.
<urn:uuid:ae4ac5a5-7be3-4c9f-9c7d-7137bbd76390>
CC-MAIN-2017-04
https://www.mwrinfosecurity.com/our-thinking/why-classic-incident-handling-fails/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00028-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952395
1,024
2.515625
3
Photo: Using the computer and laser fabricating machinery, this GreenFab girl has turned recycled packing material into a professional looking sign that carries an environmental message. To complete this assignment, students post their work in a prominent place in the community. Photo by Mark Gura Often described with terms like "urban blight" and "toxic environment," the Hunts Point neighborhood in New York's Bronx does, in fact, have its share of determination and positive impact. One such example is the GreenFab educational program at Bronx Guild High School, which is designed to foster 21st-century skills in at-risk youth and prepare them for work force readiness in science, technology, engineering and math (STEM)-related fields, and primarily green collar jobs. GreenFab evolved as a response to inner-city students' educational need for instruction that connects with them. The program draws on students' environmental and economic conditions and problems as raw material from which to create an instructional program -- and the staff doesn't see the school as a technical or a job-training institution. "We're a college prep school that uses real-world experiences to improve academics," said Co-Director Jeff Palladino, adding that GreenFab impacts the kids because it exposes them to STEM subjects through real-world issues, he said. "They help our kids connect academic subject matter to real-life applications, experiment and create things, and solve problems that directly impact them, especially environmental justice issues." Work Force Preparation Some students are interested in creative technology, and some are interested in the environmental work, Palladino said, but the program provides numerous opportunities that can be customized to individual needs. View Full Story
<urn:uuid:22308f71-fd6d-4ae1-a7a1-a4b027750667>
CC-MAIN-2017-04
http://www.govtech.com/e-government/99263444.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958942
342
3.3125
3
Argonne National Labs is part of the Department of Energy, so it’s not exactly surprising to learn that they are actively looking for ways to reduce energy use. But using Chicago’s cold winters to save $25,000 a month on cooling costs for the supers in their Leadership Computing Facility is, well, cool. I talked briefly to Pete Beckman, the division director at Argonne’s Leadership Computing Facility (ALCF), about their overall focus on energy conservation. According to Beckman, it’s an effort that pervades the entire organization, “Across the organization, everyone has been told ‘let’s find ways to reduce power.'” In computing, that mandate gets executed in two ways. The HPC staff in Beckman’s division are focused on practical ways to design datacenters, and supercomputers, to conserve energy. In the Mathematics and Computer Science division, researchers look at longer term solutions to more energy efficient computation. Among the initiatives Argonne has implemented already are thin clients in offices that don’t need full workstations, and software that automatically sleeps or turns off electronic and computer equipment after hours or during periods of non-use. Farther down the road? How about capturing the heat generated by the ALCF’s supers and doing something useful with it? As Beckman puts it: “no electricity should ever be wasted.” The ALCF also made some big decisions about energy use, including their investment in IBM’s Blue Gene/P as the centerpiece of their high performance computation. Their largest system, Intrepid, is the production workhouse with nearly 164,000 cores and over 557 TFLOPS of peak performance. This system is complemented by another BG/P used primarily for testing and code development. Intrepid is number 5 on the latest TOP500 list, but for Beckman and his team, it is just as important that the system is very energy efficient — it ranks #16 on the Green500 List released in November. The systems ahead of it on that list are other Blue Gene/P systems or systems built out of IBM’s QS22 cell processor blades, another highly energy efficient option. All told, the ALCF uses about a megawatt of power, a fraction of the amount used by less power-efficient computers at other centers. “Because the ALCF can effectively meet the demands of this world-class computer, the laboratory ends up saving taxpayers more than a million dollars a year,” said Paul Messina, director of science at the ALCF, in a statement. Interesting stat? Left uncooled, the Blue Genes would heat up the machine room to 100 degrees Fahrenheit within ten minutes. So with all that heat, how do they save that extra $25,000 a month when it’s cold outside? The ALCF’s chilled water system uses cooling towers. According to Beckman, once the temperature falls to 35 degrees or below outside, the temperature in the chilled water system is maintained solely by the cooling towers. Although humidity control is still an issue, that’s free cooling.
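To put the quoted saving in perspective, here is a rough back-of-envelope sketch in Java. The chiller overhead and electricity price are assumptions of mine, not figures from Argonne, so the result is only an order-of-magnitude check.

```java
public class CoolingCostEstimate {
    public static void main(String[] args) {
        // Assumed numbers -- not Argonne's actual figures.
        double itLoadKw = 1000.0;          // ~1 MW of compute, per the article
        double chillerOverhead = 0.35;     // assume chillers add ~35% to the load
        double pricePerKwh = 0.07;         // assumed utility rate, $/kWh
        double hoursPerMonth = 24 * 30;

        double coolingKw = itLoadKw * chillerOverhead;
        double monthlyCoolingCost = coolingKw * hoursPerMonth * pricePerKwh;

        // With these assumptions mechanical chilling costs on the order of
        // $17,600 per month, the same ballpark as the $25,000/month the lab
        // reports saving when outside air does the job instead.
        System.out.printf("estimated monthly chiller cost: $%.0f%n", monthlyCoolingCost);
    }
}
```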
<urn:uuid:8ca5d708-3a1b-4d90-9fb2-f1cd2b23ee73>
CC-MAIN-2017-04
https://www.hpcwire.com/2008/12/18/baby_its_cold_outside_lets_calculate_something/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00056-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934408
665
2.578125
3
The ESA is now seeking proposals for a Lunar Lander that would land on the south polar region of the Moon where possible deposits of water ice, heavily cratered terrain and long periods of sunlight make it ripe for explorers and scientists, the agency stated. The space agency said several European space companies have already assessed the various mission options and designs. The next step is 'Phase-B1', which will mature the mission and spacecraft design and examine in detail the demands of landing and working at specific southern lunar sites. This 18-month phase will begin this summer, taking the Lunar Lander from a design concept to hardware reality. The goal is for launch by the end of this decade, the ESA stated. The ESA said that the lander has two main goals: - To use the latest navigation technology to fly a precise course from lunar orbit to the surface and touch down safely and accurately. On the way down, it must image the surface and recognize dangerous features by itself, using its own 'intelligence'. - To investigate this unique region with a suite of advanced instruments. It will investigate the properties and possible health effects of radiation and lunar dust on future astronauts, and it will examine the soil for signs of resources that could be used by human explorers. The agency last year worked with NASA to define technologies that would let humans one day return to the Moon. At the time the agencies concluded such explorations needed: - fixed and mobile habitation units with integrated life support systems, to give human explorers a safe living environment - robotic systems that can act autonomously to prepare for human exploration, and can later work alongside crews during surface operations - power generation and storage systems of varying scales to support the energy needs of surface activities, and potentially a human lunar base - in situ resource utilization systems that can produce consumables needed by a human crew, such as oxygen and water, from material available on the Moon's surface - delivery of cargo and logistics to the lunar surface to support human excursions, mobile surface missions and possibly base operations. NASA mind you doesn't preclude its own robotic mission to the moon. In his broad outline of NASA's budget plan, the space agency's Charles Bolden recently testified that that NASA wants to look at more sustainable and advanced capabilities that will allow Americans to explore the Moon, Mars and other destinations. "This effort will include a flagship demonstration program, with international partners, commercial and other government entities, to demonstrate critical technologies, such as in-orbit propellant transfer and storage, inflatable modules, automated/autonomous rendezvous and docking, closed-loop life support systems, and other next- generation capabilities....Robotic precursor missions to multiple destinations in the solar system in support of future human exploration, including missions to the Moon, Mars and its moons," Bolden stated. What technologies end up being developed is anyone's guess. NASA is working with the ESA on all manner of robotic orbiters, landers and exploration devices for a future trip to Mars. NASA and the ESA recently agreed to consider the establishment of a new joint initiative to define and implement their scientific, programmatic, and technological goals for the exploration of Mars. 
The program would focus on several launch opportunities with landers and orbiters conducting astrobiological, geological, geophysical, climatological, and other high-priority investigations and aiming at returning samples from Mars in the mid-2020s. The envisioned program includes the provision that by 2016, ESA will build what it calls an Entry, Descent, and semi-soft Landing System (EDLS) technology demonstrator and a science/relay orbiter. In 2018, the ESA would also deliver its ExoMars rover equipped with drilling capability. NASA's contribution in 2016 includes a trace gas mapping and imaging scientific payload for the orbiter and the launch and, in 2018 a rover, the EDLS, and rockets for the launch. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:7a849a40-c07d-445c-9848-2a1c8ba0a005>
CC-MAIN-2017-04
http://www.networkworld.com/article/2230299/security/europe-s-space-agency-wants-to-do-what-nasa-can-t--fly-to-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00266-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929859
821
2.984375
3
Unintended Data Leakage holds 4th position in the OWASP Mobile Top 10. As the name suggests, “Unintended Data Leakage” means that the developer of the application accidentally leaks data. No developer wants to leak data, but in some scenarios the developer assumes that particular data is only accessible to the application and not to any adversary (for example, an adversary who has physical access to the device). Below are some example scenarios.

Logging
Developers often leave debugging information publicly accessible. Any application with the READ_LOGS permission can access those logs and gain sensitive information through them. I will use the Sieve application to demonstrate this issue.
- Type pidcat com.mwr.example.sieve in Appie. Pidcat is a modified version of logcat with better viewing of logs.
- Now open up the Sieve application and enter your password.
- Notice that your password is shown in pidcat as you enter it in Sieve. As in the picture below, my password is displayed, which is 1234567890123456.

Copy/Paste Buffer Caching
Android provides a clipboard-based framework for copy-paste functionality in Android applications. This creates a serious issue when some other application can access a clipboard that contains sensitive data.
How To Fix
Disable the copy/paste function for sensitive parts of the application. For example, disable copying of credit card details.

Application Crash Logs
If an application crashes during runtime and saves logs somewhere, those logs can be of help to an attacker, especially in cases when the Android application cannot be reverse engineered.
How To Fix
Avoid creating logs when the application crashes, and if logs are sent over the network then ensure that they are sent over an SSL channel.

Analytics Data Sent To 3rd Parties
Most applications use third-party services such as Google AdSense, but sometimes they leak sensitive data, or data that is not required by that service. This may happen because the developer has not implemented the feature properly. You can check by intercepting the application's traffic and seeing whether any sensitive data is sent to third parties.
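As an illustration of how the logging and copy/paste issues above might be addressed in code, here is a hedged Java sketch of an Activity. The layout and view IDs (R.layout.activity_login, R.id.password) are invented for the example, and the commented-out Log.d call shows the pattern that leaks credentials into logcat/pidcat.

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.widget.EditText;

public class LoginActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_login);                     // hypothetical layout

        EditText password = (EditText) findViewById(R.id.password);  // hypothetical id

        // Keep the secret out of the copy/paste buffer: with long-press
        // selection disabled, the field never offers a "Copy" action.
        password.setLongClickable(false);
        password.setTextIsSelectable(false);

        // DON'T: this is exactly what shows up in pidcat in the Sieve demo.
        // Log.d("Login", "password=" + password.getText());

        // DO: log only non-sensitive facts about the event.
        Log.d("Login", "login attempt started");
    }
}
```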
<urn:uuid:ef76d227-2997-47e0-8966-3c953bbcd965>
CC-MAIN-2017-04
https://manifestsecurity.com/android-application-security-part-11/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893893
441
2.515625
3
Scalability and Flexibility of VLAN technology has sent the hubs into oblivion You must have got some idea that the layer-2 switches have nothing to do with Network layer protocol but it only read frames for filtering. It forwards all the broadcasts, by default. In order to build and execute VLANs, you necessarily need to build smaller broadcast domains at layer-2 switches. In other words, the broadcasts sent in one VLAN from one node won’t be passed on to ports that are configured to be in some other VLAN. So, the users or switch ports can be easily assigned to groups of VLAN (known as a switch fabric ), you can flexibly add into that broadcast domain the users of your choice no matter where they physically exist. This setup also helps in blocking the storms of broadcast that a faulty network interface card (NIC) can cause, and it also saves an application from spreading the storms all over the internetwork. Such incidents can still take place on the VLAN from where the issue started, but the problem will simply be limited to that one infected VLAN. There is one more advantage that when a VLAN becomes too large, you can build more VLANs so that the broadcasts do not consume too much bandwidth—this means when there are few users in a VLAN, then the number of affected users by broadcasts will be few too. Although it is good but you must have an idea and understanding about the network services especially when you build your VLAN. There is no harm in keeping and trying all services, except for the internet access and e-mail that all of us need, local to every user when attainable. In order to know how a VLAN look to a switch and understand, the best thing is to look at a traditional network first. In the figure you can see how a network was built by linking/connecting physical LANs to a router with the use of hubs. In the figure you can clearly see that every network was linked with a hub port to the router (every segment had a particular logical network number, but this is not clear from the figure). The communication on the internetwork is possible only if the node connected to a specific physical network matches the network number. Here you can notice that each and every department had a separate LAN of its own, in order to include a new user to Office 1, by plugging them into the Office 1 LAN and in this way they will automatically become a part of the broadcast domain as well as Office 1 collision domain. This old system and design worked well for several years. There was a big defect in it: when the hub for Office 1 is fully occupied and you want to include another user to the Office 1 LAN then what will happen? In other words, what will happen if there’s no space available for the new user or employee? In case, if there is enough space in the Office 2 department of the building then the new employee have to adjust with the Office 2 people, which means that the poor employee will be accomodated/ plugged into the Office 2 hub. As a result the new user will become a part of the Office 2 LAN, which is not good for several reasons. The first reason is the security problem, as the new employee now is part of the Office 2 broadcast domain and so all the same servers and network services will be visible to this new user like it is visible to other users of Office 2. Another reason is that this new employee can only access the Office 1 network services through the router in order to login to the server of the Office 1—and this option is not at all efficient. 
Now let’s check out what a switch achieves. From the figure here you can see how switches eliminate/remove the physical boundary in order to resolve our issue. In this figure you can figure out the use of two VLANs (these are 2 and 5) to establish a broadcast domain for each segment/department. In the next step each switch port is actually assigned a VLAN membership administratively, and it all depends on the type of host as well as on the broadcast domain it exist in. This means, that in order to add a new user to the Office 1 VLAN (VLAN 2), the only need is to assign the port to VLAN 2, no matter where the new user of Office 1 team physically exist. This explains the comparison, the importance and advantages of new design of network with VLANs over the older one. Now it is so simple and clear that every host that is supposed to be in the Office 1 VLAN is only assigned to VLAN 2. You can notice that assigning started in this manner-VLANs with VLAN number 2 whichis an irrelevant number and you must be thinking about VLAN 1. Actually that VLAN serve as an administrative VLAN, and it can also be utilized for workgroup purpose, as per Cisco it is recommended that this should be only used for administrative reasons. It is not possible to edit or delete the VLAN 1 name, and by default, all the ports present on a switch are the VLAN 1 members unless you edit it. Each VLAN must have a specific subnet number as each is considered as broadcast domain. In case if you are using IPX, then remember that it is important to assign a particular IPX network number to every single VLAN. Now let’s talk about the misconception that because of switches, there is no need of routers. You can check out in the figure that there are three broadcast domains or VLANs, counting VLAN 1. It is easy for the nodes within every VLAN to interact or communicate with each other, but not possible to communicate with anything in another VLAN, due to the reason that the nodes in a given VLAN actually think that it exist in a crashed backbone. With the help of router the hosts can interact to a node or host on other networks. Passing through a router or a layer-3 device is must for those nodes the way their configuration is done for VLAN communication. This works in a similar way as it is done for connecting various physical networks. In other words, it is must for the communication between VLANs to pass through a layer-3 device. So there are no chances that routers will disappear so soon.
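To make the broadcast-domain behaviour concrete, here is a toy Java model of administratively assigned port-to-VLAN membership (my own sketch, not switch configuration syntax). A broadcast entering on one port is flooded only to other ports in the same VLAN; reaching any other VLAN requires the layer-3 device just described.

```java
import java.util.HashMap;
import java.util.Map;

public class VlanBroadcastModel {
    public static void main(String[] args) {
        // Port -> VLAN assignment, done "administratively" as in the text.
        Map<String, Integer> portVlan = new HashMap<>();
        portVlan.put("port1", 2);   // Office 1 hosts -> VLAN 2
        portVlan.put("port2", 2);
        portVlan.put("port3", 5);   // Office 2 hosts -> VLAN 5
        portVlan.put("port4", 5);

        // A broadcast arriving on port1 is flooded only to ports in the
        // same VLAN; everything else would need a router (layer 3).
        String source = "port1";
        int vlan = portVlan.get(source);
        for (Map.Entry<String, Integer> e : portVlan.entrySet()) {
            if (!e.getKey().equals(source) && e.getValue() == vlan) {
                System.out.println("flood broadcast to " + e.getKey());
            }
        }
    }
}
```

Running it floods the broadcast from port1 only to port2, the other member of VLAN 2, which is the whole point of splitting the switch into smaller broadcast domains.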
<urn:uuid:db813bd5-2ec4-4112-9023-98f9a1a5136f>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/vlan
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00294-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959009
1,301
2.59375
3
As digital business expands, governments are using technology improvements to create better cities. The Internet of Things (IoT) has created great opportunities for cities to become more livable. As more “things” become connected and aware, governments can use analytics to add value for their citizens. We’ll take a look at some scenarios where the IoT can improve everyday life. But first, let’s look at how collaboration throughout an organization creates value. How ecosystem networks and collaboration create value The IoT works with sensor data in a wide range of fields. The technology helps get good data on how a connected device is working in various situations, and that data allows organizations to apply the insights they learn to their operations and processes. By analyzing automated sensor data from assets, organizations can make sure the delivery of critical services and the functionality of critical infrastructure are optimized. Here are some scenarios where an IoT-based strategy could add value for governments and citizens. 1. Traffic control and parking Why is it called rush hour when nobody goes anywhere fast? Good question. But with digitized traffic control systems, that may be changing. Most people are aware that GPS systems can provide multiple routes with suggestions based on traffic. Digitization of traffic controls uses input from traffic feeds, road construction, and road closures to reroute traffic for a shorter commute. They can also adjust transit schedules to account for more accurate arrival and departure times. Cities are using GPS and sensors to update transit schedules to encourage the use of public transportation. Parking garages can monitor how many open spaces they have available, reducing time spent searching for a parking spot. In fact, some of these systems will interact with your car to direct you to an open space. Sensors built into bridges help cities prioritize replacement of aging bridges based on performance instead of a standard timetable. 2. Green living and sustainability Another area where the IoT is impacting city life is sustainability. Over half our planet’s population now lives in urban areas. This creates ecological problems. Trimming excessive carbon emissions from transportation and higher energy use are a focus for governments worldwide. What are some examples? IoT-enabled assets can connect through the Internet to a utility, which can then optimize power use during non-peak demand times. For example, the city of Chicago has deployed an array of sensors to track air quality and sidewalk traffic on a block-by-block basis. Glasgow is tracking crime and traffic issues to improve safety and congestion. The sensors also track noise and foot traffic to best determine areas that need improvement. 3. Smart cities There’s a new type of connected city on the horizon. It’s a smart city. By definition, it’s a city that uses digital technologies to enhance quality of life in urban areas. This technology is being implemented not only in cities but also on many college campuses. In both cases, analytics from data sources can help deliver insights that can drive better decisions on policy, programs, and tactical needs. So what cities are adding which features? We thought you’d never ask. Here are a few great examples. Los Angeles is adding responsive behavior to its traffic lights to reduce congestion. New York City is placing sensors to detect garbage amounts in bins so full bins are collected on time. 
Long Beach, California, has added smart meters to prevent illegal watering during times of drought. Boulder, Colorado, is implementing smart grids so consumers can look at their energy use in real time. Porto, Portugal, is installing Wi-Fi hotspots in the city’s 600 buses and taxis, creating the world’s largest Wi-Fi hot spot. Copenhagen uses IoT sensors in providing 20,000 LED street lights with renewable energy. Santander, Spain, uses over 12,000 sensors in refuse containers to cut their energy and waste management costs by 20-25%. The future of the connected city So what can we expect down the road? Many cities are developing mobile applications to help job seekers, tourists, and transit riders find the information they need quickly. Ports are being updated to provide superior infrastructure and supply chain operations. Big Data is providing simulation opportunities in emergency response, economic growth, and maintenance requirements. Public transportation loyalty programs in Montreal are now focusing on providing suitable entertainment opportunities. Connecting networks creates serious data growth that requires a new platform to process it. Are you ready to bring your organization into the digital revolution with cross-agency and department data sharing that helps you reach better outcomes? What challenges do you see in the process? About Regina Kunkle Regina Kunkle is responsible for the State and Local/Higher Education (SLED), as sub-industry of the U.S. public sector industry, at SAP. Regina is dedicated to helping governments transform to respond to changing regulations and citizen needs, streamline and simplify processes, and share vital information across agencies for enhanced decision making and performance.
<urn:uuid:31601c64-8cac-41ff-b618-4d106270f9ad>
CC-MAIN-2017-04
http://www.ioti.com/smart-cities/three-ways-internet-things-can-improve-citizens-lives
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00110-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932658
1,004
3.234375
3
CERN’s Large Hadron Collider has proved to be an interesting test case for ultra-large-scale data collection and computation. The project, which generates petabytes of data for its particle acceleration mission, requires one of the largest and most widespread compute grids in the world. Less than a year after the collider began fully operating, the LHC project is already blazing a path for eScience. The dominant computational theme of the LHC work is data reduction. With petabytes of data being generated every second by the LHC detectors, the challenge is to filter out the uninteresting information so that the critical data can be more easily sent to secondary sites for storage and processing. According to an Ars Technica article penned by John Timmer, in general, the LHC cyberinfrastructure is performing even better than expected. Although a 35-year-old datacenter is forcing higher density compute clusters (and water cooling), the robust network and improving price-performance of disks have lessened the project’s reliance on tape storage. Writes Timmer: One of the reasons for the increased reliance on disks is the network that connects the global grid to CERN. “Because the networking is going so well, filling the pipes can outrun tapes,” von Rueden told Ars. Right now, that network is operating at 10 times its planned capacity, with 11 dedicated connections operating at 10Gbps, and another two held in reserve. Each connection goes to one of a series of what are called Tier 1 sites, where the data is replicated and distributed to Tier 2 sites for analysis. Von Rueden said that the fiber that powers this setup has been “faster, cheaper, and more reliable than in planning.” An interesting aspect to the LHC setup at CERN is that they’ve decided to limit hardware support contracts to no longer than three years. The rationale is that because price-performance for hardware is rising so rapidly, it doesn’t pay to keep any particular machine running too long; when it breaks down, just replace it with something cheaper and faster.
<urn:uuid:680fb242-c1ad-4f2a-9e9e-7b5b572b9102>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/08/30/lhc_compute_grid_teaches_some_valuble_lessons/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00506-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941133
434
3.1875
3
Definition: A subcollection of a given collection of sets whose union contains every member of the union of the full collection. The set cover problem is to find a cover of minimum size.

Formal Definition: Given a collection S = {S1, ..., Sn} of sets, choose a subcollection C = {c1, ..., ck} ⊆ S such that ∪_{i=1..k} ci = ∪_{i=1..n} Si.

See also covering.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 2 September 2014.

Cite this as: Paul E. Black, "set cover", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/setcover.html
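Finding a minimum cover is NP-hard, so in practice the greedy heuristic is the usual starting point: repeatedly pick the set that covers the most still-uncovered elements, which yields a cover within a ln(n) factor of optimal. Below is a small Java sketch of that heuristic (an illustration added here, not part of the dictionary entry); the sample sets are arbitrary.

```java
import java.util.*;

public class GreedySetCover {
    // Returns indices of a cover chosen greedily: at each step take the set
    // that covers the most still-uncovered elements of the universe.
    static List<Integer> greedyCover(List<Set<String>> sets) {
        Set<String> uncovered = new HashSet<>();
        for (Set<String> s : sets) uncovered.addAll(s);   // the universe

        List<Integer> chosen = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            int best = -1, bestGain = 0;
            for (int i = 0; i < sets.size(); i++) {
                Set<String> gain = new HashSet<>(sets.get(i));
                gain.retainAll(uncovered);                 // newly covered elements
                if (gain.size() > bestGain) { bestGain = gain.size(); best = i; }
            }
            uncovered.removeAll(sets.get(best));
            chosen.add(best);
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Set<String>> s = Arrays.asList(
            new HashSet<String>(Arrays.asList("a", "b", "c")),
            new HashSet<String>(Arrays.asList("b", "d")),
            new HashSet<String>(Arrays.asList("c", "d", "e")));
        System.out.println(greedyCover(s));   // prints [0, 2]
    }
}
```

For the three sample sets the heuristic picks sets 0 and 2, which together cover {a, b, c, d, e}; the greedy answer is not guaranteed to be a minimum cover, only a logarithmic approximation of one.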
<urn:uuid:be55853a-18ee-48d6-8803-105c870fa071>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/setcover.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.879552
182
2.828125
3
My committee has finalized a vision for our school district’s technology plan. This was an effort of about 15 people representing district staff, members of our Board of Education, teachers, students, parents, and a token technology industry analyst. I’d love feedback and comments. Let me also share some of the thinking behind the vision (some of the background pre-discussion is here). Our school district recognizes that technology is vital to prepare students for lifelong learning and workforce readiness. We will: - Integrate curriculum and technology to inspire a collaborative learning community that can effectively find, evaluate, use, and create content. - Identify and utilize existing, emerging, and cost-efficient technologies that enhance learning. - Promote the safe and ethical use of technology. - Ensure equitable access to technology. - Provide professional development and technologies necessary to deliver the curriculum, to communicate, and to access, manage, and evaluate student-related information. We fundamentally changed our view of technology compared to the previous technology plan vision. Previously, technology was considered a tool used to help educate students. Now, we consider technology an integral part of student and professional life – not just “tools”, but actually a change agent that is shaping our culture and our way of life. The education system needs to take an active role in helping shape student access, understanding, and use of technology as a part of their lifelong learning. There were five specific elements to this vision: Integrating Technology and Curriculum: Integrate curriculum and technology to inspire a collaborative learning community that can effectively find, evaluate, use, and create content. We put a lot in one bullet – this bullet encompasses the 21st century skills that we identified: - Using technologies to safely filter and find content in order to achieve our personal or professional goals. - Using technologies to create, communicate, collaborate, express oneself, and influence others. - Using technologies to safely filter and find people who can help us achieve our personal or professional goals. - Dynamic teaming and very interactive collaboration. And we tied the need to focus on these skills with the need to integrate technology completely into the educational curriculum. We also purposefully used the term “learning community”. This implicitly includes staff, students, and the student’s families. We felt that while the school district is not responsible for educating our entire community at large, we did feel that parents in particular need to understand and be engaged in the program in order to effectively educate students. Staying Up-To-Date on Technology: Identify and utilize existing, emerging, and cost-efficient technologies that enhance learning. In addition to making good choices about well-known and existing technologies, we wanted to include a forward-looking element in the vision. Technology is changing rapidly, and our students are usually among leading-edge users. The school system needs to stay on top of that. Also, the fundamental capital expense equation is changing, as technologies follow the commoditization curve, and as software as a service and cloud computing create new paradigms (e.g., email services, editing services, collaboration tools). Safe and Ethical Use: Promote the safe and ethical use of technology. 
If the school system is going to be more more leading-edge and proactive in using online technologies, it is even more important that the school system take an active role in educating safe and ethical use of technologies. Often, school systems abdicate their potential role here, and instead focus on limiting access to the web. It is more valuable to provide students with the education to make good decisions themselves. Equitable Access: Ensure equitable access to technology. This will be a huge challenge. One goal might be to provide one access device or laptop to every student. But is that necessary? Many students already have a laptop, a home computer, an Ipod with wi-fi access, a cell phone with web access. Very similar to consumerization taking place in the workplace, can we take advantage of that fact, and focus on filling in the gaps? Identify students who don’t have access, and provide them with the tools they need? How we fulfill this vision is still to be determined, but there are several possibilities worth trying. For example, our high school students are required to purchase a relatively high-end calculator. A low-end laptop costs only slightly more. Can the school system provide low-end laptops at a lower price – and require high school students to either supply their own or purchase ours? Ensuring equitable access is critical – however, it is important not to drive equality to the least common denominator – more important to bring up those with the least! Tools and Training for Staff: Provide professional development and technologies necessary to deliver the curriculum, to communicate, and to access, manage, and evaluate student-related information. This element is aimed more at the technologies used by the school district to manage, evaluate and communicate information such as grades, trends, etc. We would love comments and feedback. Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.
<urn:uuid:f01ad631-d541-4562-8cc5-26ed3baf3498>
CC-MAIN-2017-04
http://blogs.gartner.com/thomas_bittman/2009/01/16/dissecting-a-k-12-technology-vision/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00064-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936653
1,158
2.921875
3
Gone are the days when producing a Web page simply involved writing some HTML code or painting a screen using Microsoft's Frontpage Web design tool. These days, with the Internet going into e-commerce overdrive everyone wants dynamic Web experiences. Scripting has taken a quantum leap. There are two main categories of scripting language - either client or server-based. They are designed to describe attributes and functions that can be interpreted by browsers to produce a Web page. By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. Client-side Web scripting started off with hypertext markup language (HTML), which was a static scripting language used purely to describe how a page would look. You can use HTML, for example, to position a headline, and decide which colour it will be. HTML has come a long way since it was first developed as the basis for the World Wide Web at the start of the 1990s. The World Wide Web Consortium (W3C), the industry body that ratifies some Internet standards, has released version 4.0 of the technology. VBScript was designed by Microsoft and is a scripting version of its Visual Basic programming language. The problem with the technology is that although it offers great functionality (it is also used in Microsoft Office to customise applications) it is only understood by Microsoft Internet Explorer. The row over browsers, however, combined with the proliferation of different client devices such as WAP phones, has led to a slow departure from client-side scripting in favour of server-side scripting. Processing everything on the server means that you can give everyone a similar experience of your Web site, while making allowances for different display types. One of the first scripting interfaces for the server was the common gateway interface (CGI), which enables applications to interpret scripting languages, carrying out different functions as a result. Perl is one of the most common languages used to write to CGI, although this language is hardly intuitive to use. Microsoft developed active server pages (.asp) as a means of taking inputs from a Web page (from a form, for example) and processing them so that they can interact with objects on the server. This means the input could be used to look up a database, for example. Once the processing has been completed the active server page can then take the output and render it into HTML for display in the browser. Sun Microsystems responded with Java server pages (JSP) another scripting language that differs because the scripts are compiled and loaded as servlets - small programs sitting on the Web server. Compiled programs are generally faster than interpreted ones, so JSP applications can provide performance advantages (see box above). According to documentation from software development company Rational, most of an application's business logic should not be held in a scripted page. Rather, it should be held in the business objects that the page interacts with. The server-side scripted page should essentially be the way for the browser to talk to a server-based program. One of the biggest steps when moving from a static environment to a server-side scripted environment is knowing how the scripts will interact with the middle tier, which contains all of the complicated programming logic that drives the application. 
This means that you must have a thorough understanding of the technical architecture of the application, and it also means that if the application changes, the scripting must be regression tested - tested with the new code - to make sure that it still works properly. One advantage of server-side and client-side scripts is that they are easy to implement. Rather than having to learn a complicated language like C++ or Java you can pick up much scripting functionality in the course of a few days. But don't let the ease of implementation tempt you into undisciplined development. You still need to observe conventional procedures and safety measures when changing your code. Next week: Danny Bradbury looks at browser wars and their aftermath. Learn your lines - a guide to Web scripts. Active server pages work with Microsoft's Internet Information Server [IIS]: scripted pages sit on the Web server and provide an interpreted interface between the browser and the back-end application.
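As a minimal illustration of that interface, here is a hedged Java servlet sketch in the spirit of the JSP/servlet approach described above. The class name, URL parameter and greeting logic are invented for the example, and it would need a servlet container such as Tomcat to run.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Takes a form field from the browser, hands it to server-side logic,
// and renders the result back as HTML.
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String name = request.getParameter("name");    // input from the web form
        String greeting = buildGreeting(name);         // the "business logic" step

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>" + greeting + "</h1></body></html>");
    }

    // In a real application this would live in a separate business object
    // in the middle tier, as the article recommends.
    private String buildGreeting(String name) {
        return (name == null || name.isEmpty()) ? "Hello, visitor" : "Hello, " + name;
    }
}
```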
<urn:uuid:1d3cd324-32a2-4f7b-a312-1e4744339c95>
CC-MAIN-2017-04
http://www.computerweekly.com/feature/Scripting-languages
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00064-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947838
851
2.921875
3
The global introduction of electronic passports is a large coordinated attempt to increase passport security. Issuing countries can use the technology to combat passport forgery and look-alike fraud. While addressing these security problems other security aspects, e.g. privacy, should not be overlooked. This article discusses the theoretical and practical issues, which impact security for both citizens and issuing countries. Existing legacy passports are paper based and use related security features. Despite of advanced optical security features paper based travel documents are sensitive to fraud. Two forms of fraud are most notable: - Passport forgery; a relatively complex approach where the fraudster uses a false passport, or makes modifications to a passport. - Look-alike fraud; a simple approach where the fraudster uses a (stolen) passport of somebody with visual resemblance. The ICAO (International Civil Aviation Organization) has been working on what they call MRTD (Machine Readable Travel Document) technology for quite a while. This technology should help to reduce fraud and support immigration processes. The MRTD specifications became a globally coordinated attempt to standardize advanced technology to deliver strong identification methods. Rather then using common practices from the security industry the MRTD standards aimed at a revolutionary combination of advanced technology, including contactless smartcards (RFID), public key cryptography, and biometrics. The MRTD specs support storage of a certificate proving authenticity of the document data. The signed data includes all regular passport data, including a bitmap of the holder’s picture. Further data that may be stored in the e-passport include both static and dynamic information: - Custody Information - Travel Record Detail(s) - Tax/Exit Requirements - Contact Details of Person(s) to Notify Since 2005 several countries have started issuance of e-passports. The first generation of e-passports includes some, but not all, of the planned security features. Biometric verification is generally not supported by the first generation. All 189 ICAO member states are committed to issue e-passports by 2010. From 2007 onward immigration services will start using e-passports. Authorities promote e-passports by issuing visa-waiver programs for travelers with e-passports. A passport that conforms to the MRTD standard can be recognized by the e-passport logo on the cover. Figure 1: The Electronic Passport logo. Electronic Passport security mechanisms With the aim to reduce passport fraud the MRTD specs primarily addressed methods to prove the authenticity of passport and its data, and the passport holder. The technology used for this includes PKI (Public Key Infrastructure), dynamic data signing and biometrics. The latter (biometrics) however is still under discussion and not yet fully crystallized in the specifications. PKI (Public Key Infrastructure) technology was chosen to prove the authenticity of the passport data. This technology is successfully applied on the internet for e-commerce, and has gained high popularity. Certificate based authentication requires only reading the certificate by the inspection system, which can then use a cryptographic computation to validate the authenticity using the public key of the issuing country. This method is called passive authentication and satisfies with RFID chips without public key cryptographic facilities, since it involves only static data reading. 
Although the authenticity of the data can be verified, passive authentication does not guarantee the authenticity of the passport itself: it could be a clone (electronically identical copy). The cloning problem is addressed with an optional signing mechanism called active authentication. This method requires the presence of a asymmetric key-pair and public key cryptographic capabilities in the chip. The public key, signed by the issuing country and verified by passive authentication, can be given to the inspection system, which allows verification of a dynamic challenge signed with the private key. While the private key is well protected by the chip it effectively prevents cloning since the inspection system can establish the authenticity of the passport chip with the active authentication mechanism. For the incorporation of modern electronic technology in the existing paper documents it was decided to use (contactless) RFID chips. These chips can be embedded in a page of the document and put no additional requirements on the physical appearance of the passport. A question that arises here is whether this is the only reason to apply RFIDs instead of contact based cards. Other reasons could be related to the form factor of contact smart cards which complicates embedding in a passport booklet, or the fact that contacts may be disturbance sensitive due to travel conditions. With the choice for RFID the privacy issue arises. RFIDs can be accessed from distances up to 30 cm, and the radio waves between a terminal and an RFID can be eavesdropped from a few meters distance. An adversary with dedicated radio equipment can retrieve personal data without the passport owner’s consent. This risk is particularly notable in a hostile world where terrorists want to select victims based upon their nationality, or criminals commit identity theft for a variety of reasons. Figure 2: Radio communication between inspection system and passport. Basic Access Control To protect passport holder privacy the optional Basic Access Control (BAC) mechanism was designed. This mechanism requires an inspection system to use symmetric encryption on the radio interface. The key for this encryption is static and derived from three primary properties of the passport data: 1) date of birth of holder; 2) expiry date of the passport; 3) the passport number. This data is printed in the Machine Readable Zone (MRZ) a bottom strip (see figure Figure 3) of one of the passport pages. In a normal access procedure the MRZ data is read first with an OCR scanner. The inspection system derives the access key from the MRZ data and can then set up an encrypted radio communication channel with the chip to read out all confidential data. Although this procedure can be automated it sets high requirements to inspection systems and also impacts inspection performance. Figure 3: Passport with Machine Readable Zone (MRZ). The BAC mechanism does provide some additional privacy protection, but there are two limitations that limit the strength of this mechanism: - The BAC key is individual but static, and is computed and used for each access. An adversary needs to get hold of this key only once and will from then on always be able to get access to a passport’s data. A passport holder may perceive this as a disadvantage considering the possibility that a passport contains dynamic data. 
- The BAC key is derived from data that may lack sufficient entropy: the date of expiry is always in a window of less than ten years, the date of birth can often be estimated and the document number may be related to the expiry date. The author of this article discovered BAC security issues in July 2005 and showed that the key entropy that could reach 66 bits may drop below 35 bits due to internal data dependencies. When passport numbers are for instance allocated sequentially they have a strong correlation with the expiry date, effectively reducing the key entropy. An eavesdropper would then be able to compute the BAC key in a few hours and decode all confidential data exchanged with an inspection system. The Netherlands, and maybe other countries, have changed their issuance procedures since this report to strengthen the BAC key. An associated privacy problem comes with the UID (Unique Identification) number emitted by an RFID immediately after startup. This number, if static, allows an easy way of tracking a passport holder. In the context of e-passports it is important that this number is dynamically randomized and that it cannot be used to identify or track the e-passport holder. The reader should note that these privacy issues originate from the decision to use RFID instead of contact card technology. Had this decision been otherwise the privacy debate would have been different as it would be the passport holder who implicitly decides who can read his passport by inserting it into a terminal. Inspection system security issues The use of electronic passports requires inspection systems to verify the passport and the passport holder. These inspection systems are primarily intended for immigration authorities at border control. Obviously the inspection systems need to support the security mechanisms implemented in an e-passport. This appears to be a major challenge due to the diversity of options that may be supported by individual passports. In terms of security protocols and information retrieval the following basic options are allowed: - Use of Basic Access Control (including OCR scanning of MRZ data) - Use of Active Authentication - Amount of personal data included - Number of certificates (additional PKI certificates in the validation chain) - Inclusion of dynamic data (for example visa) Future generations of the technology will also allow the following options: - Use of biometrics - Choice of biometrics (e.g. finger prints, facial scan, iris patterns, etc) - Biometric verification methods - Extended Access Control (enhanced privacy protection mechanism). In terms of cryptography a variety of algorithms and various key lengths are (or will be) involved: - Triple DES - RSA (PSS or PKCS1) - SHA-1, 224, 256, 384, 512 The problem with all these options is that a passport can select a set of preferred options, but an inspection system should support all of them! An associated problem in the introduction of the passport technology is that testing inspection systems becomes very cumbersome. To be sure that false passports are rejected the full range of options should be verified for invalid (combinations of) values. Finally, a secure implementation of the various cryptographic schemes is not trivial. Only recently a vulnerability was discovered by Daniel Bleichenbacher that appeared to impact several major PKCS-1 implementations. PKCS-1 also happens to be one of the allowed signing schemes for passive authentication in e-passports. 
Inspection systems must therefore accept passports signed with this scheme, and passport forgery becomes a risk for any inspection system that carries this vulnerability. Immigration authorities can defend themselves against this attack, and other hidden weaknesses, by proper evaluation of the inspection terminals to make sure that these weaknesses cannot be exploited.

Biometrics and Extended Access Control

The cornerstone of e-passport security is the scheduled use of biometric passport holder verification. The chip will contain the signed biometric data, which can be verified by the inspection system. It is only this feature that would prevent look-alike fraud. All the other measures address passport forgery, but the primary concern of look-alike fraud requires better verification that the person carrying the passport is indeed the person authenticated by the passport. Many countries have started issuance of e-passports, but the use of biometrics is delayed. There are two main reasons:

- Biometric verification only works if the software performs a better job than the conventional verification by immigration officers. The debate on the effectiveness of biometric verification, and the suitability of various biometric features, is still ongoing. There are also some secondary problems, like failure to enroll, that need to be resolved.
- Biometric data are considered sensitive. The threat of identity theft exists, and revocation of biometric data is obviously not an option. Countries do not necessarily want to share the biometric data of their citizens with all other countries.

The impact of the first issue is decreasing in the sense that the quality of biometric systems gets better over time, although it may slow down the introduction of biometrics in e-passports. At this moment, there is still limited experience from representative pilot projects. The second issue is more fundamental: issuing countries will always consider whom to share sensitive data with. To alleviate these concerns the ICAO standardization body has introduced the concept of Extended Access Control.

Extended Access Control (EAC)

The earlier described Basic Access Control (BAC) mechanism restricts data access to inspection systems that know the MRZ data. EAC goes further than that: it allows an e-passport to authenticate an inspection system. Only authenticated inspection systems get access to the sensitive (e.g. biometric) data. Inspection system authentication is based upon validation of certificates (indirectly) issued by the e-passport issuing country. An e-passport issuing country therefore decides which countries, or actually which inspection system issuers, are granted access to the sensitive data.

EAC requires a rather heavy PKI. This is for two reasons:

- Each inspection system must be equipped with certificates for each country whose biometric details may be verified.
- Certificates should have a short lifetime; otherwise a stolen inspection system can be used to illegally read sensitive data. The current EAC specification foresees a certificate lifetime of several days.

The two conditions above will result in intensive traffic of certificate updates. A problem acknowledged by the EAC specification is the fact that e-passports have no concept of time. Since the RFID chips are not powered in between sessions, they do not have a reliable source of time. To solve this problem, an e-passport could remember the effective (starting) date of validated certificates, and consider this as the current date.
This could potentially lead to denial-of-service problems: if an e-passport accepts an inspection system certificate whose effective date has not yet arrived, it may reject a subsequent inspection system certificate that is still valid. To avoid this problem the specification proposes to use only certificates of trusted domestic terminals for date synchronization. Although date synchronization based on domestic certificate effective dates gives the e-passport a rough indication of the current date, this mechanism leaves a risk for some users. Infrequent users of e-passports and users who are abroad for a long time will find that their e-passport's date lags behind significantly. For example, if an e-passport last validated a domestic EAC-capable terminal 6 months ago, it will reveal sensitive data to any rogue terminal stolen within this period.

The above problem could be alleviated by using a different date synchronization method. Instead of using effective dates of inspection system certificates, we would use a separate source of time. For this, ICAO, or another global Certification Authority, should issue date certificates on a daily basis, and inspection systems should load and update their date certificates frequently. A passport could then use the date certificates signed by a trusted party to get a reliable, and more accurate, source of time. This approach could be better since we could also synchronize on foreign systems and use the current date instead of the inspection system certificate effective date.

With respect to EAC and biometrics several practical and standardization issues are yet to be resolved. Although EAC, in its current specification, offers strong benefits over the simpler BAC, it is certainly not a panacea, and there is room for improvement. Nevertheless, migration to biometrics in e-passports is needed to effectively combat look-alike fraud.

The global introduction of electronic passports delivered a first generation of e-passports that support digital signatures for document authentication. The system builds on the newest technology, and a high level of expertise is needed for a secure implementation and configuration of both the e-passports and the inspection systems. The technology became increasingly complex with the decision to use contactless RFID technology. Additional security measures were introduced as a result of privacy concerns. But these measures appear to offer limited privacy protection at the cost of procedural and technological complexity.

The next generation of e-passports will include biometrics and Extended Access Control (EAC). The standardization of these features is unfinished and could still be improved. Future e-passports, using all security features, will offer strong fraud protection:

- Passport forgery is more difficult with an e-passport that supports active authentication.
- Look-alike fraud is more difficult with an e-passport that supports biometrics.

This level of security can only be reached if all passports implement these features; otherwise fraudsters can fall back on less advanced or legacy passports. It is therefore important for ICAO to finalize the EAC standardization, and for issuing countries to continue the migration process and enhance their passports with biometrics.
Internet Of Things: Expectation Of Privacy

For years we have had an expectation of privacy while using our computers, tablets, phones, email, and so on. However, with the advent of big data analysis and everything being on the internet (the Internet of Things), the veil that made up that expectation of privacy is gone. Big data has allowed us to be tracked in new ways, and as we add more devices to the internet, more of our habits will be tracked: the location of boats, planes, and your mobile device; purchasing habits; your location within a store or theme park; perhaps even your usage of your toaster, house doors, or refrigerator.
Sharing content is easy. Email and social media have made it simple to download, post, and view content regularly. However, not all of the content that is viewed was supposed to be shared. With increasingly advanced software and more savvy viewers, more and more confidential content is being exposed, stolen, or illegally altered. Unfortunately, it can be difficult or expensive to track content to know exactly where it was leaked and, once it has been leaked, there is no getting that content back. Dynamic watermarking is one option for those who want to better protect their content.

A dynamic watermark is an image or text that is overlaid on selected content and can change based on a set of factors. A dynamic watermark can display something as simple as the date, or offer more protection by displaying a viewer's contact information such as a name or email address.

Static vs. Dynamic Watermarks

A static watermark is used as a way to trace content back to its original source, but is often used as a way to try to protect content. These digital watermarks are commonly seen on photographs or television shows. A static watermark is generally a logo, name, or phrase that remains the same on the content each time it is viewed. Stock photos have static watermarks placed on them to try to deter viewers from stealing the image and, many times, television networks place their logo in the bottom corner of the screen of a TV show so that it may be traced back to the rightful owner.

Dynamic watermarking can also be a logo, name, or phrase, but it has the ability to change. For example, a phrase could be displayed in a different language based on the location in which a piece of content is viewed, or the name of the viewer can appear on a confidential document. A dynamic watermark adds an extra layer of security that a static watermark is unable to provide. For a confidential video, a viewer's name or email address can be displayed on the video, deterring them from recording or sharing that video. Because dynamic watermarks change, they offer more protection and are much more difficult to remove from content than a static watermark. Dynamic watermarks lead to less illegal sharing and altering than static watermarks.

Digital watermarks have grown in popularity as a simple way to add a level of security to a video, image, PDF, or other important file. They're easy to apply and take time and skill to remove. Having to remove a watermark decreases the likelihood that a piece of content will be misused. Because dynamic watermarks change based on different factors when opened, they provide extra security to important or confidential content that isn't meant to be shared or altered. Dynamic watermarking can be effective on its own, but works best when combined with other security measures such as file encryption and tracking. To learn more about dynamic watermarking and how to protect confidential materials, you can request a demo from Content Raven.
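To make the idea concrete, here is a minimal sketch of per-viewer watermarking using Python's Pillow library; the file names and viewer address are hypothetical, and a real deployment would pair this with the encryption and tracking mentioned above.

```python
from PIL import Image, ImageDraw, ImageFont

def apply_dynamic_watermark(src_path, dst_path, viewer_email):
    """Stamp a viewer-specific watermark onto an image (illustrative sketch)."""
    image = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # The text changes per viewer -- this is what makes the watermark dynamic.
    text = f"Confidential - licensed to {viewer_email}"
    draw.text((10, image.height - 20), text, font=font, fill=(255, 255, 255, 128))
    Image.alpha_composite(image, overlay).convert("RGB").save(dst_path)

# Hypothetical usage: each recipient gets an individually marked copy.
apply_dynamic_watermark("report.png", "report_alice.png", "alice@example.com")
```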
Definition: An integer n is determined uniquely mod LCM(A(i)) by the residues (n mod A(i)), A(i) > 0 for i=1..k, k > 0. In other words, given the remainders an integer leaves when it is divided by an arbitrary set of divisors, you can uniquely determine the integer's remainder when it is divided by the least common multiple of those divisors.

Note: For example, knowing the remainder of n when it's divided by 3 and the remainder when it's divided by 5 allows you to determine the remainder of n when it's divided by LCM(3,5) = 15.

After LK. Entry modified 17 December 2004.

Cite this as: Paul E. Black, "Chinese remainder theorem", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/chineseRmndr.html
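A short sketch shows the theorem in action. Assuming Python, this recovers n mod LCM(A(i)) from the individual remainders by brute-force search, which is fine for small moduli; a production version would use the extended Euclidean algorithm instead.

```python
from math import gcd

def lcm(a, b):
    return a // gcd(a, b) * b

def crt(remainders, moduli):
    """Recover n mod lcm(moduli) from the residues n mod moduli[i].

    Raises ValueError if the remainders are inconsistent, which can
    happen when the moduli are not pairwise coprime.
    """
    m = 1
    for a in moduli:
        m = lcm(m, a)
    for n in range(m):  # brute force; fine for small moduli
        if all(n % ai == ri for ai, ri in zip(moduli, remainders)):
            return n
    raise ValueError("inconsistent remainders")

# The example from the entry: remainders mod 3 and mod 5 determine n mod 15.
print(crt([2, 3], [3, 5]))  # -> 8, and indeed 8 % 3 == 2 and 8 % 5 == 3
```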
Five Imperatives for Extreme Data Protection in Virtualized Environments #1: Minimize impact to host systems during backups In virtual environments, numerous virtual machines (VMs) share the resources of the single physical VM host. Backups – which are among the most resource intensive operations – negatively impact the performance and response time of applications running on other VMs on the same host. On a large virtual machine host with many VMs, competing backup jobs have been known to bring the host to a grinding halt, leaving critical data unprotected. There are various approaches for minimizing the impact to host systems during backups, though each has drawbacks. The simplest approach is to limit the number of VMs on a given system, making sure you do not exceed the number you can effectively back up. While effective, this goes counter to the purpose of virtualization, which is to consolidate applications to the fewest possible physical servers. It would also limit the financial benefits accrued from consolidation, perhaps significantly. A second approach is to stagger the scheduling of VM backups. For example, if performance is impacted when four backups are running simultaneously, limit backups to three at a time. This can solve the performance issue, but it can create other challenges. For example, backup jobs cannot be scheduled without referencing all the other existing jobs. What if a particular job runs longer than expected and the next set of jobs start? Suddenly, the performance limit has been surpassed. As data grows over time, jobs may take longer to run, creating backup overlap. There is also no clear way to account for full backups and incrementals in such a scheme. Even if the scheduling is worked out, the total backup window has now been extended significantly by stretching backups over time. An early technical attempt at solving the backup problem was the use of a proxy server. For VMware, this model is known as VMware Consolidated Backup, commonly called VCB. With VCB, a separate server is dedicated for running the backups directly off the storage. The virtual machines do not participate in backups. While this seemed good in theory, in practice there was still significant performance impact due to the use of VMware snapshots. It also proved complex to configure. The result was that few users adopted this model and VMware has dropped support for it going forward. In response to this, with vSphere 4.0 VMware released a new storage API called vStorage APIs for Data Protection. This introduced the concept of Changed Block Tracking (CBT). Simply put, CBT tracks data changes at the block level, rather than the file level. This results in significantly less data being moved during backup, making them faster and more efficient. CBT goes a long way toward solving the problem of backup impact, though it does still rely on VMware snapshots, which create impact, and the data tracking overhead can cause slower performance of virtual machines. CBT also requires backup software to integrate with the APIs, which can result in the need to upgrade the backup environment or change to a new vendor. A final approach is to install an efficient data protection agent on each virtual machine, and then run backup jobs just as they would be run in a physical environment. The efficient agent requires technology that deftly tracks, captures, and transfers data streams at a block level without the need to invoke VMware snapshots. 
By doing so, no strain is placed on the resident applications, open files are not an issue, and the file system, CPU, and other VMs are minimally impacted.

#2: Reduce network traffic impact during backups to maximize backup speed

Reduction of network traffic is best achieved through very small backups, which dart across the network rapidly, eliminating network bottlenecks as the backup image travels from VM to LAN to SAN to backup target disk. Block-level incremental backups achieve this, while full base backups, and even file-level incrementals, do not. Minimal resource contention, low network traffic, and small snapshots all lead to faster backups, which deliver improved reliability (less time in the transfer process means there is less time for network problems) and allow for more frequent backups and recovery points. In a virtual environment, this also means more VMs can be backed up per server, increasing VM host density and amplifying the benefits of a virtualization investment. Technologies such as CBT and other block-level backup models are the best way to limit network impact.

#3: Focus on simplicity and speed for recovery

Numerous user implementations have revealed that server virtualization introduces new recovery challenges. Recovery complications arise when backups are performed at the physical VM host level (obscuring and prolonging granular restores) or through a proxy (necessitating multi-step recovery). It is important to consider the availability of a searchable backup catalog when evaluating VM backup tools. Users of traditional, file-based backup often assume that the searchable catalog they are used to is available in any backup tool. But with VMs this is not always the case. Systems that do full VM image backups or use snapshot-based backups often are not able to catalog the data, meaning there is no easy way to find a file. Some provide partial insight, allowing users to manually browse a directory tree, but not allowing a search.

It is also important to understand how the tool handles file history. A common recovery use case is the need to retrieve a file that has been corrupted, but the exact time of corruption is not known. This requires the examination of several versions of a file. A well-designed recovery tool will allow input for both the file name and a date range to detect every instance of the file housed in the backup repository. While this may seem a minor point, it can make the difference between an easy five-minute recovery process and a frustrating hour or two hunting around for files.

Fast and simple recovery, at either a granular or virtual machine level, can be achieved if point-in-time server backup images on the target disks are always fully "hydrated" and ready to be used for multiple purposes. In fact, with a data protection model that follows this practice, immediate recovery to a virtual machine, cloning to a virtual machine, and even quick migration from a physical to a virtual machine are all done the same way – by simply transferring a server backup image onto a physical VM host server.
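Several of these imperatives come down to block-level change tracking. The sketch below is a toy stand-in for mechanisms like VMware's CBT, not its actual implementation: writes mark blocks dirty, and an incremental backup ships only the dirty set.

```python
class ChangedBlockTracker:
    """Toy model of block-level change tracking (not VMware's implementation)."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # block index -> bytes (stand-in for disk contents)
        self.dirty = set()  # indices of blocks written since the last backup

    def write(self, offset, data):
        """Store a write and mark every block it touches as dirty."""
        first = offset // self.block_size
        last = (offset + len(data) - 1) // self.block_size
        for index in range(first, last + 1):
            self.blocks[index] = data  # real code would slice data per block
            self.dirty.add(index)

    def incremental_backup(self):
        """Return only the blocks changed since the previous backup."""
        changed = {i: self.blocks[i] for i in self.dirty}
        self.dirty.clear()  # the next cycle starts from a clean slate
        return changed

tracker = ChangedBlockTracker()
tracker.write(8192, b"hello")
print(sorted(tracker.incremental_backup()))  # -> [2]
```

Only the dirty blocks cross the network, which is what keeps both the backup window and the LAN/SAN load small.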
#4: Minimize secondary storage requirements

Traditional backup results in multiple copies of the entire IT environment on secondary storage. Explosive data growth has made those copies larger than ever, and the need for extreme backup performance to accommodate more data has necessitated the move from tape backup to more expensive disk backup. The result is that secondary disk data reduction has become an unwanted necessity.

Deduplication of redundant files can be achieved at the source or at the target. In isolation, each approach has drawbacks. Each new data stream needs to be compared with an ever-growing history of previously stored data. Source-side deduplication technology can impact performance on backup clients because of the need to scan the data for changes. It does, however, reduce the amount of data sent over the wire. Target-side deduplication does nothing to change the behavior of the backup client or limit sent data, though it does significantly reduce the amount of disk resources required. A hybrid approach combining efficient data protection software with target-side deduplication can help organizations achieve the full benefits of enterprise deduplication without losing the other benefits.

#5: Strive for administrative ease-of-use

Very few users have a 100% virtualized environment. Consequently, a data protection solution that behaves the same in virtual and physical environments is desirable. A data protection solution in which a backup agent is installed on each VM can help ease the transition from physical to virtual. Concerns about backup agents needing to be added to every new virtual machine are overstated because each VM needs to be provisioned anyway – with an operating system and other commonly deployed applications and software. New virtual machines cloned from a base system will already include the data protection agent.

When evaluating solutions, it is vital to consider the entire backup lifecycle, from end to end. For example, if some data sets need to be archived to tape, a deduplication device may not allow easy transfer of data to archive media. This might then require an entire secondary set of backup jobs to pull data off the device and transfer it to tape, greatly increasing management overhead. This kind of "surprise" is not something organizations want to discover after they have paid for and deployed a solution. Ease of use can also be realized with features such as unified platform support, embedded archiving, and centralized scheduling, reporting, and maintenance – all from a single pane of glass.

A holistic view of virtualization

To maximize the value of a virtualization investment, planning at all levels is required. Data protection is a key component of a comprehensive physical-to-virtual (P2V) or virtual-to-virtual (V2V) migration plan. The five imperatives recommended here can help significantly improve organizations' long-term ROI around performance and hardware efficiencies and accelerate the benefits of virtualization. To complete this holistic vision, organizations must demand easy-to-use data protection solutions that rate highly on all five of the imperatives. Decision makers who follow these best practices may avoid the common data protection pitfalls that plague many server virtualization initiatives.

Syncsort is exhibiting at 360°IT, the IT Infrastructure Event, held 22nd – 23rd September 2010 at Earl's Court, London.
The event provides an essential road map of technologies for the management and development of a flexible, secure and dynamic IT infrastructure. For further information please visit www.360itevent.com
You most likely have your proprietary software thoroughly tested, QAed, and reviewed via static code analysis on a regular basis. But what about the open source components? Open source components may have a direct impact on the quality of your software or service. Security vulnerabilities in open source components are discovered from time to time, and while often fixed very quickly, you need to make sure that you know of them when they are discovered and can apply the right measures when necessary. Here are three measures you should take to control the risks that open source components may introduce to your software:

1 – Know what's in your software. Open source components are part of your software, so you need to know which ones are embedded in your software, at all times. What makes this task hard is that most open source components have dependencies (components they use) – in fact, we researched the 300K open source components (out of a database of millions) that are most commonly used by our customers, and discovered that on average, each has 7.1 dependencies. So if you have 50 open source components that you know of in your software, the actual number is probably much higher. Knowing what you are using is an essential step on the path to full control of your software.

2 – Control what's being added into your software. Checking what open source components are added in real time allows you to check whether they conform to your open source license and risk policy, and to decide whether you want to use them before too much effort is put into developing your software around these components. You just don't want to spend precious development resources on components that you cannot use because their license is too restrictive. There are usually plenty of alternatives with friendly licenses your development team can consider if they know that they should.

3 – Track security vulnerabilities and stale libraries. Another reason to check open source components as they are added to your software is security vulnerabilities. As we mentioned above, since open source software is like any other software, it too may contain security vulnerabilities. The good news: open source components are tested, used, and fixed by an entire community. Our research of over 6,000 projects shows that if open source components are properly managed and regularly patched, most projects (98% of them) would not include an unfixed security vulnerability.

Controlling what's in the software and what's being added to it is the first step on the path to secure software. Being able to create a full detailed report in a click, having full visibility and transparency, and making all these part of a foolproof process that does not rely on the development team are key to successful management of open source usage.

About the author: Rami Sass (@whtsrc) is CEO and Founder of WhiteSource. WhiteSource helps engineering executives to effortlessly manage the use of open source components in their software. Open source components make up a significant part of commercial software but are often undermanaged. WhiteSource fully automates all open source management needs, reducing risks and guaranteeing the continuity and integrity of open source component management.

Editor's Note: The opinions expressed in this article are solely those of the contributor, and do not necessarily reflect those of Checkmarx.
A multidimensional database is optimized for online analytical processing (OLAP) applications and for data warehousing. Such databases are often populated with input from relational databases, and can be used for queries around business operations or trends. Multidimensional database management systems (MDDBMS) can process the data in a database at high speed and can generate answers quickly.

GT.M is a multidimensional database engine that is used at financial and health institutions worldwide. It is scalable and can process in real time. GT.M places no restrictions on the type of data that can be indexed and stored, and the application logic can be adapted to suit your own needs.

SciDB is a data management system especially designed for scientific research. It is optimized for big data and big data analytics. SciDB is not suited for online transaction processing (OLTP); instead, it is built around analytics. Data is write-once, read-many, and the system is built around multidimensional array storage.

Rasdaman stands for Raster Data Manager, and this project extends standard relational database systems with the ability to store and retrieve multidimensional arrays through an SQL-style language. Rasdaman is flexible, scalable, and offers high-speed performance. It offers a Java API and a C++ API.
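To give a flavor of that SQL-style array access, a rasdaman query might look like the following sketch; the collection name is hypothetical, and the exact encoding functions and syntax vary between rasdaman versions.

```sql
-- Trim a 101x101 subarray out of a hypothetical 2-D collection
-- and return it CSV-encoded.
SELECT csv(img[0:100, 0:100])
FROM GreyImages AS img
```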
As both standalone and networked computing capabilities continue to grow in line with Moore's law, key sizes for the most widely used public-key cryptographic systems have to grow disproportionately fast. This trend makes a switch to elliptic-curve cryptography (ECC) more and more attractive. Unfortunately, ECC has a reputation for being difficult to understand. And this reputation, deserved or not, deters many from exploring the principles on which it is based. This is particularly unfortunate now, when we are called upon to make informed judgments about the soundness of elliptic curve standards and the processes by which they were developed. We need more people willing to enter into the debate, and, in order for this to happen, more people must have a grasp of the basics.

A full appreciation of all aspects of ECC does demand a thorough grounding in some advanced branches of mathematics. But few of us need the depth of understanding required to design new elliptic-curve schemes, or to implement existing schemes in software or hardware. The basic principles, on the other hand, are easily understood by anyone who studied mathematics through high school. And a wider understanding of the basics will result in a wider circle of informed discussion. It's time to dispel the myth that knowledge of ECC is out of reach to all but the mathematical elite.
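The "disproportionate growth" is easy to quantify with the commonly cited NIST SP 800-57 key-size equivalences (approximate figures):

- 112-bit security: RSA ≈ 2,048 bits vs. ECC ≈ 224 bits
- 128-bit security: RSA ≈ 3,072 bits vs. ECC ≈ 256 bits
- 192-bit security: RSA ≈ 7,680 bits vs. ECC ≈ 384 bits
- 256-bit security: RSA ≈ 15,360 bits vs. ECC ≈ 512 bits

Each step up in security level roughly doubles the RSA modulus while adding only a modest number of bits to the ECC key, which is the case for the switch in a nutshell.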
Unfortunately, Green House Data does not have a wind turbine built into our data center roof. We wish we did, but because it is impossible to separate renewable power from conventional methods of generation on the power grid, we have to buy Renewable Energy Credits instead. This is combined with purchases from Cheyenne wind farms to help us reach our 100% renewable-powered goal. In fact, most companies who use renewable energy purchase renewable credits, including huge operators like Google. So what are Renewable Energy Credits (RECs)? Everyone plugs into the same power grid—there is no separate plug marked "Wind Power" in our data center! That means it's impossible to tell whether the energy used is generated by a solar panel or by a coal plant, once it makes it to the outlet. A Renewable Energy Credit, sometimes called a green tag, is a certificate proving that 1 megawatt-hour (MWh) of electricity was generated by a renewable energy resource. RECs can be sold, traded, or bartered according to the US Department of Energy, and are sold separately from the energy itself. That means we have to buy it directly from companies specializing in renewable projects. They can be controversial because the buyer of an REC is basically investing in another renewable project, not buying renewable energy themselves. RECs help encourage renewable growth when direct purchases aren't available and are relatively new commodities, so the potential for fraud must be considered. How can companies be sure their money is going towards the development of new renewables? Who keeps track of the certificates themselves to ensure they aren't sold twice? Every REC must be tracked from their origin to the final user, making sure that each megawatt-hour of renewable energy has indeed entered the regular power grid. The EPA suggests two methods of certification: individual audits and regional certificate tracking systems. Audits are certified by third-party organizations and are generally used when the REC is purchased in a far away region. Tracking systems are electronic databases used to monitor RECs, with unique numbered certificates for every MWh of electricity. Once the wattage has been used, the certificate expires. Each certificate number can only belong to a single account. There are a number of certificate tracking systems in use throughout the United States. In the Rocky Mountains, the Western Renewable Energy Generation Information System (WREGIS) tracks renewable energy generation from units that register in the system by using verifiable data and creating RECs for this generation. WREGIS is an independent organization governed by committee, and was developed after California charged that state’s Energy Commission with developing a tracking system in 2003. In Green House Data's case, we know that some of our data center power comes from the Happy Jack and Silver Sage Wind Farms, and one of the reasons we choose to locate in Wyoming is because of the abundance of renewable energy fed into the regular power grid. At our New Jersey location, we also have some direct solar power generated by solar panels on the roof of the data center building, but we must also supplement this power with RECs. Every year we receive a certificate verifying our RECs as a total MWh amount, which then expires when we have used that amount of electricity. Green House Data has also been an EPA Green Power partner with 100% Green Power usage for several years. 
In order to meet the requirements, an organization must track their electricity usage and purchase a corresponding amount of RECs, so their power is essentially supplied by renewable sources. The EPA also requires power to be sourced from the United States, as well as from renewable facilities built within the past 15 years, in order to continuously drive the creation of more renewables. RECs come from a certain "vintage", meaning they are certified for renewable project spending within a calendar year. If we purchase an REC for 2013, that money is spent during 2013 to build more renewable energy sources. The EPA Green Power partner program requires participants to purchase RECs of the current vintage, or up to 3 months ahead of time. Organizations can purchase vintages ahead of time, but they will not count towards a 100% Green Power rating until that year has started. RECs are only one way Green House Data strives to set an example as a leading provider of responsible cloud computing services. For more information, check out our energy efficient practices. Posted By: Joe Kozlowicz
Here’s a big idea that’s taken hold in natural language processing: meanings are vectors. A text-understanding system can represent the approximate meaning of a word or phrase by representing it as a vector in a multi-dimensional space. Vectors that are close to each other represent similar meanings. Vectors are how Luminoso has always represented meaning. When we started Luminoso, this was seen as a bit of a crazy idea. It was an exciting time when the idea of vectors as meanings was suddenly popularized by the Google research project word2vec. Now this isn’t considered a crazy idea anymore, it’s considered the effective thing to do. Luminoso’s starting point — its model of word meanings when it hasn’t seen any of your documents — comes from a vector-based representation of ConceptNet 5. That gives it general knowledge about what words mean. These vectors are then automatically adjusted based on the specific way that words are used in your domain. But you might well ask: if these newer systems such as word2vec or GloVe are so effective, should we be using them as our starting point? The best representation of word meanings we’ve seen — and we think it’s the best representation of word meanings anyone has seen — is our new ensemble that combines ConceptNet, GloVe, PPDB, and word2vec. It’s described in our paper, “An Ensemble Method to Produce High-Quality Word Embeddings“, and it’s reproducible using this GitHub repository. We call this the ConceptNet Vector Ensemble. These domain-general word embeddings fill the same niche as, for example, the word2vec Google News vectors, but by several measures, they represent related meanings more like people do. Expanding on “retrofitting” Manaal Faruqui’s Retrofitting, from CMU’s Language Technologies Institute, is a very cool idea. Every system of word vectors is going to reflect the set of data it was trained on, which means there’s probably more information from outside that data that could make it better. If you’ve got a good set of word vectors, but you wish there was more information it had taken into account — particularly a knowledge graph — you can use a fairly straightforward “retrofitting” procedure to adjust the vectors accordingly. Starting with some vectors and adjusting them based on new information — that sure sounds like what I just described about what Luminoso does, right? Faruqui’s retrofitting is not the particular process we use inside Luminoso’s products, but the general idea is related enough to Luminoso’s proprietary process that working with it was quite natural for us, and we found that it does work well. There’s one idea from our process that can be added to retrofitting easily: if you have information about words that weren’t in your vocabulary to start with, you should automatically expand your vector space to include them. Faruqui describes some retrofitting combinations that work well, such as combining GloVe with WordNet. I don’t think anyone had tried doing anything like this with ConceptNet before, and it turns out to be a pretty powerful source of knowledge to add. And when you add this idea of automatically expanding the vocabulary, now you can also represent all the words and phrases in ConceptNet that weren’t in the vocabulary of your original vector space, such as words in other languages. The multilingual knowledge in ConceptNet is particularly relevant here. 
Our ensemble can learn more about words based on the things they translate to in languages besides English, and it can represent those words in other languages with the same kind of vectors that it uses to represent English words. There’s clearly more to be done to extend the full power of this representation to non-English languages. It would be better, for example, if it started with some text in other languages that it could learn from and retrofit onto, instead of relying entirely on the multilingual links in ConceptNet. But it’s promising that the Spanish vectors that our ensemble learns entirely from ConceptNet, starting from having no idea what Spanish is, perform better at word similarity than a system trained on the text of the Spanish Wikipedia. On the other hand, you have GloVe For some reason, everyone in this niche talks about word2vec and few people talk about the similar system GloVe, from Stanford NLP. We were more drawn to GloVe as something to experiment with, as we find the way it works clearer than word2vec. When we compared word2vec and GloVe, we got better initial results from GloVe. Levy et al. report the opposite. I think what this shows is that a whole lot of the performance of these systems is in the fine details of how you use them. And indeed, when we tweak the way we use GloVe — particularly when we borrow a process from ConceptNet to normalize words to their root form — we get word similarities that are much better than word2vec and the original GloVe, even before we retrofit anything onto it. You can probably guess the next step: “why don’t we use both?” word2vec’s most broadly useful vectors come from Google News articles, while GloVe’s come from reading the Web at large. Those represent different kinds of information. Both of them should be in the system. In the ConceptNet Vector Ensemble, we build a vector space that combines word2vec and GloVe before we start retrofitting. You can see that creating state-of-the-art word embeddings involves ideas from a number of different people. A few of them are our own — particularly ConceptNet 5, which is entirely developed at Luminoso these days, and the various ways we transformed word embeddings to make them work better together. This is an exciting, fast-moving area of NLP. We’re telling everyone about our vectors because the openness of word-embedding research made them possible, and if we kept our own improvement quiet, the field would probably find a way to move on without it at the cost of some unnecessary effort. These vectors are available for download under a Creative Commons Attribution Share-Alike license. If you’re working on an application that starts from a vector representation of words — maybe you’re working in the still-congealing field of Deep Learning methods for NLP — you should give the ConceptNet Vector Ensemble a try.
The National Center for Atmospheric Research (NCAR) is building a new high performance computing center in Wyoming, just west of Cheyenne. The facility will host Yellowstone, a petascale supercomputer, as well as new storage, visualization, and data analytics clusters. The machines will be used to support research in weather, climate, air pollution, earthquakes, carbon sequestration, and water issues. The idea is to give Earth scientists access to much greater computing and storage capabilities in order to create more accurate simulations of these atmospheric and geophysical models.

The Republic covered the construction of the NCAR-Wyoming Supercomputing Center, where Yellowstone will be housed. The 153,000 square foot building is costing roughly $70 million, funded by business groups, the state government, and the NSF. The center is set to open on October 15th.

IBM won the bid to build the supercomputer, beating out three other competitors. Based on Big Blue's iDataPlex server platform, the system will consist of 4,518 dual-socket Sandy Bridge EP nodes, amounting to 72,288 cores. Each 16-core node will be equipped with 32 GB of DDR3-1600 memory. The nodes will be hooked together with Mellanox FDR (56 Gbps) InfiniBand. The system is being installed now and is expected to come online by summer's end.

Delivering an estimated 1.55 peak petaflops, Yellowstone is expected to earn a top ten spot on the upcoming TOP500 list. As such, it will deliver about 30 times the performance of Bluefire, the NCAR supercomputer that Yellowstone is in line to replace. Such power does not come cheap though. The system is expected to cost between $25 and $30 million, which will be covered by the state and the University of Wyoming (UW). $20 million has been provided by the state, while the University will pay $1 million each year over the next 20 years.

As part of Yellowstone's supporting cast are three data analysis and visualization (DAV) systems – Geyser, Caldera, and a Knights Corner cluster – which will be used to post-process the data produced by simulation runs. Like Yellowstone, all the clusters will be outfitted with FDR InfiniBand. Geyser, a 16-node IBM x3850 cluster, will provide large-scale analytics for the supercomputer. Each Geyser node will have a terabyte of memory and house four 10-core Westmere EX processors plus an NVIDIA GPU. The visualization cluster, Caldera, will also have 16 nodes, but in this case, each node has a much smaller memory footprint (64 GB), less CPU performance (two Sandy Bridge EP processors), and more graphics horsepower (two NVIDIA GPUs). The third DAV system is an Intel Knights Corner-powered system. Again, it's a 16-node cluster, with each node pairing two of the MIC coprocessors with two Sandy Bridge EP chips. Interestingly, that system is scheduled to be installed in November 2012, a few months before the Knights Corner parts are expected to be in volume production.

The new NCAR center will also house a data storage system, known as GLADE. It will act as a centralized file resource for Yellowstone and the DAV clusters. GLADE will be made up of 76 IBM DCS3700 storage servers and run GPFS. Using 2TB disk drives, total usable storage capacity will be 10.9 petabytes. The next phase of the system, scheduled for Q1 2014, will incorporate 3TB drives and increase that capacity to 16.9 petabytes.

With petascale storage, compute, and visualization, the new NCAR facility will represent one of the more impressive HPC setups in the world when it comes online later this year.
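The headline number is easy to sanity-check. Assuming the Sandy Bridge EP cores run at about 2.6 GHz and retire 8 double-precision flops per cycle with AVX (both assumptions, since the article does not give a clock speed), peak performance works out to roughly 72,288 cores × 2.6 × 10⁹ cycles/s × 8 flops/cycle ≈ 1.5 × 10¹⁵ flops, which lines up with the quoted 1.55 peak petaflops.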
Debugging The Myths Of Heartbleed

Does Heartbleed really wreak havoc without a trace? The media and many technical sites seemed convinced of this, but some of us were skeptical. Now that IT organizations across the globe have had time to recover from the recent Heartbleed flaw, what can we learn from this incident? The vulnerability was discovered in an OpenSSL library used by thousands of websites on public and private networks and had gone unnoticed for years. Attackers could force a web server to reveal data from inside an SSL session, completely bypassing encryption. As if that weren't bad enough, initial reports claimed that the Heartbleed attack on the TLS/DTLS heartbeat extension occurred "without a trace."

It would not be an overstatement to characterize Heartbleed as one of the creepiest security vulnerabilities ever to lurk across the Internet. Here's a quick summary of its timeline:

- 2011 -- A German coder accidentally creates a security vulnerability in an OpenSSL extension with a simple line of code
- 2011 to 2014 -- Years go by and no one notices this vulnerability; despite the code being open source, it will become a problem for millions of users
- March 21, 2014 -- The vulnerability is discovered independently by Google engineer Neel Mehta and the Finnish security firm Codenomicon
- March 21 to April 7 -- Google, CloudFlare, Akamai, Red Hat, and Facebook complete unannounced patching of their OpenSSL libraries
- April 7 -- The MITRE Corporation officially reports the Heartbleed bug in CVE-2014-0160 and the OpenSSL Project immediately issues version 1.0.1g, which fixes the vulnerable code
- April 7 until now -- Vendors of products that use OpenSSL scramble into a frenzy to identify, diagnose, and update their products

What's in this bug and what's at stake

Let's be clear: this isn't a flaw in SSL/TLS or in the heartbeat extension (RFC 6520). The vulnerability exists within the OpenSSL implementation of the extension. Heartbleed exploit code allows attackers to force a web server to reveal 64 KB chunks of certain memory regions through a missing bounds check (a buffer over-read). While it isn't possible to predict what might be revealed, successful attacks have obtained session keys, passwords, and other information that should normally remain confidential.

Finding the hidden evidence

Does Heartbleed really wreak its havoc while leaving nary a trace? The media and many technical sites seemed convinced of this, but some of us were skeptical. The Heartbleed attacks surely leave some evidence behind: packets. Packets almost always tell a detailed story of what has really happened, including in the case of Heartbleed. The trick, of course, is to have the packets. It's true that a server attacked with a Heartbleed exploit is unlikely to reveal any evidence. Stored packets, meanwhile, do tell the story of a successful Heartbleed exploit even after the adversary has stopped an active attack.

Detecting a prior Heartbleed exploit

Continuous monitoring of a network can reveal active Heartbleed attacks. But even more importantly, with a sufficiently large rolling buffer of packet capture data, it becomes possible to look back in time, before the public disclosure of the Heartbleed vulnerability. An investigation of this data may reveal whether an actual exploit of vulnerable servers has occurred. A Berkeley Packet Filter (BPF) expression placed in the network can automatically flag larger-than-normal TLS heartbeat responses from servers.
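One widely circulated formulation of such a filter is shown below as a tcpdump capture expression. The term (tcp[12] & 0xf0) >> 2 computes the TCP header length, so the indexed bytes are the first byte of the TLS record (0x18 marks a heartbeat) and the two-byte record length; the interface name and the 69-byte threshold are placeholders to adjust for your environment.

```
tcpdump -i eth0 -w heartbleed.pcap \
  'tcp src port 443 and tcp[((tcp[12] & 0xf0) >> 2)] = 0x18 and (tcp[((tcp[12] & 0xf0) >> 2) + 3] != 0 or tcp[((tcp[12] & 0xf0) >> 2) + 4] > 69)'
```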
Wireshark, tcpdump, and other tools can analyze the captured packets for confirmation of an attack.

Why BPF? BPF engines are fast -- something of a requirement given the sheer amount of traffic passing through modern networks. Important for the use case here, a BPF capture is a common format for packet processing, understood by the majority of operating systems and packet-analyzing software. The wide availability of BPF is the main reason it can become easy to detect Heartbleed attacks. BPF engines are available for Linux, for Mac OS, and for Windows (via WinPcap), and can be placed on cloud computing instances. BPF and tcpdump are present on most network appliances (such as firewalls, load balancers, and application delivery controllers) and are often accessible through an administrative console for troubleshooting purposes. And, of course, packet analysis and storage engines in products designed for supporting network performance management almost all support BPF.

As a result, many individuals in the technical community have responded with plenty of resources to develop a BPF filter appropriate for detecting Heartbleed attacks. Most of these filters examine traffic from port 443/tcp (the default HTTPS port). The usual size threshold is 69 bytes; this can be adjusted upwards or downwards to reduce false positives or false negatives if necessary.

Having a nimble awareness of the data in your network, a basic understanding of how secure services should normally operate, and the ability to investigate anomalies can inoculate you from the unavoidable hype. Packets do not lie -- but you have to capture them to reveal their truths. A large rolling buffer of packet capture data establishes an ideal forensics basis. From this, you can determine what has actually occurred during the time when vulnerability exploits run wild. Certainty is always better than mere speculation over hypothetical breaches in security.

Steve actively works to raise awareness of the technical and business benefits of Riverbed's performance optimization solutions, particularly as they relate to accelerating the enterprise adoption of cloud computing.
June 6th, 2014 - by Walker Rowe

Here we look at some cool tools that the Linux system administrator will find useful or even indispensable. The tools we'll discuss in this article are all free.

Puppet is available in both a free open source version and a paid commercial version known as Puppet Enterprise, which includes extra features and support. Puppet is used for IT automation, orchestration, and reporting. With Puppet, you can define the desired state of your system, simulate the changes before implementing them, enforce and deploy the desired state automatically, and then report on the differences between the actual and desired states. The desired state is defined on the Puppet master, and your Puppet agents will be installed on those servers that you want to control: the agents will get the desired state from the master and then implement it. To understand better how this works, here are some examples of what Puppet is capable of doing; in each example imagine that you have hundreds of servers to manage (a minimal manifest illustrating several of these examples appears below).

- Control files: Linux is based around files; by modifying files you can control almost everything. Rather than editing files on different servers in the way that you require, you can specify this on the Puppet master. The Puppet agents will ensure that the same file is present on the server where they are installed. For example, you have a custom /etc/ssh/sshd_config and /etc/sudoers file that locks down SSH and gives root access to admins. You can roll out these files, and any future changes, automatically with Puppet. If someone such as an attacker changes or overwrites any of the Puppet-controlled files, the desired version will be put back.
- Set cron jobs: Puppet can set cron jobs from the Puppet master on all of your servers, so you can schedule jobs anywhere without having to set them up manually.
- Install or remove packages: You can ensure that packages are installed on or removed from your servers. For instance, if you need Apache you can ensure that it's always there along with the required configuration files that you have set. Alternatively, if you want to ensure that Apache is removed you can set this as well; if someone installs Apache then Puppet will remove it.
- Ensure services are running: Puppet can check that services are running or stopped. For example, you can ensure that Apache is always up and running. If Puppet detects that Apache is not running it can try to start the service.
- Execute commands: You can also set Puppet to run a command on all of your servers. So, you could use Puppet to run the auto-install command for the Anturis agent to have it installed automatically on all of your servers that you wish to monitor, saving a lot of time.

It's important to note that the above only happens when the Puppet agent completes a run. When this happens it will get the desired configuration from the Puppet master. By default a Puppet run will happen every 30 minutes; however, you can change this interval.

MCollective is a framework to build server orchestration or parallel job execution systems. The service is separate from the Puppet agent but typically installed with it. It can scan your network for virtual machines based on command-line criteria. Then it lets you send them messages - for example, to find out which ones are down, or to restart the processes on the machine or the whole machine from one central location.
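As promised above, here is a sketch of what the sshd and Apache examples might look like as a minimal Puppet manifest; the file source, paths, and service names are illustrative assumptions, not taken from any particular module.

```puppet
# Keep a locked-down sshd_config in place; Puppet restores it if changed.
file { '/etc/ssh/sshd_config':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/ssh/sshd_config',
  notify => Service['sshd'],   # restart sshd when the file changes
}

service { 'sshd':
  ensure => running,
  enable => true,
}

# Ensure Apache is present and running; swap 'installed' for 'absent'
# (and 'running' for 'stopped') to enforce its removal instead.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
```

Each resource declares an end state rather than a procedure; the agent converges the machine to that state on every run.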
MCollective itself is written in Ruby, so you could copy some of that code and adapt it to your specific needs. The data comes from Puppet, Chef, Facter, and other plugins; MCollective reads the metadata left behind when you used those tools to build the machines. Here are some example MCollective commands and a list of what you can do with the tool:

- mc-find-hosts: finds all virtual machines.
- mc-facts: shows, for example, which machines are located in which countries.
- mc-service –with-class /dev_server/ httpd status: finds machines that are running web servers.
- use mc-rpc to send messages to machines and discover those with errors when they do not echo back the same message.
- mc-service –with-class /dev_server/ httpd restart: restarts development web servers.

Webmin is an open source tool for managing server configuration; it allows you to administer your server through a web interface via a browser rather than working directly with files through SSH, for example. You can use it to set up user accounts, disk quotas, and Apache config files, to enable file sharing - and more. The system is configured using administrative modules, which are .gz module files and can be added to increase functionality and implement updates. Below are some example modules that are available; there are plenty more that can be downloaded.

- Apache - configure almost all Apache directives, which is a lot easier than typing them by hand
- Bind DNS Server
- BSD Firewall
- Backup Configuration Files
- Change Password
- CD Burner
- DHCP Server
- File Manager
- File System Backup
- LDAP Client - search and edit LDAP records
- LDAP Server - manage OpenLDAP
- Network Configuration

The next tool is a free utility used to erase data from all disks on a server. It's a boot disk that will erase any disks that can be detected, making it useful during the server decommissioning process. Be careful where you use it! It works with Windows and Linux and is useful when you need to ensure that data has been securely erased so that it cannot be recovered - so be sure that you really want to remove the data stored on the disks of the server you boot it on.

Wireshark is a network protocol analyzer, also known as a network sniffer. It's similar to other tools such as tcpdump but with a graphical interface and the ability to more easily filter traffic by type, by source and destination addresses, and by ports. Wireshark can run on all of the popular operating systems; it uses libpcap on Linux to capture packets and WinPcap on Windows. You can use it to troubleshoot network and application issues or simply to monitor what traffic there is on the network. On a security note, a sniffing tool can normally only capture packets going to and from your own machine. To see other hosts' traffic you would have to put your network card into promiscuous mode on a shared medium such as a hub, or configure port mirroring on your switch. On a busy network the output from Wireshark will go by on the screen very fast; you can log the traffic into a .pcap file for review at a later time.

The next tool is a custom live CD of Ubuntu Linux used for data recovery and forensics. You can boot it from a USB drive or CD and then inspect disks at the block level, meaning that it can work on any operating system's file system, such as Windows or Linux. Once booted you will be provided with a Linux shell and various tools to help with data recovery; it includes ddrutility, for instance, which shows file fragments and names in unrecoverable disk blocks.
It also includes many other useful data recovery tools such as PhotoRec, The Sleuth Kit, Gnu-fdisk, and ClamAV, which can be useful for scanning for malware and viruses. The project is currently no longer supported or maintained by the developer. However, it still works great. If it does what you need, you shouldn't have any problems; if not, there is no one to write to for support unless someone else takes over the project's source code. As it has been around for a while, you probably won't find any problems or limitations that you cannot work around.

TightVNC provides remote access to a graphical user interface (GUI), allowing you to control a Linux system without actually being at the machine. Typically, Linux servers are administered at the command line over SSH; however, some users prefer to manage the server with a GUI, especially if it is a desktop machine. VNC works in a similar way to Microsoft's remote desktop application; you just need to install and run the VNC server on the machine you want to connect to. Once configured, you will be able to connect remotely. Something to be aware of is that VNC is not secure. Traffic is transferred over the network in plain text or in a form that can be cracked, so connecting to VNC with a username and password, for instance, is generally not secure by default; this information can be obtained using a tool such as Wireshark. To increase security, most VNC clients will allow you to tunnel VNC over SSH. This is secure because SSH is used to establish an encrypted connection to the machine, and you then connect with VNC over the top of it. There are VNC clients available for Windows, Linux and Mac OS X, so you can connect from almost anywhere. There are even Android clients, so you can connect from a mobile device; one of the most popular ones currently is 'Ripple'. VNC is platform-independent, so there are many different versions around that you can use, such as TigerVNC, TightVNC and RealVNC.

CSF is a suite of scripts which provides a firewall, login/intrusion detection, and more. The firewall is essentially a front end to iptables with plenty of additional useful features. Firstly, because it drives the iptables firewall for you, it allows you to implement rules without needing to understand the details and syntax of iptables. This lets you secure your server by locking down both inbound and outbound traffic. For instance, you can set up your server to allow SSH connections on port 22 only from 192.168.0.1 and to deny all other requests to port 22. CSF is also able to actively block attacks. Suppose you are allowing 192.168.0.1 into port 22 and someone compromises that server. If they then try to SSH into your server - despite being allowed in the firewall - and fail to log in a set number of times within a set period, their source IP address will be blocked in the firewall temporarily. If enough temporary blocks occur, they can be blocked permanently as an attacker. This works for a lot more than just SSH: it also covers failed logins to web pages protected with htpasswd, Exim SMTP authentication, Mod_Security failures, and even FTP from vsftpd/proftpd/pure-ftpd. While CSF can be used on a standalone Linux server, it has a GUI component when used on a WHM/cPanel server. It can then be configured and managed via WHM rather than through the command line, which is great as WHM hosting is very popular.

Capistrano is an open source tool written in Ruby for remote server automation and deployment.
It supports scripting and execution of tasks, and can be used to deploy web applications to multiple machines simultaneously or in sequence, perform data migration, run automatic audits (checking logs, applying patches, etc.), and execute other tasks. You can plug other source control management software into Capistrano to expand its capabilities. Capistrano is a Ruby gem, meaning it gives you complex functionality that you can use in a simple manner, as explained here.

Fabric is a Python library and command-line tool that streamlines the use of SSH for application development or administration tasks. It provides operations for executing local or remote shell commands. You create a module containing one or more functions, then execute them via the fab command-line tool. Once a task is defined, you can run it on multiple servers. For instance, run "fab -H localhost,remoteserver host_type" and you will get the output of 'uname -s' from all specified servers, in this case from localhost and remoteserver. You only have to define a module once: you then invoke it by typing its name at the command line. As you can see, this functionality can be quite powerful. (A minimal fabfile for this host_type example is sketched at the end of this article.)

MySQL Tuner is a Perl script which can be used to quickly examine MySQL on your server and provide suggestions to increase performance and stability. While the advice provided by the script is generally good, only apply changes that you understand and whose effect you know. If you look up the MySQL documentation for the various variables, it should explain how they work. You should make changes on a test server if possible, not straight onto a production server; some changes will require a MySQL restart, potentially leading to downtime, and poorly set variables can also reduce performance and stability.
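To make the Fabric example above concrete, here is a minimal sketch of the fabfile that the host_type command implies. It assumes the classic Fabric 1.x API (the version current when this article was written); the hostnames are placeholders.

```python
# fabfile.py - minimal sketch for the host_type example above (Fabric 1.x API)
from fabric.api import run

def host_type():
    # Runs 'uname -s' on every host passed via -H and prints the result
    run('uname -s')
```

With this file in the current directory, "fab -H localhost,remoteserver host_type" runs the task once per listed host.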
<urn:uuid:6ba0e04a-9af3-4ca5-8a43-49305df9143d>
CC-MAIN-2017-04
https://anturis.com/blog/11-awesome-tools-for-linux-sysadmins/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00001-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923447
2,635
2.5625
3
Structure of an Interrupt Handler

I/O devices and their controllers fall into three major classes:
1. Program Controlled
2. Interrupt Driven
3. Direct Memory Access
Of these classes, only the latter two can generate interrupts. Here we focus on how the CPU processes the interrupts associated with such devices. This lecture focuses on what I call the "Interrupt Controller Hub". This hub processes interrupts from multiple devices and sends a single INT signal to the CPU when an interrupt is recognized. The CPU sends a single ACK signal back to the hub, which passes it to the appropriate device.

Interrupt Priority and CPU Priority
We follow the design of the PDP-11, developed by the Digital Equipment Corporation (now defunct), in discussing an interrupt structure. We begin with the idea of a CPU execution priority. This is specified by a 3-bit number in the program status register (PSR). We postulate a 16-bit program status register with the following bits:
- N, Z, V, & C: status of the previous arithmetic operation (always included in the PSR, so we use these too)
- I: interrupts enabled. When I = 0, the CPU ignores any interrupt.
- Priority: a 3-bit unsigned integer representing the CPU execution priority.

The I Bit and CPU Execution Priority
These four bits are used in processing interrupts. Disabling interrupts (I = 0) should be done very seldom; only the Operating System can set the I bit. There are certain times in processing an interrupt during which another interrupt cannot be processed. During these short times, the CPU sets I = 0. Normal programming practice allows for multiple interrupt priorities and nested interrupts. A high-priority device, such as a disk, can take precedence over the processing of an interrupt for a low-priority device, such as a keyboard. To manage devices at various priorities, each interrupt is processed as follows:
1. A device interrupts with priority K.
2. The CPU sets I = 0 and saves various registers.
3. The CPU sets its priority to K (the same number), sets I = 1, and then begins execution of the interrupt handler. Devices with higher priorities can now interrupt and have their interrupts handled.

More on CPU Priority
We follow the PDP-11 convention.
- Priority = 0: all user programs execute at this level.
- Priority = 1, 2, 3: various Operating System utilities operate at these levels. We generally ignore these levels.
- Priority = 4, 5, 6, 7: interrupt handlers operate at these levels.
The CPU will acknowledge and process an interrupt only if the priority of the interrupting device is higher than the CPU execution priority. For this reason, almost all interrupt handlers are written to execute with a CPU priority exactly equal to the device priority. The convention is that all hardware devices are assigned one of four interrupt priorities: 4, 5, 6, and 7. Priority 4 is the lowest hardware priority, reserved for the keyboard, etc. Priority 7 is the highest hardware priority, reserved for disks, etc.

How does the CPU identify the device that asserted the interrupt and begin execution of its interrupt handler? More on this later, but for now:
1. The device sends its "vector", which is the address of a data structure. This is most often an address in low memory, say addresses 0 - 1023.
2. The data structure at the specified address contains the following:
a) the address of the interrupt handler associated with the device;
b) the CPU execution priority for the interrupt handler.

The interrupt handling sequence can be elaborated:
1. Clear the Interrupt Enabled bit (set I = 0) to block other interrupts.
2. Store the essential registers, so that the user program can be restarted later.
3. Load the PSR with the execution priority and load the PC with the handler address.
4. Set I = 1 to allow nested interrupts and start execution of the handler.

Interrupt Lines and Assertion Levels
The structure of the interrupt lines on our computer is as follows. We have four interrupt lines, one for each of the four priority levels. Each is paired with an acknowledge line for the same priority. Interrupts are asserted low; that is, the signal goes to 0 when the device interrupts. Acknowledgements are asserted high; that is, the signal goes to logic 1 to acknowledge.

Mechanism for Asserting an Interrupt
Each interrupt line is attached to a "pull down" resistor. When the device asserts an interrupt, it sets its Interrupt Flip-Flop, so Q = 1. This enables the tri-state, which becomes a closed switch with very low resistance. With the tri-state enabled, all the voltage drop is across the resistor, so the voltage on the Interrupt Line becomes 0: the interrupt is asserted. With the tri-state disabled, it becomes an open switch, a path with very high resistance. All the voltage drop is then across the tri-state, so the Interrupt Line stays at a logic 1 voltage.

Multiple Devices on One Line
By design, interrupts are active low in order to facilitate attaching multiple interrupting devices to a single interrupt line. Here we see four devices attached to a single line. If no device is interrupting, we have Int = logic 1. If any one device is interrupting, we have Int = logic 0: the interrupt is asserted. If more than one device is interrupting, we still have Int = logic 0. The devices cannot interfere with each other.

The Big Picture
Here is the entire circuit for the controller hub.

The Hub: Part 1
Look first at the output of the PSR. If PSR3 = 0, the output of the NOT gate is 1; we see later how this disables interrupts. If PSR2 = 0, the CPU execution priority is less than 4, which is less than the priority of any I/O device. If PSR2 = 1, the output of the 2-to-4 decoder indicates the CPU priority, which must be in the range 4 to 7 inclusive.

The Hub: Part 2
Remember that interrupts are asserted low. This circuit has at most one of its outputs equal to 0; the rest are 1, and possibly all are logic 1. If PSR3 = 0, the input from the left is logic 1, and the output of each of the four OR gates is also 1: no interrupt is recognized. If all of Int4, Int5, Int6 and Int7 are logic 1, there is no interrupt being asserted; the output of each of the four OR gates is also 1 and no interrupt is recognized. Suppose Int7 = 1 and Int6 = 0. The output of OR gate 7 is 1 and the output of OR gate 6 is logic 0. Since the negation of Int6 goes into OR gates 4 and 5, the output of each of those gates is 1. Thus only one gate has a logic 0 output.

The Hub: Part 3
At this point, either all of the OR gates at the top are outputting a logic 1, or exactly one has an output of logic 0. The NOR gates at the bottom are the key point for converting the active-low device interrupt into an active-high acknowledgement signal. If all OR gates output a logic 1, each of the NOR gates will output a logic 0, and no acknowledgement will be generated.

The Hub: Part 4
We now consider what happens when exactly one of the OR gates at the top has an output of logic 0. Say that OR gate 5 has an output of logic 0. Each of NOR gates 4, 6, and 7 will have an output of 0. NOR gate 5 will have an output of logic 1 if and only if all of its inputs are 0. Look at the decoder. This will occur only if the CPU priority is less than 5.

The Hub: Part 5
We now have one of two cases:
1. The output of all the NOR gates is 0, so INT = 0 and nothing happens.
2. The output of exactly one NOR gate is 1, so INT = 1 and the CPU is signaled.
Say that NOR gate 5 has an output of logic 1. When the CPU can process the interrupt, it asserts its single ACK signal high, so ACK = 1. We still have the output of NOR gate 5 at 1, so we generate Ack5 = 1. We also have Ack4 = 0, Ack6 = 0, and Ack7 = 0. Situation: one or more devices at level 5 have interrupted, and Ack5 is asserted. No device at level 6 or 7 has interrupted. We have no information about level 4.

Daisy Chaining (Part 1)
With daisy chaining, the ACK is passed from device to device until it reaches a device that has asserted an interrupt. If an I/O device has not asserted an interrupt, it passes the ACK to the next "downstream" I/O device. Priority rank is by physical proximity to the CPU.

Daisy Chaining (Part 2)
The ACK is passed down the line to the first I/O device attached. If that device has not raised an interrupt, it passes the ACK to the next device. When a device that asserted an interrupt gets the ACK, it captures it and does not pass it on.
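The two rules at the heart of this design - an interrupt at level K is recognized only if K is higher than the CPU execution priority, and the ACK is daisy-chained to the first asserting device - can be summarized in a small software model. This sketch is only an illustration (the device names and example numbers are invented, not part of the lecture):

```python
# Illustrative model of the interrupt hub described above (not from the lecture).
def recognize(cpu_priority, asserted_levels):
    """Return the level to acknowledge, or None if no interrupt is recognized."""
    pending = [k for k in asserted_levels if 4 <= k <= 7]   # hardware levels only
    if not pending:
        return None
    best = max(pending)                      # highest-priority pending request
    return best if best > cpu_priority else None

def daisy_chain_ack(devices, level):
    """Pass the ACK down the chain; the first device asserting at `level` captures it."""
    for name, dev_level, asserting in devices:        # list order = proximity to the CPU
        if asserting and dev_level == level:
            return name                                # captures the ACK, does not pass it on
    return None

# Example: the CPU is running a level-4 handler; a disk (level 7) and a keyboard (level 4) interrupt.
devices = [("disk0", 7, True), ("kbd0", 4, True)]
level = recognize(cpu_priority=4, asserted_levels=[d[1] for d in devices if d[2]])
print(level, daisy_chain_ack(devices, level))          # -> 7 disk0
```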
<urn:uuid:cbe69869-523e-4300-96c6-1051633841ba>
CC-MAIN-2017-04
http://edwardbosworth.com/My5155_Slides/Chapter12/InterruptHandling.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00306-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909237
2,004
3.828125
4
You don’t need to be a mathematician to appreciate the beauty and elegance of fractal geometries, those infinitely complex patterns that are self-similar across different scales. Recently, a group of computing and software students from McMaster University in Ontario, Canada, created stunning fractal geometries using the University of Toronto’s powerful IBM Blue Gene/Q supercomputer. Despite being generated by a simple mathematical formula, the resulting images appear in infinite variations. The word “fractal,” Latin for broken or fractured, was coined by mathematician Benoit Mandelbrot in 1975. Initially applied to theoretical fractional dimensions, the term was extended to include geometric patterns in nature. “Each pixel in an image is assigned coordinates,” explains Ned Nedialkov, associate professor in computing and software. “These starting coordinates are then fed into a formula, resulting in new coordinates, which are plugged into the same formula for the next iteration, and so on.” Exploring a fractal can be compared to zooming in and out on a digital map. “Imagine the whole eastern coast of Canada laid out on a map,” Nedialkov adds. “Then, as you zoom in and get closer, you can see the actual coastline, then the details of the beach, individual stones, pieces of sand, and then every molecule that makes up the sand.” The shapes that unfold at each magnification level are based on computations that would take months to complete on a standard desktop computer. With supercomputers like the Blue Gene, that time is condensed to a matter of hours. The Mandelbrot set fractal, brought to life in the video below, used 1,024 Blue Gene cores and took about nine and a half hours to execute. Named in honor of the famed mathematician, the set illustrates self-similarity – meaning as the image is enlarged, the same general pattern re-appears. This is in contrast to another type of fractal, which has shapes that are exactly the same at every scale.
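The iteration Nedialkov describes - feed a pixel's coordinates into a formula, then feed the result back into the same formula - is the escape-time algorithm behind the Mandelbrot set. A minimal single-threaded sketch is shown below; it is nothing like the massively parallel Blue Gene/Q code, just an illustration of the same formula, z → z² + c:

```python
# Escape-time iteration for the Mandelbrot set: z -> z*z + c, starting from z = 0.
# Each character stands in for one "pixel"; real renderers map the iteration count to a color.
def mandelbrot(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # the point escapes; it is outside the set
            return n
    return max_iter             # assumed to be inside the set

for im in range(-12, 13):
    row = ""
    for re in range(-40, 21):
        n = mandelbrot(complex(re / 20.0, im / 10.0))
        row += "#" if n == 50 else " "
    print(row)
```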
<urn:uuid:4b2b9bed-c279-46a6-a6da-098cf755c6c5>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/05/12/fractal-art-combines-math-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00214-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940494
430
3.90625
4
Another two fiber optic testers are the optical power meter and the optical light source. Optical power meters are used to measure absolute optical power, or the relative loss of optical power over a length of fiber. Measuring optical power is the most basic task in a fiber optic system. Much like the multimeter in electronics, the optical power meter is a heavy-duty, commonly used workhorse of fiber measurement, and it is suggested that every fiber optic technician carry one. By measuring the absolute power at the transmitting end of an optical network, a power meter can evaluate the performance of the optical transmitting equipment. With an optical power meter and a stabilized optical light source used in combination, it is possible to measure connection loss, test continuity, and help evaluate the transmission quality of a fiber link. For the manufacturing, installation, operation and maintenance of any optical fiber transmission system, optical power measurement is essential. In the fiber optic field, without optical power meters no engineering, laboratory, manufacturing floor or telephone maintenance facility can do its work. For example, an optical power meter can be used to measure the output power of laser and LED light sources, and to confirm the estimated loss of a fiber optic link; most importantly, it is the instrument used to test the key performance indicators of optical components (fibers, connectors, attenuators, and similar components). To select the appropriate optical power meter for a specific application, users should pay attention to the following points:
1) Select the optimal probe type and interface type.
2) Evaluate the calibration accuracy and the manufacturer's calibration procedures, and check that they match the required range of fibers and connectors.
3) Determine whether the model's measurement range and display resolution are suitable.
4) Check for a direct dB insertion loss measurement function.

An optical light source is used to launch light of known power and wavelength into the optical system. As mentioned, a stabilized light source together with an optical power meter can measure the optical loss of a fiber optic system. In an off-the-shelf fiber optic system, the transmitting end of the system equipment usually plays the role of a stable light source. If that end equipment does not work, or there is no end equipment, a separate stable light source is needed. The wavelength of the light source should match the wavelength of the system equipment as closely as possible. After installation of a system, it is often necessary to measure the end-to-end loss in order to determine whether the connection loss meets the design requirements - for example, connector loss, splice-point loss, and the loss of the fiber itself. During loss measurement, the optical light source transmits light of known power and wavelength into the optical system, while the optical power meter, calibrated for the specific wavelength of the light source, receives the light from the fiber optic network and converts it into an electrical signal. To ensure the accuracy of loss measurement, the light source used to simulate the transmission equipment should have these features:
1) The same wavelength, and the same type of light (LED or laser).
2) Stable output power and spectrum during the measurement (time and temperature stability).
3)
The same connection interface, and the same type of fiber.
4) Output power large enough for the worst-case system loss measurement.

When a transmission system requires a separate stable light source, the best choice is a light source that simulates the characteristics of the system's optics and meets the measurement needs. When selecting a light source, consider the following. The light emitted by a laser diode (LD) has a narrow spectral width and is almost monochromatic, i.e. essentially a single wavelength. Compared with an LED, the laser's spectrum (less than 5 nm wide) is not continuous; on either side of the center wavelength it also emits several lower peaks at nearby wavelengths. Compared to an LED source, a laser source provides more power, but its price is higher. Laser diodes are commonly used on long-haul single-mode systems with losses of more than 10 dB. You should avoid measuring multimode fiber with a laser light source.
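The loss measurement described above - a stabilized source launching known power at one end, a power meter reading what arrives at the other - reduces to simple decibel arithmetic. Here is a small sketch; the example power values are made up for illustration:

```python
import math

def mw_to_dbm(p_mw):
    """Convert optical power in milliwatts to dBm (0 dBm = 1 mW)."""
    return 10.0 * math.log10(p_mw / 1.0)

def link_loss_db(launch_dbm, received_dbm):
    """Insertion loss of the link in dB: launched power minus received power."""
    return launch_dbm - received_dbm

# Example: the source launches -7.0 dBm; the power meter at the far end reads -10.5 dBm.
launch, received = -7.0, -10.5
print("Link loss: %.2f dB" % link_loss_db(launch, received))   # -> Link loss: 3.50 dB
```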
<urn:uuid:7869a557-6d4a-441d-8726-64a3b04aa0ec>
CC-MAIN-2017-04
http://www.fs.com/blog/depth-analysis-of-fiber-optic-testers-part-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00424-ip-10-171-10-70.ec2.internal.warc.gz
en
0.882197
872
3.515625
4
Hyper-threading allows software written with multiple threads to run those threads on one processor simultaneously. It is designed to provide performance close to that of a dual-processor machine from a single processor. Intel said hyper-threading should boost performance by 25% in both consumer and business applications. In a demonstration at the conference, Intel showed the performance gap between a system equipped with a normal 3.06GHz chip and one equipped with hyper-threading. The demonstration appeared to show improvements running macros in Microsoft Excel, scanning for viruses and playing digital video on the machine with hyper-threading, compared with the standard configuration. Intel said it will use its performance-boosting hyper-threading technology in desktop processors, starting with a 3.06GHz Pentium 4 processor due out in the fourth quarter of this year.
<urn:uuid:c5fb5f35-ccfa-4d27-8bc6-cbd3ad25ac97>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240047404/Intel-squeezes-dual-processor-performance-out-of-single-chip
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00242-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936718
193
2.640625
3
EMC OEM Solutions First in a series of posts focusing on the Carbon War Room report: “Machine-to-Machine Technologies: Unlocking the Potential of a $1 Trillion Industry” 1.) The revolution in human communications is spurring a second revolution: the ability to transmit, analyze and act on machine-generated data. 2.) Dramatic size and growth projections of the M2M market over the next decade suggest a huge economic opportunity for high-tech vendors. 3.) At the same time, the efficiencies offered by M2M technologies can lower greenhouse gas emissions without adversely impacting economic growth. 4.) EMC’s federated businesses (EMC, RSA, Pivotal, VMware) and our OEM customers stand to benefit from a joint, comprehensive M2M strategy. Do you believe in magic? Stop for a moment to consider where we are in history today. On March 7, 1876, Alexander Graham Bell was issued U.S. patent number 174,465 for discovering that a voice could be transmitted across a wire immersed in a conducting liquid. And now, 137 years later (a mere blip in time), humans are able to connect to each other from anywhere on earth, over wires or wirelessly – nearly instantaneously. This is nothing short of revolutionary. Stemming from this revolution in human connectivity, sensor-based objects can now be programmed to connect directly to one another, and to data aggregation points, without human intervention. It is now possible to design systems that automatically transmit and collect machine-generated data at a scale never before possible. And thanks to advances in computing power and data processing capabilities, this data can be analyzed and acted upon – again, nearly instantaneously. This ability – to generate, transmit, analyze, and act upon device-generated data – goes by a number of different names. Some call it the “Internet of Things”. GE calls it the “Industrial Internet”. Others have referred to it as “pervasive computing”, or “ubiquitous computing”. Whichever moniker you choose, the technology that underlies it all is called “Machine-to-Machine Communications”, or “M2M”. Good for business What is an M2M device? Smartphones and tablet computers quickly come to mind. But any device that can transmit data wired or wirelessly, via a communications network, or peer-to-peer to other devices, is an M2M device. This includes a computer onboard a car that can send diagnostic information back to the manufacturer, a pacemaker that can send health-check data to a mobile phone, and a smart thermostat that can monitor and report usage to both the owner and the utility company. Indeed, the possibilities are endless and even include things like emergency watches, smart toothbrushes, and distress collars to monitor the heart rates of sheep. Given the wide variety of M2M devices, it’s not surprising that their number is projected to grow from just over 1 billion devices today to over 12 billion by 2020. This explosion in the number of connected devices offers growth opportunities across a number of technologies – machines, sensors, networking, analytics software, and the supporting IT infrastructure. By one estimate, M2M has the potential of adding $10-15 trillion to global GDP over the next 20 years – equal to the current size of the U.S. economy. M2M products and services themselves are projected to grow by over 20 percent annually for the next seven years, generating annual revenues of almost a trillion dollars by the end of the decade. Tens of billions of devices. Trillions of dollars. 
There are a lot of big numbers being thrown around in reference to M2M and it’s probably safe to bet that these numbers are only rough guesses. The consensus, however, is clear: M2M represents a tremendous economic opportunity. Good for the planet While the economic opportunity of M2M has caught the attention of many, M2M presents what many may find to be an even more compelling opportunity – and that is the chance to “lower greenhouse gas (GHG) emissions without imposing restrictions on production, consumption, or economic growth”. A report from Richard Branson’s Carbon War Room project, refers to this opportunity as the “decoupling of economic growth from GHG emissions”. The idea that it is possible to generate economic value while simultaneously abating carbon emissions is a profound shift in thinking. Typically, when GHG emissions reductions are debated, the underlying assumption is that there is a zero-sum tradeoff between economic growth and energy conservation. Proposed remedies like fuel taxes, production limits, energy rationing, and fines for emissions regularly spark protest from businesses and generate resistance among consumers. But now, we’re looking at the opportunity to create high-tech jobs, to lower business costs and increase profitability, and to generate revenue from new products and services, through the smart application of analytics that can simultaneously reduce carbon intensity across numerous industry sectors. According to research cited in the Carbon War Room report, “M2M and related ICT technologies could reduce GHG emissions by 9.1 Gigatons CO2e annually, a figure equal to 18.6 percent of the world’s total 49 Gt CO2e emitted in 2011—approximate to the total emissions of the United States and India in 2010.” Additionally, “M2M can lessen the CO2 generated by things as diverse as widespread deforestation, automobile exhaust, the production of basic primary materials, and the generation of the electricity that powers our lives.” What are some examples of this? Utility companies can dynamically turn energy generation sources on or off in response to demand. Transportation companies can optimize the routes of planes, trucks and other vehicles to reduce mileage and save fuel. HVAC systems in buildings can be built to dynamically adjust to temperature, per-room occupancy, time of day, and other conditions. And agricultural producers can optimize the use of water and fertilizer and more efficiently manage land and livestock. The coolest part of all this is that information and communications technology are the prime enablers of M2M, which means that EMC and its OEM customers stand to benefit from a comprehensive M2M strategy. Indeed, EMC business units are already working closely with key customers to help them find near-term opportunities and to develop the next-gen applications that will benefit both the bottom line and the common good. Stay tuned for future posts on this topic. The Carbon War Room report details use cases across the Energy, Transportation, Smart Building and Agriculture sectors. We will take a look at some of these use cases, paying particular attention to those areas where EMC, RSA, Pivotal and VMware technologies can be brought to bear in building the needed solutions. Meanwhile, please provide your thoughts around this topic in the comments section below. 
It will be interesting indeed to hear what you think about the M2M opportunity and, specifically, its role in the reduction of GHG emissions.
Tags: big data, industrial internet, internet of things, M2M, machine-to-machine communications, oem, Pivotal, RSA, sustainability, VMWare
<urn:uuid:a219eb26-2055-4b7c-a868-085517c14456>
CC-MAIN-2017-04
http://design4datablog.emc.com/2013/05/29/sustainability-and-economic-growth-can-big-data-allow-us-to-have-our-cake-and-eat-it-too/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927922
1,511
2.53125
3
Continuing our blog series on OBIEE security: when discussing WebLogic security, the WebLogic Scripting Tool (WLST) needs to be understood. From a security risk perspective, consider WLST analogous to how DBAs use SQL to manage an Oracle database. Who is using WLST and how they are using it needs to be carefully reviewed as part of any WebLogic security assessment.

WebLogic Scripting Tool (WLST)
The WebLogic Scripting Tool (WLST) is a command-line scripting environment that is used to create, manage, and monitor WebLogic. It is based on the Java scripting interpreter, Jython, version 2.2.1. In addition to supporting standard Jython features such as local variables, conditional statements, and flow control statements, WLST provides a set of scripting functions (commands) that are specific to WebLogic Server.

WLST uses the WebLogic Security Framework to enforce the same security rules as when using the WebLogic user interface. WLST scripts, similar to SQL scripts, are created and edited using any text editor, and the operating system user running a WLST script can easily be different from the user referenced in the script. WLST scripts can be run in either online or offline mode and, aside from modifying and copying configurations (e.g. to create a test server), they can be used to add, remove, or modify users, groups, and roles.

Securing the WLST Connection
Both Integrigy Corporation and Oracle recommend that, when using WLST, you connect only through the administration port. The administration port is a special, secure port that all WebLogic Server instances in a domain can use for administration traffic. By default, this port is not enabled, but it is recommended that the administration port be enabled in production. Separating administration traffic from application traffic ensures that critical administration operations (starting and stopping servers and changing configurations) do not compete with application traffic on the same network connection. The administration port is required to be secured using SSL. As well, by default, the demonstration certificate is used for SSL; the demo SSL certificate should not be used in production.

Writing and Reading Encrypted Configuration Values
Some attributes of a WebLogic Server configuration are encrypted to prevent unauthorized access to sensitive data. For example, JDBC data source passwords are encrypted. It is highly recommended to follow the WebLogic Scripting Tool documentation for specific instructions on working with encrypted configuration values, however WLST is used - manually (ad hoc), in scripts, offline or online. A security assessment should include a discussion, if not a review, of WLST scripts that set or manipulate encrypted values.

Running WLST Scripts
WLST scripts permit unencrypted passwords at the command line. WebLogic security policies need to address how WLST scripts should provide passwords. Storing passwords incorrectly can easily and needlessly expose passwords in scripts, on monitor screens and in log files. When entering WLST commands that require an unencrypted password, the following precautions should be taken:
- Enter passwords only when prompted.
If a password is omitted from the command line, it is subsequently prompted for when the command is executed.
- For scripts that start WebLogic Server instances, create a boot identity file. The boot identity file is a text file that contains user credentials. Because the credentials are encrypted, using a boot identity file is much more secure than storing unencrypted credentials in a startup or shutdown script.
- For WLST administration scripts that require a user name and password, consider using a configuration file. This file can be created using the WLST storeUserConfig command and contains:
- User credentials in an encrypted form
- A key file that WebLogic Server uses to decrypt the credentials
A short WLST example illustrating this approach appears at the end of this post. If you have questions, please contact us at firstname.lastname@example.org -Michael Miller, CISSP-ISSMP
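As a rough illustration of the storeUserConfig approach recommended above, a WLST (Jython) session might look like the sketch below. The host, port and file paths are placeholders, and exact arguments can vary between WebLogic versions, so treat this as an outline rather than a tested script:

```python
# One-time, interactive step: store encrypted credentials plus a key file.
# connect() prompts for the username and password instead of taking them on the command line.
connect(url='t3s://adminhost:9002')                     # administration (SSL) port
storeUserConfig('/home/oracle/wlsadmin.config',         # encrypted credentials
                '/home/oracle/wlsadmin.key')            # key used to decrypt them
disconnect()

# Later, administration scripts reconnect without any clear-text password:
connect(userConfigFile='/home/oracle/wlsadmin.config',
        userKeyFile='/home/oracle/wlsadmin.key',
        url='t3s://adminhost:9002')
cd('Servers')
ls()                                                    # e.g. list configured servers
disconnect()
```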
<urn:uuid:008bb6ce-0e47-4fb7-85b7-c255ba979d54>
CC-MAIN-2017-04
https://www.integrigy.com/oracle-security-blog/obiee-security-and-weblogic-scripting-tool-wlst
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00508-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877253
876
2.515625
3
An Overview of Cryptography: Basic Concepts
Cryptography basically means keeping information secret or hidden. There are a number of features associated with cryptography. One is confidentiality, which means that we need to be sure that nobody will see our information as it travels across a network. Authentication and access control are further capabilities provided by cryptography. Other capabilities provided by cryptography are non-repudiation and integrity, which are explained below.

Symmetric vs. asymmetric
Symmetric and asymmetric encryption are the data encryption methods being used in today's networks and computers. Symmetric encryption is a kind of encryption in which the same key is used to encrypt and decrypt the information. Because the same encryption key is used at both ends, it must be kept secret: if a person gets the key, he or she can read all the information that we had encrypted. If the key gets lost, it is important to replace it immediately. Securing a symmetric key is normally quite a challenge, owing to the fact that one is not sure whether to entrust it to many people or to a single individual. Symmetric encryption is currently heavily used because it is very fast and requires very few resources. For this reason, many people combine symmetric and asymmetric encryption, not only for security but also for speed and efficiency.

Asymmetric encryption is another kind of encryption that one will come across. It is commonly referred to as public key cryptography, since two keys are needed. One of the keys is a private key, which one must keep to oneself and not share with others. There is also a public key, which can be given to everyone - it can be put on a public server, for instance. This is a key that everyone should have access to; the private key should only be available to its owner. The public key is what enables other people to send us data in encrypted form, while the private key enables us to decrypt the information encrypted with the public key. This means that no one can decrypt the data using only the public key. With the combination of symmetric and asymmetric encryption, there is a lot of flexibility in encrypting data, sending it to other people and decrypting it.

Session keys are special cryptographic keys that can only be used once. This means that if a session key encrypts some information at a particular time, it cannot be used again to encrypt any other information.

Fundamental differences and encryption methods: Block vs. stream
Block cipher encryption entails taking one full block of information and encrypting it as a whole, all at the same time. In most cases the blocks are 64 bits or 128 bits; their size is predetermined and remains the same during encryption and decryption. When using the block cipher method, one needs to introduce confusion so that the encrypted data looks very different from the original. With block ciphers one can also implement the concept of diffusion, where the output becomes totally different from the input. A stream cipher is another kind of encryption that is used with symmetric encryption.
Contrary to block ciphers, where all the encryption is done at once, encryption in stream ciphers is done one bit at a time. This type of encryption can run at very high speed and requires low hardware complexity. An important aspect to know when using stream ciphers is that the initialization vector should never be the same when starting a new stream, because someone could otherwise figure out the initialization vector and encryption key being used and exploit them every time data is sent across the network. Make sure the initialization vector is always changing when it is used to encrypt information.

Transport encryption is the aspect of cryptography that involves encrypting data in motion. In this case, one has to ensure that data being sent across a network cannot be seen by other people, and that the encryption keys are not visible to others. Transport encryption can be implemented with the use of a VPN concentrator. When outside the office, one uses software to send data to the VPN concentrator, where it is decrypted and then passed on to the local network in a form that can be understood. With this kind of encryption, it becomes very difficult for an individual to tap into the network and look into a conversation between two workstations, since the information is already scrambled.

Non-repudiation means that the sender of information we have received cannot deny having sent it, and the information cannot be attributed to someone else. In cryptography we add a further perspective to that: a proof of integrity, so that we know the information we received is intact, and a proof of origin, giving high assurance that the information really came from its claimed source.

A cryptographic hash is a way of taking existing data - a file, picture, email or text that one has created - and producing a message digest, a short string, from it. To verify a cryptographic hash, one can send the message to another person and ask him to hash it; if the hashes match, then the file is the same on both sides. An important characteristic of hashing is that it is a one-way trip: one cannot look at the hash and figure out what the original text was. This is a method used to store passwords, since if someone gets the hash, he or she cannot figure out the original password. A hash can also act as part of a digital signature, in that it can offer some authentication of files and data. It also ensures that the data received has integrity, which means that one does not have to encrypt all of the information. One should also make sure that the hash function has no collisions; this means that two different messages containing different information cannot produce the same hash.

Basically, when we talk about escrow, we are talking about a third party holding something for us. In the context of cryptography, this refers to encryption keys: a third party stores the encryption key so that we can still decrypt information in case the original key gets lost. The escrowed key should be kept in a very safe place so that it is not accessed by others. Key escrow also helps when it comes to the recovery of data.
Symmetric encryption in the context of key escrow means keeping one's key somewhere safe - in a vault, in effect - so that no one else can get access to it. Asymmetric encryption in this case means that an additional private key is needed that can be used to decrypt the information. The process for retrieving a key from escrow is as important as the key itself, since one has to be aware of which circumstances can prompt someone to retrieve the key and who can access it. With the right process in place, and the right ideas behind what one is doing with the key escrow, it becomes a valuable part of maintaining the integrity and security of one's data.

Steganography is a way of hiding information while the information remains in plain sight the whole time. It is a way of securing things by making them obscure, which in reality is not security. Messages appear invisible, yet they are right there in front of one. The information may be embedded within pictures, sounds or documents; in such a case, all that we see is the cover text that sits above the hidden information. One way of implementing steganography is by hiding the information in network packets: packets move very fast, so a lot of information can be sent embedded in them. One can also use an image for steganography, embedding the information in the image itself.

In cryptography, digital signatures are used to provide non-repudiation. This basically means digitally signing a message or file. No encryption of the message itself is required, since with the digital signature an individual is in a position to verify that the message came from the claimed sender and was not changed along the way. One signs it with one's private key, and the people to whom the message is sent use one's public key to verify that the message really came from that sender. This is the important role of public keys: they put one in a position to verify the senders of the various messages one receives. If a digital signature is verified against its source, then one is assured that the file or piece of information has not been changed between the sender and the receiver.

Use of proven technologies
There are many different ways in which people can implement cryptography. In most cases, people are not familiar with every cryptographic technology, and in this case they are advised to use proven technologies to encrypt their data. In this way, people avoid over-reliance on any single common encryption type, and with proven encryption technologies there is a wide range from which to choose.

Elliptic curve and quantum cryptography
Elliptic curve cryptography is an emerging technology in cryptography. It was created to deal with the constraints of traditional asymmetric encryption, such as the very large numbers and heavy computation it requires. This method uses curves instead of plain numbers, where each curve has a mathematical formula associated with it. Quantum cryptography is another emerging technology. Just as the name suggests, it applies quantum physics to the calculations and methods of encryption that we use in cryptography.
Ephemeral keys are special cryptographic keys that are generated freshly for each execution of a key establishment process. In some cases an ephemeral key is used more than once within a single session, for example where only one ephemeral key pair is generated for each message.

Perfect forward secrecy
Perfect forward secrecy is a related cryptographic property whose main aim is to ensure that the session keys protecting data sent across a network cannot be recovered later, even if a long-term private key is compromised at some point in the future. It is typically achieved by deriving each session's keys from fresh ephemeral key exchanges, so that traffic recorded today cannot be decrypted by obtaining a server's long-term key tomorrow.

Generally, cryptography is a technology whose use and implementation are rapidly increasing, because it is a very good method of ensuring information safety. Through cryptography, any piece of information can be encrypted or written in such a manner that it is very difficult for another person to read without being able to decrypt it.
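To make the earlier hashing discussion concrete, here is a small Python sketch using only the standard library. It shows the two properties described above - the digest changes completely when the message changes, and matching digests on both sides indicate the data is intact - plus a keyed hash (HMAC), which adds a shared secret so the check also authenticates the sender. The message and key are invented examples:

```python
import hashlib, hmac

message = b"Wire $100 to account 12345"

# One-way digest: any change to the message produces a completely different hash.
print(hashlib.sha256(message).hexdigest())
print(hashlib.sha256(message + b"!").hexdigest())

# Integrity check: sender and receiver hash the same data and compare digests.
sent_digest = hashlib.sha256(message).hexdigest()
received_ok = hmac.compare_digest(sent_digest, hashlib.sha256(message).hexdigest())
print("intact:", received_ok)

# Keyed hash (HMAC): only someone holding the shared key can produce a matching tag.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("hmac tag:", tag)
```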
<urn:uuid:97fc6795-79b4-407b-9aad-24e424e5c567>
CC-MAIN-2017-04
https://www.examcollection.com/certification-training/security-plus-an-overview-of-cryptography-basic-concepts.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960945
2,264
3.84375
4
Screenshots are just images of what is displaying on your monitor. These images can be very helpful for capturing error messages or other things on your computer you’d like to record. No special screenshot taking programs are required despite what many will try to convince you of. It’s a function built into Windows. There are two ways to take a screenshot. One will take a picture and include everything you can see on the monitor the other will take a picture of only the active window (the top most window that you have actively selected/you’ve clicked on most recently). Here is a screenshot of my desktop to better explain the Active Window: The entire image was taken by pressing the Print Screen button and then pasted into Paint.NET and saved. This is my entire desktop. The Notepad with the red border is the active window. You can tell because the title bar (top of the window) is blue. If you pressed Alt+Print Screen, you would get a screenshot of just this window and it would look like this: Back to the first image, the Notepad window with the green border is an inactive window. Until you click on it or minimize/close the current active window, it will continue to be the inactive window and there is no way to get a screenshot of just this window without selecting it first. 1. Get whatever you want to take a picture of viewable on the screen. 2. Press the Prt Scr button to take a picture of the entire screen or hold Alt and press Prt Scr to take the active window. 3. This will take the image and put it on your Clipboard (where things go temporarily whenever you copy/paste or cut/paste). Note: If you copy/paste anything or take another screenshot, it will overwrite the image that you just took with no way to retrieve the original image. Vice versa is true as well. If you cut/paste something to move it and then take a screenshot, the cut/paste object will be overwritten. 4. Open an image editing program like Paint. Then go to Edit, Paste. 5. Save your image or modify it as needed. Paint or any other image program will work. I highly recommend Paint.NET. It’s free and more comparable to Photoshop than anything else. If you want a graphics program that specializes in taking screenshots, I recommend: Hardcopy.
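If you would rather script the capture than press Print Screen and paste into Paint, the Pillow imaging library offers a one-line equivalent. This is just a sketch: Pillow must be installed separately, and ImageGrab behaves slightly differently across platforms.

```python
# pip install pillow  -- ImageGrab.grab() captures the screen on Windows (and macOS)
from PIL import ImageGrab

img = ImageGrab.grab()          # capture the full screen, like pressing Print Screen
img.save("screenshot.png")      # save directly instead of pasting into Paint
```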
<urn:uuid:d77f51a0-344e-4004-84c4-f35d083ec5d6>
CC-MAIN-2017-04
https://www.404techsupport.com/2009/11/simple-tip-how-to-take-a-screenshot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00168-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917733
501
2.625
3
Are overwritten files really, truly gone? Or could some snoop, given enough money and resources, get that data back? In my story last week, Surviving a home data disaster: How Shirley got her files back, several readers questioned a statement made by Ontrack's Sean Barry that it's no longer possible to recover data from an overwritten area of a disk. “There is no chance of recovery with overwritten clusters. The bit density on hard disk drives is so great now that when the magnetics are rewritten, the data is gone,” he said. Barry is Ontrack's Remote Data Recovery Manager and has 10 years of experience recovering files for private business as well as government agencies. Thomas Feher was one reader who challenged that statement. He writes: "Even if the files are overwritten with new data on the hard disk drive, such as the DDL case you mentioned in the article, it is still possible to recover the images," he says, if you send it into a recovery lab that uses special equipment that can read the residual magnetism that exists around the edge of the track where the new data was written. "They will physically dismantle the hard drive in a clean room environment and use special probes to read the magnetism. They will detect the traces of previous signals and rebuild the HDD's contents, even if deliberately overwritten several times." Feher says Kuert Information Management is one firm capable of such a recovery. True? Or legend? Here is Barry's response: Back in 1996, Peter Gutman, computer science professor at Auckland University in New Zealand, published a paper proposing how data could be recovered from hard disk or floppy disk sectors that had been overwritten. The idea behind this is based on the fact that the read/write heads are never precisely positioned over the same exact area twice and that by using electron-microscopes (Scanning Tunneling Microscopy) it would be possible to find a 'shadow' of the previously written sector. The hard drives mentioned in this 1996 paper are MFM and RLL drives, which were the first generation of hard drives used for personal computers (IBM called them the Winchester drives). The largest MFM and RLL drives made got up to about 130MB in size and were quickly replaced by IDE/ATA hard drives. At the time Professor Gutman's paper was published, the MFM/RLL hard disk technology was already 10 years old. [See the time lines of hard driver here and here]. Technology has continued to advance for hard drives and the most important advances have been in the form of higher bit density per square inch. Getting the data that small has required evolutionary changes in magnetic storage and head design. When Professor Gutman did his research, the track spacing between groups of sectors was very wide and the bit density was low, thereby providing a valid means of recovering a shadow of the previous sector. Of course if you could only read just the top level of bits per sector that had not been overwritten, you would only be able to recover an extremely small percentage of the original sector--at best you would only be able to recover just a small sliver of the original sector. Today's hard disks have a bit density far greater and the track sizes are extremely small--down to the nano scale in size. Notice the advances in the past 10 years: For a detailed overview of today's technology, read of the feature, Hard disk-drive technology revolutionizes processing. Daniel Feenberg highlights some other interesting points about this topic. 
The notion that overwritten sectors can be recovered by searching for 'shadow' copies on today's hard drives is false.
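As an aside for readers curious what "overwriting" looks like at the file level, here is a simple sketch that overwrites a file's bytes in place before deleting it. It is only an illustration of the concept discussed above, not a certified erasure tool: filesystems, journaling and SSD wear-levelling can keep extra copies that this approach never touches.

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents in place with random data, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace every byte of the file
            f.flush()
            os.fsync(f.fileno())        # ask the OS to push the write to disk
    os.remove(path)

# Example (hypothetical file name):
# overwrite_and_delete("old_customer_export.csv")
```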
<urn:uuid:148d91d6-6dfe-4272-bc4d-68953e45af57>
CC-MAIN-2017-04
http://www.computerworld.com/article/2477228/data-privacy/is-overwritten-data-really-unrecoverable-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00222-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958453
751
2.671875
3
A transceiver module is a self-contained component that can both transmit and receive. The transceiver connects the electrical circuitry of the module with the optical or copper network. Devices such as routers or network interface cards provide one or more transceiver module slots (such as GBIC, SFP, XFP) into which you can insert a transceiver module appropriate for that connection. The optical fiber, or wire, plugs into a connector on the transceiver module. There are multiple types of transceiver module available for use with different types of wire or fiber, different wavelengths within a fiber, and communication over different distances. The reason the fiber optic transceiver has received so much attention is that it has many advantages over other means of communication: large capacity, long transmission distance, small size, light weight, ease of construction and maintenance, low cost, and so on. In the following, here is a brief introduction to the 40G QSFP+ module and the CFP module.

What is QSFP+?
The 40GBASE QSFP+ (Quad Small Form-Factor Pluggable Plus) modules offer customers a wide variety of high-density 40 Gigabit Ethernet connectivity options for data center, high-performance computing networks, enterprise core and distribution layers, and service provider transport applications.

Features and Benefits
Main features of 40GBASE QSFP+ modules include:
• Support for 40GBASE Ethernet;
• Hot-swappable input/output device that plugs into a 40-Gigabit Ethernet QSFP+ Cisco switch port;
• Flexibility of interface choice;
• Interoperability with other IEEE-compliant 40GBASE interfaces available in various form factors;
• Support for a "pay-as-you-populate" model;
• Support for the Cisco quality identification (ID) feature, which enables a Cisco switch to identify whether the module is certified and tested by Cisco.

What is CFP?
CFP stands for C form-factor pluggable; it is a multi-source agreement to produce a common form factor for the transmission of high-speed digital signals. The C stands for the Latin letter C used to express the number 100 (centum), as the standard was primarily developed for 100 Gigabit Ethernet systems. The CFP was designed after the SFP interface, but is significantly larger to support 100 Gigabit operation by using 10 lanes in each direction (RX, TX) at 10 Gb/s each. While the electrical connection of a CFP uses 10 x 10 Gbit/s lanes in each direction (RX, TX), the optical connection can support both 10 x 10 Gbit/s and 4 x 25 Gbit/s variants of 100 Gbit/s interconnects (typically referred to as 100GBASE-LR10 and 100GBASE-LR4 at 10 km reach, and 100GBASE-ER10 and 100GBASE-ER4 at 40 km reach, respectively). In March 2009, Santur demonstrated a 100 Gigabit pluggable CFP transceiver prototype, the Santur 100G CFP Transceiver. The CFP module is specified by a multi-source agreement (MSA) between competing manufacturers. The CFP MSA defines hot-pluggable optical transceiver form factors to enable 40Gb/s and 100Gb/s applications, including next-generation High Speed Ethernet (40GbE and 100GbE). Pluggable CFP, CFP2 and CFP4 transceivers will support the ultra-high bandwidth requirements of the data communications and telecommunication networks that form the backbone of the internet.
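The lane arrangements mentioned above are easy to sanity-check with a little arithmetic: aggregate rate is simply lanes times per-lane rate. The sketch below covers the configurations named in the text, plus the 4 x 10 Gbit/s arrangement implied by the "Quad" in QSFP+ (an assumption, since the text does not spell it out):

```python
# Aggregate rate = number of lanes x rate per lane (Gbit/s).
configs = {
    "QSFP+ 40GBASE (assumed 4 lanes)": (4, 10),   # 4 x 10 Gbit/s
    "CFP electrical side":             (10, 10),  # 10 x 10 Gbit/s
    "100GBASE-LR10 optics":            (10, 10),
    "100GBASE-LR4 optics":             (4, 25),   # 4 x 25 Gbit/s
}
for name, (lanes, per_lane) in configs.items():
    print("%-32s %2d x %2d = %3d Gbit/s" % (name, lanes, per_lane, lanes * per_lane))
```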
<urn:uuid:70e839db-5d65-4bab-a551-3c97aeeb776b>
CC-MAIN-2017-04
http://www.fs.com/blog/40g-qsfp-module-and-cfp-module-wiki.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00434-ip-10-171-10-70.ec2.internal.warc.gz
en
0.901884
753
2.765625
3
Joined: 08 Jun 2007 Posts: 71 Location: Zoetermeer, the Netherlands
2 significant differences:
1 - inter-program communication. A stored procedure can be called from any other application on any other platform, as long as it has a database connection (DB2 client or JDBC level 4).
2 - output. Output can be represented as parameters (an analogy with linkage storage), or output can be represented as a database table. Just imagine a cursor in COBOL: code the SQL in "declare cursor", build the result set with "open cursor", obtain the rows with "fetch cursor" and clean up your mess with "close cursor". Familiar stuff, right? When you DO NOT code the fetch and close after the open cursor, and just return to the caller by means of "GOBACK", the result set is passed to the caller and can be processed on the client.
One more thing to add on: stored procedures are helpful in reducing network traffic. If we have 10 SQL statements in a normal COBOL-DB2 program, there will be 10 I/Os across the network. But by using a stored procedure with some logic we can embed all 10 SQL statements in a single SP and execute them in a single I/O, thus reducing the network traffic.
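As an illustration of the first point - any client with a database connection can invoke the same procedure - here is a rough Python sketch using the generic DB-API callproc() call. The driver import, DSN, credentials, procedure name and parameter are all placeholders; the exact connection details depend on your DB2 client setup, and how result sets are retrieved after callproc varies by driver.

```python
# Hypothetical example: calling a DB2 stored procedure from a Python client.
# 'some_db2_driver' stands in for whatever DB-API 2.0 driver your site uses.
import some_db2_driver as db

conn = db.connect(dsn="SAMPLEDB", user="appuser", password="********")
cur = conn.cursor()

# One round trip runs all the SQL bundled inside the procedure on the server.
cur.callproc("ORDERS.GET_CUSTOMER_SUMMARY", [12345])

for row in cur.fetchall():     # result set left open by the procedure, as described above
    print(row)

cur.close()
conn.close()
```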
<urn:uuid:20c03304-3851-4213-97a8-6c33ddc59d61>
CC-MAIN-2017-04
http://ibmmainframes.com/about24608.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.884791
286
2.71875
3
Jan. 6 — A research team led by the National Oceanic and Atmospheric Administration (NOAA) is performing simulations at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, to develop numerical weather prediction models that can provide more accurate wind forecasts in regions with complex terrain. The team, funded by DOE in support of its Wind Forecast Improvement Project II (WFIP 2), is testing and validating the computational models with data being collected from a network of environmental sensors in the Columbia River Gorge region. Wind turbines dotting the Columbia River Gorge in Washington and Oregon can collectively generate about 4,500 megawatts (MW) of power, more than the combined output of five 800-MW nuclear power plants. However, the gorge region and its dramatic topography create highly variable wind conditions, posing a challenge for utility operators who use weather forecast models to predict when wind power will be available on the grid. If predictions are unreliable, operators must depend on steady power sources like coal and nuclear plants to meet demand. Because they take a long time to fuel and heat, conventional power plants operate on less flexible timetables and can generate power that is then wasted if wind energy unexpectedly floods the grid. To produce accurate wind predictions over complex terrain, researchers are using Mira, the ALCF's 10-petaflops IBM Blue Gene/Q supercomputer, to increase resolution and improve physical representations to better simulate wind features in national forecast models. In a unique intersection of field observation and computer simulation, the research team has installed and is collecting data from a network of environmental instruments in the Columbia River Gorge region that is being used to test and validate model improvements. This research is part of the Wind Forecast Improvement Project II (WFIP 2), an effort sponsored by DOE in collaboration with NOAA, Vaisala (a manufacturer of environmental and meteorological equipment) and a number of national laboratories and universities. DOE aims to increase U.S. wind energy from five to 20 percent of total energy use by 2020, which means optimizing how wind is used on the grid. "Our goal is to give utility operators better forecasts, which could ultimately help make the cost of wind energy a little cheaper," said lead model developer Joe Olson of NOAA. "For example, if the forecast calls for a windy day but operators don't trust the forecast, they won't be able to turn off coal plants, which are releasing carbon dioxide when maybe there was renewable wind energy available." The entire article can be found here. Source: Katie Jones, Argonne Leadership Computing Facility
<urn:uuid:87091b89-fb10-46a7-9118-89c645525cb9>
CC-MAIN-2017-04
https://www.hpcwire.com/off-the-wire/supercomputer-simulations-helping-improve-wind-predictions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916853
547
3.03125
3
The Internet of Things as a buzzword has caught the attention of all of us. This course will help you gain adequate knowledge of the Internet of Things. You will be able to understand the potential of the Internet of Things for our society, in terms of its impact on the lives of billions of people and on the world economy. You will also understand the underlying technology that powers the Internet of Things, as well as the challenges that come with such technologies. We will explore many real-life examples of IoT devices that are commercially available, and you will have a glimpse of the future of the Internet of Things.
High impact, proven training: 30,000+ professionals trained globally
Experienced, expert instructors: our instructors come with 10+ years of industry experience
Classroom training programs delivered across 50+ locations globally
Content developed in-house at GreyCampus by highly experienced industry experts
1 year of e-learning access
2 PMI PDUs for PMI credential holders
Video lectures of about 1 hour in total that cover an introduction to IoT, different protocols and the future of the IoT
End-of-module tests to check your knowledge
After completing the course, participants will be able to:
Explain what the Internet of Things is
Understand how Internet of Things devices interact together and with users
Learn about the protocols used by Internet of Things devices
Discover the different platforms that are available to develop applications
Learn about commercially available devices that are already using the Internet of Things
Understand the current challenges of the Internet of Things
Who should take this course? Anyone with an interest in the Internet of Things who wants to understand its potential or build a career in the field.
Module 1: Introduction to the Internet of Things
Module 2: Different Cases of Interactions
Module 3: Internet of Things Protocols
Module 4: Development Platforms
Module 5: Three Applications of the IoT
Module 6: Future of the Internet of Things
Q. What are the pre-requisites for this course?
A: The pre-requisites are:
Q. What is the Internet of Things?
A: The Internet of Things is the concept of having all the devices around us connected to the Internet. We could then potentially access all devices, they could communicate with us, and they could also communicate together for more interesting applications. It has many applications in home automation, transportation, healthcare, and several other domains.
Q. What is the difference between the Internet of Things and the regular Internet?
A: The Internet of Things is the idea of all devices being connected together; it uses the Internet and other wireless networks for devices to communicate together.
<urn:uuid:3acaab3e-f227-40ad-985c-1c6d5746ec98>
CC-MAIN-2017-04
http://www.greycampus.com/internet-of-things-101
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00186-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903422
546
3.15625
3
Google is helping the U.S. government conduct research on climate change by donating 50 million hours of cloud computing time as well as other resources. Google is donating 50 million hours of cloud computing time to the U.S. government to assist in a recently announced Climate Data Initiative that aims to help organizations and communities use public data to look at climate conditions in their areas. "Up until now, it's been difficult for the public to locate detailed, timely data relevant to climate-related risks such as extreme weather events," Tyler Erickson, developer advocate for the Google Earth Engine, wrote in a March 19 post on the Google Lat Long Blog. "To help address this challenge, Google is donating cloud computing storage and access to other tools to support institutions that are driving climate change resilience." The efforts are being made as part of the Climate Data Initiative announced by the White House earlier this month, wrote Erickson. To help in those efforts, Google is providing 50 million hours of high-performance computing on the Google Earth Engine geospatial analysis platform, according to Erickson. "Earth Engine brings together the world's satellite imagery with tools to help detect changes and map trends on the Earth's surface. Earth Engine has already been applied to unlock valuable information from the 40+ year treasure trove of Landsat satellite data [USGS/NASA], including an interactive time lapse of the planet from 1984-2012, the first high-resolution global maps of deforestation, and a near-real-time deforestation alert system that allows anyone interested in forest monitoring to take part. We hope that with this new donation, researchers will focus on applying Earth Engine to address climate-related risks such as managing agricultural water supplies and modeling the impacts of sea-level rise and storm surge." In addition, Google is teaming up with leading researchers and "allowing them to scale their work with Earth Engine and quickly move from the laboratory into people's hands," he wrote. "Together with academic partners in the western U.S., we'll produce the first high-resolution, near-real-time drought monitoring and mapping products for the entire continental United States, and make them freely available to the public." Google is also providing free data storage to support the initiative and its work, wrote Erickson, including one petabyte [1 billion megabytes] of cloud storage to house satellite observations, digital elevation data, and climate/weather model data sets. "We encourage the global community to work with us on this project by contributing and curating data, and developing public-benefit applications. We're already collaborating with researchers at NASA Jet Propulsion Laboratory, University of Bristol U.K. and the government of Australia." Google has supported climate research in the past, as well. In February 2013, Google announced the winners of its first-ever Google App Engine Research Awards, which included one project that analyzed global climate data sets. The project, a Cloud Computing-Based Visualization and Access of Global Climate Data Sets, was conducted by Enrique Vivoni, an associate professor of Hydrologic Science, Engineering & Sustainability at Arizona State University; Giuseppe Mascaro, a research engineer; Jyothi Marupila, a graduate student; and Mario A. Rodriguez, a software engineer. The project uses Google App Engine for analyzing global climate data within the Google Maps API.
The objective is to provide scientific data on global climate trends by allowing map-based queries and summaries, according to an eWEEK report. The project was one of seven winning entries that used the App Engine platform's abilities to work with large data sets for academic and scientific research. The program, which was announced in the spring of 2012, brought in many proposals for a wide variety of scientific research, including mathematics, computer vision, bioinformatics, climate and computer science. Google, which is a huge consumer of electricity for its modern data centers, offices and operations around the world, is always looking for ways of conserving energy and using renewable energy sources. The company has been making large investments in wind power for its data centers since 2010. Energy production is known to have a huge impact on the Earth's climate. In January 2013, Google announced an investment of $200 million in a wind farm in western Texas near Amarillo, as the company continued to expand its involvement in the renewable energy marketplace. Google has also invested in the Spinning Spur Wind Project in Oldham County in the Texas Panhandle. Other Google renewable energy investments include the Atlantic Wind Connection project, which will span 350 miles of the coast from New Jersey to Virginia to connect 6,000 megawatts of offshore wind turbines; and the Shepherds Flat project in Arlington, Ore., which is one of the world's largest wind farms with a capacity of 845 megawatts. Shepherds Flat began operating in October 2012.
<urn:uuid:77cca2da-2078-4c2f-bcd9-2d98d5346422>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-provides-cloud-computing-resources-for-climate-change-research.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949928
1,000
2.640625
3
Using the SNMP Device monitor in Anturis Console, you can set up monitoring of MIB object values for any SNMP device connected to a hardware component (a server computer) in your infrastructure. Anturis supports SNMPv1 and SNMPv2c. Simple Network Management Protocol (SNMP) is used to manage devices on an Internet Protocol (IP) network. You can remotely monitor and manage network devices that support SNMP, such as servers, routers, switches, printers, surveillance cameras, and so on. This is done from a management host (the SNMP manager). SNMP is designed to have minimum impact on the managed devices and network traffic. It is generally very stable and will continue working even when other network applications fail. This makes it a great tool to monitor network performance and hardware status (such as CPU and memory usage on a server), to detect failures and unauthorised access, and even to perform simple configuration of network devices. For example, you can monitor your server's temperature using SNMP, which enables you to act before it overheats severely. You can also monitor the bandwidth on each port of a network switch to analyse the workloads. The capabilities are limited only by the amount of data that a device was designed to provide through SNMP. When an SNMP manager sends a request, the value is retrieved from the management information base (MIB) on the device. The MIB is a database of entities managed via SNMP, such as the version of software running on the device, the amount of free disk space, temperature, and so on. You retrieve a MIB object from an SNMP device by sending a Get message from the SNMP manager. The device returns either the requested value or an error that enables you to understand why the value is irretrievable (for example, the server is not reachable). SNMP references each MIB object using an object identifier (OID). The OID defines the location of the object in the MIB database. The OID consists of numeric or text sub-identifiers separated with periods. A value that represents the current state of the object is associated with each OID. SNMP uses a community string to authenticate requests. This is a type of password shared by SNMP managers and devices. You configure the managers and devices as members of one SNMP community. The community string is transmitted with the request as clear text, and enables SNMP devices to accept or reject requests based on their lists of acceptable community names.
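To make the Get exchange concrete, here is a minimal SNMPv2c GET sketch using the pysnmp library. The device address, community string and OID (sysUpTime here) are placeholder values; substitute whatever your own device actually exposes.

# Minimal SNMPv2c GET sketch using pysnmp; the host, community string and OID are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),               # community string; mpModel=1 means SNMPv2c
    UdpTransportTarget(('192.0.2.10', 161)),          # device address and standard SNMP port
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0')),  # sysUpTime OID
))

if error_indication:
    print("Request failed:", error_indication)        # e.g. the device is not reachable
elif error_status:
    print("Device returned an error:", error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(oid.prettyPrint(), "=", value.prettyPrint())

Walking a whole subtree of the MIB works the same way, with nextCmd used in place of getCmd.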
<urn:uuid:ea3bfef5-88cf-4598-989e-3ac6b4bf1ee1>
CC-MAIN-2017-04
https://anturis.com/monitors/snmp-monitor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00002-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905131
531
2.578125
3
Who doesn't love Pandora Radio? But listening to it on my Android phone is the fastest way to kill the battery, and what good is a mobile phone if it has to be constantly plugged in? New research shows that Android phones are the most data-hungry smartphones out there. A group of researchers at AT&T Labs is calling on app makers to fix this problem by building more energy-aware apps. Not surprisingly, Pandora is one of their test subjects. (Facebook is another.) It's about time! All smartphones could use more energy-efficient apps, but Android users suffer the most. Research from Nielsen found that although iPhone users engage in as many or more data-intensive activities (downloading apps, streaming music or video) as Android users, Android phones gobble up more data each month. Each month, Android phones consume about 90MB more data than iPhones. As every smartphone user knows, the more data transferred, the faster the battery drains. But apparently it's not the OS that's the issue ... it's the underlying apps, researchers say.
Enter new research being done by AT&T to create energy-efficient apps that recognize they are on a cell network and limit both the number of times an app connects to the network and the time needed to connect. They have developed a tool that helps app developers figure out when their apps really need full-power connections (download speeds of around 7.1Mbits/sec) or when the app can get by on a proposed "intermediate state" which consumes half the power and transmits less data at a slower speed, typically by sharing a low-speed channel (often 16kbps). For instance, the researchers found that when they ran Pandora for 12 minutes, the app conducted a series of short bursts, one every 62.5 seconds. "While the music itself was sent simply and efficiently as a single file, the periodic audience measurements, each constituting only 2KB or so, were being transmitted at regular 62.5-second intervals. The constant cycle of ramping up to full power (2 seconds to ramp up, 1 second to download 2KB) and back to idle (17 seconds for the two tail times) was extremely wasteful," they wrote.
After reading the paper, I had many questions. I contacted researcher Alexandre Gerber, a principal member of the technical staff at AT&T Labs Research, and asked him.
How do the different platforms compare when it comes to energy efficiency? Apple/iPhone/iOS vs. Android vs. BlackBerry vs. Windows Phone 7. Does the development platform influence this? Are some better than others?
Different OSs may have different energy efficiency in terms of some system components, such as CPU and memory. But we are looking at the efficiency of accessing the network (3G power consumption contributes about 50% of overall handset power consumption). That is mostly determined by the application rather than the OS.
How do different network speeds/types influence this: 3G vs. 4G vs. WiFi?
It is the resource control policy of the different networks that influences the energy efficiency. Cellular networks usually have similar resource control mechanisms, but cellular network technology is also getting better over time. WiFi has a different approach and is more energy efficient than cellular.
The paper mentions, "One popular app was found to be using 40% of its power consumption to transmit 0.2% of its data." Was this a typical finding, or was it an extreme finding?
This is a common observation for applications with periodic data transfers (e.g.
ads, keep-alives, pull instead of the more efficient push, audience measurements), although the numbers may not always be that high.
In terms of hours of battery life, how much power overall would you guess is wasted by apps that do a poor job of managing state? (What I mean is, if you have a battery that is supposed to give you six hours of talk time, but dies in three hours of app usage, how much battery life would you get back if all of your apps were energy efficient? A few minutes? A few hours?)
Clearly that depends on the application you are using. For a large Internet radio, for instance, if 40% of its radio power, which contributes 50% of total device power, is wasted, then you can save about 40% * 50% = 20% of overall battery life. So this could end up being a significant amount of time.
How much is app battery usage influenced by or dependent on the handset? Do the same apps consume different amounts of energy on different handsets (an HTC Android phone vs. a Motorola one? An iPhone 3 vs. an iPhone 4)?
Yes, they differ. The table below compares the power consumption of three radio states (IDLE, FACH, and DCH) on two phones, the HTC TyTn II and the Google Nexus One. These are measurements made as part of our study in our research group; they are independent of measurements made by our official device testing group:
Radio State | TyTn II | Nexus One
P(IDLE) | 0 | 0
P(FACH) | 460 mW | 450 mW
P(DCH) | 800 mW | 600 mW
In your paper, you detailed the results of analyzing the Pandora app. What smartphone platform did you use to analyze it? Generally speaking, did you discover the Facebook app was more (or less) energy efficient than Pandora?
This is an apples-to-oranges comparison. These are applications that are difficult to compare; the content is completely different. A comparison would only make sense between the same type of applications. For instance, we noticed that Pandora is more efficient than other Internet radios because it sends data in bursts followed by long periods of inactivity, as opposed to continuously streaming content like some other Internet radios.
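To put rough numbers on the waste described above, here is a hedged back-of-the-envelope sketch that combines the timing figures quoted from the paper (62.5-second intervals, about 2 seconds of ramp-up, 1 second of transfer and 17 seconds of tail time) with the Nexus One power numbers from the table. Treating the whole tail as FACH time is a simplifying assumption; the real radio state machine is more complicated.

# Back-of-the-envelope estimate of the radio energy spent on each periodic 2KB report.
# Assumption: ramp-up and transfer run at DCH power, and the 17-second tail runs at FACH power.
P_DCH = 0.600    # watts, Nexus One DCH power from the table
P_FACH = 0.450   # watts, Nexus One FACH power from the table

ramp_s, transfer_s, tail_s = 2.0, 1.0, 17.0
interval_s = 62.5

energy_per_report_j = (ramp_s + transfer_s) * P_DCH + tail_s * P_FACH
print("Energy per 2KB report: about %.1f joules" % energy_per_report_j)

# Over an hour of listening, the periodic reports alone cost roughly:
reports_per_hour = 3600 / interval_s
print("Per hour: about %.0f joules" % (energy_per_report_j * reports_per_hour))

# If ten reports were buffered and sent in one burst, the ramp-up and tail costs
# would be paid once per batch instead of once per report (the extra payload is tiny).
print("Per report when batching 10: about %.1f joules" % (energy_per_report_j / 10))

Under these assumptions each 2KB report costs on the order of 9 to 10 joules, which is why coalescing or batching periodic transfers is such an effective energy optimization.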
<urn:uuid:fe4e6845-c2b5-4262-831c-cd5ab49b13a3>
CC-MAIN-2017-04
http://www.networkworld.com/article/2229434/opensource-subnet/at-t-researchers-call-for-smartphone-apps-that-won-t-suck-your-battery-dry.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00488-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954814
1,196
2.609375
3
What is the SIP Protocol?
Definition: SIP, or Session Initiation Protocol, is a signaling protocol for IP-based telephony applications. A signaling protocol provides the control layer for communications, such as the establishment and release of a voice call.
History of SIP
Previous signaling protocols such as SS7 were designed for circuit-switched networks. These networks use dedicated T1 channels for carrying telephony communications and signaling. With dedicated T1 channels, SS7 is able to provide high-quality voice communications, but at high cost due to the requirement of end-to-end dedicated channels. With the advent of IP and packet-based networks, telephony traffic could be routed more efficiently and cheaply. But this required a new packet-based signaling protocol to be developed. SIP was born. Initially designed for voice communications, today it can manage instant messaging, video conferencing, and file transfers.
At its simplest, SIP architecture consists of SIP user agents and servers. User agents are endpoints for communications. Examples of user agents are softphones, IP phones, or mobile phones. SIP servers are required to locate other user agents. Additionally, SIP servers can provide other services such as accounting and SIP forwarding.
SIP Protocol Basics
SIP is an application layer protocol that is very similar to text-based application layer protocols like HTTP. In fact, it also uses request and response message transactions and header fields. The following shows the request and response message transactions for a call initiated by User Agent A to User Agent B. For transport, SIP can run over the TCP, UDP, or SCTP transport layer protocols.
The following is an example SIP request message.
INVITE sip:firstname.lastname@example.org SIP/2.0
Via: SIP/2.0/UDP 192.168.1.2:5060;branch=z9hG4bKnp85213694-430aa1de192.168.1.2;rport
From: "arik" <sip:email@example.com>;tag=51449dc
To: <sip:firstname.lastname@example.org>
Call-ID: email@example.com
CSeq: 1 INVITE
User-Agent: Nero SIPPS IP Phone Version 220.127.116.11
Expires: 120
Accept: application/sdp
Content-Type: application/sdp
Content-Length: 270
Contact: <sip:firstname.lastname@example.org>
Max-Forwards: 70
Allow: INVITE, ACK, CANCEL, BYE, REFER, OPTIONS, NOTIFY, INFO
For comparison, here is an HTTP request message.
GET /download.html HTTP/1.1
Host: www.ethereal.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6) Gecko/20040113
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,image/jpeg,image/gif;q=0.2,*/*;q=0.1
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.ethereal.com/development.html
Right away you can see the similarities to HTTP. SIP starts with the Method line and then follows with several headers and possibly message content.
SIP Request Methods
SIP borrows the Method field from HTTP to likewise determine the type of request. SIP has 14 Method request types. The most commonly used Methods are INVITE, ACK, BYE, and REGISTER, which are used during voice calls. The first line of a SIP request message includes the Method type and the request URI, which is the current destination of the request.
Table of SIP request methods
| INVITE | Indicates a client is being invited to participate in a call session. |
| ACK | Confirms that the client has received a final response to an INVITE request. |
| CANCEL | Cancels any pending request. |
| OPTIONS | Queries the capabilities of servers. |
| REGISTER | Registers the address listed in the To header field with a SIP server. |
| SUBSCRIBE | Subscribes for an Event of Notification from the Notifier. |
| NOTIFY | Notifies the subscriber of a new Event. |
| PUBLISH | Publishes an event to the Server. |
| INFO | Sends mid-session information that does not modify the session state. |
| REFER | Asks the recipient to issue a SIP request (call transfer). |
| MESSAGE | Transports instant messages using SIP. |
| UPDATE | Modifies the state of a session without changing the state of the dialog. |
SIP Request Headers
Header fields are used to configure the SIP request message. The following are some common headers for a request message. Many more headers are available.
Common SIP request headers
| Via | Contains an address that is used to route back replies. |
| From | Contains the SIP URI of the caller. |
| To | Contains the SIP URI of the callee. |
| Call-ID | Contains the globally unique identification for this call using the caller's domain. |
| CSeq | Contains the sequence number of this message for this SIP conversation. |
| Contact | Contains the SIP URI to be used for future requests for this caller. |
| Content-Type | Contains the content type for the message body. |
| Content-Length | Contains the byte count length for the message body. |
SIP Addressing & The SIP URI
The SIP URI used in the To, From, and Contact header fields represents a user's SIP number. It is very similar to an email address. It consists of three components: <protocol:user@gateway>. The protocol can be sip or sips, where the latter is secured with TLS. The user is a unique user on a SIP gateway or server.
SIP Response Status Codes
Again similar to HTTP, SIP responses provide status codes to indicate the result of a SIP request. The following lists common SIP response status codes.
Common SIP Response Status Codes
| 100 | The server is trying to reach the callee. |
| 180 | The callee is ringing. |
| 181 | The call is being forwarded. |
| 200 | The request is successful. |
| 302 | The user has temporarily moved. Try the Contact field SIP URI. |
| 404 | The user does not exist. |
SIP Message Body Content & SDP
SIP is content agnostic. Although it is well known for voice call signaling, it is also used for establishing sessions for messaging, video conferencing, SMS and more. The SIP message body determines the type of media session being established. The message body is typically included in a SIP Invite request as part of the initial session establishment. Session Description Protocol (SDP) is a special content type used for VoIP. The content type is specified as application/sdp. In the following SIP Invite message, the SDP specifies the available voice codecs for a VoIP call.
SIP Invite with SDP Message Body
INVITE sip:email@example.com SIP/2.0
Via: SIP/2.0/UDP 192.168.1.2:5060;branch=z9hG4bKnp85213694-430aa1de192.168.1.2;rport
From: "arik" <sip:firstname.lastname@example.org>;tag=51449dc
To: <sip:email@example.com>
Call-ID: firstname.lastname@example.org
CSeq: 1 INVITE
User-Agent: Nero SIPPS IP Phone Version 18.104.22.168
Expires: 120
Accept: application/sdp
Content-Type: application/sdp
Content-Length: 270
Contact: <sip:email@example.com>
Max-Forwards: 70
Allow: INVITE, ACK, CANCEL, BYE, REFER, OPTIONS, NOTIFY, INFO
v=0
o=SIPPS 85214742 85214739 IN IP4 192.168.1.2
s=SIP call
c=IN IP4 192.168.1.2
t=0 0
m=audio 30000 RTP/AVP 0 8 97 2 3
a=rtpmap:0 pcmu/8000
a=rtpmap:8 pcma/8000
a=rtpmap:97 iLBC/8000
a=rtpmap:2 G726-32/8000
a=rtpmap:3 GSM/8000
a=fmtp:97 mode=20
a=sendrecv
SIP VoIP Session Call Flow
Now that we have the basics down, let us put it all together for a SIP call flow to establish a VoIP call. There are four basic parts to establishing a call: registration, call establishment, the VoIP call, and call termination.
A: Registration
When a user agent (say a softphone) launches, it needs to register with a SIP server in order to be found by other user agents. The SIP Register request message is used for this. It provides the location bindings through the To and From SIP URIs. Optionally, an additional binding can be provided through the Contact field.
SIP Register Message
REGISTER sip:sip.cybercity.dk SIP/2.0
Via: SIP/2.0/UDP 192.168.1.2;branch=z9hG4bKnp151248737-46ea715e192.168.1.2;rport
From: <sip:firstname.lastname@example.org>;tag=903df0a
To: <sip:email@example.com>
Call-ID: 578222729-4665d775@578222732-4665d772
Contact: <sip:firstname.lastname@example.org:5060;line=9c7d2dbd8822013c>;expires=1200;q=0.500
Expires: 1200
CSeq: 68 REGISTER
Content-Length: 0
Max-Forwards: 70
User-Agent: Nero SIPPS IP Phone Version 22.214.171.124
B: Call Establishment
Call establishment is where the magic happens. There are a few steps here, so let's cover them one by one in sequence.
- SIP Invite Request - The SIP Invite starts the call establishment attempt. This message contains the callee (the SIP URI in the To field). It is sent from the caller to the SIP server, which looks up the callee. In a larger network, the SIP server may need to consult other SIP servers if the callee is not local. Once the callee is located, the Invite is forwarded. For VoIP, the Invite also includes an SDP message body with the parameters for the VoIP call.
- SIP Response 100 (Trying) - This message is sent from the SIP server back to the caller to confirm that the Invite request is being processed.
- SIP Response 180 (Ringing) - This message indicates that the Invite was received by the callee and their user agent is alerting the user.
- SIP Response 200 (OK) - When the user picks up, a 200 response is sent back to confirm the call. Additionally, the callee sends an SDP message body with its VoIP call parameters. As a result of this message and the initial Invite from the caller, an exchange and negotiation of VoIP call parameters has occurred.
- SIP Ack Request - Finally, the caller confirms with an Ack request back to the callee. The two user agents then begin the VoIP call.
C: VoIP Call
The VoIP call itself is transmitted between the user agents using RTP (Real-time Transport Protocol). This protocol is used for delivering audio and video data over IP networks. An additional protocol, RTCP (RTP Control Protocol), is used to provide statistics and control for the RTP transmissions. We will cover RTP and RTCP in an upcoming blog.
D: Call Termination
When a user decides to terminate a call, a SIP Bye request is sent. Either side of the call can terminate it. The other user agent then responds with a SIP 200 status code to confirm the termination. So there you have it. That's SIP in a nutshell. Want to go deeper? Check out this example dashboard of ExtraHop's SIP monitoring capabilities, then try our online interactive demo.
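For readers who want to poke at SIP's text format programmatically, here is a small sketch that parses the status line and headers of a SIP response. It is only a toy illustration of the text-based structure described above, not a SIP stack, and the sample message (addresses, tags, Call-ID) is invented for the example.

# Toy parser for a SIP response, illustrating the text-based structure.
# The sample message below is invented for illustration only.
raw_response = (
    "SIP/2.0 180 Ringing\r\n"
    "Via: SIP/2.0/UDP 192.168.1.2:5060;branch=z9hG4bK776asdhds\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:bob@example.org>;tag=a6c85cf\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 1 INVITE\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

# Split the head from the (empty) body, then the status line from the headers.
head, _, body = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
version, status_code, reason = status_line.split(" ", 2)

headers = {}
for line in header_lines:
    name, _, value = line.partition(":")
    headers[name.strip().lower()] = value.strip()

print(status_code, reason)                       # "180 Ringing" -> the callee is being alerted
print(headers["call-id"], "/", headers["cseq"])  # identifies the transaction this response belongs to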
<urn:uuid:7c04b664-0210-467c-8b39-98ff10a6260d>
CC-MAIN-2017-04
https://www.extrahop.com/community/blog/2016/sip-protocol/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz
en
0.808777
2,950
3.6875
4
Fox-IT has encountered various ways in which ransomware is spread and activated. Many infections happen by sending spam e-mails and luring the receiver into opening the infected attachment. Another method is impersonating a well-known company in a spam e-mail stating that an invoice or track & trace information is ready for download. By following the link provided in the e-mail, the receiver downloads the file containing the malware from a convincing-looking website. Distributing ransomware through malvertising, an exploit kit served via an advertising network, is also a common way for criminals to infect systems. In the past few months, Fox-IT's incident response team, FoxCERT, was involved in several investigations where a different technique surfaced: activating ransomware from a compromised remote desktop server. Before we get to why this might be lucrative for the criminals, how do they get access in the first place?
RDP, or Remote Desktop Protocol, is a proprietary protocol developed by Microsoft to provide remote access to a system over the network. This can be the local network, but also the Internet. When a user successfully connects to a system running remote desktop services (formerly known as terminal services) over RDP, the user is presented with a graphical interface similar to the one seen when working on the system itself. This is widely used by system administrators for managing various systems in the organization, by users working with thin clients, or for working remotely. Attackers mostly tend to abuse remote desktop services for lateral movement after gaining a foothold in the network. In this case, however, RDP is their point of entry into the network. Entries in the log files show the attackers got access to the servers by brute forcing usernames and passwords on remote desktop servers that are accessible from the internet. Day in, day out, failed login attempts are recorded coming from hundreds of unique IP addresses trying hundreds of unique usernames. Connecting remote desktop servers directly to the internet is not recommended and brute forcing remote desktop services is nothing new. But without the proper controls in place to prevent, or at least detect and respond to, successful compromises, brute force RDP attacks are still relevant. And now with a ransomware twist as well.
Image 1: Example network with compromised RDP server and attacker deploying ransomware.
After brute forcing credentials to gain access to a remote desktop server, the attackers can do whatever the user account has permissions to do on the server and network. So how could an attacker capitalize on this? Underground markets exist where RDP credentials can be sold for an easy cash-out for the attacker. A more creative attacker could attempt all kinds of privilege escalation techniques to ultimately become domain administrator (if not already), but most of the time this is not even necessary, as the compromised user account might have access to all kinds of network shares with sensitive data, for example personally identifiable information (PII) or intellectual property, which in turn can be exfiltrated and sold on underground markets. The compromised user account and system could also be added to a botnet, used as a proxy server, or used for sending out spam e-mail messages. Plenty of possibilities, including taking the company data hostage by executing ransomware.
Depending on the segmentation and segregation of the network, the impact of ransomware executed from a workstation in a client LAN might be limited to the network segments and file shares the workstation and affected user account can reach. From a server, though, an attacker might be able to find and reach other servers and encrypt more critical company data to increase the impact. The power lies in the amount of time the attackers can spend on reconnaissance if no proper detection controls are in place. For example, the attackers have time to analyze how and when back-ups of critical company data are created before executing the ransomware. This helps them make sure the back-ups are useless for restoring the encrypted data, which in turn increases the chances of a company actually paying the ransom. In the breaches Fox-IT was involved in investigating, the attackers spent weeks actively exploring the network through scanning and lateral movement. As soon as the ransomware was activated, no fixed ransom was demanded; instead, negotiation by e-mail was required. As the attackers have a lot of knowledge of the compromised network and company, their position in the negotiation is stronger than when infection takes place through a drive-by download or infected e-mail attachment. The demanded ransom reflects this and can be significantly higher.
Image 2: Example ransomware wallpaper.
Prevention, detection, response
Connecting Remote Desktop Services to the Internet is a risk. Services like that, if not essential, should be disabled. If remote access is necessary, user accounts with remote access should have hard-to-guess passwords and preferably a second factor for authentication (2FA) or a second step in verification (2SV). To prevent eavesdropping on the remote connection, a strongly encrypted channel is recommended. Brute force attacks on remote desktop servers and ransomware infections can be prevented. Fox-IT can help to improve your company's security posture and prevent attacks, for example through an architecture review, security audit or training. If prevention fails, swift detection will reduce the impact. With verbose logging securely stored and analyzed, accompanied by 24/7 network and endpoint monitoring, an ongoing breach or malware infection will be detected and remediated. The Cyber Threat Management platform can assist in detecting and preventing attacks. And if business continuity and reputation are at stake, our emergency response team is available 24/7.
Wouter Jansen, Senior Forensic IT Expert at Fox-IT
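As a small illustration of the detection side, the sketch below counts failed logons per source IP in an exported Windows Security event log; event ID 4625 records failed logon attempts, which a brute force attempt against RDP typically generates in bulk. The file name, column names and alert threshold are assumptions made for the example.

# Hedged sketch: flag possible brute force activity by counting failed logons
# (Windows Security event ID 4625) per source IP in a CSV export of the event log.
# The file name, column names and threshold are assumptions for illustration.
import csv
from collections import Counter

FAILED_LOGON_EVENT_ID = "4625"
THRESHOLD = 100  # failed attempts from one source before an alert is raised

failures = Counter()
with open("security_events.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        if row.get("EventID") == FAILED_LOGON_EVENT_ID:
            failures[row.get("IpAddress", "unknown")] += 1

for source_ip, count in failures.most_common():
    if count >= THRESHOLD:
        print("Possible brute force from %s: %d failed logons" % (source_ip, count))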
<urn:uuid:0af2c5ea-cdc8-4b6e-a279-4dfa6c23a8aa>
CC-MAIN-2017-04
https://www.fox-it.com/en/insights/blogs/blog/ransomware-deployments-brute-force-rdp-attack/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940505
1,134
2.90625
3
How to close the security gaps in Bluetooth - By William Jackson - Sep 29, 2011
Bluetooth technology has been integrated into many types of devices, including cell phones, laptops, automobiles, printers, keyboards, mice and headsets. This allows users to form ad hoc networks among a variety of devices, but it also introduces risks of eavesdropping, hijacking and compromise. The National Institute of Standards and Technology is updating its recommendations for securing Bluetooth-enabled devices to address new versions of the standard and the threats to them.
A draft of Special Publication 800-121 Rev. 1, "Guide to Bluetooth Security," has been issued for public comment. It gives recommendations on securing Bluetooth effectively, but warns that the mitigations and controls described cannot guarantee a secure environment.
"Each organization should evaluate the acceptable level of risk based on numerous factors, which will affect the level of security implemented by that organization," the document says. "To be effective, Bluetooth security should be incorporated throughout the entire lifecycle of Bluetooth solutions."
Bluetooth is an open-standards protocol for short-range, personal-area wireless networking commonly used to connect peripherals with desktop or handheld computing devices. The growing use of personal mobile devices and the introduction of new applications, such as links to on-board automobile systems, have resulted in growing use of Bluetooth and a number of new versions and features.
SP 800-121 was originally published in 2008, describing the security capabilities of Bluetooth and giving recommendations on their use. Much of the information had originally been included in NIST guidance on WiFi network security, but commenters wanted a separate publication for the Bluetooth material. The revised version includes information on the latest vulnerabilities and their mitigations for Secure Simple Pairing, which was introduced in Bluetooth v2.1 + Enhanced Data Rate (EDR), as well as an introduction to and discussion of Bluetooth v3.0 + High Speed and Bluetooth v4.0 Low Energy security mechanisms and recommendations.
Bluetooth allows users to form ad hoc voice and data networks among a wide variety of devices, and operates in the same band as some 802.11 WiFi versions. It uses frequency-hopping spread spectrum technology, and this, plus power controls that limit the effective range of a device, provides limited protection from eavesdropping. But hopping sequences can be easily determined with free open-source software. Authentication, confidentiality and authorization services are included in the Bluetooth standard, and each version has its own security features, although they often are not robust.
The publication includes a summary and assessment of Bluetooth vulnerabilities for each version, including seven that apply to all versions:
- Link keys are stored improperly.
- Strengths of the pseudo-random number generators (PRNG) are not known.
- Encryption-key length is negotiable.
- No user authentication exists.
- End-to-end security is not performed.
- Security services are limited.
- Discoverable and/or connectable devices are prone to attack.
The document also describes a number of threats and ways to mitigate them. General recommendations for secure use of Bluetooth include:
- Use the strongest Bluetooth security mode available for Bluetooth devices.
The available modes vary based on the Bluetooth specification version supported by the device. - Address Bluetooth technology in security policies, and change default settings of Bluetooth devices to reflect the policies. The policy should include a list of approved uses for Bluetooth, a list of the types of information that may be transferred over Bluetooth networks, and requirements for selecting and using Bluetooth personal identification numbers where applicable. - Ensure that Bluetooth users are made aware of their security-related responsibilities regarding Bluetooth use. Users should also be made aware of other actions to take regarding Bluetooth device security, such as ensuring that Bluetooth devices are turned off when they are not needed to minimize exposure to malicious activities, and performing Bluetooth device pairing as infrequently as possible and, ideally, in a physically secure area where attackers cannot observe passkey entry or eavesdrop on Bluetooth pairing-related communications. Comments on draft SP 800-121 Revision 1 should be sent by Oct. 28 to email@example.com with "Comments on SP 800-121" in the subject line. William Jackson is a Maryland-based freelance writer.
<urn:uuid:d8bd700e-5d72-481f-ae53-c3de4a086a5c>
CC-MAIN-2017-04
https://gcn.com/articles/2011/09/29/nist-bluetooth-security.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925399
893
2.765625
3
Open Source as a Driver of Globalization
Open-source software is a hot-button issue because it leads to some sticking points in terms of intellectual property rights and the ability of certain segments of the IT industry to sustain financial compensation. Critics argue that without such compensation for labor hours spent on development, motivation toward the creation of desirable and useful software might be rendered inert. But open source has significant potential as an equalizer of sorts, a means toward making technology as widely available as possible. In this way, the open-source movement can act as a potent driver of globalization.
An example of this can be seen in the One Laptop Per Child program. Launched two years ago by Nicolas Negroponte, MIT Media Laboratory chairman emeritus, One Laptop has since been established as an independent nonprofit. One Laptop Per Child has developed a machine called the XO, billed as a "$100 laptop." Ninety percent of the XO's programming was taken from code available in the open-source community, which is one of the reasons the machine's cost is so low. One Laptop Per Child is working to put the XO in the hands of children in developing countries such as Argentina, Brazil, Libya, Nigeria, Pakistan, Thailand and Uruguay. Each country gets versions programmed specifically for its native languages.
Manufactured by Quanta Computer Inc., the XO is a small, white unit with a green keyboard and framework, and it comes with a manually operated battery charger. When the machine is turned on, students are greeted by a screen with a stick figure icon in its center, signifying themselves, the end-users. The figure is surrounded by a ring populated with icons for programs running on the machine. In this way, the XO's operating system escapes the usual organization of a computer into files and folders. In fact, the machine has no hard drive. It does, however, feature three USB ports and headphone and microphone jacks, as well as an internal microphone and dual internal speakers. The keyboard is a sealed rubber membrane, accompanied by a touchpad and cursor-control keys.
Because it's intended for use by students who might be having their first experience with a computer when they pick it up, it's designed to function as an organized presentation of programs as tools for learning, creating and communicating rather than merely working. Toward that end, the XO is Wi-Fi enabled and interactive. During classroom operation, end-users will see other stick figures in different colors appearing on their screen, signifying other students in the vicinity. Moving the computer's cursor to these figures brings up those students' profiles, and from there, they can chat or work together on projects.
On One Laptop Per Child's Web site (www.laptop.org), Negroponte discussed why it is important for students in developing countries to have their own computers. "One does not think of community pencils — kids have their own," Negroponte stated. "They are tools to think with, sufficiently inexpensive to be used for work and play, drawing, writing and mathematics." A computer, Negroponte points out, is the same thing, but far more powerful. "Laptops are both a window and a tool, a window into the world and a tool with which to think," he stated. "They are a wonderful way for all children to learn learning through independent interaction and exploration."
Chris Blizzard of Red Hat Inc.
served as lead software integrator on the project and doesn’t think One Laptop Per Child would have been possible without the availability of open-source software. “If you went to Microsoft and said, ‘We need an operating system for these laptops, one that provides tools for learning and exploration and is geared to kids,’ they would probably say ‘Well, we have Windows. Take it or leave it,’” Blizzard said. “Because of the way free and open-source software works, we’re able to take it, make the changes we need and put it in the hands of kids with the right context and experience attached to it. “That’s just not
<urn:uuid:cb7a0bba-d244-4898-b53b-7b7d790cc9e8>
CC-MAIN-2017-04
http://certmag.com/open-source-as-a-driver-of-globalization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00425-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959914
879
2.578125
3
Forest fires present peril to the environment, infrastructure and human lives. Most of them can be prevented or considerably minimized with timely detection and a fast response. Great efforts in technical development, such as remote surveillance, are being made to prevent forest fires more effectively.
With the aggravation of global warming, climate change significantly influences ecological systems. The average global surface temperature has increased 0.9 degrees Celsius since 1960. The Intergovernmental Panel on Climate Change predicts it will get even hotter, rising by 1.1 to 6.4 degrees Celsius during the 21st century. Extreme weather and failure to reach environmental equilibrium are expected to increase forest fires. According to a report, forest fires, including the use of fires to clear tropical rain forests, will halve the world forest stand by the year 2030. In Europe, up to 10,000 square kilometers of vegetation are destroyed by fire every year, with another 100,000 square kilometers in North America and Russia.
"Forest video surveillance is a niche but fast growing market. Southern Europe and California are the most important among the main markets, because of the extremely high number of fires, particularly during the summertime," said Raffaella Amoroso, Marketing Manager of Fluidmesh Networks. With the observable risks, forest fire surveillance demands substantial attention. "It is very new, including new technologies and new requirements," said Wai King Wong, Country Manager, Axis Communications Australia and New Zealand.
Forest fire surveillance plays a vital role in all spheres of forestry. Advanced solutions include video surveillance systems integrating remotely controlled video cameras to monitor multiple sites. A remote monitoring center is equipped with video presentation and video storage devices, which are connected by wired or wireless Internet to the remote video cameras. The cameras usually require zoom, so monitoring staff can easily inspect areas. Axis offers a dome camera with powerful 35x optical and 12x digital zoom, enabling long-distance tracking of moving objects in great detail. As infrared cameras typically have lower resolution and are less durable, digital CCD cameras are mainly deployed for forest surveillance. Moreover, weatherproof housing, high resolution and vibration resistance are required for forest fire cameras.
Video Analytics on the Way
IMS Research forecasts the world market for video smoke detection (VSD) to grow at a compound annual growth rate of 38.8 percent, reaching US$36 million by 2011. VSD is expected to become an indispensable solution for detecting the start of a forest fire before it causes any real damage. "Utilizing video analytics software for forest surveillance is required, as it helps the authority to constantly monitor and provide accurate alerts on security breaches and forest fires, et cetera," said Kevin Lee, Director of APAC Sales and Marketing for Seagate Technology. VSD features comprehensive monitoring of large areas with a rotating, high-resolution camera system and automatic recognition of smoke clouds by computer-aided image processing.
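To give a feel for what the simplest form of such image processing looks like, here is a naive frame-differencing sketch that flags frames with large changed regions for an operator to review. It is only an illustration; commercial VSD analytics are far more sophisticated, and the camera index and thresholds below are assumptions.

# Naive frame-differencing sketch: flag frames where a large region has changed,
# as a crude stand-in for real smoke-detection analytics. Camera index and
# thresholds are assumptions for illustration.
import cv2

cap = cv2.VideoCapture(0)                     # 0 = first attached camera (placeholder)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that changed noticeably since the previous frame
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # If a large fraction of the image changed, flag the frame for a human operator
    changed_ratio = cv2.countNonZero(mask) / mask.size
    if changed_ratio > 0.05:
        print("Possible smoke or motion event - flag this frame for review")

    prev_gray = gray

cap.release()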
According to an IMS Research report, most VSD technology has been server-based, but the software is increasingly being embedded into video surveillance devices, such as network cameras. This is expected to bring solution costs down, making it more affordable. IMS predicts that as suppliers gain further legislative approval and market experience, VSD will be used increasingly.
A solution from axonX embeds fire and smoke detection analytics onto IP cameras. The camera is then networked to a network video recorder (NVR), where video and alarms are recorded. It is capable of detecting and triggering alarms for events including flames, reflected fire light, smoke plumes, ambient smoke and intrusion. "We use standard IP transmission protocols such as Internet or cellular wireless to communicate alarms and video," said Mac Mottley, CEO of axonX. Once an alarm occurs, the camera can be signaled through contact closures or by digitally streamed transmissions over IP.
axonX is conducting a pilot project with the Maryland Department of Natural Resources. There are fire towers located throughout Maryland forests, which typically have a small building attached. axonX will install eight IP cameras on a tower and place the NVR in the building. Mottley said, "We are looking at a wireless cellular communication solution. When a smoke cloud is detected, the camera records it to initiate the response." "Nonetheless, the system has not been widely applied because we just finished indoor testing of the network cameras and have not tested extensively outdoors yet," Mottley said. "It is hard to light a forest fire to test your system. That is why working with the Maryland Department of Natural Resources is so important, because they can place us in a tower that overlooks an area that does controlled burns."
Axis is also dedicated to developing cameras with video analytics. Yet accuracy and reliability concerns for automatic VSD remain due to weather conditions, light levels, and so on. Meanwhile, the processing power required is still rather high, Wong said.
Long-term archiving and analysis of video are growing trends in the field. The need for large-capacity, reliable hard drives, with data integrity being an issue of utmost importance, is key to the success of a forest surveillance installation. For surveillance storage, all the cameras stream video back to the central monitoring station, which utilizes NVRs or video management software to monitor the cameras and record their footage. "With the usage of video analytics or intelligent video surveillance in such applications, high-capacity storage is often required to enable recording at high resolutions and frame rates," Lee said. "Superior hard drives are thus critical." Seagate's hard drives provide SATA storage for multidrive enterprise network surveillance applications, and for higher-end applications like IVS and IP surveillance. The hard drives can store up to a terabyte, supporting higher resolutions and longer archival periods for outdoor surveillance digital video recorders. Energy-saving technology minimizes the amount of power required. The drives can operate in extreme environments subject to temperature, humidity, altitude and vibration.
Lee said, "IP will be the most suitable application for forest surveillance, since the requirement of storage will be important as a backup plan in case of network failure." With the large amount of bandwidth required for video transmission, storage at the edge becomes much more essential, allowing compressed video to be transferred over the network as needed, instead of in real time.
Forest fire surveillance relies on network connectivity, since the video streams from cameras in remote areas must reach a control room many kilometers away. Several transmission methods, such as wired, wireless, mobile, satellite and microwave, each suited to different applications, are available. Dual-frequency wireless mesh technology by Fluidmesh is specially designed for video and data streaming, transmitting simultaneously in the 2.4 GHz and 5 GHz ranges. The wireless mesh architecture enables reliable and redundant networks, where every unit can route packets around physical obstacles, sources of interference or low-quality links. It also has an IP68-rated submersible enclosure capable of working in harsh environments. With low power consumption, below 10 watts, the product usually incorporates solar and battery-powered solutions. This allows for easy installation in environments without electricity. "Using a mesh network can greatly increase the reliability of the system, providing multiple paths to reach the control room and recording site," Amoroso said.
Mobile networks, meaning 3G networks such as GPRS, CDMA or WLAN, are another transmission solution for real-time forest fire surveillance. "In Australia, we are using Next G cards with the 2100 MHz 3G technologies run by Telstra, delivering turbo-charged speeds for data and downloads," Wong said. "The card is inserted in the router devices, linking access to the cameras." Monitoring companies can automatically dispatch resources to the exact location while providing important real-time information for effective planning and preparation. Mobility plays a major role, as the entire system can be taken down in five minutes and deployed at another location in five minutes. Additionally, satellite and microwave transmissions with good bandwidth and quality image resolution are suitable for monitoring forests remotely. However, microwaves are highly susceptible to attenuation by the atmosphere and limited to line-of-sight transmission links.
Solar Power and Easy Installation
Forest fire surveillance is limited by the extensive area that needs to be covered by power or communication infrastructure. As a result, Wong said, energy supply is fundamental and crucial to keeping surveillance systems running. Solar panels generating electric current from sunshine provide an effective method to operate surveillance systems. The current flows to a device that regulates the voltage and current and also controls the charging of the battery bank. However, it is difficult to produce enough energy with solar panels alone, due to unpredictable weather. The best solution to power the cameras and transmission devices is to combine solar panels with wind turbines or batteries. Furthermore, maintaining low power consumption in the devices is imperative as well. "Currently, we have deployed mobility solutions with mobile networks and solar panel systems," Wong said. For easier installation and better connectivity, "a partner we have has designed the camera into the housing and customized the poles," Wong said.
"Basically, every component needed is configured in one device. You can bring the pole to wherever you want and plug in the solar panel. Once the camera is on, you are automatically connected to the server." Although there are no specific market figures or statistics showing the market size of forest fire surveillance, almost every country that faces a high risk of forest fires is investing in the technical development of forest fire surveillance systems. "If a relatively stable, well-performing system can be designed, and the communication and power issues resolved, then there would be a quite substantial worldwide market," Mottley said.
<urn:uuid:0ec77a7d-21de-4b9c-9c52-b2497fc59655>
CC-MAIN-2017-04
https://www.asmag.com/print_article.aspx?id=6217
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936146
2,023
3.21875
3
A basic, but important, concept to understand when using a computer is cut, copy and paste. These actions allow you to easily copy or move data between one application and another, or copy and move files and directories from one location to another. Though the procedures in this tutorial are considered basic concepts, you would be surprised how many people do not understand these essential features. Even more importantly, once you understand these fundamentals you will be able to use this knowledge on almost any computer operating system, as long as you know the corresponding keys that are used for these features. For the purpose of this tutorial I will cover how to cut, copy and paste with the Windows operating system. Other operating systems, such as Linux, Unix, and Apple's, support these concepts as well but will not be discussed in this tutorial.

Windows has a feature called the Windows Clipboard. The clipboard gives Windows users the ability to store information in memory and then retrieve that information for later use. The cut, copy and paste functions rely on the clipboard in order to work. The process of placing data into the clipboard is known as copying or cutting. The process of retrieving the data from the clipboard and placing it into another location is called pasting. We will cover the specifics of these actions in more detail later in the tutorial. For now it is important to understand that the clipboard is used to contain the data that you want to paste into another location. If there is no information contained in the clipboard, then you will not be able to paste anything. Any data that is stored in the clipboard via a copy or cut command will stay there until it is overwritten by another copy or cut command. When you paste that data, the data is not removed from the clipboard, and it can be pasted over and over as many times as you wish. The data in the clipboard will be erased, though, when you shut down or restart your computer.

Before you can copy, cut, or paste text data you must be able to highlight, or select, the text that you want to perform the action on. This is called highlighting and allows you to select all the information in a document or only certain portions of it. Once the text is highlighted you can then copy or cut that information depending on your needs. An example of what highlighted text looks like is below:

Figure 1: Example of highlighted text

There are several standard methods used to highlight text: you can click and drag the mouse pointer across the text; double-click a word to select it; hold down the Shift key while pressing the arrow keys or clicking at the end of the selection; or press Ctrl+A to select everything in the document.

Now that you know how to highlight text, you should practice the art of highlighting. You can do this by opening Notepad and typing in a few lines of text, then practicing the different methods of highlighting. To open Notepad you can double-click its icon found in the Start Menu under the Accessories submenu.

Just as you can highlight, or select, text, you can also highlight files and folders for use with the copy, cut, and paste commands. Selecting files and folders works a little differently than text, though. When selecting text, the selection must be contiguous, with each highlighted character sitting next to the rest of the highlighted text. Files and folders, on the other hand, can be selected as you see fit and do not have to be next to each other. You can see an example of this in the figure below:

Figure 2: Selecting files and folders

As you can see from the image above, files and folders can be selected as needed and do not have to be right next to each other.
To select files or folders you can use the following methods: click a single file or folder to select it; hold down the Ctrl key while clicking to add individual items to the selection; hold down the Shift key and click to select a contiguous range of items; or press Ctrl+A to select everything in the folder. To test this, open your My Documents folder and practice selecting files and folders.

What if you were working on a word processing document and needed to take text that is located in another document and add it to the current one? You could manually type the information found in the original document into the new document, but that could take quite a long time. Luckily for us, operating systems give us the ability to copy text from one document to another with a command called Copy. When you copy highlighted data, this data is stored in the clipboard until you are ready to paste it into another program.

To copy something you must first highlight the text that you would like to copy using one of the methods described above. When you have the text highlighted, you can copy it to the clipboard in one of three ways: by selecting Copy from the application's Edit menu, by right-clicking the highlighted text and choosing Copy, or by pressing Ctrl+C. Once you use one of these methods a duplicate of the highlighted text will be placed in the clipboard, allowing you to paste it into another document or application.

Cutting is very similar to copying in that both place the highlighted item into the clipboard for future pasting. The difference is that when you Cut the highlighted text, it will be removed, or cut, from its original location and placed into the clipboard. It is therefore important to be careful when using this command, as it is possible to lose data if you mistakenly cut the data from the document and then save the file. To cut text you must first highlight the text or data that you would like to cut using one of the methods described above. When you have the text highlighted, you can cut it to the clipboard in one of three ways: by selecting Cut from the application's Edit menu, by right-clicking the highlighted text and choosing Cut, or by pressing Ctrl+X. Once you use one of these methods a copy of the highlighted text will be placed in the clipboard and the highlighted data will be removed from the document. It is important to note that the text will only be removed from a document if that document is editable. For example, you cannot cut text from a document set to read-only or from a web page, because it is not editable.

Now that you know how to Copy and Cut data from a document and have it placed in the clipboard, you need to learn how to retrieve that data and place it in your document. Once data has been copied or cut, you can then paste it into another document, or the same document, by retrieving it from the clipboard using the Paste command. Simply move your cursor to the location where you would like the data to be pasted and then use one of three methods: select Paste from the application's Edit menu, right-click and choose Paste, or press Ctrl+V. After you use one of these commands the data contained in the clipboard will be pasted into the document.

It is also possible to use the same key combinations and commands on files and folders. Simply select a file(s) or folder(s) and cut or copy it. Then you can select another location to paste it to. If you paste a copied file or folder into the same location that the original resides in, Windows will automatically append "Copy of" in front of the file name. For example, if I copy and paste the file test.txt into the same directory the original is in, it will be pasted as a new file called Copy of test.txt. When cutting files and folders, a duplicate of the file or folder will be placed where you paste it and the original will be deleted. Do not worry, though, as the original you cut will not be deleted until a valid copy is pasted elsewhere.
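For readers curious about what the clipboard looks like behind the keyboard shortcuts, here is a minimal sketch, not part of the original tutorial, that drives the system clipboard from Python using the standard tkinter module. It simply mimics what Ctrl+C and Ctrl+V do and assumes you are running in a normal desktop session.

```python
# Minimal illustration of the clipboard concept described above.
# Requires a desktop session; tkinter ships with most Python installs.
import tkinter as tk

root = tk.Tk()
root.withdraw()                # no window needed, we only want the clipboard

root.clipboard_clear()         # like a new Copy/Cut: old contents are replaced
root.clipboard_append("Hello from the clipboard")   # "copy" some text
root.update()                  # hand the data over to the system clipboard

print(root.clipboard_get())    # "paste": read it back, as many times as you like
```

Just as the tutorial notes, the data stays in the clipboard until it is overwritten or the session ends, so clipboard_get() can be called repeatedly.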
Now that you know how to cut, copy and paste text and files from one location to another, you have a powerful tool at your disposal. You can quickly take information from another document and paste it into a document of your choice, and you can cut or copy files from one location and place them into another. If you have any questions please feel free to post them in our computer help forums.
<urn:uuid:6def3078-cb27-4414-a1d3-78f6b68581da>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/cut-copy-and-paste-in-windows/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00059-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941178
1,952
4.34375
4
- Ransomware is a type of malware that restricts access to an infected computer system and demands a ransom payment to remove the restriction.
- Some ransomware encrypt the files on the system’s hard drive, while others may simply lock the system and display threatening messages to force the user to pay.
- Cryptowall is a ransomware Trojan which targets Windows. It first appeared in early 2014.
- The latest version, Cryptowall 4.0, appeared in November 2015 and it is considered a very prevalent ransomware.
- Cryptowall 4.0 is the fourth version of the popular ransomware. It recently emerged with improved encryption tactics and better evasion techniques that help it deceive some antivirus platforms.
- Cryptowall 4.0 can exploit many more vulnerabilities than the previous versions. It is also better at staying under the radar and avoiding sandbox detection.
- Cryptowall 4.0 includes advanced malware dropper mechanisms to avoid antivirus detection.
- Detection rates of Cryptowall 4.0 in certain anti-virus and firewall products have decreased significantly compared to the previously successful Cryptowall 3.0 ransomware.

Check Point Protections
- Check Point Anti-Virus and Anti-Bot blades protect against Cryptowall 4.
- This includes a wide variety of network signatures, C&C URLs and file hashes.
- Check Point protections block Cryptowall’s communication with its C&C, preventing it from fetching encryption keys and encrypting the victim’s files.

Check Point Observation & Guidance
- Check Point analysis showed that almost no changes in the communication methods with the C&C domains occurred between Cryptowall 3 and Cryptowall 4. Therefore the same network signatures apply to both.
- Check Point continues to monitor and follow up on C&C domains for all versions of Cryptowall.

Encrypting Ransomware: https://en.wikipedia.org/wiki/Ransomware#Encrypting_ransomware
Technical Description: http://www.theregister.co.uk/2015/11/09/cryptowall_40/
<urn:uuid:e6ec59c0-d288-4063-8b55-16ef0901391d>
CC-MAIN-2017-04
http://blog.checkpoint.com/2016/01/15/check-point-threat-alert-cryptowall-4/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00269-ip-10-171-10-70.ec2.internal.warc.gz
en
0.859777
440
2.578125
3
Nearly one in four pedestrians are engaged with a mobile device while crossing busy intersections, according to results released Wednesday of an observational survey conducted last summer. Conducted by the Harborview Injury Prevention & Research Center at the University of Washington, the survey determined that texting while crossing a busy street "is the most distracting, and potentially most dangerous, activity" -- though hardly the only way pedestrians increase the odds of getting hit by a vehicle or bike by ignoring basic road-safety rules.

The researchers studied the behavior of more than 1,000 pedestrians crossing 20 different intersections at different times of the day last summer in Seattle. Sadly, three out of four (74%) would have failed a basic road-crossing safety test requiring pedestrians to 1) wait for a green crossing signal, 2) cross at the right spot, and 3) look both ways before venturing into the street.

Of the 1,102 pedestrians observed, nearly 30% were engaging in a potentially distracting activity while crossing a Seattle intersection, including:

* 11% were listening to music
* 7% were texting
* 6% were talking on a mobile phone

Other distractions observed included talking with fellow pedestrians and interacting with children and pets. Not surprisingly, distractions slowed down the time it took for pedestrians to cross intersections by anywhere from 0.75 to 1.29 seconds. And texters took nearly 2 seconds longer to cross an average three- or four-lane intersection than pedestrians who weren't texting.

It gets worse: Texters were 3.9 times more likely than non-texters to ignore at least one of the three road-crossing rules (cited above) they should have learned in kindergarten. And there were gender differences, with researchers reporting that "female pedestrians, whether distracted or not, were somewhat less likely to look both ways before crossing the street."

The research team concludes: "Pedestrian distraction in general, and text messaging in particular, is associated with slower crossing times and unsafe pedestrian behaviours. The steady rise in the prevalence of text messaging and the use of mobile devices for a wide range of functions such as playing games suggests that the risk of distraction will increase. Solutions are likely to include the three 'Es' of injury prevention: education of the public about risks, engineering and environmental modifications, and enforcement."

Well, if laws against distracted road-crossing are enforced with the vigor of laws against texting or talking on a mobile phone while driving, we can strike that last solution off the list.
<urn:uuid:70baba36-7303-428d-afde-7cde96ae5679>
CC-MAIN-2017-04
http://www.itworld.com/article/2716998/mobile/why-did-the-pedestrian-cross-the-road--who-knows--but-she-really-should-stop-texting.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96641
508
2.53125
3
IT departments have been urged to start preparing staff for the introduction of Internet Protocol version 6 to reduce the security risks of deploying the technology for new applications. IPv6 is the replacement for IPv4, the protocol used to send and receive network traffic. The main benefit of the new version is that it offers an almost unlimited number of IP addresses. This is important as the number of internet users and connected devices, each requiring a unique IP address, is set to increase rapidly over the next few years. Although operating systems such as Unix and Linux already support IPv6, there is expected to be a huge increase in usage with the release of Windows Vista, the next version of the Microsoft operating system, next year. Roy Hills, technical director at internet research firm NTA Monitor, warned that many users do not fully understand IPv6. "Since people have not had to use it there has been no requirement for systems administrators to understand IPv6," he said. One risk for users is that no one is sure how IPv6 will perform on networks, said Phil Cracknell, chief technology officer at IT security supplier netSurity. "There is a total absence of test data on how it will perform in terms of applications, management and security infrastructure," he said. Because of potential security vulnerabilities that could be created by using IPv6, businesses should test it in a development environment before rolling out the technology, Cracknell added. Richard Brain, technical director at security consultancy Procheckup, said, "Modern firewalls support IPv6 effectively, though there might be some bugs in lesser-known protocols using IPv6, such as ICMP." He urged users to keep on using IPv4 and disable IPv6 where possible. Brain said there have been serious security holes found in IPv6 implementations. "Only use IPv6 if there is a need to - its main function is to increase the number of addresses available. "IPv6 uses a lot more bandwidth than IPv4, as the packet size is 250% larger." Unless users have plenty of spare bandwidth and are running out of IP addresses, there is no need to migrate to IPv6, Brain said.
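As a practical complement to the advice above, the snippet below is a small sketch, not taken from the article, of how an administrator might check whether a host is actually reachable over IPv6 before relying on it; the hostname and port are placeholders.

```python
# Quick IPv6 reachability check -- illustrative only; "example.com" and port 80
# are placeholders, not systems mentioned in the article.
import socket

def ipv6_reachable(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    try:
        # Ask only for IPv6 (AAAA) addresses
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False          # no IPv6 address published for this host
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)   # succeeds only if an end-to-end IPv6 path exists
                return True
        except OSError:
            continue
    return False

print(ipv6_reachable("example.com"))
```

Testing like this in a development environment, as Cracknell suggests, is far cheaper than discovering an IPv6 gap after deployment.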
<urn:uuid:6b8b377b-c356-43f4-9505-94dec9db58eb>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240075028/IT-departments-urged-to-prepare-staff-for-IPv6
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00113-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944267
468
2.78125
3
The sounds that 3D-printer nozzles make as they cross the machine bed can be recorded, analyzed and then used to duplicate prototypes, say scientists. Smartphones, casually posed adjacent to printers by thieves, can surreptitiously capture a recording of the print head movements. The phone can then be recovered and used to reverse engineer the part elsewhere.

It might be a big problem with no fix. Manufacturing plants need to curtail smartphone use near the machines, because there is currently no way to stop the theft, say the cyber-physical systems engineers at the University of California, Irvine, who discovered the hack.

"Companies stand to incur large financial losses," says Mohammad Al Faruque of the university's Advanced Integrated Cyber-Physical Systems Lab, in a press release. The "precise movements" of the nozzle produce a signature sound, and that is what is captured, to be reverse-engineered later into the intellectual property being printed, say the researchers. "If process and product information is stolen during the prototyping phases," companies could lose out, Al Faruque thinks.

"There's no way to protect these systems from such an attack today, but possibly there will be in the future," he says. Acoustic jamming techniques would probably be the way to go, he reckons. But in the meantime plants must stop "people carrying smartphones near the rapid prototyping areas when sensitive objects are being printed," he says.

It's the unique vibrations and acoustic "emissions" that give the game away, despite the software being encrypted. The G-code, the standard source file format that holds the intellectual property, can easily be encrypted. It is thus protected as it is shifted from design studio to print house, thwarting any eavesdropping on the code. But energy is "converted from one form to another," it's not "consumed," says Al Faruque. "Electromagnetic to kinetic..." he says of the energy conversion. "Some forms of energy are translated in meaningful and useful ways, others become emissions, which may unintentionally disclose secret information."

Those "emissions" from the print head nozzle as it extrudes plastic along the X, Y and Z axes, along with the motors pushing the raw material in some cases, can all be captured. You don't actually need to hack into the G-code controlling the print head to perform industrial espionage and decipher corporate design secrets, the scientists think.

Sounds have been used to hack before. Phreaking is a classic hack that dates back to the 1950s. 'Phone Phreaks,' as they were called, developed ways to listen in on Touch Tones, the audible tones used to route telephone calls. The sub-culture used those tones to explore the network and make free calls.

Sound signatures of the kind Al Faruque talks about can also be used in other industrial scenarios. They can be used to detect when equipment is failing, since the machines emit a specific sound. And new developments in sound fingerprinting might be used in the future to detect hacks on industrial controls and in IoT: the sound made by the physical movement of a control turning a valve, say, can be used to identify a spoof, because the bogus control doesn't make the right sound.

Al Faruque says there's no fix for the 3D-printer hack. "In many manufacturing plants, people who work on a shift basis don't get monitored for their smartphones," Al Faruque said. That should be curtailed, he thinks.
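To make the acoustic side-channel idea more concrete, here is a minimal sketch of the first step such an attack would need: turning a microphone recording into a time-frequency representation from which nozzle movements might be correlated. This is an illustration, not the UC Irvine team's actual pipeline, and the WAV filename is a placeholder.

```python
# Illustrative first step of an acoustic side-channel analysis: compute a
# spectrogram of a recording made near a 3D printer. This is NOT the
# researchers' method, just a sketch; "printer_recording.wav" is a placeholder.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("printer_recording.wav")
if samples.ndim > 1:                      # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)

# Stepper motors driving each axis emit tones whose frequency tracks their
# speed, so the dominant frequency over time is a crude proxy for head motion.
dominant = freqs[np.argmax(power, axis=0)]
for t, f in zip(times[:10], dominant[:10]):
    print(f"t={t:6.2f} s  dominant tone ~ {f:7.1f} Hz")
```

Recovering actual G-code would require far more, for example a model trained to map these acoustic features back to axis speeds and directions, which is precisely the kind of reconstruction the researchers warn about.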
<urn:uuid:56ba115b-bec1-4191-80dd-1280291c4e03>
CC-MAIN-2017-04
http://www.networkworld.com/article/3041436/security/3d-printers-wide-open-to-hacking.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922414
806
2.984375
3
The Defense Advanced Research Projects Agency's experimental Hypersonic Technology Vehicle (HTV-2) lost significant portions of its outer skin and became uncontrollable after three minutes of sustained Mach 20 speed last August. That was the conclusion of an independent engineering review board (ERB) investigating the cause of what DARPA calls a "flight anomaly" in the second test flight of the HTV-2. From the ERB report: The flight successfully demonstrated stable aerodynamically-controlled flight at speeds up to Mach 20 (twenty times the speed of sound) for nearly three minutes. Approximately nine minutes into the test flight, the vehicle experienced a series of shocks culminating in an anomaly, which prompted the autonomous flight safety system to use the vehicle's aerodynamic systems to make a controlled descent and splashdown into the ocean. Based on state-of-the-art models, ground testing of high-temperature materials and understanding of thermal effects in other more well-known flight regimes, a gradual wearing away of the vehicle's skin as it reached stress tolerance limits was expected. However, larger than anticipated portions of the vehicle's skin peeled from the aerostructure. The resulting gaps created strong, impulsive shock waves around the vehicle as it travelled nearly 13,000 miles per hour, causing the vehicle to roll abruptly. Based on knowledge gained from the first flight in 2010 and incorporated into the second flight, the vehicle's aerodynamic stability allowed it to right itself successfully after several shockwave-induced rolls. Eventually, however, the severity of the continued disturbances finally exceeded the vehicle's ability to recover. "The initial shockwave disturbances experienced during second flight, from which the vehicle was able to recover and continue controlled flight, exceeded by more than 100 times what the vehicle was designed to withstand," said DARPA Acting Director Kaigham Gabriel in a statement. "That's a major validation that we're advancing our understanding of aerodynamic control for hypersonic flight." Prior to the Aug. 11, 2011 flight, DARPA said its technical team completed the most sophisticated simulations and extensive wind tunnel tests possible. But these ground tests had not yielded the necessary knowledge. "Filling the gaps in our understanding of hypersonic flight in this demanding regime requires that we be willing to fly. In the HTV-2's first test in April 2010, we obtained four times the amount of data previously available at these speeds. Today more than 20 air, land, sea and space data collection systems were operational. We'll learn. We'll try again. That's what it takes," said then DARPA director Regina Dugan. The HTV-2 could fly anywhere in the world in less than 60 minutes. This capability requires an aircraft that can fly at 13,000 mph while experiencing temperatures in excess of 3,500°F. With that information as a backdrop, DARPA describes the Falcon as a "data truck" with numerous sensors that collect data in an uncertain operating envelope. For its second test flight, engineers adjusted the HTV-2's center of gravity, decreased the angle of attack flown, and used the onboard reaction control system to augment the vehicle flaps to maintain stability during flight operations, the agency stated.
The first flight of the Falcon in 2010 "collected data that demonstrated advances in high lift-to-drag aerodynamics; high temperature materials; thermal protection systems; autonomous flight safety systems; and advanced guidance, navigation, and control for long-duration hypersonic flight." Moving forward, DARPA said the HTV-2 program will incorporate the new knowledge gained to improve estimates of thermal uncertainties and heat-stress allowances for the vehicle's outer shell. The remediation phase will involve further analysis and ground testing, using flight data to validate new tools for this type of flying.
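As a rough cross-check of the figures quoted above (Mach 20 and roughly 13,000 mph), the short calculation below uses an assumed speed of sound at high altitude; the temperature value is an illustrative assumption, not a DARPA figure.

```python
# Sanity-check the Mach 20 / ~13,000 mph figures. Assumes the vehicle is in the
# upper atmosphere, where the speed of sound is lower than at sea level; the
# temperature below is an illustrative assumption, not a DARPA number.
import math

gamma, R = 1.4, 287.0        # ratio of specific heats, gas constant (J/kg/K) for air
T = 240.0                    # assumed ambient temperature at altitude, in kelvin

a = math.sqrt(gamma * R * T)  # local speed of sound, m/s (~310 m/s here)
v = 20 * a                    # Mach 20 in m/s
print(f"Mach 20 ~ {v * 2.23694:,.0f} mph")   # ~13,900 mph under these assumptions
```

Under these assumptions the result lands close to the "nearly 13,000 miles per hour" quoted in the ERB report; the exact figure depends on the local speed of sound along the flight path.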
<urn:uuid:35ca382f-bfea-4c28-b4a9-fc17aefd800b>
CC-MAIN-2017-04
http://www.networkworld.com/article/2222207/security/hypersonic-test-aircraft-peeled-apart-after-3-minutes-of-sustained-mach-20-speed.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00325-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9436
780
2.6875
3
The Evolution of Backup

Backup has been through many iterations, and its history stretches back millennia.

3000 BC: Clay tablets, used by Sumerians, record commercial transactions. Backup consisted of duplicating the tablets.
2500 BC: Papyrus starts being used in Egypt. Manually copying a papyrus is the only form of safeguarding it.
197 BC: Vellum, or parchment, made from animal skin, eventually replaces papyrus. Backup consisted of manually making copies and sealing them in a bronze box for archiving.
105 AD: Paper is invented. Bookkeeping, financial transactions, and other important business records are kept on paper and stored in safes for protection.
1800s: Typewriters are invented, gaining momentum in the 1860s. Carbon paper is used to create copies of typed documents, to be stored in filing cabinets.
1890s: Herman Hollerith invents the punch card, so that data can be recorded in a medium that can also be read by a machine. Copies of punch cards have to be made for backup.
1900 to 1950s: Punch cards are the primary method for data entry, data storage, and data processing. Cards have to be handled with care and stored accordingly.
1960s: Magnetic tapes start replacing punch cards for data storage. The first backup strategies start to appear and backup tapes gain momentum.
1969: The first floppy disk is introduced.
1970s: Tape cartridges and cassette tapes start being widely used, especially in home computers, as a low-cost data storage system.
1980s: CDs conquer the market for software distribution, slowly replacing floppy disks.
1982: Maynard Electronics is founded, and Archive, a backup software product later renamed Backup Exec, is created.
1987: DAT (digital audio tape) hits the market; instead of being used by the music industry as first intended, it gains interest from the corporate world for data storage and backup.
Early 1990s: CD drives and the declining cost of rewritable CDs (CD-RW) help boost the media as a viable alternative for file backups. Later replaced by DVDs.
1994: The IOMEGA Zip Drive, a removable disk storage system with 100 MB and 250 MB capacities, is released, becoming very popular. A year later the IOMEGA Jaz hits the market with 1 GB capacity.
Late 1990s: SANs (storage area networks) gain traction in the corporate world. VTLs (virtual tape libraries) start being used to replace tape libraries.
2000s: The USB flash drive enters the market. Decreasing prices and increasing storage capacity kill IOMEGA's removable disks.
Mid-2000s: Thanks to faster USB and FireWire ports, external hard drives start gaining market share and being used for backups, especially for personal computers.
Late 2000s: NAS (network attached storage) climbs in reputation in the corporate world. Tapes continue to be replaced by disk-to-disk backup strategies.
2005: VTL gains wider adoption, replacing tape libraries in an increasing number of companies. Symantec acquires VERITAS and, with it, Backup Exec.
2006: Amazon launches EC2, the popular cloud computing platform, and S3 (Simple Storage Service) for cloud storage.
2007: Dropbox is founded, making cheap cloud storage available to the masses. Axcient is founded as a new type of cloud platform for data protection and recovery, focusing on the corporate market.
2009: SSD (solid-state drive) shipments exceed 11 million units, showing the technology is gaining traction and wider adoption.
2013: Symantec exits the cloud space, shutting down Backup Exec.cloud.
Gartner coins the term “Recovery-as-a-Service” based on the popularity of cloud-based disaster recovery solutions like Axcient and estimates a projected compound annual growth rate (CAGR) of at least 21% during the next three years.
2015: Gartner releases the first Magic Quadrant for Disaster Recovery-as-a-Service.
2015: IDC publishes the first MarketScape report for DRaaS. DRaaS gains momentum as the next step in the evolution of data protection and recovery.
<urn:uuid:d6d79e71-6b40-4f4b-a00a-6cab2b7fcee5>
CC-MAIN-2017-04
https://axcient.com/the-evolution-of-backup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00141-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922543
891
3.359375
3
More than 30 years ago, Vint Cerf and colleague Robert Kahn - performing research sponsored by the Defense Advanced Research Projects Agency - created the core standards that allow computers across the globe to link together. The two men developed the transmission control protocol/Internet protocol (TCP/IP) suite, a stack of networking protocols that forms the Internet's foundation. Ultimately their work revolutionized how citizens, businesses and governments use and share information. Today Cerf is vice president and chief Internet evangelist of Google, where part of his job is to identify new Internet applications and technologies. In addition, Cerf is chairman of the Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit organization that coordinates Internet domain names and IP addresses globally. Cerf spoke with Government Technology at Google's Washington, D.C., offices. During the hour-long conversation, Cerf discussed numerous issues that will shape the Internet's future, including Net neutrality, municipal wireless projects and mobile connectivity.
<urn:uuid:883b20a9-a767-41f5-b616-8df3f801a711>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Cerf-on-the-Net.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904322
205
2.59375
3
[Image: A black version of the emblem used by iOS app developers to indicate that something is available for download from the App Store; it shows an iPhone and reads "Available on the App Store". Photo credit: Wikipedia]

A security flaw existed in the Apple iOS App Store for years, allowing attackers to steal passwords and install unwanted or expensive applications. The flaw was found by Google developer Elie Bursztein, who helped Apple fix it in their application store. The flaw allowed attackers to hijack the connection, because Apple neglected to use encryption when an iPhone or any other mobile device tried to connect to the App Store. Bursztein also said in his blog that after he alerted Apple, the company turned on HTTPS for the App Store.

What is the process behind this flaw? In short, an attacker only needs to be on the same network as the victim; from there, the attacker can intercept the communications and insert his own commands.

What more can an attacker do?
- Steal passwords
- Force a purchase by swapping the app the buyer actually intended to get with a different one, or by showing fake app updates
- Prevent the victim from installing an app
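The defense Apple eventually applied is simply to use TLS (HTTPS), so that traffic on a shared network cannot be read or rewritten in transit. As a minimal illustration (not Apple's or Bursztein's code; the hostname is just an example), the sketch below shows how a client can insist on a verified TLS connection before sending anything sensitive.

```python
# Minimal illustration of why HTTPS matters here: the client refuses to talk
# unless the server presents a certificate that verifies, so an attacker on
# the same Wi-Fi cannot silently swap responses. Hostname is a placeholder.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()        # verifies certificates and hostnames

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
```

Over plain HTTP, by contrast, anything on the path, such as a rogue access point, can read passwords and rewrite app listings, which is exactly the class of attack described above.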
<urn:uuid:6bea8348-e5bd-4569-b877-6e6a3a88124f>
CC-MAIN-2017-04
http://www.hackersnewsbulletin.com/2013/03/an-attacker-can-steal-your-passwords-while-you-are-using-apple-app-store.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955849
282
2.578125
3
As a new year begins, it is customary to make resolutions for ourselves, our families and, if we're educators, our classrooms. Among the most important of goals is to keep children safe, and one of the greatest areas of possible risk to their safety is the Internet. The best preventative measure against Internet danger is continued Internet safety education, so we thought of a few resolutions for your school's Internet usage.

1. Resolve to be personally safe online.
As educators, being an example to young people is par for the course. But before you can talk to students about how to be safe online, you need to make sure you practice what you preach. Take some time to look at your social media habits, and resolve to keep yourself in check. Download the Educator's Guide to Online Communication Tools by NetSmartz here.

2. Resolve to teach the importance of cybersecurity.
We hear about cybersecurity on the news all the time, especially during the holidays with online shopping at a high point. Students may think they know how to be secure online, but it is most likely not at the front of their minds, and they may fall victim to identity theft regardless of what they think they know. To keep kids safe, resolve to teach them about the importance of creating safer passwords, not sharing those passwords with others and how to protect their online identity. Teach students about the security tools that are available on most computers to further protect themselves, their personal information, and their computers from viruses, spyware and spam. For grade-appropriate lesson plans and class activities on cybersecurity, see StaySafeOnline.org's resources here. Find the Nearpod + Common Sense Education Digital Citizenship Curriculum here.

3. Resolve to have the hard conversations and provide resources to students.
The Internet is a scary place if a young person doesn't have guidance. The headlines show terrible things happening to kids every day – sexting scandals, bullying, grooming and threats of violence, radicalization and terrorism. Although students may know that these things happen, they may not feel like it could happen to them because they aren't hearing about it from trusted sources. As they say in sports, the best defense is a good offense (or vice versa). Students trust their teachers and educational staff; that is why it is important to have the difficult conversations surrounding the dangers that lurk on the Internet and what to do about them. If a topic comes up during class, take the time to talk it out, and then provide resources that students can use for further education or to help them report issues. Assure students that you are there for them should they have any concerns. Here are some online resources on Internet safety for students that we think are helpful:

4. Resolve to have a safety pledge.
Once you've taught students about cybersecurity and digital citizenship, get your students to pledge that they will continue to be safe online throughout the year and their lives. If they've made a promise to you and to themselves, they are more likely to remember what they've been taught. Additionally, a pledge of rules can be referenced if there is any question as to what online activities are appropriate. A pledge should contain the rules of Internet safety and could include real-world safety standards, too. You and your students can create a pledge together, or instead of "reinventing the wheel", use a pre-written document found online.
Find NetSmartz's age-appropriate Internet safety pledges here.

5. Resolve to keep parents involved.
Parents are deeply concerned about the Internet safety of their children. But with everything going on in a parent's world, it can help to send reminders of what to look for and how to keep communicating with their kiddo. Resolve to send a friendly email about what is being done in the classroom to keep students safe online and provide some resources for parents to mirror those concepts at home. The safety pledge that the students have signed should be sent home, too. Here are some helpful websites that can be referenced when communicating with parents about online safety:

6. Resolve to examine and update school policies on Internet safety.
Whether you are a teacher, a school administrator or a school IT professional, you need to know what the policies are for Internet safety in your district. Do you know the laws of your state? Are they reflected in your Acceptable Use Policy? Is your technology policy updated to include social media safety, app use and the monitoring of student online activities? No matter what role you play in your school, it is worth taking the time to examine the technology policies in place and bring up the need to update them if necessary. It is always a good time to make sure we are keeping kids safe.

From all of us at Impero Software to all of you, have a wonderful and safe start to your new year!

Impero Education Pro software provides schools with the ability to proactively monitor the online activities of digital devices while they are being used in classrooms. To find out more about this solution, go to the product features page here. Impero offers free trial product downloads, webinars, and consultations. Call us at 877.883.4370 or email us at email@example.com today for more information.
<urn:uuid:f7214cd9-eb8a-49ff-a959-b6f3881d362d>
CC-MAIN-2017-04
https://www.imperosoftware.com/6-resolutions-for-better-internet-safety-in-schools/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955875
1,111
3.21875
3
Dietary behaviour of children attending primary school in Italy as found by the surveillance system OKkio alla SALUTE [I comportamenti alimentari dei bambini della scuola primaria in Italia fotografati dal sistema di sorveglianza nazionale OKkio alla SALUTE]
Nardone P., Centro Nazionale di Epidemiologia, Sorveglianza e Promozione della Salute | Lauria L., Centro Nazionale di Epidemiologia, Sorveglianza e Promozione della Salute | Buoncristiano M., Centro Nazionale di Epidemiologia, Sorveglianza e Promozione della Salute | Pizzi E., Centro Nazionale di Epidemiologia, Sorveglianza e Promozione della Salute | And 2 more authors.
Epidemiologia e Prevenzione | Year: 2015

OBJECTIVES: To describe the dietary behaviour of children attending primary school and the school activities which promote healthy dietary habits.
DESIGN OF THE STUDY: Surveillance system with biannual prevalence studies.
SETTING AND PARTICIPANTS: The fourth round of data collection of the surveillance system OKkio alla SALUTE took place in 2014, promoted and financed by the Ministry of Health and coordinated by the National Institute of Health in collaboration with all regions. 2,408 schools, 48,426 children and 50,638 parents participated. Stratified cluster sampling (with third-grade classes as units) was used; information was collected using questionnaires completed by children, parents, teachers and head teachers.
OUTCOME MEASURES: Consumption of breakfast, mid-morning snack, fruit and vegetables, and sweetened and carbonated drinks; school initiatives to promote healthy dietary habits.
RESULTS: 31% of children have an adequate breakfast and 8% skip this meal; 52% consume an energy-dense mid-morning snack; 25% do not eat fruit and vegetables daily; 41% drink sweetened/carbonated beverages daily. Unhealthy dietary habits are more common among children who have less educated parents or live in the South (the more deprived area of the country). Data show an improvement over the period 2008-2014, except in the consumption of fruit and vegetables. 74% of schools include nutritional education in the curriculum, 66% have started initiatives promoting healthy dietary habits and 55% distribute healthy food; 35% involve parents in their initiatives. In the schools of the South, nutritional education and involvement of parents are more frequent, while the distribution of healthy food and the presence of refectories are less common.
CONCLUSIONS: The high frequency of unhealthy dietary behaviours and their geographic and social inequalities show that there is great potential for improvement. Schools are very involved in promotion initiatives, but they need more support from the institutions and involvement of the families.
<urn:uuid:9ce629df-adc4-406b-b3f1-8bc180c1484e>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/centro-nazionale-of-epidemiologia-sorveglianza-e-pomozione-della-salute-2116100/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89019
613
2.8125
3