In 2002, the U.S. Department of Defense (DOD) asked the University of Tennessee at Chattanooga to undertake a project that involved releasing a chemical or biological agent and determining whether there was a way to detect where the agent was going. The DOD soon got its answer and went its own way, but the university continued the project as an academic research topic for graduate students. That is changing with a current pilot project that university researchers hope will develop, by the end of the year, into a real-time decision support tool. Not only would the tool identify a chemical or biological agent, it would also notify officials of the agent's location, nature and direction of travel.

The tool — with sensors to detect and report the substance, detailed GIS information and an intelligent traffic system involving 700 traffic monitors — would give emergency management and first responder personnel the critical information they would need to evacuate residents, redirect traffic and otherwise mitigate the presence of the agent.

Henry McDonald of the University of Tennessee, Chattanooga and SimCenter Enterprises is piloting a disaster mitigation system. Photo courtesy of SimCenter Enterprises Inc. and the University of Tennessee at Chattanooga.

During the pilot, researchers are running computerized scenarios that include the release of a chemical or biological agent, with the goal of getting emergency responders the information they need to respond in real time. During a real incident, the sensors would immediately identify the agent and report it to an emergency operations center, where the data would be meshed with weather and traffic conditions to support decision-making. There is still considerable work to be done to reach the deployment stage, including working out how to update information in the system in real time during an incident, according to Henry McDonald, chair of excellence in computational engineering at the university.
“To take it from a planning tool to a real-time decision support tool requires a lot of work, and we are embarking on that now,” McDonald said. The key, he said, is to figure out how to update the system in real time, “because nothing is going to go according to plan during an emergency.” McDonald said Chattanooga is ideal for such a system because of its extensive fiber network; more than 100,000 homes are wired with smart electric meters.

After the pilot, the National Science Foundation will decide whether to continue supporting the project. The foundation, along with U.S. Ignite and other local philanthropic organizations, has pumped about $1.5 million into the project. The university could also offer the system to metropolitan communities, installing it on their hardware and maintaining it.
Note: This book does not provide details about SQL syntax, error messages returned, or any use of SQL outside of the COBOL environment. For details of these, refer to the documentation supplied by your database vendor.

Server Express includes a number of SQL preprocessors (OpenESQL, the DB2 ECM and COBSQL) which enable you to access relational databases by embedding SQL statements within your COBOL program.

OpenESQL is provided on UNIX platforms for which Open Database Connectivity (ODBC) drivers are available. OpenESQL is an integrated preprocessor that enables you to use Embedded SQL in your COBOL applications to access ODBC-enabled data sources. Unlike separate preprocessors, OpenESQL is controlled by specifying the SQL directive when you compile your application. A set of high-performance ODBC drivers is provided for the major relational databases, including Oracle, Sybase and Informix. OpenESQL in Server Express is compatible with applications developed using Micro Focus Net Express, which means that applications developed using the OpenESQL Assistant in Net Express can be easily ported to UNIX platforms. Use OpenESQL if your application is designed to use different relational database systems or if you are unsure which relational database system you may be using in the future. If you are using an Oracle database and compile with the directive TARGETDB set to ORACLEOCI, then at run time the application will make OCI calls rather than ODBC calls. This means that the deployed application will not require an ODBC driver, which has potential cost and performance benefits. For more details, see Oracle OCI Support in the chapter OpenESQL.

The DB2 External Checker Module (ECM) is a new type of integrated preprocessor provided with Server Express and designed to work closely with the Micro Focus COBOL Compiler. The DB2 ECM converts embedded SQL statements into the appropriate calls to DB2 database services.
It is intended for use with:

COBSQL is an integrated preprocessor designed to work with COBOL precompilers supplied by relational database vendors. It is intended for use with:

You should use COBSQL if you are already using either of these precompilers with an earlier version of a Micro Focus COBOL product and want to migrate your application to Server Express. For any other type of embedded SQL development, we recommend that you use OpenESQL.

Each of the preprocessors works by taking the SQL statements that you have embedded in your COBOL program and converting them into the appropriate function calls to the database. Within your COBOL program, each embedded SQL statement must be preceded by the introductory keywords EXEC SQL and followed by the keyword END-EXEC. For example:

    EXEC SQL
        SELECT au_lname INTO :lastname
        FROM authors
        WHERE au_id = '124-59-3864'
    END-EXEC

The embedded SQL statement can be broken over as many lines as necessary, following the normal COBOL rules for continuation. Between the EXEC SQL and END-EXEC keywords you can code only an embedded SQL statement; you cannot include any ordinary COBOL code. You can use any SQL statement between the EXEC SQL and END-EXEC keywords. A description of standard SQL is beyond the scope of this book; refer to the reference manuals supplied with your relational database, or to one of the many reference books on SQL that are available. However, Embedded SQL provides some extensions to standard SQL that either change the behavior of standard SQL statements or add new functionality. These extensions are summarized in the table below. A full description of the syntax for each of these statements, together with examples of its use, is given in the appendix Embedded SQL Statements.
BEGIN DECLARE SECTION: Marks the beginning of a host variable declaration section
BEGIN TRANSACTION (3): Opens a transaction in AUTOCOMMIT mode
CALL (3): Executes a stored procedure
CLOSE: Ends row-at-a-time data retrieval initiated by the OPEN statement
COMMIT: Commits a transaction
COMMIT WORK RELEASE (4): Commits a transaction and disconnects from the database
CONNECT: Connects to a database
DECLARE CURSOR: Defines a cursor for row-at-a-time data retrieval
DECLARE DATABASE: Identifies a database
DELETE (POSITIONED) (1): Removes the row where the cursor is currently positioned
DELETE (SEARCHED): Removes table rows that meet the search criteria
DESCRIBE: Populates an SQLDA data structure
DISCONNECT (2): Closes connections to one or all databases
END DECLARE SECTION: Marks the end of a host variable declaration section
EXECSP (3): Executes a stored procedure
EXECUTE: Runs a prepared SQL statement
EXECUTE IMMEDIATE: Runs the SQL statement contained in the specified host variable
FETCH: For a specified cursor, gets the next row from the results set
INCLUDE: Defines a specific SQL data structure for use by an application
INSERT: Adds data to a table or view
OPEN: Begins row-at-a-time data retrieval for a specified cursor
PREPARE: Associates an SQL statement with a name
QUERY ODBC (3): Queries the ODBC data dictionary
ROLLBACK: Rolls back the current transaction
ROLLBACK WORK RELEASE (4): Rolls back a transaction and disconnects from the database
SELECT DISTINCT: Associates a cursor name with an SQL statement
SELECT INTO (1): Retrieves one row of results (also known as a singleton select)
SET AUTOCOMMIT (3): Controls AUTOCOMMIT mode
SET CONCURRENCY (3): Sets the concurrency option for standard-mode cursors
SET CONNECTION (3): Specifies which database connection to use for subsequent SQL statements
SET OPTION (3): Assigns values for query-processing options
SET SCROLLOPTION (3): Sets the scrolling technique and row membership for standard-mode cursors
SET TRANSACTION ISOLATION (3): Sets the transaction isolation level mode for a connection
UPDATE (POSITIONED) (1): Changes data in the row where the cursor is currently positioned
UPDATE (SEARCHED): Changes data in existing rows, either by adding new data or by modifying existing data
WHENEVER: Specifies the default action (CONTINUE, GOTO or PERFORM) to be taken after an SQL statement is run

In the table above:

The case of embedded SQL keywords in your programs is ignored; for example,

    EXEC SQL CONNECT
    exec sql connect
    Exec Sql Connect

are all equivalent. The case of cursor names, statement names and connection names must match that used when the variable is declared. For example, if you declare a cursor as C1, you must always refer to it as C1 (and not as c1). The settings for the particular database determine whether other words, such as table and column names, are case-sensitive. Hyphens are not permitted in SQL identifiers (in table and column names, for example).

Once you have written your COBOL application containing embedded SQL, you must compile it with the appropriate Compiler directive so that the preprocessor converts the embedded SQL statements into function calls to the database:

OpenESQL: specify the SQL Compiler directive. For details, see the chapter OpenESQL.
DB2 ECM: specify the DB2 Compiler directive. For details, see the chapter DB2.
COBSQL: specify the PREPROCESS"COBSQL" Compiler directive. For details, see the chapter COBSQL.

Multiple embedded SQL source files, compiled separately and linked into a single executable file, can share the same database connection at run time. This is also true for programs that are compiled into separate callable shared objects. If subsequent program modules (in the same process) do not process a CONNECT statement, they share the database connection of the module that included the CONNECT statement.
In a program that includes multiple, separately compiled modules, only one module should be compiled with the INIT option of the SQL Compiler directive. All other modules within the program should share that first automatic connection or make explicit connections using the CONNECT statement. This applies to OpenESQL and the DB2 ECM. With COBSQL, if the INIT directive is specified more than once, second and subsequent uses are ignored.

Copyright © 2000 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
Word of Mouth: Full Employment

Federal Reserve Board Governor Susan Bies told an audience at Duke University's Fuqua School of Business this morning that the U.S. economy is "basically running at full employment." Does this mean that the unemployment rate fell to zero percent over the long holiday weekend? Sadly, no. But that doesn't mean full employment isn't a mixed blessing of its own.

In simplest terms, full employment is when everyone who is willing to work at the going wage for their type of labor is employed. It can be a different percentage in different countries at different times, but it is always a number greater than zero percent. The OECD (Organization for Economic Cooperation and Development) estimated in 1999 that the "full-employment unemployment rate" in the U.S. was about 4 to 6.4 percent, which is why, with the current unemployment rate at 4.6 percent, Bies called it as she did.

In 1944, President Franklin D. Roosevelt proposed "full employment" for Americans, declaring that having a job was a "basic human right." Yet Congress has stopped short of legislating full employment many times in the years since, for the simple reason that if the private sector came up short of good jobs for those who wanted them, the U.S. government would be called in to fill the gap.

So why isn't this fantastic news all around? Policy makers associate low unemployment with high wages: if talent is hard to come by, workers can drive up their worth, employers respond by raising prices to cover labor costs, and an inflationary spiral is set in motion that is considered more damaging than a rising unemployment rate. But that doesn't mean it isn't a good time to be a U.S. worker, and an even better time to ask your bosses to show you the money.
As the number of newly discovered planets and systems outside our solar system grows, NASA is assembling a virtual team of scientific experts to search for signs of life. The program, the Nexus for Exoplanet System Science (NExSS), will draw on the collective expertise of each of NASA's science communities, including Earth science, planetary science, heliophysics and astrophysics, along with key universities, to better analyze all manner of exoplanets, as well as how a planet's host star and neighboring planets interact to support life, the space agency stated.

The need is obvious: since the launch of NASA's Kepler space telescope six years ago, more than 1,800 exoplanets have been confirmed. Thousands more exoplanet candidates are waiting for confirmation, the space agency said.

“With our current technologies, we have primarily measured the physical and astronomical properties of exoplanets -- such as their masses or sizes, and their orbital properties,” said Natalie Batalha, NASA’s Kepler mission scientist and co-director of NExSS at NASA, in a statement.

Scientists are developing ways to confirm the habitability of these worlds and to search for biosignatures, or signs of life, NASA said. Key to this effort is understanding how biology interacts with the atmosphere, geology, oceans and interior of a planet, and how these interactions are affected by the host star. This “system science” approach will help scientists better understand how to look for life on exoplanets.

“In the field of exoplanets, finding exoplanets that could host life is no longer the goal. The quest is to find the signatures of life,” said Steve Desch, an associate professor in the Arizona State School of Earth and Space Exploration, in a statement. “To do that we need to know for which types of exoplanets are oxygen and methane biosignatures, as opposed to natural geochemical outcomes.”
The team will help classify the diversity of worlds being discovered, understand the potential habitability of these worlds, and develop the tools and technologies needed in the search for life in space. NExSS will be led by scientists from the NASA Ames Research Center, the NASA Exoplanet Science Institute at the California Institute of Technology, and the NASA Goddard Institute for Space Studies. Other members include Yale, Penn State, Berkeley/Stanford, Arizona State and the University of Washington.
TCP and Transport Layer Protocols (e)

Format: Flash
Course Length: 1 hour of eLearning

NOTE: While you can purchase this course on any device, currently you can only run the course on your desktop or laptop.

As the communications industry transitions to wireless and wireline converged networks that support voice, video, data and mobile services over IP networks, a solid understanding of IP and its role in networking is essential. IP is to data transfer what a dial tone is to a wireline telephone. A fundamental knowledge of IPv4 and IPv6 networking, along with the use of IP-based transport protocols, is a must for all telecom professionals; a solid foundation in IP has become a basic job requirement in the carrier world. Understanding TCP and other IP-based transport layer protocols is an important part of building this foundation.

Starting with a basic definition, the course provides a focused, basic-level introduction to the fundamentals of IP-based transport layer protocols such as TCP, UDP and SCTP. It is a modular introductory course on IP basics, part of the overall eLearning IP fundamentals curriculum, and includes a pre-test and a post-test. This course is intended for those seeking a basic-level introduction to the IP-based transport layer protocols TCP, UDP and SCTP.

After completing this course, the student will be able to:
• Explain the key transport layer functions and the concept of ports
• Describe User Datagram Protocol (UDP) and Transmission Control Protocol (TCP)
• Explain how TCP provides reliable communication over IP and achieves optimal transmission
• Define the special requirements for carrying telecom signaling over IP networks
• List the key functions of Stream Control Transmission Protocol (SCTP)

Course outline:
1. Overview of the Transport Layer
2. User Datagram Protocol
3. Transmission Control Protocol
4. Stream Control Transmission Protocol
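To make the "concept of ports" concrete, here is a short Python sketch (an illustration of our own, not part of the course material) that passes a single datagram between two UDP sockets on the loopback interface. The bound port number is what lets the transport layer deliver the datagram to the right endpoint on the host:

```python
import socket

# A receiving UDP socket bound to an OS-chosen port on loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0 asks the OS for any free port
port = receiver.getsockname()[1]

# UDP needs no handshake: a sender fires a datagram at the port directly,
# with no delivery or ordering guarantees (those are TCP's job).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

The absence of any connection setup before `sendto` is exactly the UDP/TCP contrast the course objectives describe.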
ALOHAnet: The World’s First Wireless LAN

Believe it or not, there was a time when communication between computers was done point-to-point, necessarily through a wired connection, by manual control. True, this would have been back around the time Richard Nixon appeared on “Laugh-In,” but it did happen. What changed this — or, at least, one of the developments that facilitated its change — occurred in the southernmost state in the United States: Hawaii.

In 1970, faculty at the University of Hawaii wanted to create a network to link computers at campuses that were geographically far removed from one another. So they developed ALOHAnet, a computer networking system that used radio transmitters as ports passing data from machine to machine. As originally designed, ALOHAnet used two different radio frequencies to transmit data, with a hub machine sending packets of data on the “outbound” channel and client machines sending data back to the hub on the “inbound” channel. Data received was immediately echoed back, allowing client machines to know whether the data had been received. A machine receiving back corrupted data would wait and resend the packet. Data would be corrupted when it “collided” with other data, that is to say, when two client machines had attempted to send data at the same time.

ALOHAnet’s main challenge was managing these collisions. Under its original configuration, the network had a throughput rate of just 18 percent, with the vast majority of the available bandwidth wasted. The first attempt at a solution was to assign time slots to all the computers in the network, during which each was allowed to send data packets and the others were not. The flaw here was that if a certain machine on the network had nothing to send during its slot, that time was wasted. Nevertheless, this did double the throughput rate. With the second attempt at a solution to this problem, the developers of ALOHAnet really hit on something.
The idea was to have client machines on the network “listen” in on the channel to determine whether it was in use, and if it wasn’t, begin sending data packets. To avoid one client machine getting on the frequency and staying on it too long, thereby blocking other client machines trying to send packets, the data was broken into small packets so that all machines on the network could share the channel continually. The idea became known as carrier sense multiple access (CSMA), an innovation the developers of ALOHAnet then improved by having client machines also listen to see whether their sent packet made it back to the central hub machine on the network. This addressed problems that would arise if two client machines attempted to send a data packet at the same time, which could happen even when both machines had listened to see whether the channel was open. This idea became known as collision detection (CD). Put it all together, and you have carrier sense multiple access with collision detection (CSMA-CD).

ALOHAnet is no longer in use, but in fostering the development of CSMA-CD it provided a sophisticated network control protocol that proved to be a huge step forward in the development of Ethernet. And it prefigured many of the design characteristics of modern wireless LANs.
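The throughput numbers above — roughly 18 percent for the original scheme, doubled by time slots — agree with the classic ALOHA analysis. The short Python sketch below evaluates the standard textbook formulas (the formulas are well-known results, not something taken from this article): with offered load G, pure ALOHA succeeds with probability e^(-2G) and slotted ALOHA with e^(-G), because slotting halves the window in which a collision can occur.

```python
import math

def pure_aloha_throughput(g):
    # S = G * e^(-2G): a frame survives only if no other frame begins
    # within one frame-time before or after it (a two-frame vulnerable window)
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    # S = G * e^(-G): fixed slots halve the vulnerable window,
    # doubling the achievable peak throughput
    return g * math.exp(-g)

peak_pure = pure_aloha_throughput(0.5)      # maximized at G = 0.5 -> 1/(2e), about 0.18
peak_slotted = slotted_aloha_throughput(1)  # maximized at G = 1   -> 1/e,   about 0.37
```

The peaks fall at 1/(2e) ≈ 18.4% and 1/e ≈ 36.8%, matching both the "just 18 percent" figure and the doubling that slotting produced.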
The OSI protocol suite is a powerful set of protocols that enables the efficient exchange of data, and it has been widely deployed. Using a common set of protocols for the interoperability of systems is essential for successfully networking multiple computers. Computer experts developed the OSI protocols in 1984; they have since been largely superseded by the famous TCP/IP. While TCP/IP runs on five layers, the OSI model manages the exchange of data using seven distinct layers: the Session and Presentation layers do not exist in TCP/IP, where the Application layer relies directly on the TCP service. The OSI protocols attracted users with both their performance and their ease of use, and they are effective at limiting transmission errors as data transfers from machine to machine.

Marben, specialist in solutions based on the OSI protocols

Marben helps professionals who want to manage their networks as efficiently as possible, offering sophisticated solutions based on the OSI protocols along with other software for managing your network. For all your network management software needs, you can contact Marben. Our company has excelled for more than 25 years in the design of software solutions for these specific areas and is a preferred supplier of renowned firms such as Nokia, Ericsson, Ciena, Tellabs, Adtran and HP. Marben constantly ensures the quality of its products and services; its software solutions contribute to the success and performance of thousands of businesses worldwide.
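The seven-versus-five layer comparison can be written out explicitly. The Python sketch below follows the usual textbook correspondence (an illustration of our own, not drawn from Marben's documentation): Session and Presentation have no TCP/IP counterpart, so their duties fall to the Application layer.

```python
# OSI's seven layers, bottom to top, and the TCP/IP layer each maps onto.
OSI_TO_TCPIP = {
    "Physical":     "Physical",
    "Data Link":    "Data Link",
    "Network":      "Internet",
    "Transport":    "Transport",
    "Session":      "Application",   # folded into the Application layer
    "Presentation": "Application",   # folded into the Application layer
    "Application":  "Application",
}

osi_layer_count = len(OSI_TO_TCPIP)                   # seven OSI layers
tcpip_layer_count = len(set(OSI_TO_TCPIP.values()))   # five TCP/IP layers
```

The two counts, 7 and 5, are exactly the contrast the paragraph above describes.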
Data breaches have become an everyday occurrence, and numerous well-known organisations have been named and shamed, denting their reputations and wreaking financial damage. But any organisation, whatever its size or line of business, can be a target. Every organisation has some form of sensitive data, such as financial records, customer details and employee information, that is highly prized by criminals, and the vast majority of organisations rely on technology to run their business.

Technology, especially the use of disruptive technologies such as big data and cloud-based services, provides greater productivity, flexibility and improved information access. But it also increases the chances that sensitive information will be inappropriately accessed, lost or stolen. In addition, many regulations and industry standards require that stringent safeguards be applied to personal and sensitive data. Of these, the EU data protection rules affect many organisations, and they are set to get tougher, with higher sanctions available for non-compliance and a wider range of organisations affected than previously.

This document discusses the changes being made to the European data protection landscape and suggests that encryption should be the default choice for protecting data. However, this should be just one part of the overall data security strategy, which must be comprehensive and consistent.
Last post we looked at dial-peers and the syntax that is needed to address endpoints. In this segment I want to explore the concept of creating a destination-pattern that would be used on H.323 or SIP gateways to forward a call out of a POTS (plain old telephone service) port to the PSTN. To accomplish this, we will need to use wildcards to represent the numbers that can be reached. The possible wildcards are:

. — represents a single digit position; it matches 0 through 9 as well as # and *.
, — used for pausing before dialing additional digits.
[ ] — indicates a range of digits, of which exactly one is matched. For instance, [02-9] would match 0, 2, 3, 4, 5, 6, 7, 8 and 9; notice that 1 is not included in the range.
[^ ] — indicates an exclusion range of digits. For instance, [^2-7] would match 0, 1, 8 and 9, since those are not part of the exclusion range. Exclusions are rarely used in destination-patterns.
? — indicates that the preceding digit occurred zero or one time. Remember to enter ctrl-v before typing ? so the router does not treat it as a request for help.
% — indicates that the preceding digit occurred zero or more times. This functions the same as the “*” used in regular expressions.
+ — indicates that the preceding digit occurred one or more times.
T — indicates the interdigit timeout. The router pauses to collect additional dialed digits.

Now let’s see how we can build destination patterns with these wildcards to represent the NANP (North American Numbering Plan). First, we want to include all the service codes within the NANP: 211, 311, 411, 511, 611, 711, 811 and 911. The pattern on the gateway would be [2-9]11. For a seven-digit dialing sequence we would potentially include two patterns: [2-9][02-9]..... and [2-9].[02-9]....
The purpose behind this is to avoid overlapping the static emergency numbers that are set in the dial-plan, 911 and 9911. The second pattern would be used with a trunk access code of 9, if one is configured.

For 10-digit local dialing we would simply add the area code directly. For the San Diego area, the two patterns would look like 619[2-9]...... and 858[2-9]...... The reason for [2-9] at the beginning of the NANP prefix is the rule that an area code or prefix cannot begin with 0 or 1: 0 indicates an operator request, and 1 long-distance service. For long distance you would use 1[2-9]..[2-9]......, making sure you include enough . wildcards to build a total of 11 digits.

The last pattern is used to call countries with varying dial-plan lengths, such as those in Europe. It would look like 011T, in which 011 is the North American request for international access and the T keeps collecting digits until the interdigit timer takes effect, approximately 10 seconds on most gateways.

Next blog we will address incoming versus outgoing dial-peers and how they are used to bring up call legs.

Author: Joe Parlas
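Since these wildcards behave much like a restricted regular-expression dialect, the matching can be checked mechanically. Here is a small Python sketch (an illustration of our own, not Cisco IOS code; it covers only the . wildcard, bracket ranges and a trailing T, omitting the comma, ?, % and + modifiers) that translates a destination-pattern into a regex and tests dialed strings against it:

```python
import re

def cisco_to_regex(pattern):
    """Translate a (simplified) Cisco destination-pattern into a Python regex."""
    parts = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == '.':
            parts.append('[0-9#*]')        # any single digit, plus # and *
        elif c == '[':
            j = pattern.index(']', i)
            parts.append(pattern[i:j + 1]) # bracket ranges already use regex syntax
            i = j
        elif c == 'T':
            parts.append('[0-9#*]*')       # keep collecting until interdigit timeout
        else:
            parts.append(re.escape(c))     # literal digit
        i += 1
    return ''.join(parts)

def matches(pattern, dialed):
    return re.fullmatch(cisco_to_regex(pattern), dialed) is not None
```

With this, matches("[2-9]11", "911") is True, matches("[2-9]11", "111") is False, and matches("619[2-9]......", "6192345678") accepts a ten-digit San Diego number.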
Disease surveillance through a murky crystal ball

Critics say too many systems are collecting data on disease outbreaks with little coordination among them. About 300 systems at federal, state and local agencies monitor disease outbreaks and chemical exposure, and some critics say that multiplicity is a problem. Serena Vinter, a senior research associate at the Trust for America’s Health, a nonprofit public health organization, said having so many monitoring systems results in a murky rather than crystal-clear picture. “There are too many disease surveillance systems, and they do not necessarily communicate with each other,” Vinter said. “It is hard to get a good picture.”

Others say the organizations in charge of alerting the public are often slow to act. Veratect, a company that tracks disease outbreaks, said it had sounded the alarm about the swine flu outbreak in Mexico on March 30, but several weeks passed before the World Health Organization and the Centers for Disease Control and Prevention began widely publicizing the illness. Furthermore, much of the disease tracking for public health purposes occurs at the state and local levels, where officials are trying to upgrade their systems during a time of budget cuts for lab testing and epidemiologists.

CDC’s role has been to work with states to aggregate and analyze data via a number of reporting networks. For example, CDC conducts flu surveillance every year from October to May, gathering data from 150 laboratories, 3,000 outpatient care sites, and 56 state and territorial health departments. The data is reported weekly on CDC’s Web site.
OTDR inner workings can be very complex but i would like to try and describle how it works in basic terms. They use the actual properties of the fiber cable as they respond to light. We normally want to see light traveling forward down a fiber, however, light can also travel backwards. This backwards light or also called backscatter is what the OTDR uses to calculate measurements. This backscatter is also used in both photography and medical instruments. All of them can be described as simple as backscattering measuring devices but with different applications. Provide a visual effects technician usually available on the screen and printed version. It is now to interpret these results as a doctor to interpret the results from an x-ray technician work. Experienced fiber technicians can see certain results and quickly determine that is showing a splice, a cut or even a loss caused by a connector. It is important that a technician keep practice just like an intern doctor would get practice or experience. The results are only as good as the person interrupting them and also the data that is entered. The data you enter is just as important as the person interrupting the data. Imagine if an intern doctor was given the results of a blood pressure test form a lab technician and it showed extremely low pressure. The doctor would look for other symptoms before they send them off to get some medicine for low blood pressure. You all times need to check your data points. A fiber technician would enter such parameters as mode, pulse width, length and attenuation. Enter some of the wrong parameters and it could give you a false sense of security. All of these things will come in time as a technician gets familiar with this useful but complex device. An OTDR is a must have to determine the health of your fiber optic cable network. The data entered is as important as the technician analyzing the results. 
Every fiber technician should have one of these devices within reach to quickly determine whether their fiber system is working. FiberStore supplies a range of OTDRs; for more information, see the OTDR tester pages on their site.
Source: http://www.fs.com/blog/how-otdr-works-in-basic-terms.html
Who has the strongest voice in the high-profile public forum of Wikipedia, and on which topics? The answer to that question depends on your address and your language. “No information is neutral or devoid of power-relations,” says Mark Graham of the Oxford Internet Institute (OII), an academic center devoted to studying the societal implications of the Internet. “Accordingly, we need to understand where information is produced and what it is produced about.” To investigate who was creating Wikipedia content—particularly content related to the Middle East and North Africa—Graham and his team at OII worked with TraceMedia, a London-based web application firm specializing in visualizations and mapping tools. Together, they created the Mapping Wikipedia project. The resulting data visualization gave Graham a stark impression of the extent to which Wikipedia’s geographic coverage was uneven. “I knew it would be [uneven], but not to the degree which it is,” says Graham. “This doesn’t mean that Wikipedia is inherently flawed; it just means that we can—and should—become more aware of the biases in the knowledge that we use.”

What the visualization shows: Drop-down menus let users choose from seven languages (Arabic, Egyptian Arabic, English, Farsi, French, Hebrew and Swahili) and select one or more global locations including countries, regions and continents. The visualization also allows users to view the data by the dates Wikipedia articles were created, the word counts of articles, the number of authors, images and links to other Wikipedia articles, density (the number of articles falling under a pixel at a given zoom level) and the number of anonymous edits associated with an article posted from a certain location. Based on these selections, Mapping Wikipedia displays all matching geotagged Wikipedia articles as colored dots illuminated against a grey-on-black map.
The largest data set (English language geotagged Wikipedia articles for the entire world) can take more than five minutes to load, but smaller data sets (French language geotagged articles for North Africa, for instance) load in just a few seconds. Once the sets have loaded onto the map, users can toggle back and forth rapidly among a number of different sorting methods, each with its own color scheme. Sorting the French language North African Wikipedia articles according to the number of images they contain yields mainly blue and purple dots; few articles have photos. Viewing the same data set according to the date when the articles were created lights up the map with green dots, showing that many articles were created in 2012. Hovering over any dot on the map reveals the title of that Wikipedia article. Clicking on the dot opens an information box filled with metrics on the article—the longitude and latitude of its geotag, its creation date, word count, number of authors and so forth. The information box also contains a link to the original Wikipedia article.

Putting the Map in Motion: On May 17, TraceMedia launched a new version of Mapping Wikipedia that animates dots lighting up on the map along a timeline of article creation dates. “In some regions, the animation shows patterns emerge gradually, as you would expect when humans are creating articles one by one,” says Gavin Baily, producer and developer at TraceMedia. “For France and Italy, you see whole blocks of stub articles appear overnight, which is a telltale sign of bot activity.” Below is a video clip showing the density of articles in English on the timeline.

Source of the data: The Oxford Internet Institute derived its data from Wikipedia XML dumps of languages relating to the Middle East and Africa, a region of the world where OII researchers were particularly interested in analyzing article production.
OII then created a MySQL database of geo-tagged articles and assigned country codes to each article from the XML dumps.

What the designers did: TraceMedia wrote a PHP (hypertext preprocessor) middleware application to examine different aspects of articles, including concentrations of author activity and image contributions, to gauge article quality and identify bot-generated article stubs. Technically, the main challenge was figuring out a way to display close to one million interactive points on OpenStreetMap, a free open source web map. After experimenting with various mapping libraries, TraceMedia decided to use an OpenLayers map with an HTML5 Canvas renderer optimized for large point datasets, on top of a base map made from styled Google Map tiles. “Previously, having even 1,000 interactive points on a map would have been a lot, and 20,000 points would have been considered a ridiculous number,” says Baily. “Hundreds of thousands of interactive points were just not possible in any way until the latest browser versions, which support HTML5 Canvas and run JavaScript so well and so quickly that you can now plot a million points at semi-interactive frame rates.”

Insights from the Mapping Wikipedia project:
- Israelis “are far more active in creating/reproducing knowledge in one of the world’s most used websites than their counterparts in the Middle East and North Africa.”
- As a portion of the Internet population, users in Italy, Scandinavia, the Baltic States and Ukraine are more likely to make an edit to Wikipedia than authors from Great Britain or Germany.
- Mapping Wikipedia users have found a higher-than-expected number of Swahili language geo-tagged articles in Turkey and English articles tagged for Poland. In both cases, it seems that either dedicated Wikipedia editors or bots have gone out of their way to create Wikipedia entries for a large number of relatively obscure towns and localities.
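The "density" metric described earlier, the number of articles falling under a pixel at a given zoom level, can be sketched in a few lines. This is an illustrative reconstruction using the standard Web Mercator projection, not TraceMedia's actual PHP/JavaScript code, and the sample coordinates are made up.

```python
# Sketch of per-pixel density binning for geotagged articles, assuming the
# standard Web Mercator projection used by most web maps. Illustrative
# only - not the project's real rendering pipeline.
import math
from collections import Counter

def pixel_at(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    """Project a lon/lat pair to Web Mercator pixel coordinates."""
    scale = 256 * (2 ** zoom)          # world width in pixels at this zoom
    x = (lon + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return int(x), int(y)

def density(articles, zoom: int) -> Counter:
    """Count geotagged articles per pixel - the 'density' metric."""
    return Counter(pixel_at(lon, lat, zoom) for lon, lat in articles)

# Three articles geotagged within the same city collapse into a single
# bright pixel at a low zoom level, but separate again when zoomed in.
city_cluster = [(2.35, 48.85), (2.36, 48.86), (2.34, 48.84)]
print(density(city_cluster, zoom=2).most_common(1))
print(density(city_cluster, zoom=12).most_common(1))
```

Binning like this is also what makes a million points tractable: the renderer only has to draw one dot per occupied pixel, not one per article.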
Key Presentation Choice: “The brilliant thing about using a stylized Google base map is that you have access to fantastic APIs, wonderful tools and the ability to easily render the map in any way you want. Rather than having the standard blue sea and grey land masses, we were able to develop the dark Mapping Wikipedia base map,” says Baily. “The Wikimedia Foundation tends to be keen on having people use OpenStreetMap, which lets you create tiles, but you have to host them yourself. If you have no budget, you would need a really good reason to pick OpenStreetMap over Google.” As for the dark map background illuminated with pinpoints of light, the total effect brings to mind nighttime shots of Earth from space, with cities and highways glowing with manmade illumination. “Whether we chose light-on-dark or dark-on-light, the key thing for us was to make sure that the articles were clearly visible,” says Baily. “Ultimately, we made the choice to use the dark background mainly for aesthetic reasons. We found that light dots on a dark background worked best in terms of being able to distinguish a range of colors within each of the Mapping Wikipedia metrics.”

Reaction: The result caught the attention of many data visualization followers, from Information Aesthetics to the Fast Company Co.Design blog, which called Mapping Wikipedia “a magnificently complicated project that only grows more fun at every turn.” Aaron Dalton is a freelance writer based in Nashville.
Source: http://data-informed.com/data-visualization-identifying-geographical-origins-in-wikipedia-data/
DARPA-funded IBM researchers today said they have developed a human brain-inspired computer chip loaded with more than 5 billion transistors and 256 million “synapses,” or programmable logic points, analogous to the connections between neurons in the brain. In addition to being one of the world’s largest and most complex computer chips ever produced, it requires only a fraction of the electrical power of conventional chips to operate, IBM and the Defense Advanced Research Projects Agency (DARPA) stated. The developers said the chip, which can be tiled to create large arrays, is built on Samsung Foundry's 28nm process technology; with 5.4 billion transistors, it has one of the highest transistor counts of any chip ever produced. Each chip consumes less than 100 milliwatts of electrical power during operation. When applied to benchmark tasks of pattern recognition, the new chip achieved two orders of magnitude in energy savings compared to state-of-the-art traditional computing systems, DARPA and IBM said. The high energy efficiency is achieved, in part, by distributing data and computation across the chip, alleviating the need to move data over large distances. In addition, the chip runs in an asynchronous manner, processing and transmitting data only as required, similar to how the brain works. The new chip’s high energy efficiency makes it a candidate for defense applications such as mobile robots and remote sensors where electrical power is limited, IBM and DARPA stated. “Computer chip design is driven by a desire to achieve the highest performance at the lowest cost. Historically, the most important cost was that of the computer chip.
But Moore’s law—the exponentially decreasing cost of constructing high-transistor-count chips—now allows computer architects to borrow an idea from nature, where energy is a more important cost than complexity, and focus on designs that gain power efficiency by sparsely employing a very large number of components to minimize the movement of data. IBM’s chip, which is by far the largest one yet made that exploits these ideas, could give unmanned aircraft or robotic ground systems with limited power budgets a more refined perception of the environment, distinguishing threats more accurately and reducing the burden on system operators,” said Gill Pratt, DARPA program manager, in a statement. The chip was developed under the auspices of DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, which seeks to speed the development of a brain-inspired chip that could perform difficult perception and control tasks while at the same time achieving significant energy savings, the agency says. The goal is to develop systems capable of analyzing vast amounts of data from many sources in the blink of an eye, letting the military or civilian businesses make rapid decisions in time to have a significant impact on a given problem or situation. According to DARPA, programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems such as human brains autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations, DARPA stated. Compared to biological systems, for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments.
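The asynchronous, event-driven behavior described above, processing and transmitting data only as required, is the same principle behind spiking neuron models. The toy leaky integrate-and-fire sketch below is a teaching illustration of that principle, not IBM's actual neuron circuit.

```python
# A toy illustration of the event-driven principle the chip borrows from
# biology: a leaky integrate-and-fire neuron does no work - and sends no
# data - until its input pushes its membrane potential over a threshold.
# Teaching sketch only; parameter values are arbitrary assumptions.

def run_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate leaky input; return the time steps at which spikes fire."""
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus   # decay, then integrate
        if potential >= threshold:                # fire and reset: an "event"
            spikes.append(t)
            potential = 0.0
    return spikes

# The neuron only fires (i.e., consumes energy and moves data) when
# enough input arrives close together in time.
print(run_neuron([0.6, 0.0, 0.0, 0.6, 0.6, 0.0, 0.0]))  # → [3]
```

Silence between events costs almost nothing, which is the intuition behind the chip's sub-100-milliwatt power draw.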
Many tasks that people and animals perform effortlessly, such as perception and pattern recognition, audio processing and motor control, are difficult for traditional computing architectures to do without consuming a lot of power. Biological systems consume much less energy than current computers attempting the same tasks, the researchers stated. The new chip is just one of several DARPA programs that dig more deeply into how computers can mimic a key portion of our brain. Last fall the agency issued a Request for Information on how it could develop systems that go beyond machine learning, Bayesian techniques, and graphical technology to solve "extraordinarily difficult recognition problems in real-time." DARPA said it is interested in mimicking a portion of the brain known as the neocortex, which handles higher brain functions such as sensory perception, motor commands, spatial reasoning, conscious thought and language. Specifically, DARPA said it is looking for information that provides new concepts and technologies for developing what it calls a "Cortical Processor" based on Hierarchical Temporal Memory. "Although a thorough understanding of how the cortex works is beyond current state of the art, we are at a point where some basic algorithmic principles are being identified and merged into machine learning and neural network techniques. Algorithms inspired by neural models, in particular neocortex, can recognize complex spatial and temporal patterns and can adapt to changing environments. Consequently, these algorithms are a promising approach to data stream filtering and processing and have the potential for providing new levels of performance and capabilities for a range of data recognition problems," DARPA stated. "The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable.
It should also have minimal arithmetic precision requirements, and allow ultra-dense, low power implementations."
Source: http://www.networkworld.com/article/2462846/ibm/ibm/darpa-turn-out-brain-like-5-billion-transistor-superchip.html
Spam, Spam, Spam & More Spam

According to security vendor PandaLabs, its analysis of 430 million email messages from 2008 revealed that only 8.4% of the messages reaching companies were legitimate. Some 89.88% of messages were spam, while 1.11% were infected with some type of malware. Only January 2008 witnessed levels of spam below 80%. The amount of spam fluctuated throughout the year, peaking in the second quarter at 94.27% of all mail reaching companies. With respect to infected messages in 2008, the Netsky.P worm was the most frequently detected malicious code. This malware activates automatically when users view the infected message through the Microsoft Office Outlook preview pane, exploiting a vulnerability in Internet Explorer that allows automatic execution of email attachments. "The fact that these two malicious codes often act in unison explains the high number of detections of both," said Luis Corrons, technical director of PandaLabs, in a press release. "Cyber crooks often launch several strains of malware with each exploit to increase the chances of infection, so even if users whose systems are up-to-date are immune to the exploit, they could still fall victim to infection by the worm if they run the attachment." The Rukap.G backdoor Trojan, designed to allow attackers to take control of a computer, and the Dadobra.Bl Trojan were also among the most prevalent malicious codes. Much of this spam was circulated by the extensive network of zombie computers controlled by cyber-crooks. A zombie is a computer infected by a bot, a type of malware that allows cyber criminals to control infected systems. Frequently, these computers are used as a network to drive malicious actions such as the sending of spam. In the last three months of the year alone, 301,000 zombie computers were being put into action every day.
With respect to the different types of spam in circulation, 32.25% of spam in 2008 was related to pharmaceutical products, with sexual performance enhancers accounting for 20.5%. Spam relating to the economic situation also grew significantly throughout 2008: false job offers and fraudulent diplomas accounted for 2.75% of all junk mail in the year, while messages promoting mortgages and fake loans were responsible for 4.75%. Spam promoting fake brand products, such as Swatches, was responsible for 16.75% of the total. This last category nevertheless dropped from 21% in the first half of the year to 12.5% in the last six months. To view a full breakdown of the spam subjects that PandaLabs discovered, see: http://www.flickr.com/photos/panda_security/3234535186/.
Source: http://www.cioupdate.com/print/research/article.php/3799581/Spam-Spam-Spam--More-Spam.htm
How much RAM can I access with a 32-bit or 64-bit operating system?

Whether you need a 32-bit or 64-bit operating system depends on the amount of memory you require in the system. A 32-bit operating system can address at most 4GB of RAM (and in practice usually makes somewhat less than that available), while a 64-bit operating system can address far more memory than current machines can physically hold. Apart from their access to more memory, 64-bit operating systems do not provide improved performance, and there are sometimes problems finding 64-bit drivers. Extra RAM also doesn't help improve performance unless it's actually needed, i.e., you are running multiple memory-intensive applications that are currently using up the physical RAM. Once the system starts using virtual memory (the hard disk) to store the excess, access is roughly a thousand times slower.
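The answer to the title question falls straight out of address width: n address bits can name at most 2^n distinct bytes, which works out to 4GB for 32 bits. A quick check of the arithmetic:

```python
# The 32-bit RAM ceiling is just pointer-width arithmetic: n address bits
# can name 2**n distinct bytes.

def max_addressable_gib(address_bits: int) -> float:
    """Maximum byte-addressable memory in GiB for a given address width."""
    return 2 ** address_bits / 2 ** 30   # bytes -> GiB

print(max_addressable_gib(32))   # → 4.0 GiB (less in practice, since some
                                 #   of the space is reserved for devices)
print(max_addressable_gib(48))   # → 262144.0 GiB (256 TiB - the portion of
                                 #   the 64-bit space current x86-64 CPUs
                                 #   actually wire up)
```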
Source: http://technicians-blog.kingcomputer.com.au/how-much-ram-can-i-access-with-a-32-bit-or-64-bit-operating-system/
RIP is a protocol used for routing IP networks. It was designed in the early 1980s for communication between gateways (computers with two NICs). It is the oldest routing protocol used in the network industry and is considered by many to be inefficient or borderline obsolete. However, for CCNA students it is important to understand RIP, as well as how to configure and troubleshoot it. This post covers RIP's baseline features. First, the metric for RIP is hop count, where a hop is a router or gateway. When a router or gateway receives a packet, the processing required to deliver the packet inserts latency (processing delay). This delay is cumulative, so the best path chosen will always be the one with the fewest hops. Because of this delay, RIP has a maximum hop count of 15; a destination 16 hops away is considered inaccessible. Second, RIP is a distance vector routing protocol. Distance vector routing protocols exchange their routing tables through update packets sent to their directly connected neighbors at periodic intervals. These updates are sent even when there hasn't been a change. When there is a change, triggered updates are sent to neighboring routers so they can adjust their tables. As a result, distance vector routing protocols have slow convergence due to these timers. The timers are: update, invalid, hold-down, and flush. You can verify these timers with the show ip protocols command, as shown in Example 1. The period between routing updates sent between neighbors is the update interval. This is the primary timer used in RIP, and convergence time is derived from it. Each RIP update can contain a maximum of 25 network entries, so a router running RIP with 100 routes would send 4 update packets every period. Imagine if a routing table had 10,000 routes: 400 updates would be sent every 30 seconds! By default, the update timer is set to 30 seconds. Example 2 displays updates being sent and received every 30 seconds.
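The update-traffic arithmetic above can be sketched directly. The 25-entry-per-packet limit and the 30-second default are RIP's actual values; the helper function name is mine.

```python
# RIP update-traffic arithmetic: each RIPv1/v2 update packet carries at
# most 25 route entries, so a full periodic update is ceil(routes / 25)
# packets, repeated every update interval (30 seconds by default).
import math

MAX_ENTRIES_PER_UPDATE = 25   # RIP's per-packet route limit
UPDATE_INTERVAL_S = 30        # default update timer

def update_packets(route_count: int) -> int:
    """Packets needed to advertise `route_count` routes in one update cycle."""
    return math.ceil(route_count / MAX_ENTRIES_PER_UPDATE)

print(update_packets(100))     # → 4 packets every 30 seconds
print(update_packets(10_000))  # → 400 packets every 30 seconds
```

This linear growth in periodic update traffic is one concrete reason RIP scales poorly to large routing tables.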
The next timer, the invalid timer, is the interval of time after which a route is declared invalid. This interval is measured from the last update received for the route: the route becomes invalid when no updates arrive during the invalid interval to refresh it. The invalid timer is 180 seconds by default. In Example 3, the show ip route command shows that the last update for route 188.8.131.52/32 was received from neighbor 184.108.40.206 179 seconds ago. At 180 seconds the route becomes invalidated and will be sent in updates to neighbors with a metric of 16. Debug ip rip is running in this example. You can verify the RIP database with the show ip rip database command and see that the routes are marked as possibly down. This means the router will still attempt to forward packets destined for 220.127.116.11 until the flush timer is reached. The hold-down interval is the duration during which routing information about better paths is suppressed. A route enters the hold-down state when an update packet is received indicating the route is unreachable. The route is marked inaccessible (possibly down, as seen in Example 4) and is also advertised as unreachable to downstream neighbors. However, the router will continue to forward packets until an update with a better metric is received or the hold-down time expires. When the hold-down expires, routes advertised by other sources are accepted and the route is no longer inaccessible. By default this interval is 180 seconds. The flush interval is the period of time that must pass before the router will flush an inaccessible route from its routing table; the default is 240 seconds. This concludes the baseline for RIP. The next blog post will focus on the differences between the versions of RIP. Author: Jason Wyatte
Source: http://blog.globalknowledge.com/2009/07/30/basics-of-understanding-rip/
IBM Seeks Organic Solution to Power Systems Challenge, Global Warming
December 16, 2008 — Alex Woodie

IBM and a team of Harvard University researchers launched a new initiative last week to develop and deliver power systems that use more efficient solar technology. The Clean Energy Project–the newest project to join the World Community Grid–seeks to tap into the computing power of millions of idle PCs and servers to crunch scientific data in the hopes of speeding the discovery of organic chemical compounds with better electrical properties than silicon, the main–but inefficient–ingredient in today’s solar panels. Current silicon-based solar cells are only about 20 percent efficient and cost about $3 per watt of electricity generated, according to IBM. Researchers are developing a newer form of solar cell built with plastic at the core, which is more flexible, lighter in weight, and less expensive to make. If a sizable breakthrough could be made in solar technology, people would be more inclined to invest in solar energy, and the world could reduce its reliance on fossil fuels and the global warming that many fear results from burning them. “It would take us about 100 days of computational time to screen each of the thousands of compounds for electronic properties using traditional computing methods,” says Alan Aspuru-Guzik, the principal investigator and a professor in the Department of Chemistry and Chemical Biology at Harvard. “Yet with World Community Grid’s free computing power, augmented by cloud computing, the project is estimated to complete in two years. It was estimated to have taken 22 years to run on a regular scientific cluster.” IBM already has more than a million computers hooked into the World Community Grid, representing more than 413,000 members across 200 countries.
The grid has been used to search for answers to many pressing problems since it was launched by IBM in late 2004, including researching cancer, AIDS/HIV, climate change, human proteins, agriculture (specifically rice production), and various viruses, including dengue fever, hepatitis C, West Nile fever, and Yellow fever. The client-side software that processes work units and coordinates results with the World Community Grid was developed by the University of California, Berkeley, and runs on Windows, Linux, and Mac OS. Unfortunately, IBM Power Systems servers will not be able to assist in the search for more efficient power systems.
Source: https://www.itjungle.com/2008/12/16/fhs121608-story10/
Last week’s fatal shootings of two black men and several Dallas police officers are causing many Americans (and others) to ponder racism, extremism and violence in ways they have not before. Today, we have the technology and ability to watch the heartbreaking and terrifying footage of Alton Sterling being shot all over the internet, which means our young people, children and students have access to this media as well. Although technology does a lot of good for us, access to violent materials online is not a benefit for young, formative minds. Racist ideologies and extremist views are said to form at an early age, and social media and the internet make it easy for young people to witness violence online. Because of this, it is extremely important for schools, parents and communities to educate students and safeguard them from harmful messages and materials — not just physically, but also on school technology, which is becoming the most prevalent pathway for student communication. Students face a multitude of risks on the internet today, and it is everyone’s responsibility to intervene. Startling statistics clearly show that American youths are subject to a multitude of dangerous issues and risks, and that they encounter them predominantly while on the internet and social media. To combat and address these factors, educators and parents must find ways to educate youth about them. Some parents might allow their children to roam the internet freely as long as they have a rule that the child must discuss and ask questions about anything troubling they’ve seen, for example. Schools must have measures in place to detect, address and intervene when students are at risk online.
Online monitoring software can help eliminate risks, educate students, and keep them safe

Impero Education Pro internet safety software can help parents and educators alike keep children safe and informed about violent or disturbing materials on the internet. Impero Education Pro monitors students’ online activity, which helps schools detect issues, spot patterns in online behavior and intervene appropriately. Not only can monitoring detect threats of racial bullying, but it can also flag issues such as extremism and radicalization, LGBT derogatory language, suicide, weapons, violence and self-harm. By identifying these issues, safeguarding staff can intervene, mentor and — in cases where students are being exposed to racial and religious bias and/or extremist content — offer vital counter-narratives that can educate students on the dangers of these behaviors and ideals.

Impero Education Pro’s updated online monitoring keyword libraries are available now

Impero recognizes the importance of student safety and has developed an updated online monitoring keyword library with the help of nonprofit organization experts and school focus groups. These libraries detect and flag an exhaustive list of phrases and definitions across a range of categories. The updated keyword library is free to existing Education Pro customers; contact the Impero Customer Support Portal for download instructions. New customers receive the full updated keyword library with purchase of Education Pro. For the full updated keyword library launch press release, click here. To find out more about how Impero education network management software can help your school with online monitoring, request a free demo and trial on our website. To talk to our team of education experts, call 877.883.4370, or email Impero now to arrange a call back.
Source: https://www.imperosoftware.com/promoting-student-safety-online-in-times-of-turmoil-and-trouble/
Breaking the wireless bottleneck with MegaMIMO

It might be after an earthquake, a hurricane or a Super Bowl victory: too many people try to access the cell or wireless network and few can get through. In first-responder situations, of course, such network overloads can cost lives. Hariharan Rahul has a solution called MegaMIMO. MIMO stands for multiple-input multiple-output, a technology that uses multiple transmitters and receivers to transfer more data at the same time. In fact, all wireless products using the 802.11n standard support MIMO. MIMO technology takes advantage of the fact that radio waves bounce off surfaces, so copies of a single transmission arrive at a receiver at slightly different times. MIMO receivers are able to combine those data streams, effectively increasing the throughput and range. Using MIMO, a wireless access point with three antennas can deliver transmission speeds of 600Mbps, compared to 300Mbps for an access point with two antennas. MegaMIMO is an enhancement of MIMO that uses new signal processors and signal-processing algorithms to increase the throughput of current wireless access points by a factor of 10, and to more than double their range. What's more, MegaMIMO offers signal processing features that allow access points to handle multiple transmissions simultaneously. "In today's networks, multiple access points cannot talk at the same time on the same wireless channel. If they do, they will interfere with each other," said Rahul, who was on the MIT team that developed MegaMIMO. "MegaMIMO is a combination of signal processing algorithms and software coordination that allows multiple access points to talk together at the same time on the same channel." By monitoring the signals, says Rahul, MegaMIMO detects patterns and is able to identify each transmission signal. While the improved connectivity will help all wireless users, Rahul notes that it is especially critical for the military and first responders.
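The throughput gain from extra antennas can be illustrated with the textbook MIMO capacity formula. This is a generic, idealized model (independent spatial streams, transmit power split evenly) offered for intuition only; it is not MegaMIMO's algorithm, nor the source of the 300/600Mbps figures above.

```python
# Textbook intuition for why more antennas mean more throughput: the MIMO
# capacity C = log2 det(I + (SNR/Nt) * H * H^H), evaluated for an idealized
# channel where the spatial streams are perfectly separable (H = identity).
# Illustrative assumption only - real channels behave differently.
import math

def ideal_mimo_capacity(antennas: int, snr: float) -> float:
    """Capacity in bits/s/Hz with `antennas` independent spatial streams.

    With H = I the determinant factors into `antennas` identical terms,
    each stream getting an equal share of the transmit power.
    """
    return antennas * math.log2(1 + snr / antennas)

# Capacity grows roughly linearly with antenna count at high SNR.
for n in (1, 2, 3):
    print(n, "antennas:", round(ideal_mimo_capacity(n, snr=100.0), 1), "bits/s/Hz")
```

MegaMIMO's extra trick, per the article, is coordinating *separate* access points tightly enough that their combined antennas behave like one large MIMO transmitter.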
"If you're working in challenged situations where there is a lot of potential interference, you either have to transmit at very high power or you have to settle for very low throughput," he said. "MegaMIMO allows multiple access points to transmit together so that each user gets the data they want." Rahul received his PhD in 2012 and launched MegaMIMO Inc. in 2013. He is currently developing a prototype wireless access point that he expects to complete in the second half of 2014. He expects MegaMIMO access points, when they hit the market, to be priced "in the same ballpark" as current access points. The technology can also be applied to cellular networks, and Rahul says he plans to pursue that avenue after bringing the Wi-Fi version to market. "It's easier to demonstrate with Wi-Fi than in the cellular spectrum," he explained. Implementing MegaMIMO on a cellular network will require working closely with a cellular provider.

Posted by Patrick Marshall on Dec 20, 2013 at 11:17 AM
Source: https://gcn.com/blogs/emerging-tech/2013/12/megamimo.aspx?admgarea=TC_EmergingTech
In spite of your best efforts to prevent compromise and downtime, they will occur. Thus, you must not only plan to prevent problems, you must also plan to handle failures when they do occur. This form of planning is known as incident management. To be reasonably prepared for the unexpected or the "it occurred anyway" events, your organization needs to address several essential incident response concepts, including:
- Audit & Log Analysis
- Continuous Monitoring/SIEM
- Hacking/Cyber Warfare
- Incident Response
- Reverse Engineering

Audit & Log Analysis

In relation to threat management and access control, it is important to audit, log, and monitor events, whether they are caused by or focused on systems, processes, or subjects. But what is even more important is to review and analyze the collected data. Analysis involves processing the audit logs with the goal of gaining a perspective on events as well as an understanding of the status of security. Analysis may include forensics, which seeks to understand the meaning behind items recorded by auditing. Auditing and analysis thus lead to resolving employee issues, tracking down criminals, and improving the security of the organization.

Continuous monitoring and Security Information and Event Management (SIEM) is the combination of two previously separate disciplines — SIM (security information management) and SEM (security event management). SIEM focuses on providing real-time analysis of incidents detected by hardware and software. SIEM systems provide detailed incident reports, which are often useful in incident management and response as well as in compliance verification. SIEM systems can aggregate data from a plethora of sources, find correlations within bulk data, automatically issue alerts, provide real-time insight into the status of the infrastructure, and assist in event record retention.
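The aggregate-correlate-alert loop a SIEM performs can be sketched in a few lines. The log format and threshold below are hypothetical, chosen only for illustration:

```python
# Minimal sketch of SIEM-style correlation: pool events from several sources,
# then alert when a pattern emerges — here, repeated failed logins from one
# address seen across the firewall, VPN and directory server.
from collections import Counter

events = [
    {"source": "fw01",  "type": "deny",         "ip": "203.0.113.7"},
    {"source": "vpn01", "type": "login_failed", "ip": "203.0.113.7"},
    {"source": "vpn01", "type": "login_failed", "ip": "203.0.113.7"},
    {"source": "ad01",  "type": "login_failed", "ip": "203.0.113.7"},
    {"source": "vpn01", "type": "login_ok",     "ip": "198.51.100.2"},
]

THRESHOLD = 3  # alert when one IP accumulates this many failures

failures = Counter(e["ip"] for e in events if e["type"] == "login_failed")
alerts = [ip for ip, n in failures.items() if n >= THRESHOLD]

print(alerts)  # → ['203.0.113.7']
```

Note that no single source saw three failures; the alert only emerges once the events are correlated — which is the point of a SIEM.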
The ability to have a continuous and consistent monitoring mechanism, such as a SIEM solution, can be a vital part of an enterprise's ability to track down and ultimately resolve issues and compromises quickly.

Forensics is the art and science of collecting information and assets, analyzing the collected items, and presenting findings to a legal authority, often in relation to a civil or criminal case. Forensics is generally the science of evidence collection, preservation, analysis, and reporting. Specifically, computer forensics focuses on gathering evidence from computer systems and storage devices. Since such evidence is stored as binary data on volatile and mostly magnetic media, it is easy to damage or change, even in the act of discovery and collection. If an organization takes action based on information it discovers or uncovers in its auditing processes or system maintenance procedures, it must make sure to abide by the rules of evidence in order for that information to be admissible in a court of law. Often, organizations will have trained forensics personnel on staff or on call as consultants, or will have an established relationship with local law enforcement. The use of proper forensic procedure can make the difference between obtaining justice and allowing a suspect to get away with a crime.

Hacking is the ability to use a system in a way it wasn't designed to be used, to gain new benefits or capabilities by modifying a product, or to locate vulnerabilities and take advantage of them. When hacking is performed for illicit gain or simply to cause harm to the target, it is a criminal action and may be linked to cyber warfare. Anyone with a little time and interest can learn to become a hacker: there is an abundance of very powerful hacking tools available on the Internet, along with tutorials and training materials. Your organization needs to be prepared to defend itself against the international hacker as well as the disgruntled employee.
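One concrete piece of the evidence-preservation discipline described above is proving, cryptographically, that collected evidence has not changed between acquisition and presentation. A minimal sketch (the evidence bytes here are invented):

```python
# A forensic fundamental: hash the evidence at acquisition time, record the
# digest in the chain-of-custody log, and recompute it later to show the
# evidence was not altered in storage, transport or analysis.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the evidence bytes."""
    return hashlib.sha256(data).hexdigest()

evidence = b"mail spool extracted from suspect workstation"
at_collection = fingerprint(evidence)   # recorded in the chain of custody

# ... storage, transport, analysis ...

assert fingerprint(evidence) == at_collection          # intact
assert fingerprint(evidence + b"x") != at_collection   # any change is visible
print("integrity verified")
```

In practice the hash is taken of a write-blocked disk image, but the principle is the same: a matching digest is what makes the analysis defensible in court.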
Learn the skills, tools, techniques, and methodologies of criminal hackers, and then use them against your own organization's security structure to identify deficiencies that need to be addressed.

Once you are aware of a compromise, intrusion, or other form of unplanned downtime, you need to respond quickly to contain the damage, remove the offending elements, and restore the environment to normal. The success of incident response is based on preparation. Incident response preparation includes written policies and procedures, an established CIRT (computer incident response team), training, drills, simulations, and thorough follow-up/post-mortem reviews. When your organization is under attack, has been damaged by Mother Nature, or sees a key component of the infrastructure fail without warning, only a well-designed and well-executed incident response will provide fast recovery and prevent a true disaster from occurring.

Reverse engineering is the ability to analyze an item, hardware or software, in order to understand how it works without having access to its source code or original design blueprints. Reverse engineering can enable someone to develop a reproduction or simulation of the original item. This can be useful when the original source code is unavailable or when patent restrictions prevent or hinder use of the original. Reverse engineering can be used to solve problems or create more efficient alternatives. However, it can also be used to develop new exploits and attacks. For example, a technique called fuzzing repeatedly sends random input sets to a target to watch how it reacts. If an abnormal reaction occurs, it could be a symptom of a bug or coding error, which may be converted into an exploit. Reverse engineering is also useful in understanding how malicious code works — including its means of distribution, infection, and payload delivery.
Through reverse engineering techniques, defenses against exploits and malware may be developed.
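The fuzzing technique mentioned above is easy to demonstrate. In this toy sketch, `fragile_parser` is a hypothetical stand-in for real software under test:

```python
# A toy fuzzer: throw random inputs at a target and record which ones make it
# misbehave. Each crashing input is a candidate bug — and a possible exploit
# lead, which is why defenders fuzz their own code first.
import random

def fragile_parser(data: bytes) -> int:
    """Deliberately buggy target: chokes on a NUL byte."""
    if b"\x00" in data:
        raise ValueError("unexpected NUL")
    return len(data)

random.seed(7)   # deterministic run for the example
crashes = []
for _ in range(500):
    blob = bytes(random.randrange(256) for _ in range(8))
    try:
        fragile_parser(blob)
    except Exception:
        crashes.append(blob)

print(f"{len(crashes)} crashing inputs found")
```

Real fuzzers (coverage-guided ones especially) are far smarter about choosing inputs, but the feedback loop — generate, run, watch for abnormal reactions — is exactly the one described above.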
Science.gov launches a new version - By Doug Beizer - Sep 23, 2008

A new version of Science.gov — a free, single-search gateway for science and technology information from 17 organizations in 13 federal science agencies — was launched recently. Science.gov 5.0 lets users search additional collections of science resources. It also makes it easier to target searches and find links to information on a variety of science topics. The Energy Department hosts the site, which was announced Sept. 15.

The new version of the Web gateway has seven new databases and portals, which give researchers access to more than 200 million pages of scientific information. Newly available information includes thousands of patents from Energy Department research and development, along with documents and bibliographic citations of DOE accomplishments, the department said. Science.gov 5.0 also has a clustering tool that helps target searches by grouping results by subtopic or date. The new version of the Web site also provides links to related EurekAlert! Science News and Wikipedia content, and provides the capability to download research results into personal files or citation software.

Science.gov is hosted by DOE's Office of Scientific and Technical Information in DOE's Office of Science. It is supported by contributing members of the Science.gov Alliance, which include the Agriculture, Commerce, Defense, Education, Health and Human Services, and Interior departments; the Environmental Protection Agency; the Government Printing Office; the Library of Congress; NASA; and the National Science Foundation.

Doug Beizer is a staff writer for Federal Computer Week.
As wildfires ravaged parts of Colorado earlier this summer, a Web-based tracking tool was helping responders quickly and more accurately find the blazes caused by lightning strikes. The Boulder Office of Emergency Management (BOEM) started using the technology, called the Lightning Decision Support System, a couple of days prior to the Flagstaff Fire, which started on June 26 and eventually burned 300 acres. As lightning pummeled the county, emergency workers were able to pinpoint the location of strikes in real time and more confidently send responders to the scene.

Mike Chard, BOEM's director, said his deputy director, Sgt. Dan Barber, was manning the software the night of the Flagstaff Fire. Chard recalled that it was so dry and windy in the area that before Barber was able to get the lightning location data down to 911 dispatch personnel, the office had already received calls about the fire. That first lightning hit was just the beginning of a series of strikes that the tracking software — developed by Weather Decision Technologies (WDT) — helped locate that evening in Boulder County. “That night we had another four or five lightning strike fires, so we were going crazy around here, and the lightning strike software seemed to pick all those up and help us out directing people to these strikes,” Chard said.

Monitoring the weather has become commonplace for emergency officials in Boulder County. The BOEM established a severe weather protocol a couple of years ago in response to a wildfire in Fourmile Canyon that destroyed 169 homes. Once lightning or other harsh weather begins, Chard's team goes into a monitoring phase in which it pushes out information and warnings to first responders and engages weather spotters. Chard said the lightning strike tracking tool was a natural extension of that protocol.
“One of the philosophies that we are using here is instead of waiting for disaster to hit, we are trying to fill another void, which is to be more predictive in our efforts to pass [along] information, increase awareness and decrease reaction time,” Chard said. “It seems to be working well.”

The Lightning Decision Support System operates off a Google Maps interface that can be accessed via a computer or mobile device. The program imports cloud-to-cloud and cloud-to-ground lightning data from Earth Networks’ Total Lightning Network, which features more than 550 lightning sensors, scattered around the world, that measure atmospheric conditions. The information is compiled using a mathematical formula to show the location of each lightning incident.

In an email to Government Technology, David VandenHeuvel, senior vice president of enterprise solutions for WDT, explained that data from the sensors is processed by the company in real time and displayed via standard Internet connections. The information is collocated with the National Weather Center. “This allows us direct connections to many high-volume data sets from the National Weather Service,” VandenHeuvel wrote. “WDT also uses satellite dishes and other types of connections to receive data in the fastest methods possible.”

When an emergency worker accesses the system online, he or she sees markers that indicate where lightning has hit, including the latitude and longitude coordinates of the strikes. Chard said the program plots the lightning strikes within 300 meters of their actual locations; in the foothills and at higher elevations, the results tend to be a little less accurate. But prior to the system being installed, emergency personnel relied on reports from the community, so the technology has helped speed up response times to lightning fires.
“Most of the time you are driving up and down rural roads because someone saw a strike over in a ridgeline and it’s very difficult to gauge [where the lightning impact occurred],” Chard said. “So this at least gives people an idea of generally where they should go.”

The lightning program also provides alerts when weather conditions start to deteriorate. For example, when a lightning strike occurs within a 20-mile ring around the county, the program sends out an advisory that lightning is approaching the area. If another strike occurs within 10 miles of the county, another warning is automatically generated. If no lightning is detected in Boulder County during a 30-minute period, an “all clear” message is sent out.

The technology isn’t perfect, however. Chard recalled that at times, you could hear the crack of lightning outside the county’s emergency office and see a marker pop up on the screen 20 seconds later; other times a lightning strike has taken minutes to register.

One of the system’s other useful parts is a prediction feature. If the option is enabled, the detection system tracks and predicts the path of a lightning storm. Chard said his team has experimented with the prediction component and found it to be fairly reliable. Boulder County officials can use the system to estimate storm advancement and the probability of lightning hitting specific locations, which will help preparedness efforts for various community events. The Boulder Office of Emergency Management plans to use the predictive technology when a race route for the U.S. Pro Cycling Challenge travels through the area on Saturday, Aug. 25. “We will be able to give [the data] to incident commanders and event planners,” Chard said. “There is actual radar technology that goes with it — it doesn’t just show lightning, it also shows the storm and its intensity through radar imagery and gives you storm attributes.
It gives you a vector direction of the storm … and a timeline of when that is going to hit a location.”

The software subscription costs BOEM approximately $6,900 per year, according to Chard. The system monitors a 50-by-50-mile area that completely covers Boulder County and a good chunk of neighboring counties, so staff members can see potential severe weather events moving into the area. Investing almost $7,000 each year might be a bit steep for some local governments, but Chard said it’s a fairly small investment considering the loss of life and property damage that wildfires can cause. VandenHeuvel said prices start at $2,400 per year, depending on the amount of lightning coverage needed and the specialty layers added to the system.

WDT has a variety of advancements on the horizon for the lightning detection system. VandenHeuvel said the sensor network is continually upgraded as new sensor locations are added. The company also continues to create new weather layers for its program and recently added a live chat feature so users can go back and forth with WDT meteorologists.

Chard felt that one of the system’s most useful features is its archived data: users can go back and look at any date and time to get lightning strike information. However, the search can take some time because the data exists in one-hour increments, and Chard would like to see that improved in a future iteration of the program. The lightning display screen also shuts down after an hour; once restarted, it boots up with fresh data, not the previously displayed information. This could pose problems for a user trying to find a strike that someone out in the field is investigating. “Lightning strikes may take a couple of hours to even manifest, [so] you have to go back through layers in order to find … the strike that someone may be going to,” Chard said. The process is a bit difficult. “It would be nice to see the ability to tweak the viewing time frame of the data.”
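As a rough sketch of how the tiered 20-mile/10-mile alerts described above might be computed — assuming the system works from great-circle distance between a strike and the county center (WDT's actual implementation is not public):

```python
# Tiered lightning alerts by distance: advisory inside a 20-mile ring,
# warning inside 10 miles. Distance via the haversine formula.
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # Earth's mean radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

BOULDER = (40.09, -105.36)  # approximate county center (illustrative)

def alert_level(strike_lat, strike_lon):
    d = miles_between(*BOULDER, strike_lat, strike_lon)
    if d <= 10:
        return "warning"    # strike within 10 miles of the county
    if d <= 20:
        return "advisory"   # strike within the 20-mile ring
    return "none"

print(alert_level(40.0, -105.3))   # a close strike
print(alert_level(41.5, -104.0))   # a distant strike
```

The "all clear" tier would simply be a timer that fires when no strike has produced an advisory or warning for 30 minutes.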
Why NASA thinks a supercomputer on the moon might not be pure fiction - By John Breeden II - Oct 05, 2012

Earlier this month, University of Southern California grad student Ouliang Chang proposed that we begin the huge undertaking of building a supercomputing facility on the moon. And while that might seem like science fiction, NASA may actually be interested in exploring the idea.

Putting a permanent facility on the moon may be a bit of a stretch, but NASA has been thinking about upgrading its Deep Space Network into a true Internet in space since at least 2009. The current Deep Space Network supports space missions exploring the solar system and some Earth-orbiting missions. Its backbone consists of several large dish antenna arrays at three locations approximately 120 degrees apart on Earth: at Goldstone, Calif., in the Mojave Desert; near Madrid, Spain; and near Canberra, Australia.

The problem, and hence the possible need for Chang’s moon base, is that space is getting too crowded to process all the data coming from the various probes, satellites and robots we have wandering the solar system. Missions are already competing for time and bandwidth, and the situation will only get worse. Each time a new spacecraft launches, it’s like adding a new client to the network. The moon base would be like adding a new router and server to that network, which would accept signals from space, store them, process them if needed and then relay the data back to Earth as time and bandwidth allow.

Robots based on other planets are under even more constraints than satellites, since the distance between Earth and, say, Mars is constantly changing. A robot such as the Curiosity rover needs to use orbiters around Mars to relay its signals, which can only be done during certain times when everything lines up just right — and that’s just the first link in a long chain the data has to follow over millions of miles back to Earth.
During those limited times, the space Internet needs to be open to accepting Curiosity’s signal, or a bottleneck back on Earth could cause valuable data to be lost. NASA has known about this potential problem for some time and proposed launching a Mars Telecommunications Orbiter back in 2005, which would have provided a robust network for all Mars-based craft to send their signals back to Earth. That project was canceled in favor of the Glory satellite mission, which was designed to study climate change on Earth but which sadly was destroyed on launch by a rocket malfunction.

Chang’s proposal is ambitious in scope. He wants to put a supercomputer data center inside a moon crater, facing deep space. The crater would protect the computers in the data center from most solar radiation and the heat of the sun; it would in fact be permanently shadowed. He proposes deploying antennas using inflatable balloons and using the cold environment of the dark side of the moon to cool the computers.

He notes a few interesting problems, including the need to detect incoming asteroids. The moon has no atmosphere for them to burn up in, as Earth does, so the antennas would have to retract when an asteroid gets close. He also notes that the facility could be either manned or unmanned. I wonder how much a job as the tech support person of the moon would pay?

To call Chang’s proposal bold would be an understatement. There may well be other ways of getting the same thing done, such as with a series of spacecraft like the canceled Mars Telecommunications Orbiter. Also, improving the ground network by using arrays of smaller antennas instead of a few large dishes could increase available bandwidth without needing to head into space. But as we continue to reach out into that dark sea known as space, conquering both it and our oceans of ignorance, we will need to have bright eyes and good technology, or we won’t journey very far.

John Breeden II is a freelance technology writer for GCN.
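The store-and-forward role Chang's facility would play — accept, buffer and relay data earthward as bandwidth allows — can be sketched as a simple queue. All names and capacity figures here are illustrative, not NASA's:

```python
# Store-and-forward relay: buffer incoming probe transmissions whenever
# their windows line up, then drain the buffer to Earth a downlink pass
# at a time, limited by that pass's bandwidth budget.
from collections import deque

class RelayNode:
    def __init__(self, downlink_capacity_mb: int):
        self.capacity = downlink_capacity_mb   # MB sendable per pass
        self.buffer = deque()

    def receive(self, probe: str, size_mb: int):
        """Accept a transmission when a probe's window lines up."""
        self.buffer.append((probe, size_mb))

    def downlink_pass(self):
        """Relay buffered data, oldest first, until the budget is spent."""
        sent, budget = [], self.capacity
        while self.buffer and self.buffer[0][1] <= budget:
            probe, size = self.buffer.popleft()
            budget -= size
            sent.append(probe)
        return sent

moon = RelayNode(downlink_capacity_mb=100)
moon.receive("Curiosity", 60)
moon.receive("Cassini", 70)
moon.receive("Voyager", 30)

print(moon.downlink_pass())  # → ['Curiosity'] (Cassini won't fit this pass)
print(moon.downlink_pass())  # → ['Cassini', 'Voyager']
```

The key property is the decoupling: Curiosity's relay window and Earth's downlink window no longer have to line up, so data is never lost to a timing mismatch — only delayed.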
NASA, Japan release the most detailed digital model yet of Earth

Topographical map, with images taken from Terra spacecraft, covers 99 percent of land mass - By Kevin McCaney - Jun 30, 2009

NASA and a Japanese agency have upped the ante on digital images of the Earth, releasing a topographical map that covers 99 percent of the planet's land mass, created from nearly 1.3 million images taken from NASA’s Terra spacecraft. The global digital elevation model can be downloaded here and here, although the number of tiles that can be downloaded at one time could be restricted because of heavy server traffic. The map is divided into 22,600 tiles, each 1 degree by 1 degree, NASA spokesman Steve Cole said. Bite-size samples are available here, where users can get detailed images of locations such as the Los Angeles Basin, Death Valley and Himalayan glaciers in Bhutan.

The images were collected by the Japanese Advanced Spaceborne Thermal Emission and Reflection Radiometer (Aster) aboard Terra. Japan’s Ministry of Economy, Trade and Industry (METI) developed the data set and released the map with NASA, the space agency said. Aster, one of five Earth-observing instruments launched on Terra in December 1999, collects visible and thermal infrared images, with spatial resolutions ranging from about 50 to 300 feet. Each elevation measurement point on the new map, which NASA calls the Global Digital Elevation Model, is 98 feet apart, the space agency said.

"This is the most complete, consistent global digital elevation data yet made available to the world," said Woody Turner, Aster program scientist at NASA headquarters. The U.S. science team on the project is based at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The map covers nearly all of the Earth’s land mass, from 83 degrees north latitude to 83 degrees south, NASA said, leaving only the poles uncovered.
Before Aster, NASA's Shuttle Radar Topography Mission had produced the most complete topographic map publicly available, covering 80 percent of Earth's land mass, between 60 degrees north latitude and 57 degrees south. Aside from covering more territory, the new map also provides more detail. "The Aster data fill in many of the voids in the shuttle mission's data, such as in very steep terrains and in some deserts," said Michael Kobrick, Shuttle Radar Topography Mission project scientist at JPL. "NASA is working to combine the Aster data with that of the Shuttle Radar Topography Mission and other sources to produce an even better global topographic map."

Mike Abrams, Aster science team leader at JPL, said the topographic data would be of value in a variety of applications, including energy exploration, environmental management, public works and firefighting. Data from the project is being contributed to the international Group on Earth Observations, based at the World Meteorological Organization in Geneva. It will be distributed by NASA's Land Processes Distributed Active Archive Center and METI's Earth Remote Sensing Data Analysis Center. The data was validated by NASA, METI and the U.S. Geological Survey, with support from the U.S. National Geospatial-Intelligence Agency and other organizations, NASA said.

Kevin McCaney is a former editor of Defense Systems and GCN.
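The article's figures can be sanity-checked with quick arithmetic, assuming the roughly 98-foot posting corresponds to 1 arc-second (the sampling commonly cited for this data set):

```python
# Back-of-the-envelope check: a 1-degree-square tile sampled every arc-second
# is a grid of 3601 x 3601 elevation points (fence-post counting: both edges
# inclusive). Assumes the ~98-ft posting is 1 arc-second.
ARC_SECONDS_PER_DEGREE = 3600

points_per_side = ARC_SECONDS_PER_DEGREE + 1
print(points_per_side, "x", points_per_side, "points per tile")

# Sanity-check the 98-foot figure from Earth's equatorial circumference.
earth_circumference_ft = 24901 * 5280    # ~24,901 miles
ft_per_arcsec = earth_circumference_ft / (360 * ARC_SECONDS_PER_DEGREE)
print(round(ft_per_arcsec), "ft per arc-second at the equator")
```

The result lands near 100 feet per arc-second at the equator, consistent with the article's 98-foot posting (spacing in the east-west direction shrinks with latitude).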
Data migration involves converting data from one format and/or system to another, newer format and/or system. Migration projects do not occur frequently, so IT staff are often unprepared for the task, which is much more complicated than they first think. Adding to the challenge is that the focus is typically on the system or software migration (e.g., migrating to an Exchange Server) with not enough attention paid to the data migration (e.g., converting e-mail data where necessary for the new platform). Prevent project delays and data loss by developing and following a thorough data migration plan that includes appropriate resources. Ensure critical data has the focus it deserves.
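One safeguard such a plan typically includes is verifying that migrated data actually matches the source. A minimal sketch of that check, using record counts plus per-record checksums (illustrative only, not tied to any product or format):

```python
# Post-migration validation: confirm the target system holds the same
# records as the source, by count and by content checksum.
import hashlib

def checksum(record: dict) -> str:
    """Hash a record in a canonical, key-order-independent form."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

source = [{"id": 1, "email": "a@example.com"},
          {"id": 2, "email": "b@example.com"}]
target = [{"id": 1, "email": "a@example.com"},
          {"id": 2, "email": "b@example.com"}]

assert len(source) == len(target), "record count mismatch"
mismatches = [s["id"] for s, t in zip(source, target)
              if checksum(s) != checksum(t)]
print("mismatched records:", mismatches)  # → mismatched records: []
```

Any ID that lands in `mismatches` was silently altered in conversion — exactly the kind of data loss the plan is meant to catch before cutover, not after.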
Active Directory is the foundation of identity management for virtually every technology in a corporate/enterprise environment. SCCM (ConfigMgr) engineers and desktop support engineers should have a basic understanding of Active Directory; otherwise, it will be very difficult for them to troubleshoot issues. Veeam is always here to support the IT pro community, this time with a free eBook that looks at Active Directory under the hood.

Active Directory basics: Under the Hood of Active Directory is written by Sander Berkouwer, an MVP for Directory Services. I've done two posts on related skills — how to learn SCCM and how to learn desktop support skills — and basic Active Directory skills are very much required for all Windows support engineers. Veeam also provides a FREE tool for Active Directory recovery; download the tool from here. I'll try to cover the Active Directory recovery tool in a future post.

Microsoft's Active Directory offers IT systems administrators a central way to manage user accounts and devices within an IT infrastructure network. Administrators can make changes in Active Directory centrally, for consistency across the environment.

More details about the content of the eBook:
- What are domain controllers, and how do you group them?
- Inside the Active Directory database
- What are containers and objects?
- What are replication and high availability?
- How do you create intrasite and intersite replication?
- What are global catalog servers?
- Flexible single-master operations and functional levels
- Active Directory and its networking services: DNS (domain names, zones, records and servers) and DHCP

More details here: http://go.veeam.com/active-directory-under-the-hood-ty.html
Password crackers have a surprising secret weapon

How can you defend against a new line of attack? - By Kevin McCaney - Aug 16, 2010

Among the oft-cited weaknesses of using passwords for authentication is that people choose bad, easily guessed passwords, such as “123456” or even “password.” But even carefully chosen passwords are not enough, at least if they are too short, according to researchers at the Georgia Tech Research Institute. The reason: graphics processing units, which are powerful enough to conduct quick, effective brute-force attacks on password-protected systems.

GPUs traditionally have been used in graphics cards to render screen displays on PCs, but they also can be used to accelerate some applications, especially those involving floating-point operations. Apple’s Snow Leopard and Windows 7 operating systems are designed to hand off some processing chores to the GPU. In a post describing their research, the GTRI team (researchers Joshua Davis and Richard Boyd, and undergraduate researcher Carl Mastrangelo) said they have been using a commonly available graphics processor to test password strength.

"Right now we can confidently say that a seven-character password is hopelessly inadequate,” Boyd said in the post, “and as GPU power continues to go up every year, the threat will increase."

The researchers pointed out that GPUs have been amped up over the years to handle increasingly sophisticated computer games, and in the process have achieved the power of a mini-supercomputer. Some GPUs today, even those that typically cost less than $500, can process information at a rate of nearly 2 teraflops, or two trillion floating-point operations per second. Ten years ago, the fastest supercomputer in the world, built at a cost of $110 million, ran at about 7 teraflops.
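The arithmetic behind that warning is straightforward. Using assumed round numbers — a cracking rig testing a billion guesses per second against the 95 printable ASCII characters — the gap between a 7- and a 12-character password is enormous:

```python
# Brute-force keyspace arithmetic. The guess rate is an assumed round
# number for illustration, not a figure from the GTRI research.
GUESSES_PER_SECOND = 1e9   # assumed GPU-rig cracking rate
ALPHABET = 95              # printable ASCII characters

def years_to_exhaust(length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return ALPHABET ** length / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"7 chars:  {years_to_exhaust(7):.4f} years")   # well under a day
print(f"12 chars: {years_to_exhaust(12):,.0f} years") # millions of years
```

Every added character multiplies the attacker's work by 95, which is why the researchers' advice below centers on length rather than cleverness.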
Developers began adapting GPUs to other uses after Nvidia — one of the two companies, along with AMD’s ATI, that control essentially the entire GPU market — released a software development kit in 2007 that allowed developers to program a GPU using the C programming language, the researchers said. “If you can write a C program, you can program a GPU now,” Boyd said. And one of the things GPUs can be programmed for is password cracking.

Brute-force attacks, in which a program tries every possible combination until the right one turns up, have been around a long time. But the relatively new ability to use GPUs, which are designed as parallel processors, for brute-force attacks could put a lot of password-cracking power into the hands of a lot of people — some of whom might not be honest.

The length of a password is important in preventing cracking, Davis said in the post. Any password with fewer than 12 characters — letters, numbers and special characters — will soon be ineffective, if it’s not already. Like many readers who responded to our request in May for password tips, he recommended pass phrases — sentences including upper- and lowercase characters, symbols and numbers — as a way to avoid having passwords cracked.

Many Web sites and networks already defend against brute-force attacks by limiting the number of incorrect log-in attempts, blocking users after a set number of failed attempts. The downside of this approach is that an attacker could cause a denial of service by deliberately locking out authorized users, according to the University of Virginia’s System Administrator Database. An attacker also could use the responses from lockouts to determine the names of authorized users, because only legitimate accounts can be locked out.

Agencies have gradually been moving toward two-factor authentication systems, which take some of the pressure off of passwords.
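The lockout defense just described — and its denial-of-service downside — can be sketched in a few lines:

```python
# Account lockout after N failed attempts. Note the flip side: an attacker
# who merely knows a username can deliberately lock the real user out.
MAX_FAILURES = 5
failures = {}
locked = set()

def attempt_login(user: str, password_ok: bool) -> str:
    if user in locked:
        return "locked"
    if password_ok:
        failures[user] = 0          # success resets the counter
        return "ok"
    failures[user] = failures.get(user, 0) + 1
    if failures[user] >= MAX_FAILURES:
        locked.add(user)
        return "locked"
    return "denied"

# An attacker deliberately locking out a legitimate user:
for _ in range(5):
    attempt_login("alice", password_ok=False)
print(attempt_login("alice", password_ok=True))  # → locked
```

The last line is the DoS scenario from the University of Virginia database: Alice supplies the correct password and is still turned away.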
As the processing units available to attackers become increasingly powerful, two-factor systems could become even more necessary. Kevin McCaney is a former editor of Defense Systems and GCN.
Network function virtualization (NFV) is envisioned as a savior for telecom network operators and service providers, one that delivers the economics, innovation and agility needed to address explosive traffic growth in their networks with reduced capital and operating expenditure. It will bring significant change to the way services are developed, tested and deployed, making each faster than ever before. So what is NFV? Is it the same as cloud computing? How is it related to software-defined networking (SDN)? These are all questions you've probably heard or even asked yourself. Let me answer these basic questions in this post.

What is cloud computing? Often referred to as "the cloud," it is the delivery of on-demand resources — everything from hardware to software applications — over the Internet on a pay-for-use basis. It consolidates hardware infrastructure (standard high-volume servers, networking, switches and storage), software platforms (everything required to build and deliver web-based applications/services) and software services (hosted by web service providers that connect to their end customers) in data centers, all managed and orchestrated using proprietary software or open source software such as OpenStack and CloudStack. The business initiators of cloud are commercial web service providers and enterprises, and many have successfully built billion-dollar businesses delivering infrastructure-as-a-service (IaaS) (e.g., Amazon), platform-as-a-service (PaaS) (e.g., Google) and software-as-a-service (SaaS) (e.g., Salesforce) in public, private, community or hybrid clouds, as end customers' security requirements demand.

Let us take a quick break to refresh a fundamental we learned as students: the Open Systems Interconnection (OSI) model.
It defines a networking framework to implement protocols in seven layers: Physical, Data Link, Network, Transport, Session, Presentation and Application. Various equipment and devices (software and hardware) are used within the data centers to connect servers and storage, as well as different data centers. Some of this equipment supports all seven layers (e.g., a web server (1-7)); some supports only part of them (e.g., a router (1-3), a switch (1-2) or an optical cable (1)), interoperating through standard protocols to deliver IaaS, PaaS and SaaS services to the customers. On a practical note, most web services/applications (layers 1-7) are now deployed in the cloud as software-only devices on industry-standard commercial off-the-shelf (COTS) servers.

What is software-defined networking (SDN)? Initiated by enterprise IT, born on college campuses, and matured in data centers, SDN replaces expensive, closed and proprietary firmware-based layer 1-3 network devices that support both network control and data traffic forwarding (e.g., routers). How is it replaced using SDN? It is achieved by software-based control (a programmable network through software abstraction—the SDN controller) using globally aware open protocols such as OpenFlow to provision, control and forward data traffic through less expensive bare-metal switches (or virtual switches) within and between data centers. By adopting cloud and SDN, many enterprise IT organizations and web service providers have enjoyed boosted economics, faster innovation, and agility.

What is NFV? Initiated and led by telecom network operators and service providers, NFV, the savior for the telecom business, aims to transform the telecom industry to perform like web and enterprise IT. Unlike web and enterprise networks, today's telecom network comprises an expensive, large and increasing variety of dedicated proprietary hardware to deliver telecom services (VoLTE, ViLTE, etc.).
NFV is the concept of replacing this dedicated proprietary hardware—such as firewalls and session border controllers (SBCs)—with software running on commercial off-the-shelf servers (the same as web servers in the cloud), which could be located in data centers, network nodes or even end-user premises. NFV has the option (not the obligation) of integrating SDN in telecom applications (with an internal or external SDN controller) to control network resources for telecom data traffic forwarding, for still greater benefit.

Is NFV the same as cloud for telecom networks and services? As explained above, much of it (concept, technologies, benefits, etc.) is similar but not the same. The fundamental requirements of carrier-grade telecom infrastructure differ from those of enterprise IT and web platforms. The management, security, quality, performance, availability, reliability and scalability requirements in telecom networks are different and stringent (five nines, i.e. 99.999% availability). These requirements are challenging, and in many cases impossible to meet on existing cloud infrastructure. For example, fault detection is expected within a sub-second, not within minutes. The European Telecommunications Standards Institute (ETSI), working together with leading telecom companies, open-source communities and other standards development organizations (SDOs), has defined—and continues to define—architectures, interfaces and proofs of concept (PoCs) that address these challenges.

In conclusion, as innovation cycles and traffic in the network continue to accelerate, telecom service providers and network operators have to find ways to boost the economics, innovation and agility needed to fully automate networks and services. NFV is the savior, and SDN is the complement that allows the benefits of software-defined networking to be fully realized.
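The "five nines" figure above is easy to make concrete: at 99.999% availability, a network may be down for only a few minutes per year. A quick back-of-the-envelope sketch (the availability targets are standard industry figures, not taken from this post):

```python
# Back-of-the-envelope downtime budgets for common availability targets.
# 99.999% ("five nines") is the carrier-grade figure cited above.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    """Allowed downtime per year, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(avail):.2f} min/year")
```

Five nines allows roughly 5.3 minutes of downtime per year, which is why sub-second fault detection matters: a single slow recovery can consume the entire annual budget.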
<urn:uuid:d11f990c-e638-4ecb-8a89-221d18d1168b>
CC-MAIN-2017-04
http://exfo.com/corporate/blog/2016/what-is-nfv
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00114-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925557
1,110
2.734375
3
Security Antivirus is a rogue from the same family as Virus Doctor. This rogue is promoted through the use of Trojans and fake online anti-malware scanners. When installed, Security Antivirus will be configured to start automatically when you log into Windows. The installer will also create numerous fake malware files that will be detected as malware when the program scans your computer. The list of fake malware files that it installs is:

When Security Antivirus scans your computer it will find the above files and state that they are infections. It will not, though, allow you to remove any of them until you first purchase the program. In reality, the above files are harmless and can cause no harm to your computer. They are only created in order to validate the scan results. As these infections are fake, please do not purchase the program based upon anything that it displays.

While the program is running it will also display numerous security alerts and warnings on your desktop. These alerts will state that your computer is under attack, sending SPAM, or that your personal data is at risk. Some of the alerts that you may see are:

An unauthorized program has been prevented from accessing your PC remotely. #Port:433 from 184.108.40.206

An unauthorized software C:\Program Files\Internet Explorer\Iexplore.exe which is potentially malicious and able to modify system files has been prevented from being installed on your PC.

Security Antivirus has detected potentially harmful software in your system. It is strongly recommended that you register Security Antivirus to remove all found threats immediately.

Potentially harmful programs have been detected in your system and need to be dealt with immediately. Click here to remove them using Security Antivirus.

Your PC may still be infected with dangerous viruses. Security Antivirus protection is needed to prevent data loss and avoid theft of your personal data and credit card details. Click here to activate protection.

Suspicious software which may be malicious has been detected on your PC. Click here to remove this threat immediately using Security Antivirus.

Click here to remove all potentially harmful programs found immediately using Security Antivirus.

Malicious applications, which may contain Trojans, were found on your computer and are to be removed immediately. Click here to remove these potentially harmful items using Security Antivirus.

No real-time malware, spyware and virus protection was found. Click here to activate.

Just like the scan results, these fake warnings should be ignored, as they are just another attempt to make you think your computer has a security problem. This infection will also hijack your web browser's default search engine and set it to findgala.com. Last, but not least, this infection will add entries to your HOSTS file so that when you visit certain sites such as Google or Bing, you will be redirected to a site under the control of the malware developers.

As you can see, you should not purchase this program regardless of what it may state. If you have already purchased the program, then please contact your credit card company and dispute the charges. Finally, please use the guide below to remove this infection and any related malware for free.

Self Help Guide

- Print out these instructions as we may need to close every window that is open later in the fix.
- It is possible that the infection you are trying to remove will not allow you to download files on the infected computer. If this is the case, then you will need to download the files requested in this guide on another computer and then transfer them to the infected computer. You can transfer the files via a CD/DVD, external drive, or USB flash drive.
- Before we can do anything we must first end the processes that belong to Security Antivirus so that it does not interfere with the cleaning procedure. To do this, please download RKill to your desktop from the following link.
RKill Download Link - (Download page will open in a new tab or browser window.) When at the download page, click on the Download Now button labeled iExplore.exe download link. When you are prompted where to save it, please save it on your desktop.
- Once it is downloaded, double-click on the iExplore.exe icon in order to automatically attempt to stop any processes associated with Security Antivirus and other rogue programs. Please be patient while the program looks for various malware programs and ends them. When it has finished, the black window will automatically close and you can continue with the next step. If you get a message that RKill is an infection, do not be concerned. This message is just a fake warning given by Security Antivirus when it terminates programs that may potentially remove it. If you run into these infection warnings that close RKill, a trick is to leave the warning on the screen and then run RKill again. By not closing the warning, this typically will allow you to bypass the malware trying to protect itself, so that RKill can terminate the processes behind it. So, please keep running RKill until the malware is no longer running. You will then be able to proceed with the rest of the guide. Do not reboot your computer after running RKill, as the malware programs will start again. If you continue having problems running RKill, you can download the other renamed versions of RKill from the RKill download page. Both of these files are renamed copies of RKill, which you can try instead. Please note that the download page will open in a new browser window or tab.
- At this point you should download Malwarebytes Anti-Malware, or MBAM, to scan your computer for any infections or adware that may be present.
Please download Malwarebytes from the following location and save it to your desktop: Malwarebytes Anti-Malware Download Link (Download page will open in a new window)
- Once downloaded, close all programs and windows on your computer, including this one.
- Double-click on the icon on your desktop named mb3-setup-1878.1878-220.127.116.119.exe. This will start the installation of MBAM onto your computer.
- When the installation begins, keep following the prompts in order to continue with the installation process. Do not make any changes to default settings and, when the program has finished installing, make sure you leave Launch Malwarebytes Anti-Malware checked. Then click on the Finish button. If Malwarebytes prompts you to reboot, please do not do so.
- MBAM will now start and you will be at the main screen as shown below. Please click on the Scan Now button to start the scan. If there is an update available for Malwarebytes it will automatically download and install it before performing the scan.
- MBAM will now start scanning your computer for malware. This process can take quite a while, so we suggest you do something else and periodically check on the status of the scan to see when it is finished.
- When MBAM is finished scanning it will display a screen that shows any malware that it has detected. Please note that the infections found may be different than what is shown in the image below due to the guide being updated for newer versions of MBAM. You should now click on the Remove Selected button to remove all the selected malware. MBAM will now delete all of the files and registry keys and add them to the program's quarantine. When removing the files, MBAM may require a reboot in order to remove some of them. If it displays a message stating that it needs to reboot, please allow it to do so. Once your computer has rebooted, and you are logged in, please continue with the rest of the steps.
- You can now exit the MBAM program.
- As this infection also changes your Windows HOSTS file, we want to replace this file with the default version for your operating system. Please note that if you or your company has added custom entries to your HOSTS file then you will need to add them again after restoring the default HOSTS file. In order to protect itself, Security Antivirus changes the permissions of the HOSTS file so you can't edit or delete it. To fix these permissions please download the following batch file and save it to your desktop:
- We now need to delete the C:\Windows\System32\Drivers\etc\HOSTS file. Once it is deleted, download the following HOSTS file that corresponds to your version of Windows and save it in the C:\Windows\System32\Drivers\etc folder. If the contents of the HOSTS file opens in your browser when you click on a link below then right-click on the appropriate link and select Save Target As..., if in Internet Explorer, or Save Link As..., if in Firefox, to download the file.
Windows XP HOSTS File Download Link
Windows Vista HOSTS File Download Link
Windows 2003 Server HOSTS File Download Link
Windows 2008 Server HOSTS File Download Link
Windows 7 HOSTS File Download Link
Your Windows HOSTS file should now be back to the default one from when Windows was first installed.
- Now reboot your computer.
- As many rogues and other malware are installed through vulnerabilities found in out-dated and insecure programs, it is strongly suggested that you use Secunia PSI to scan for vulnerable programs on your computer. A tutorial on how to use Secunia PSI to scan for vulnerable programs can be found here: How to detect vulnerable and out-dated programs using Secunia Personal Software Inspector
Your computer should now be free of the Security Antivirus program. If your current anti-virus solution let this infection through, you may want to consider purchasing the PRO version of Malwarebytes Anti-Malware to protect against these types of threats in the future.
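The HOSTS-file restoration step above boils down to replacing one text file with a known-good default. A minimal Python sketch of that idea, writing to an invented scratch path rather than the live C:\Windows\System32\Drivers\etc\HOSTS (the file name and one-line default content here are simplifying assumptions; on a real system, use the OS-specific download links above):

```python
# Sketch: overwrite a HOSTS file with a minimal known-good default.
# "hosts_demo.txt" is a scratch path for demonstration only; a real
# restore would target C:\Windows\System32\Drivers\etc\HOSTS.
import os

DEFAULT_HOSTS = "127.0.0.1       localhost\n"

def restore_hosts(path):
    """Replace the file at `path` with the minimal default HOSTS content."""
    with open(path, "w") as f:
        f.write(DEFAULT_HOSTS)

restore_hosts("hosts_demo.txt")
with open("hosts_demo.txt") as f:
    print(f.read().strip())  # 127.0.0.1       localhost
os.remove("hosts_demo.txt")  # clean up the scratch file
```

This is why the malware bothers to lock the file's permissions: once the redirect entries are gone, a single rewrite undoes the hijack.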
<urn:uuid:96b1420b-eef6-4098-9d86-757cfaac125b>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/virus-removal/remove-security-antivirus
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00352-ip-10-171-10-70.ec2.internal.warc.gz
en
0.873624
2,060
2.703125
3
As an Atlanta-based company we certainly are no strangers to annual hurricane alerts. It comes with the territory, and over the years it has been remarkable to see how accurately a hurricane's path can now be predicted. We almost always know when, and if, we're going to be affected. The part that still eludes forecasters, however, also happens to be the most important part: how severe will the weather be by the time it makes landfall? This is because the most critical area of a hurricane—the place that reveals the most about a storm's intensity and potential staying power—is at its lowest point, right where the sea meets the air. That is where the key dynamics that affect intensity, including the rate of evaporation that fuels a hurricane, take place. It is easy to understand why meteorologists and researchers haven't been able to gather proper data from this area. The perils of sending in manned aircraft are just too extreme. Would you want to be in that plane? But advances in two key technologies could be changing the way hurricanes get studied, and it may eliminate the guesswork from predicting a storm's strength. The first is the steady evolution of drone aircraft, which over the past few years have gotten smaller, more agile, more power efficient and, most importantly, cheaper. The second involves the continuous evolution of wireless communications, sensor miniaturization and high-performance processing to create M2M devices that are robust enough to collect and transmit data from even the harshest of surroundings. The question is: does that include hurricanes? All these advancements are packaged together in what the National Oceanic and Atmospheric Administration appropriately calls its new Coyote drone. The 3-foot disposable craft is designed to float on a hurricane's air currents in a slow descent to the sea, capturing vital data for the duration of its journey.
NOAA meteorologist Joseph Cione sums up the new methodology perfectly: "The way we're measuring things now is a snapshot. The Coyote will give us a movie." Armed with better data, responders will know the strength of a storm while it is still out to sea—thus gaining precious lead time to plan evacuations as necessary. Just as importantly, they can avoid the costly, burdensome, and often embarrassing situations when it turns out that the storm whimpers out by the time it hits land. On a wider scale, if this technology proves able to collect precise data from a hurricane's deepest section, where the sea and winds churn violently, think of what that means if you are a municipal sanitation manager, or a habitat scientist, or an environmental engineer; the opportunity to gather data from previously inaccessible areas might provide an early read on possibly dangerous conditions for people, property or ecosystems, and allow for remediation before those conditions have a chance to take their toll. That will be priceless indeed, so unleash the Coyote!
<urn:uuid:821d6f48-db34-49ec-bf8d-8466460f08d7>
CC-MAIN-2017-04
http://www.koretelematics.com/blog/could-m2m-and-a-coyote-lead-towards-better-hurricane-forecasting
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00170-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939218
609
2.9375
3
NASA today said it wants to gauge industry interest in the agency holding one of its patented Centennial Challenges to build the next cool unmanned aircraft. NASA said it is planning this Challenge in collaboration with the Federal Aviation Administration and the Air Force Research Lab, with NASA providing the prize purse of up to $1.5 million. The purpose of this exploration or Request For Information is to: - Determine the unmanned aircraft community's level of interest in competing in this Challenge, - Gather feedback on the draft rules (see below for a link) - Identify potential partners interested in (a) providing a venue for the flight competition, as well as assisting NASA in managing and executing this Challenge which may include qualification of potential competitors. The type of challenge NASA said it is envisioning would be no easy task as it is looking to address one of the more complicated drone issues - sensing and avoiding other aircraft. BACKGROUND: What the drone invasion looks like "NASA is considering initiation of an Unmanned Aircraft Systems Airspace Operations Challenge (UAS AOC or Challenge) focused on finding innovative solutions to the problems surrounding the integration of UAS into the National Airspace System. The approach being considered would require competitors to maintain safe separation from other air traffic while operating their UAS in congested airspace, under a variety of scenarios. This will be accomplished through the use of sense and avoid technologies, as envisioned in the Next Generation Air Transportation System," NASA said. NASA said the Challenge would be divided into two parts.
The Level 1 Competition - with a $500,000 purse -- "would focus on a competitor's ability to fly 4-Dimensional Trajectories to provide a reasonable expectation that the drones will be where they are supposed to be, when they are scheduled to be there, successfully employ Automatic Dependent Surveillance Broadcast (ADS-B), maintain safe separation from other ADS-B equipped air traffic, and operate safely in a number of contingency situations. ADS-B In-equipped aircraft are able to receive messages broadcast from other aircraft and the air traffic management system that describe the current position, heading, and speed of nearby air traffic." The Level 2 Competition - with a $1 million purse -- would go beyond the first level and add a "requirement to maintain safe separation from air traffic not equipped with ADS-B and a requirement that the vehicle be able to communicate verbally with the Air Traffic Control system under lost link conditions. Competitors would be required to have a working Hardware-in-the-Loop Simulation (HiLSim) for their flight vehicle. The HiLSim would be used at the beginning of the competition, prior to flight, to verify that a competing UAS's flight operators, ground software, and flight software exhibit the proper responses in a variety of safety-critical situations. It would also be used to verify that a team is capable of performing the basic tasks required by the competition. HiLSim test suites would be provided prior to the competition to allow competitors to verify they are in compliance with contest requirements during development." NASA on the Challenge: "Each competitor will be required to deploy and operate their UAS on a relatively tight schedule to avoid disrupting the UAS AOC schedule and negatively impacting other challenge competitors. Every event that requires a competitor to fly their aircraft for scoring is called a 'mission.'
The five distinct segments of a mission are: aircraft launch, pre-4DT loiter, 4DT flight, post-4DT loiter, and aircraft recovery. This structure enables surrounding air traffic to be created using a combination of real and virtual aircraft working synchronously to create specific scenarios for the competitors. Prior to each mission, competitors must declare several details about their aircraft and how they intend to operate it. Chief among these is their preferred cruise speed for their aircraft. This cruise speed is used to establish the overall size of the geo-fence, the waypoint hit radius, and other characteristics of the 4DT that will define the missions assigned to them. Tailoring the size of the course to the capabilities of each competing UAV, while keeping event timing for the sometimes-complex mission scenarios constant, will enable fair competition between UAS that vary significantly in size and performance. Required air traffic separation distances will be chosen to capture important scale effects inherent in operating different classes of aircraft." NASA said in the past its Centennial Challenges program is designed to get to get what it calls "unconventional solutions from non-traditional sources." It also hopes to identify new tech talent and stimulate the creation of new businesses. Unlike contracts and grants based on proposals, prizes are only awarded after competitors have successfully demonstrated their innovations. A recent round of Challenges included: - The Sample Return Robot Challenge is to demonstrate a robot that can locate and retrieve geologic samples from wide and varied terrain without human control. This challenge has a prize purse of $1.5 million. The objectives are to encourage innovations in automatic navigation and robotic manipulator technologies. - The Nano-Satellite Launch Challenge is to place a small satellite into Earth orbit, twice in one week, with a prize of $2 million. 
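The speed-proportional course sizing described in the quoted rules can be sketched as a simple scaling: hold mission timing constant and let distances grow with each competitor's declared cruise speed. The time-window constants below are invented for illustration only; the actual contest parameters are not given in the rules text quoted here.

```python
# Hypothetical sketch of speed-proportional course sizing, per the
# quoted rules. The time windows are invented constants, not taken
# from the actual contest documents.
WAYPOINT_WINDOW_S = 10    # assumed: seconds of flight across the hit radius
GEOFENCE_WINDOW_S = 300   # assumed: seconds of flight across the course

def course_dimensions(cruise_speed_mps):
    """Scale waypoint hit radius and geo-fence extent with cruise speed,
    keeping event timing constant across aircraft classes."""
    return {
        "waypoint_hit_radius_m": cruise_speed_mps * WAYPOINT_WINDOW_S,
        "geofence_extent_m": cruise_speed_mps * GEOFENCE_WINDOW_S,
    }

# A 20 m/s UAS flies a course twice the size of a 10 m/s UAS,
# but both take the same time to complete it.
print(course_dimensions(10))
print(course_dimensions(20))
```

The point of the proportionality is fairness: a small, slow UAS and a large, fast one face the same mission schedule even though their courses differ in size.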
The goals of this challenge are to stimulate innovations in low-cost launch technology and encourage creation of commercial nano-satellite delivery services. - The Night Rover Challenge is to demonstrate a solar-powered exploration vehicle that can operate in darkness using its own stored energy. The prize purse is $1.5 million. The objective is to stimulate innovations in energy storage technologies of value in extreme space environments, such as the surface of the moon, or for electric vehicles and renewable energy systems on Earth. Last year NASA awarded what it called the largest prize in aviation history to Pipistrel-USA.com, a company that flew their aircraft 200 miles in less than two hours on less than one gallon of fuel or electric equivalent. The aircraft known as the Taurus G4 was a twin fuselage aircraft that featured a 145 kW electric motor, lithium-ion batteries, and retractable landing gear. While safely flying unmanned aircraft in the nation's airspace hasn't been listed as a grand challenge just yet, scientists and researchers at DARPA and the White House Office of Science and Technology Policy recently put out a public call for ideas that could form what they call the Grand Challenges - ambitious yet achievable goals that would herald serious breakthroughs in science and technology. In defining what the government groups are looking for, Thomas Kalil, Deputy Director for Policy for the White House Office of Science and Technology Policy said that while there might not be a universally accepted definition of what constitutes a Grand Challenge, they typically do have certain attributes including: - They can have a major impact in domains such as health, energy, sustainability, education, economic opportunity, national security, or human exploration. - They are ambitious but achievable. Proposing to end scarcity in five years is certainly ambitious, but it is not achievable.
As Arthur Sulzberger put it, "I believe in an open mind, but not so open that your brains fall out." - Grand Challenges are compelling and intrinsically motivating. They should capture the public's imagination. Many people should be willing to devote a good chunk of their career to the pursuit of one of these goals. - Grand Challenges have a "Goldilocks" level of specificity and focus. "Improving the human condition" is not a Grand Challenge because it does not provide enough guidance for what to do next. One of the virtues of a goal like "landing a man on the moon and returning him safely to the earth" is that it is clear whether it has been achieved. Grand Challenges should have measurable targets for success and timing of completion. On the other hand, a Grand Challenge that is too narrowly defined may assume a particular technical solution and reduce the opportunity for new approaches. - Grand Challenges can help drive and harness innovation and advances in science and technology. I certainly do not want to argue that technology is going to solve all of our problems. But it can be a powerful tool, particularly when combined with social, financial, policy, institutional and business model innovations. As for flying drones safely in the national airspace, a recent report from the Government Accountability Office lists the sense and avoid challenge a critical safety issue that must be addressed. From the GAO: "The inability for unmanned aircraft to detect, sense, and avoid other aircraft and airborne objects in a manner similar to "see and avoid" by a pilot in a manned aircraft. To date, no suitable technology has been deployed that would provide unmanned systems with the capability to sense and avoid other aircraft and airborne objects and to comply completely with FAA regulatory requirements of the national airspace system. 
However, research and development efforts by FAA, DOD, NASA, and MITRE, among others, suggests that potential solutions to the sense and avoid obstacle may be available in the near term. With no pilot to scan the sky, most UAS do not have an on-board capability to directly 'see' other aircraft. Consequently, unmanned aircraft must possess the capability to sense and avoid an object using on-board equipment, or within the line-of-sight of a human on the ground or in a chase aircraft, or by other means, such as ground-based sense and avoid. Since 2008, FAA and other federal agencies have managed several research activities to support meeting the sense and avoid requirements. DOD officials said the Department of the Army is working on a ground-based system that will detect other airborne objects and allow the pilot to direct the UAS to maneuver to a safe location. The Army has successfully tested one system, but it may not be useable on all types of drones."
<urn:uuid:bf492193-755f-44a8-b122-68922d61fd67>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223337/security/nasa-exploring--1-5-million-unmanned-aircraft-competition.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946934
1,965
2.609375
3
Definition: A list of characters, usually implemented as an array. Informally a word, phrase, sentence, etc. Since text processing is so common, a special type with substring operations is often available. See also string matching, string matching with errors, longest common subsequence, end-of-string, text. Note: The term string usually refers to a small sequence of characters, such as a name or a sentence. The term text usually refers to a large sequence of characters, such as an article or a book. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 14 December 2005. Cite this as: Paul E. Black, "string", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 December 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/string.html
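To make the definition concrete, here is how the entry's ideas — character access as in an array, a substring operation, and string matching — look in Python, whose built-in str is one such "special type":

```python
# A string behaves as an indexable sequence of characters,
# with substring (slicing) and matching operations built in.
s = "algorithm"

print(s[0])        # character access, as in an array: 'a'
print(s[3:7])      # substring operation: 'orit'
print(list(s))     # the underlying list-of-characters view
print("rit" in s)  # string matching: True
```

Languages without such a type (e.g., C) expose the array-of-characters representation directly, which is exactly the implementation the definition describes.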
<urn:uuid:ed94ded9-2895-41c9-baa3-98be4ba9cab8>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/string.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902475
221
3.125
3
Chapter 11 – Parallel Connections

As mentioned several times in previous chapters, data busses can be divided into serial and parallel connections. The last part of the previous chapter described the method of noise amelioration called "differential signaling", in which each data line is doubled. Each of the serial and parallel busses can be implemented using differential signaling. A serial bus would use one pair of lines, while an N–bit parallel bus would use N pairs of lines (2N lines total). Put simply, it is not the number of data lines that defines the bus, but the number of bits that are transmitted at once. As mentioned before, it would appear obvious that a parallel data bus would give much better performance than any serial data bus. We shall return to the tradeoff between serial and parallel connections when we discuss the ATA bus and SATA bus, later in this chapter. For now, let's look at parallel busses.

A computer contains a number of busses, at various levels. The CPU contains data busses for its internal work. This chapter focuses on busses external to the CPU. Some early designs used a single bus allowing the CPU to communicate with the other components. The single–bus design is logically correct, but it no longer represents the actual organization of any modern commercial computers. The problem is that the bus cannot handle the workload associated with modern computers, especially the requirement for fast access to memory and intense data rates to the graphics card. Here is a refinement of the above diagram. We now have two fast busses, one each for the graphics card and the memory. I/O devices are relegated to the system bus, operating at a slower speed. This design is getting closer to reality. We now turn to commercial realities, specifically legacy I/O devices. When upgrading a computer, most users do not want to buy all new I/O devices (expensive) to replace older devices that still function well.
The I/O system must provide a number of busses of different speeds, addressing capabilities, and data widths, to accommodate this variety of I/O devices. Here we show the main I/O bus connecting the CPU to the I/O Control Hub (ICH), which is connected to two I/O busses: one for slower (older) devices and one for faster (newer) devices. The requirement to handle memory as well as a proliferation of I/O devices has led to a new design based on two controller hubs. One important function of each hub is to handle data transfer between two busses that operate at different clock speeds. These hubs are:
1. The Memory Controller Hub or "North Bridge"
2. The I/O Controller Hub or "South Bridge"
Such a design allows for grouping the higher–data–rate connections on the faster controller, which is closer to the CPU, and grouping the slower data connections on the slower controller, which is more removed from the CPU. The names "Northbridge" and "Southbridge" come from analogy to the way a map is presented. In almost all chipset descriptions, the Northbridge is shown above the Southbridge. In almost all maps, north is "up". It is worth noting that, in later designs, much of the functionality of the Northbridge has been moved to the CPU chip.

Backward Compatibility in System Buses

The early evolution of the Intel microcomputer line provides an interesting case study in the effect of commercial pressures on system bus design. We focus on three of the earliest models, the Intel 8086, Intel 80286, and Intel 80386. All three had 16–bit data lines. The Intel 8086 had a 20–bit address line. It could address 1 MB of memory. The Intel 80286 had a 24–bit address line. It could address 16 MB of memory. The Intel 80386 had a 32–bit address line. It could address 4 GB of memory. Here is a figure showing the growth of the address bus structure for these three designs.
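The address-range figures above follow directly from the width of the address bus: an N-bit address selects one of 2^N byte locations. A quick check of the three designs:

```python
# Addressable memory for an n-bit address bus: 2**n byte addresses.
def addressable_bytes(n_bits):
    return 2 ** n_bits

print(addressable_bytes(20) // 2**20, "MB")  # Intel 8086:  1 MB
print(addressable_bytes(24) // 2**20, "MB")  # Intel 80286: 16 MB
print(addressable_bytes(32) // 2**30, "GB")  # Intel 80386: 4 GB
```

Each extra address bit doubles the addressable range, which is why four additional bits (20 to 24) took the 8086's 1 MB to the 80286's 16 MB.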
Note that an old style (Intel 8086) bus interface card could be inserted into the 20–bit slot of either the Intel 80286 or Intel 80386 and still function normally. The Intel 80286 interface, with its 24 bits of address split into two parts (20 bits and 4 bits), could fit the 24–bit slot of the Intel 80386. The Intel 80286 was marketed as the IBM PC/AT (Advanced Technology). Your author fondly remembers his PC/AT from about 1986; it was his first computer with a hard disk (40 MB).

Detour: The IBM Micro–Channel Bus
The Micro–Channel Architecture (MCA) was a proprietary bus created by IBM in the 1980s for use on their new PS/2 computers. It was first introduced in 1987, but never became popular. Later, IBM redesigned most of these systems to use the PCI bus design (more on this later). The PS/2 line was seen by IBM as a follow–on to their PC/AT line, but was always too costly, typically selling at a premium. In 1990, the author of this textbook was authorized to purchase a new 80386–class computer for his office. The choice was either an IBM MCA unit or a PC clone. This was put out for bids. When the bids were received, the lowest IBM price was over $5,000, while a compatible PC of the same power was $2,900.

According to Wikipedia, "[while] MCA was a huge technical improvement over ISA, its introduction and marketing by IBM was poorly handled. IBM did not develop a peripheral card market for MCA, as it had done for the PC. It did not offer a number of peripheral cards that utilized the advanced bus-mastering and I/O processing capabilities of MCA. Absent a pattern, few peripheral card manufacturers developed such designs on their own. Consequently customers were not provided many advanced capabilities to justify the purchase of comparatively more expensive MCA systems and opted for the plurality of cheaper ISA designs offered by IBM's competition." IBM held patents on MCA system features and required MCA system manufacturers to pay a license fee.
"As a reaction to this, in late 1988 the 'Gang of Nine', led by Compaq, announced a rival bus – EISA. Offering similar performance benefits, it had the advantage of being able to accept older XT and ISA boards."

MCA also suffered for being a proprietary technology. "Unlike their previous PC bus design, the AT bus, IBM did not publicly release specifications for MCA and actively pursued patents to block third parties from selling unlicensed implementations of it, and the developing PC clone market did not want to pay royalties to IBM in order to use this new technology. The PC clone makers instead developed EISA as an extension to the existing old AT bus standard. The 16–bit AT bus was embraced and renamed ISA to avoid IBM's 'AT' trademark. With few vendors other than IBM supporting it with computers or cards, MCA eventually failed in the marketplace." "While EISA and MCA battled it out in the server arena, the desktop PC largely stayed with ISA up until the arrival of PCI, although the VESA Local Bus, an acknowledged stopgap, was briefly popular." [R92]

Notations Used for a Bus
We pause here in our historical discussion of bus design to introduce a few terms used to characterize these busses. We begin with some conventions used to draw busses and their timing diagrams. Here is the way that we might represent a bus with multiple types of lines. The big "double arrow" notation indicates a bus comprising a number of different signals; some authors call this a "fat arrow". Lines with similar function are grouped together, and their count is denoted with the "diagonal slash" notation. From top to bottom, we have:
1. Three data lines D2, D1, and D0
2. Two address lines A1 and A0
3. The clock signal for the bus, F
Not all busses transmit a clock signal; the system bus usually does. Power and ground lines usually are not shown in this type of diagram. Note that a bus with only one type of signal might be drawn as a thick line with the slash, as in the 3–bit data bus above.
Maximum Bus Length
In general, bus length varies inversely with transmission speed, which is often measured in Hz; e.g., a 1 MHz bus can make one million transfers per second and a 2 GHz bus can make two billion. Immediately we should note that the above is not exactly true of DDR (Double Data Rate) busses, which transfer at twice the bus speed; a 500 MHz DDR bus transfers 1 billion times a second. Note that the speed in bytes per second is related to the number of bytes per transfer. A DDR bus rated at 400 MHz and having 32 data lines would transfer 4 bytes 800 million times a second, for a total of 3.20 billion bytes per second. Note that this is the peak transfer rate.

The relation of bus speed to bus length is due to signal propagation time. The speed of light is approximately 30 centimeters per nanosecond. Electrical signals on a bus typically travel at about 2/3 the speed of light: 20 centimeters per nanosecond, or 200 meters per microsecond. A loose rule of thumb in sizing busses is that the signal should be able to propagate the entire length of the bus twice during one clock period. Thus, a 1 MHz signal would have a one microsecond clock period, during which time the signal could propagate no more than two hundred meters. This length is a round trip on a one-hundred-meter bus; hence, the maximum length is 100 meters. Similarly, a 1 GHz signal would lead to a maximum bus length of ten centimeters. The rule above is only a rough estimator, and may be incorrect in some details. Since the bus lengths on a modern CPU die are on the order of one centimeter or less, we have no trouble.

It should be no surprise that, depending on the feature being considered, there are numerous ways to characterize busses. We have already seen one classification, what might be called a "mixed bus" vs. a "pure bus"; i.e., whether the bus carries more than one type of signal. Most busses are of the mixed variety, carrying data, address, and control signals.
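The round-trip rule and the peak-rate arithmetic above are easy to put into code. The following sketch (function names are ours) uses the 20 cm/ns propagation figure:

```python
SIGNAL_SPEED = 2.0e8           # meters/second, about 2/3 the speed of light

def max_bus_length_m(clock_hz):
    """Rule of thumb: the signal must traverse the bus twice (one round
    trip) within a single clock period."""
    period = 1.0 / clock_hz
    return SIGNAL_SPEED * period / 2.0

def peak_rate_bytes_per_s(clock_hz, data_lines, ddr=False):
    """Peak transfer rate: bytes per transfer times transfers per second.
    A DDR bus makes two transfers per clock period."""
    transfers_per_s = clock_hz * (2 if ddr else 1)
    return transfers_per_s * (data_lines // 8)

print(max_bus_length_m(1.0e9))                     # 1 GHz bus: 0.1 m (10 cm)
print(peak_rate_bytes_per_s(400e6, 32, ddr=True))  # 3.2 billion bytes/second
```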
The internal CPU busses in our design carry only data, because they are externally controlled by the CPU Control Unit, which sends signals directly to the bus end points. One taxonomy of busses refers to them as either point–to–point or shared. Here is a picture of that way of looking at busses. An example of a point–to–point bus might be found in the chapter on computer internal memory, where we postulated a bus between the MBR and the actual memory chip set. Most commonly, we find shared busses with a number of devices attached.

Another way of characterizing busses is by the number of "bus masters" allowed. A bus master is a device that has circuitry to issue command signals and place addresses on the bus. This is in distinction to a "bus slave" (politically incorrect terminology) that can only transfer data in response to commands issued by a bus master. In the early designs, only the CPU could serve as a bus master for the memory bus. More modern memory busses allow some input/output devices (discussed later as DMA devices) to act as bus masters and transfer data to the memory.

Another way to characterize busses is whether the bus is asynchronous or synchronous. A synchronous bus is one that has one or more clock signals associated with it, transmitted on dedicated clock lines. In a synchronous bus, the signal assertions and data transfers are coordinated with the clock signal, and can be said to occur at predictable times. An asynchronous bus is one without a clock signal. The data transfers and some control signal assertions on such a bus are controlled by other control signals. Such a bus might be used to connect an I/O unit with unpredictable timing to the CPU. The I/O unit might assert some sort of ready signal when it can undertake a transfer and a done signal when the transfer is complete. In order to understand these busses more fully, it would help if we moved on to a discussion of bus timing diagrams and signal levels.
Bus Signal Levels
Many times bus operation is illustrated with a timing diagram that shows the value of the digital signals as a function of time. Each signal has only two values, corresponding to logic 0 and to logic 1. The actual voltages used for these signals will vary depending on the technology used. A bus signal is represented in some sort of trapezoidal form, with rising edges and falling edges, neither of which is represented as a vertical line. This convention emphasizes that the signal cannot change instantaneously, but takes some time to move between logic high and low. Here is a depiction of the bus clock, represented as a trapezoidal wave.

Here is a sample diagram, showing two hypothetical discrete signals. Here the discrete signal B# goes low during the high phase of clock T1 and stays low. Signal A# goes low along with the second half of clock T1 and stays low for one half clock period. A collection of signals, such as 32 address lines or 16 data lines, cannot be represented with a simple diagram. For each of address and data, we have two important states: the signals are valid, and the signals are not valid. Consider the address lines on the bus. Imagine a 32–bit address. At some time after T1, the CPU asserts an address on the address lines. This means that each of the 32 address lines is given a value, and the address is valid until the middle of the high part of clock pulse T2, at which point the CPU ceases assertion.

Having seen these conventions, it is time to study a pair of typical timing diagrams. We first study the timing diagram for a synchronous bus. Here is a read timing diagram. What we have here is a timing diagram that covers three full clock cycles on the bus. Note that during the high clock phase of T1, the address is asserted on the bus and kept there until the low clock phase of T3. Before and after these times, the contents of the address bus are not specified. Note that this diagram specifies some timing constraints.
The first is TAD, the maximum allowed delay for asserting the address after the clock pulse if the memory is to be read during the high phase of the third clock pulse. Note that the memory chip will assert the data for one half clock pulse, beginning in the middle of the high phase of T3. It is during that time that the data are copied into the MBR. Note that the three control signals of interest (MREQ#, RD#, and WAIT#) are asserted low. We also have another constraint, TML, the minimum time that the address is stable before MREQ# is asserted. The purpose of the diagram above is to indicate what has to happen, and when it has to happen, in order for a memory read to be successful via this synchronous bus. We have four discrete signals (the clock and the three control signals) as well as two multi–bit values (memory address and data). For the discrete signals, we are interested in the specific value of each at any given time. For the multi–bit values, such as the memory address, we are only interested in characterizing the time interval during which the values are validly asserted on the lines. Note that the more modern terminology for the three control signals that are asserted low would be MREQ#, RD#, and WAIT#. The reader will note that the figures in this chapter make use of both styles for writing these control signals; translation to a uniform notation is bothersome.

The timing diagram for an asynchronous bus includes some additional information. Here the focus is on the protocol by which the two devices interact; this is also called the "handshake". The bus master asserts MSYN# and the bus slave responds with SSYN# when done. The asynchronous bus uses similar notation for both the discrete control signals and the multi–bit values, such as the address and data. What is different here is the "causal arrows", indicating that the change in one signal is the cause of some other event. Note that the assertion of MSYN# causes the memory chip to place data on the bus and assert SSYN#.
That assertion causes MSYN# to be dropped, the data to be no longer asserted, and then SSYN# to drop.

A bus may be either multiplexed or non–multiplexed. In a multiplexed bus, data and address share the same lines, with a control signal to distinguish the use. A non–multiplexed bus has separate lines for address and data. The multiplexed bus is cheaper to build, in that it has fewer signal lines; a non–multiplexed bus is likely faster. There is a variant of multiplexing, possibly called "address multiplexing", that is seen on modern memory busses. In this approach, an N–bit address is split into two (N/2)–bit addresses, one a row address and one a column address. The addresses are sent separately over a dedicated address bus, with the control signals specifying which address is being sent. Recall that most modern memory chips are designed for such addressing. The strategy is to specify a row, and then to send multiple column addresses for references in that row. Some modern chips transmit in burst mode, essentially sending an entire row automatically. Here, for reference, is the control signal description from the chapter on internal memory.

Command / Action
Deselect / Continue previous operation
NOP / Continue previous operation
ACTIVE / Select and activate row
READ / Select column and start READ burst
WRITE / Select column and start WRITE burst

Modern Computer Busses
The next major step in the evolution of the computer bus took place in 1992, with the introduction by Intel of the PCI (Peripheral Component Interconnect) bus. By 1995, the bus was operating at 66 MHz and supporting both 32–bit and 64–bit address spaces. According to Abbott [R64], "PCI evolved, at least in part, as a response to the shortcomings of the then venerable ISA bus. … ISA began to run out of steam in 1992, when Windows had become firmly established." Revision 1 of the PCI standard was published in April 1993. The PCI bus standard has evolved into the PCI Express standard, which we shall now discuss.
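The row/column split used in address multiplexing amounts to simple bit slicing. A minimal sketch (function name is ours), assuming the row address occupies the high-order half of the address:

```python
def split_address(addr, addr_bits):
    """Split an N-bit address into an (N/2)-bit row address (high half)
    and an (N/2)-bit column address (low half)."""
    half = addr_bits // 2
    column = addr & ((1 << half) - 1)   # low-order bits
    row = addr >> half                  # high-order bits
    return row, column

# A 24-bit address splits into a 12-bit row and a 12-bit column.
row, col = split_address(0xABC123, 24)
print(hex(row), hex(col))   # 0xabc 0x123
```

The memory controller would send `row` first (with the row-select control signal), then one or more `column` values for burst references within that row.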
PCI Express (Peripheral Component Interconnect Express) is a computer expansion card standard designed to replace the older PCI bus standard. The name is abbreviated as "PCIe". This is viewed as a standard for computer expansion cards, but really is a standard for the communication link by which a compliant device will communicate over the bus. According to Wikipedia, PCIe 3.0 (August 2007) is the latest standard. While an outgrowth of the original PCI bus standard, PCIe is not compatible with that standard at the hardware level. The PCIe standard is based on a new protocol for electrical signaling. This protocol is built on the concept of a lane, which was defined in Chapter 10 as a full–duplex connection based on two pairs of lines, each implementing differential signaling. As a brief review, one of the pairs of lines in a lane is called the signal transmitter and the other pair the signal receiver; we shall denote the signals T and R. Each pair of lines carries two voltages to represent the voltage V being transmitted: V+ = V/2 and V– = –V/2. Here is another depiction of a full–duplex lane.

A PCIe connection can comprise from 1 to 32 lanes. A 16–lane PCIe link moves 16 bits at a time, one bit on each serial lane. There have been three standards so far:

Version      Per Lane    16–Lane Slot
Version 1    250 MB/s    4 GB/s
Version 2    500 MB/s    8 GB/s
Version 3    1 GB/s      16 GB/s

The PCI Express standard calls for a point–to–point bus (one device communicating with exactly one other device), as opposed to the shared bus topology of earlier standards. The CPU can communicate with a number of devices over a single shared bus. To do so via PCI Express, the CPU must communicate via a device called a "host root complex", which connects the CPU to a number of PCI Express busses, one bus for each device.

Interfaces to Disk Drives
The disk drive is not a stand–alone device. In order to function as a part of a system, the disk must be connected to the motherboard through a bus.
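The slot bandwidths in the table scale linearly with lane count, which is easy to check. The dictionary below simply restates the per-lane figures from the table:

```python
PER_LANE_MB_PER_S = {1: 250, 2: 500, 3: 1000}   # PCIe versions 1 through 3

def slot_bandwidth_mb_per_s(version, lanes):
    """Aggregate bandwidth of a slot: per-lane rate times lane count."""
    return PER_LANE_MB_PER_S[version] * lanes

print(slot_bandwidth_mb_per_s(1, 16))   # 4000 MB/s, i.e. 4 GB/s
print(slot_bandwidth_mb_per_s(3, 16))   # 16000 MB/s, i.e. 16 GB/s
```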
We shall discuss details of disk drives in the next chapter. In this one, we focus on two popular bus technologies used to interface a disk: ATA and SATA. Much of this material is based on discussions in chapter 20 of the book on memory systems by Bruce Jacob, et al. [R008]. The overall organization is shown in the figure below. We are considering the type of bus used to connect the disk drive to the motherboard; more specifically, to connect the host controller on the motherboard to the drive controller on the disk drive. While the figure suggests that the disk is part of the disk drive, this figure applies to removable disks as well. The important feature is the nature of the two controllers and the protocol for communication between them. One of the primary considerations when designing a disk interface, and the bus used to interface it, is the size of the drive controller that is packaged with the disk. As Jacob [R008] put it:

"In the early days, before Large Scale Integration (LSI) made adequate computational power economical to be put in a disk drive, the disk drives were 'dumb' peripheral devices. The host system had to micromanage every low–level action of the disk drive. … The host system had to know the detailed physical geometry of the disk drive; e.g., number of cylinders, number of heads, number of sectors per track, etc."

"Two things changed this picture. First, with the emergence of PCs, which eventually became ubiquitous, and the low–cost disk drives that went into them, interfaces became standardized. Second, large–scale integration technology in electronics made it economical to put a lot of intelligence in the disk side controller."

As of Summer 2011, the four most popular interfaces (bus types) were the two varieties of ATA (Advanced Technology Attachment), SCSI (Small Computer Systems Interface), and FC (Fibre Channel). The SCSI and FC interfaces are more costly, and are commonly used on more expensive computers where reliability is at a premium.
We here discuss the two ATA busses. The ATA interface is now managed by Technical Committee 13 of INCITS (www.t13.org), the International Committee for Information Technology Standards (www.incits.org). The interface was so named because it was designed to be attached to the IBM PC/AT, the "Advanced Technology" version of the IBM PC, introduced in 1984. To quote Jacob again:

"The first hard disk drive to be attached to a PC was Seagate's ST506, a 5.25 inch form factor 5–MB drive introduced in 1980. The drive itself had little on–board control electronics; most of the drive logic resided in the host side controller. Around the second half of the 1980's, drive manufacturers started to move the control logic from the host side and integrate it with the drive. Such drives became known as IDE (Integrated Drive Electronics) drives."

In recent years, the ATA standard has been explicitly referred to as the "PATA" (Parallel ATA) standard, to distinguish it from the SATA (Serial ATA) standard that is now becoming popular. The original PATA standard called for a 40–wire cable. As the bus clock rate increased, noise from crosstalk between the unshielded wires became a nuisance. The new design included 40 extra wires, all ground wires, to reduce the crosstalk.

As an example of a parallel bus, we show a picture of the PDP–11 Unibus. This had 72 wires, of which 56 were devoted to signals and 16 to grounding. This bus is about 1 meter in length.

Figure: The Unibus of the PDP–11 Computer

Up to this point, we have discussed parallel busses: busses that transmit N data bits over N data lines, such as the Unibus™, which used 16 data lines to transmit two bytes per transfer. Recently serial busses have become popular, especially the SATA (Serial Advanced Technology Attachment) busses used to connect internally mounted disk drives to the motherboard. There were two primary motivations for the development of the SATA standard: clock skew and noise.
The problem of clock skew is illustrated by the following pair of figures. The first figure shows a part of the timing diagram for the intended operation of the bus. While these figures may be said to be inspired by actual timing diagrams, they are probably not realistic. In the figure, the control signals MREQ# and RD# are asserted simultaneously, one half clock time after the address becomes valid. The two are simultaneously asserted for two clock times, after which the data are read.

Now imagine what could go wrong when the clock time is very close to the gate delay found in the circuitry that generates these control signals. For example, let us assume a 1 GHz bus clock with a clock time of one nanosecond. The timing diagram above calls for the two control signals, MREQ# and RD#, to be asserted 0.5 nanoseconds (500 picoseconds) after the address is valid. Suppose that the circuit for each of these is skewed by 0.5 nanoseconds, with MREQ# being early and RD# being late. What we have in this diagram is a mess, one that probably will not lead to a functional read operation. Note that MREQ# and RD# are simultaneously asserted for only an instant, far too short a time to allow any operation to be started. MREQ# being early may or may not be a problem, but RD# being late certainly is. A bus with these skews will not work.

As discussed above, the ribbon cable of the PATA bus has 40 unshielded wires. These are susceptible to crosstalk, which limits the permissible clock rate. Crosstalk is a transient phenomenon; the bus must be slow enough to allow its effects to dissipate. We have already seen a solution to the problem of noise when we considered the PCI Express bus; this is the solution adopted by the SATA bus. The standard SATA bus has a seven–wire cable for signals and a separate five–wire cable for power.
The seven–wire cable for data has three wires devoted to ground (noise reduction) and four wires devoted to a serial lane, as described above for PCI Express. As noted in that discussion, the serial lane is relatively immune to noise and crosstalk, while allowing very good transmission speeds. One might expect parallel busses to be inherently faster than serial busses: an N–bit bus will transmit data N times faster than a 1–bit serial bus running at the same clock rate. The experience, however, is that the data transmission rate can be so much higher on the SATA bus than on a parallel bus that the SATA bus is, in fact, the faster of the two. Data transmission on these busses is rated in bits per second. In 2007, according to Jacob, "SATA controllers and disk drives with 3 Gbps are starting to appear, with 6 Gbps on SATA's roadmap." The encoding used is called "8b/10b", in which each 8–bit byte is transmitted as 10 bits; the two extra bits are not error–correcting bits, but serve to guarantee enough signal transitions for the receiver to recover the clock and to keep the line DC–balanced. Allowing for this encoding overhead, the two speeds above correspond to 300 MB per second and 600 MB per second.
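The conversion from raw line rate to payload rate follows directly from the 8b/10b overhead. A short sketch (function name is ours):

```python
def sata_payload_mb_per_s(line_rate_gbps):
    """8b/10b sends 10 line bits for every 8 payload bits, so the payload
    bit rate is 8/10 of the line rate; divide by 8 bits/byte for bytes."""
    payload_bits_per_s = line_rate_gbps * 1e9 * 8 / 10
    return payload_bits_per_s / 8 / 1e6

print(sata_payload_mb_per_s(3.0))   # 300.0 MB/s
print(sata_payload_mb_per_s(6.0))   # 600.0 MB/s
```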
Technological advances in the past half-decade have made it much easier for nonspecialists to use specialized maps as part of their daily work. The Environmental Protection Agency uses custom maps to target pollution enforcement, humanitarian groups use maps to track the migration of people displaced by wars and famine and emergency responders use maps to manage their response to hurricanes and floods. Using maps to glean important public policy data has a long pre-Internet history, though, stretching back, at least, to Lewis and Clark. Linda Pickle has spent decades using maps and other spatial analyses to gather insights from cancer data. She likely had the first copy of Geographic Information System software at the National Institutes of Health, NCI’s parent agency, she told Nextgov recently, and she’s watched as visualization data went from “little better than crayons” to Google Maps applications that nearly anyone can use. Pickle, who now works as a contractor doing temporal and spatial analyses for NCI, spoke with Nextgov about the history of cancer mapping, how maps can affect public policy and how changes in data analyses will and won’t affect cancer mapping. The excerpts below are edited for length and clarity. What information can you glean by looking at cancer rates in a map form? Do you know Tobler’s First Law? It basically says that things that are closer together tend to be more similar. This is true for cancer rates too. The rates tend to be more similar in a local area than, say, between Maryland and California. But, because cancer isn’t spread person to person, this spatial correlation is more of a nuisance. So we want to remove that and look to see what patterns remain after we’ve removed this tendency to be correlated. What we find once we’ve done this is that there’s a much stronger correlation by demographic variables. Marin County, Calif., which is very affluent, is similar to Montgomery County, Md., which is also pretty affluent. 
Why would there be any correlation in cancer rates based just on nearness? Some of it has to do with diagnosis and how they identify cases. They might be better at that in some places than others and that will certainly depend on the type of cancer. Also if there’s some very rare form of cancer, you’ll have better data on that in an area where they’re better able to diagnose it. There’s also an issue of death certificates [from which a lot of cancer data is gleaned] not always being accurate. We have to work with what we’ve got, but we keep in the back of our minds that the data we’re working with may not be totally accurate. So what demographic data do you look at? We model men and women separately. This is all at the county level. We do race and income and, in addition to income, we put in a poverty measure because one variable generally won’t capture the whole socioeconomic picture of a place. We also put in any health care availability and health care utilization information we have. More of that is becoming available than there used to be. We’ll look at how many doctors there are per 1,000 people and how many oncologists. We also include the percent of people who are obese and the percent who ever smoked. We ask if they “ever smoked” because a lot of people have quit but the lag time between the exposure to smoking and the development of or death due to lung cancer can be 20 or 30 years. Do you look at non-demographic data such as pollution rates? If there’s a particular hypothesis that suggests environment might be important we can put that into the model. But environment is difficult because most cancers are thought to take 20 to 30 years to develop. So it’s difficult to know what the environment was when a person was exposed to that cancer and we usually can’t get data back that far. That may change as more data becomes available. 
Also, most of the data we get from state registries is tied to people’s home addresses, which they aggregate and report at some higher geographic unit. But if you think about where you’re exposed to carcinogens during the day, it might be at home or it might be at work. And we don’t have that address. It might be from the fumes in your car or it could be that you’re jogging out near chemical pollutants. Does that lag time between exposure and development of a cancer also make it difficult to use other ‘big data’ to study cancer rates, such as who’s ordering carcinogenic foods on Amazon? Exactly. Also the length of the lag time will vary based on the person’s genetics and environment and other factors, so you don’t know precisely what each person’s lag time is. How is this data used to inform public policy? State epidemiologists look at the patterns in their areas. One good example is when the second generation of the Atlas of Cancer Mortality came out in 1987, it was obvious from just looking at the maps that cervical cancer rates were going down everywhere but in West Virginia and the surrounding areas where they weren’t going down very quickly at all. West Virginia was like a high-rate island on the map all in red. So a person at the state epidemiology office said ‘we have to do something about this’ and they changed their Medicaid policies to cover pap smear screenings for women who couldn’t pay for their own. The next set of maps showed West Virginia rates coming down more quickly.
Definition: An undirected graph that has a path between every pair of vertices. See also complete graph, biconnected graph, triconnected graph, strongly connected graph, forest, bridge, reachable, maximally connected component, connected components, vertex connectivity, edge connectivity. Note: After LK. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 19 April 2004. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "connected graph", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 19 April 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/connectedGraph.html
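The definition suggests a direct test: pick any vertex, traverse the graph, and check that every vertex was reached. A minimal breadth-first sketch in Python (the adjacency-dict representation and names are ours, not part of the dictionary entry):

```python
from collections import deque

def is_connected(adj):
    """adj maps each vertex to an iterable of its neighbors (undirected)."""
    if not adj:
        return True          # the empty graph is trivially connected
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    # Connected iff the traversal reached every vertex.
    return len(seen) == len(adj)

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
two_parts = {1: [2], 2: [1], 3: [4], 4: [3]}
print(is_connected(triangle))   # True
print(is_connected(two_parts))  # False
```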
In 1982, Kaoru Ishikawa (1915 – 1989) created the cause and effect diagram also known as the fishbone diagram. The fishbone diagram is often used in root cause analysis to identify linkages between systems or events (parallels can be drawn from the term “root” in root cause analysis and the root of a plant). Identifying issues that are the “cause” can be difficult and time consuming, much like identifying the extent of a plant’s root system. Roots are typically hidden, and the size can’t be determined without digging up the plant completely. Similarly, problems are often not visible, and determining their overall effect on the project without digging into the issue is impossible. It takes a great deal of effort to dig out the root and identify the problem. It would be difficult to be in the workforce for any length of time without running into the term root cause analysis. The concept is quite simple to understand, however, it can be a challenge to perform and solve. The Rubik’s Cube puzzle is a simple concept as well – also not easy to solve. When there are major mechanical disasters, such as the NASA Space Shuttle explosion, people immediately think that a root cause analysis study must be performed to determine what happened. Typically we associate root cause analysis with mechanical or electronic mechanisms that are part of products or systems. Since products and systems are created by people, let’s entertain the thought of performing a root cause analysis of team members. From time to time, even the best performing, individual contributors will have a decline in performance. Once the performance issue has been identified, rather than hope the situation will change, a proactive approach should be taken by sitting down with the individual and discussing the situation. 
This can be time consuming and, possibly, uncomfortable for both the team member and the manager, but it’s important that the team member is aware that his or her performance is not meeting expectations. The manager should communicate specific examples of poor performance (the effects) which may be: tardiness; poor quality of work product; late completion of deliverables; poor attitude; etc. By having a list of predefined “effects,” the manager can get responses from the team member as to what they feel is the cause. Start with general categories such as work responsibilities, home life, health issues, etc. These categories can be further broken down into subcategories. For example, under work responsibilities you could have: current work assignment; work area environment; dislike of other team member; compensation; time line of deliverables; inadequate tools; or work schedule. Home life could be broken down into living arrangement; family members; transportation, etc. You get the idea. Once the cause(s) have been identified, the manager and the team member can then begin to try to resolve the cause. Dr. Eliyahu M. Goldratt, in his book Theory of Constraints, contends that to improve quality, management must identify the weakest link in a project and elevate that link in order to raise the quality of the project. By identifying the team member(s) who are performing below expectations, and by performing a root cause analysis (Goldratt refers to the exercise as cause, effect, cause), the manager can elevate the probability of success for the project. By taking the time and effort to understand the team member better, the manager may develop a valuable, loyal employee. From Darrell Stiffler
We've been told for a few years now that the Internet of Things -- common household, industrial and public devices enabled with sensors -- will transform how we work, play and interact with the world around us. What we haven't heard a lot of detail about is how the Internet of Things will actually operate: How will information be transferred from IoT sensors to other devices and computers? How and with what will the sensors be programmed? How will we strike the balance between accessibility and security?

Google's answer is an initiative it calls The Physical Web. It's also a potentially self-serving one, as the service dispenses with the need to download apps in order to interact with an IoT device. Rather, The Physical Web relies on a URL-based identity and communication system.

"The Physical Web must be an open standard that everyone can use," Google writes. "The number of smart devices is going to explode, and the assumption that each new device will require its own application just isn't realistic. We need a system that lets anyone interact with any device at any time."

That all makes perfect sense to me, and The Physical Web's operational model -- as a "discovery service where URLs are broadcast and any nearby device can receive them" -- is much more likely to scale successfully with the billions of smart devices expected to populate the IoT. In Google's vision, people will be able to use vending machines, rental cars, appliances, devices in retail stores, and thousands of other objects that contain URL-accessible functions, features and information. "Once any smart device can have a web address, the entire overhead of an app seems a bit backward," Google says.

It's also worth noting that once a smart device has a web address, it can be cataloged and mined for information by the Internet's largest collector and monetizer of information -- Google.
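The operational model the article quotes -- "a discovery service where URLs are broadcast and any nearby device can receive them" -- can be sketched as a toy scanner. Everything here (the `Broadcast` type, the example URLs, the signal-strength ranking) is invented for illustration; this is not Google's actual beacon protocol.

```python
# Toy model of the Physical Web's discovery idea: nearby devices
# broadcast plain URLs, and a scanner collects, dedupes, and ranks
# them -- no per-device app required. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Broadcast:
    url: str    # the URL the smart device advertises
    rssi: int   # received signal strength (dBm); higher = closer

def rank_nearby(broadcasts):
    """Return unique URLs, strongest signal first."""
    best = {}
    for b in broadcasts:
        if b.url not in best or b.rssi > best[b.url]:
            best[b.url] = b.rssi
    return sorted(best, key=lambda url: best[url], reverse=True)

seen = [
    Broadcast("https://vending.example/machine-42", -60),
    Broadcast("https://bus-stop.example/route-7", -80),
    Broadcast("https://vending.example/machine-42", -55),
]
print(rank_nearby(seen))  # nearest device's URL listed first
```

The key design point the article makes is visible even in the toy: the scanner needs no knowledge of any particular device, only the shared convention that devices broadcast URLs.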
Many of us -- myself included -- have made a permanent devil's bargain with Internet companies: they offer enticing features and services, and we offer information about ourselves. With the Internet of Things, the stakes get raised even further: people with smart devices will broadcast their activities not only to their carriers (and the NSA), but to Internet companies and other businesses with an IoT presence. That's what most of us expect.

But it sure seems as if a URL-based system for the IoT would provide Google with more benefits than anybody else. Without trying to sound paranoid, it's worth thinking about, especially when you look at Google's overarching information-collection strategy (examples abound). As for enterprises, "a system that lets anyone interact with any device at any time" sure seems like a potentially risky double-edged sword.

The Internet of Things holds a lot of promise; let's hope we approach it with common sense.

This story, "The Physical Web: Google's Trojan Horse gift to the Internet of Things," was originally published by CITEworld.
Teamwork is essential in the workplace. Leading and working in teams is part of developing and honing leadership skills. Yet conflict among members of a team, even one chosen for harmonious interaction, occurs frequently. The Harvard Business Review points out that most teams are put together first, with managers and team members handling conflict after it occurs. The authors assert that a more productive method is to explore five areas before the team begins its work. The five areas examine how each team member 1) looks, 2) acts, 3) speaks, 4) thinks, and 5) feels.

First, this method proactively helps to avoid conflict areas rather than experiencing them when employees are in the thick of the work - and experiencing the loss of productivity or cutting short potential synergies on the team. Second, it gives team members insight into each other so they can become a harmonious team and not be derailed by attitudes stemming from differences in any of the five categories. Third, the method lays the groundwork to discuss process before the focus turns to content. In all the areas, managers should aim to eliminate preconceptions and to reduce attitudes that might cause conflict. Nonconfrontational communication is the key.

The Five Categories

How team members look - Anyone who has worked in an organization knows that how coworkers look - styles of dress, personal choices, and characteristics - affects how they are perceived. How people look can also affect whether they are viewed as members of the team. Major conflicts can stem from something as seemingly insignificant as a suit-wearing business department working on a team with khaki-wearing creatives.

How team members act - Teams need to discuss actions. Is meeting deadlines, for example, considered essential by one department and aspirational by another? Does one group value punctuality at a common time while the other prioritizes flextime?
How people act also covers, of course, the degree of friendliness, small talk, and informal interaction.

How members speak - This can be an area of conflict on multiple levels. First, how much do people speak? If some team members speak a great deal and others never contribute, it may be advisable to institute meetings in which short contributions from everyone are mandatory. A simple, proven method is to allow each team member the same limited time to address the topic until every member has had an opportunity to contribute, then apply the same rule for a second or third round of discussion. Second, when people agree, are they agreeing to a set of methods in product or delivery? Or are they simply agreeing in principle, with no sense that they will be held to "yes"? The authors pinpoint this as a major area of conflict.

How members think - Whether people are rational or intuitive, numbers-driven or concept-driven, matters a great deal to a functional team. The HBR authors give a telling example of a biotech company where scientists valued a failed experiment for what it told them about the science, while the business group mourned the loss of a potential product.

How members feel - Some team members may feel the appropriate way to communicate is a rah-rah, pep-rally style. Other executives are direct. Others make demands to incentivize production. The various feelings expressed need to be explored as a group.

In general, an ounce of proactive prevention can replace pounds of cure when a group functions less than optimally or fails to function at all. These five categories can help ensure smooth functioning.
When influenza hit early and hard in the United States this year, it quietly claimed an unacknowledged victim: one of the cutting-edge techniques being used to monitor the outbreak. A comparison with traditional surveillance data showed that Google Flu Trends, which estimates prevalence from flu-related Internet searches, had drastically overestimated peak flu levels. The glitch is no more than a temporary setback for a promising strategy, experts say, and Google is sure to refine its algorithms. But as flu-tracking techniques based on mining of web data and on social media proliferate, the episode is a reminder that they will complement, but not substitute for, traditional epidemiological surveillance networks.

The latest U.S. flu season appears to have flummoxed the Google Flu Trends data-mining algorithms, as evidenced by wide disparities between its estimates and those reported by the U.S. Centers for Disease Control and Prevention (CDC). Several researchers think widespread media coverage of the flu outbreak may lie at the heart of the algorithms' difficulties by triggering many flu-related Web searches by healthy people. Despite these problems, many feel Google Flu will recover its accuracy following the refinement of its models. "You need to be constantly adapting these models, they don't work in a vacuum," says Harvard Medical School's John Brownstein. "You need to recalibrate them every year."

Meanwhile, several projects are underway to track flu outbreaks by crowdsourcing via citizen volunteers. Lyn Finelli with CDC's Influenza Surveillance and Outbreak Response Team sees great potential in such efforts, particularly because the questionnaires are based on clinical definitions of influenza-like illness (ILI) and so generate very clean data. Some research groups also have published work suggesting that a close match can be made between official ILI data and models derived from analysis of flu-related Twitter messages.
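The recalibration Brownstein describes can be illustrated with a toy example: fit a simple linear correction that maps a search-based estimate onto official CDC ILI percentages. The numbers below are invented, and real recalibration of a system like Google Flu Trends is far more involved than a straight line.

```python
# Toy illustration of "recalibrating" a search-based flu estimate
# against official surveillance data: fit a linear correction so the
# model's output tracks the CDC's ILI percentages. All data invented.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

search_estimates = [2.0, 4.0, 6.0, 10.0]  # model's ILI estimate (%)
cdc_reported     = [1.5, 2.5, 3.5, 5.5]   # official ILI (%)

a, b = fit_linear(search_estimates, cdc_reported)
corrected = [a * x + b for x in search_estimates]
print(a, b)  # the fitted correction shrinks the model's overshoot
```

In this made-up season the raw estimates overshoot by roughly a factor of two, which is the shape of the error the article describes; refitting the correction "every year" is what keeps such a model honest.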
Article DCL: Disease monitoring systems are slowly, very slowly, introducing complex event processing (CEP) techniques without knowing exactly what they are doing. These flu systems are but one example.
Security is an important issue for many business owners and managers. Many work with their IT department or an IT partner to ensure their network and systems are secure from threats. But what about your email, social media and bank accounts? The weakest link in these online accounts is your password; hackers know this, and that's what they target. Do you take steps to ensure that you have a strong password? If you want to minimize the chances of your password being hacked, here are five things you should NOT do.

1. Don't pick short passwords

While short passwords are easier to remember, they are also easier and quicker to hack. The most common way to hack passwords is by brute force: developing a list of every possible password, then trying the list against a username. Using a mid-range computer like the one many have on their desk, with a normal Internet connection, an attacker can develop a list of all potential passwords astonishingly quickly. For example, it would take 11.9 seconds to generate a list of all possible passwords using only five lowercase characters (a, b, c, d, etc.), and about 2.15 hours to develop a list of all possible passwords using five of any computer character. Once a hacker has the list, they just have to try every potential password with your username. On the other hand, a list of all eight-character passwords with at least one special character (!, @, %, etc.) and one capital letter would take this computer 2.14 centuries to develop. In other words, the longer the password, the harder it will be to hack. That said, longer passwords aren't impossible to hack; they just take more time, so most hackers will usually go after the shorter passwords first.

2. Don't use the same password

The way most hackers work is to assume users have the same password for different accounts.
If they can get one password, it's as simple as looking through that account's information for any related accounts and trying the original password on them. If one of these happens to be your email, where you have kept bank information, you will likely see your bank account drained. It's therefore important to use a different password for every online account. The key here is to use passwords that are as different from one another as possible; don't just add a number or character onto the end of a word. If you have trouble remembering all of your passwords, try using a password manager like LastPass.

3. Don't use words from the dictionary or all numbers

An article published last year on ZDNet highlights the 25 most popular passwords. Notice that more than 15 contain words from the dictionary, and most of the rest are strings of common numbers. To have a secure password, most security experts agree that you should not use words from the dictionary or adjacent number combinations (e.g., 1234).

4. Don't use standard number substitutions

Some users have passwords where they replace letters with numbers that look similar, for example: h31lo (hello). Most new password-hacking tools actually have combinations like this built in and will try a normal word, then the same word with letters replaced by similar-looking numbers. It's best to avoid this.

5. Don't use available information as a password

What we mean by this is using information that can be easily found on the Internet. For example, a quick search for your name will likely return your email address and social media profiles. If you have pictures of your kids, spouse, pets, or family, along with their dates of birth, on your Facebook profile and have put their names in captions, a hacker can see all of this (assuming the pictures are shared with the public). You can bet that they will try these names as your password. You would be surprised at the amount of personal information on the web.
We suggest searching for yourself using your email address(es), social media profile names, etc., and seeing what information can be found. If your passwords are close to what you find, it would be a good idea to change them immediately. There are numerous things you can do to minimize the chance that your passwords are stolen and your accounts hacked.
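The search-space arithmetic behind rule 1 is simple to sketch: the number of candidate passwords is charset_size ** length, and the time to enumerate them scales with that count. The guess rate below is an assumption for illustration, not a benchmark, though one million guesses per second happens to reproduce the article's 11.9-second and 2.15-hour figures.

```python
# Brute-force arithmetic from rule 1: candidate passwords number
# charset_size ** length. The rate is an assumed figure for the kind
# of mid-range desktop the article describes.

GUESSES_PER_SECOND = 1_000_000  # assumption for illustration

def enumerate_time(charset_size, length):
    """Seconds to generate every password of the given shape."""
    return charset_size ** length / GUESSES_PER_SECOND

lower5 = enumerate_time(26, 5)     # 5 lowercase letters
any5   = enumerate_time(95, 5)     # 5 of ~95 printable characters
print(f"{lower5:.1f} seconds")     # ~11.9 seconds
print(f"{any5 / 3600:.2f} hours")  # ~2.15 hours
```

Because the count is exponential in length, each extra character multiplies the attacker's work by the charset size, which is why the article's advice favors length over cleverness.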
What is Spam Mail - Email Address Security

Spam mail is unwanted e-mail: e-mail that a user has not requested and usually does not know about. Spam mail, also known as junk or bulk e-mail, is sent by spammers, usually attempting to legally or illegally sell their products or services. Spammers send copies of an e-mail to as many users as they can. Today over 90% of e-mails are spam. This has become a major concern around the world: it takes up individuals' time deleting spam mail, and it also consumes business resources such as bandwidth. Laws have been put into place to prosecute spammers, yet spam mail seems to be on the rise all the time. Spam mail is also known as unsolicited (not looked for or requested) e-mail.

How did they get my e-mail address?

E-mail addresses are collected from a number of sources:
- Chat rooms
- Websites, most commonly when a user purchases a product or service and specifies their e-mail address as part of an online transaction
- Newsgroups, forums and blogs
- Viruses which harvest users' address books
- And other sources

E-mail addresses are then sold on to other spammers. Much spam mail is sent to invalid e-mail addresses as well, on the chance that an address turns out to be legitimate.

Basics of avoiding spam mail
- Avoid giving out your e-mail address through the sources mentioned above.
- Avoid signing up for free offers, and ensure you tick the option not to receive future promotions/offers.
- Create a second e-mail address that you can use where spam is likely.
- Purchase spam protection for your computer. See my spam product guide.
- Check with your ISP; they may provide spam protection.

Also read Wikipedia's guide to spam.
Today there are more than 500 million Internet-connected devices in use. They outnumber the total U.S. population, and some experts are predicting that the number of connected devices will exceed 50 billion by 2020. Powering those devices not only requires plenty of electricity, but robust and plentiful Internet connections that satisfy our appetite for constant connectivity. Whether at home, on a street corner, riding public transportation, or sitting in a coffee shop, Americans are demanding readily available, high-speed Internet connections.

The cable broadband industry is tackling this challenge through a "Cable WiFi" consortium offering access to over 200,000 Wi-Fi hotspots nationwide. These outside-the-home locations provide consumers with access to high-speed connections on smartphones, tablets, and laptops in thousands of locations. But these Wi-Fi networks are under incredible pressure. With so many devices all running data-heavy applications, are we asking too much of these public hotspots?

In order to keep pace with demand, Wi-Fi technology has to remain in a constant state of growth and evolution. Unless Wi-Fi networks gain access to more unlicensed spectrum, particularly in the 5 GHz band, growth could be stunted. Fortunately, CableLabs and the University of Colorado recently issued a report showing that the very spectrum required can be used for Wi-Fi safely and without any harmful interference. This report is another step towards a future of fast, ubiquitous Wi-Fi.

Being connected to reliable Internet on the go has become not only a convenience but a necessity. Wi-Fi networks carry more Internet traffic to consumer devices than wireless and wired connections combined. And the Internet of Things, or the connection of all devices and gadgets to the Internet, is already changing our lives, making Wi-Fi that much more important.
By firing up 200,000 Wi-Fi hotspots, cable broadband providers are enabling the Internet to be the "anytime anywhere" experience that consumers crave. And with the release of more usable spectrum, widespread access to gigabit Wi-Fi can become a reality.
Leitholdt E., Zielhofer C., Berg-Hobohm S. (Bavarian State Conservation Office), Schnabl K. (University of Osnabrück), and 5 more authors. Geoarchaeology, 2012.

The Central European watershed passes through the southern Franconian Jura in Bavaria, Germany. This principal watershed divides the Rhine/Main catchment and the Danube catchment. In the early Middle Ages, when ships were an important means of transportation, Charlemagne decided to connect these catchments by the construction of a canal known as the Fossa Carolina. In this paper, we present for the first time 14C data from the Fossa Carolina fill and document a high-resolution stratigraphic record of the Carolingian and post-Carolingian trench infilling. Our results provide clear evidence for peat layers in different levels of the trench infill, suggesting a chain of ponds. However, the majority of these peat layers yield mid-Medieval and younger ages. The period of major peat growth was during the Medieval climatic optimum. Therefore, our preliminary results do not prove the use of the trench during Carolingian times. However, first results from the reconstruction of the Carolingian trench bottom support the hypothesis that the Fossa was primarily planned as a navigable chain of ponds and not as a continuous canal. In the eastern part of the trench, a dam is located that was postulated in former studies to be part of a barrage for supplying the Carolingian canal with water. New 14C data indicate much younger ages and do not support the Carolingian barrage concept. © 2011 Wiley Periodicals, Inc.

Auras M., Bundschuh P. (Institute for Stone Conservation e.V.), Beer S. (Bavarian State Conservation Office), Eichhorn J. (University of Mainz), and 5 more authors. Environmental Earth Sciences, 2013.

Besides the enormous improvement of air quality in Germany due to the reduction of sulphur dioxide emissions in the last decades, high immissions of nitrogen oxides and fine particulate matter are frequently observed at traffic-rich urban sites. The changed chemical composition of air pollution requires a new investigation of its impact on historic buildings constructed of natural stone. In a pilot study a multi-disciplinary approach was chosen to obtain information on the actual pollution situation of historic buildings and monuments at traffic hotspots in Germany. The study concentrated on the two German cities of Munich and Mainz, of different size, traffic volume and stone inventory. Dose-response functions were calculated to demonstrate the change of impact of different pollutants over the last three decades, and for comparison of traffic hotspots and housing areas of both cities. Numeric modelling on a city scale was used to identify the historic buildings and monuments affected by elevated traffic immissions. Because a relevant part of these pollutants is dominated by short-range transport, the differences of wind speed and deposition rates were calculated using a street-scale 3D flow and dispersion model regarding traffic volume, wind regime and adjacent buildings. Finally, particulate matter was sampled at different positions of two buildings heavily exposed to traffic emissions. Individual particles were investigated by environmental scanning electron microscopy. After classification of the particles into different chemical groups, the fraction of traffic-induced particulate matter was quantified. Summarizing the results, it must be stated that soiling by traffic-related particulate matter, deposition of nitrates deriving from exhaust emission and other diffusely emitted components bear a severe damage potential for natural building stone, at least locally at traffic-rich urban sites. © 2013 Springer-Verlag Berlin Heidelberg.
News broke earlier this year of a new breed of rootkit using techniques never before seen in modern malware. The most notable of them is the fact that the rootkit replaces the infected system's Master Boot Record (MBR). The MBR is the first physical sector of the hard drive and contains the first code loaded and executed from the drive during the boot process. In the competition between rootkits and rootkit detectors, the first to execute has the upper hand - and you can't execute earlier than from the MBR. Of course, MBR viruses used to be very common in the DOS days, 15 years ago or so. But this is 2008.

This new Windows MBR rootkit launches itself very early during the Windows startup process without requiring any registry or file modifications. In fact, it is quite surprising that it's possible to write to the MBR from within Windows to begin with. The MBR rootkit - known as "Mebroot" - is very advanced and probably the stealthiest malware we have seen so far. It keeps the amount of system modifications to a minimum and is very challenging to detect from within the infected system. Some details about its stealth features:

- The ntoskrnl.exe module hook that executes the kernel-mode downloader payload is set to the nt!Phase1Initialization function, which resides in the INIT section. This means that after the system has initialized, the section is wiped from memory and no sign of the hook remains.

- The rootkit stores data that it needs to survive reboots in physical sectors instead of files. This means that the data, including the real payload, is not visible or in any way accessible to normal applications. Therefore the rootkit does not have to hook the normal set of interfaces to keep them hidden.

- The MBR is the rootkit's launch point, so it doesn't need to make any registry changes or modify any existing startup executables in order to launch itself.
This means that the only hooks it needs are those used to hide and protect the modified MBR. Essentially, the rootkit hooks only two DWORDs from the disk.sys driver object.

Another interesting feature of the MBR rootkit that has not received much public discussion is its networking layer and firewall-bypassing capabilities. One reason for this might be that this part of Mebroot's code is heavily obfuscated and time consuming to analyze. It is known that the rootkit's main purpose is to act as an ultimate downloader. To be stealthy and effective, it is essential that the rootkit neither triggers nor is blocked by personal firewalls. It achieves this by operating in the lowest parts of the NDIS layer, just above the physical hardware. Only a single DWORD is hooked at all times from the NDIS internal structures. To send packets, the rootkit uses the SendPacketsHandler function implemented by the actual hardware-specific driver.

The rootkit uses its own unmodified versions of the NDIS API functions it needs to operate. This has been done before by some malware, such as Rustock and Srizbi. However, what we have not seen before is the "code pullout" technique the MBR rootkit uses to load only the relevant code from the ndis.sys driver, instead of loading the whole ndis.sys driver as a private module into memory. This means that the memory fingerprint of the malware is smaller and there are no additional modules loaded into the system address space that might trigger forensic tools.

This malware is very professionally written and produced - which of course means it's not written for fun. Initial samples from December 2007 and January 2008 were at beta stage. Now it appears that the malware is fully baked and more active distribution has begun. During the weekend our Security Lab started to receive information about multiple drive-by exploit sites spreading the latest version.
(However, at the moment these attacks cannot be considered widespread.) The actual site hosting the exploit code utilizes the following exploits:

- Microsoft Data Access Components (MDAC) Function vulnerability (MS06-014)
- AOL SuperBuddy ActiveX Control Code Execution vulnerability (CVE-2006-5820)
- Online Media Technologies NCTsoft NCTAudioFile2 ActiveX Buffer Overflow (CVE-2007-0018)
- GOM Player "GomWeb3" ActiveX Control Buffer Overflow (CVE-2007-5779)
- Microsoft Internet Explorer WebViewFolderIcon setSlice (CVE-2006-3730)
- Yahoo! JukeBox datagrid.dll AddButton() Buffer Overflow
- DirectAnimation.PathControl KeyFrame vulnerability (CVE-2006-4777)
- Microsoft DirectSpeechSynthesis Module Remote Buffer Overflow

Proof-of-concept code for two of the exploits was publicly disclosed less than a month ago. The downloaded payloads seem to clearly target online banking and other financial systems. We detect the latest MBR rootkit variant as Backdoor.Win32.Sinowal.Y. The exploit site is currently resolving to an IP address of 126.96.36.199 and seems to still be active.

Here's some more information on Mebroot from Gmer, Prevx, and Symantec.
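As a purely defensive illustration, the classic 512-byte MBR layout (446 bytes of boot code, four 16-byte partition entries, and a 0x55AA signature at offset 510) can be parsed and fingerprinted. This is a hypothetical sketch, nothing like real Mebroot detection: as the article explains, an infected system's hooks can feed such a check a clean copy of the sector, so a real tool would need to read the raw disk below them.

```python
# Defensive sketch built on the well-known 512-byte MBR layout.
# Hashing the boot-code region and comparing against a known-good
# baseline is a crude integrity check -- illustrative only.

import hashlib

MBR_SIZE, BOOT_CODE_LEN, SIG_OFFSET = 512, 446, 510

def inspect_mbr(sector: bytes):
    """Return (signature_valid, sha256 of the boot-code region)."""
    assert len(sector) == MBR_SIZE
    valid_sig = sector[SIG_OFFSET:SIG_OFFSET + 2] == b"\x55\xaa"
    boot_hash = hashlib.sha256(sector[:BOOT_CODE_LEN]).hexdigest()
    return valid_sig, boot_hash

# A fake all-zeroes MBR with a valid signature, standing in for a
# sector image read from disk:
fake_mbr = bytes(510) + b"\x55\xaa"
baseline = inspect_mbr(fake_mbr)[1]       # taken when known clean
sig_ok, current = inspect_mbr(fake_mbr)   # re-checked later
print(sig_ok, current == baseline)        # True True when unchanged
```

Any change to the first 446 bytes, which is exactly where a bootkit like Mebroot must place its code, would flip the hash comparison.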
Author: Varoon Rajani | Published: October 20, 2013 | Tags: Big Data, Big Data Challenges, Big Data Know-hows

Business decision makers everywhere yearn for the right information to help them make informed decisions. Thirty years ago, business heads faced the challenge of collecting enough data to make informed decisions. Today, decision makers face a challenge of a different kind: they have so much data that it is difficult to make sense of it. We are talking about Big Data.

Big Data is helping organizations of all sizes make better business decisions, save costs, improve customer service, deliver better user experiences, and identify security risks, among other things. We read about Big Data everywhere; we hear about it from experts everywhere; even governments are talking about it. Here we talk about the challenges that organizations face while implementing Big Data.

Big Data is not a technology initiative, but a business one. A Big Data initiative has to be driven by the leaders of the organization, be it business heads or CXOs. Big Data can help an organization improve operational predictability, increase sales, and improve customer service, among other things. These outcomes have to be identified and articulated by the business heads. Additionally, the procedural and, in certain cases, structural changes brought in by Big Data have to be managed carefully. Organizations do not change easily, and not everyone may appreciate the value brought to the table by advanced analytics. This is a typical organizational challenge that needs to be handled aptly by top management. Organizations have to be sure not to label Big Data as an IT-driven initiative.

The most important aspect for any organization to benefit from Big Data is the data itself. While a variety of data is collected by various tools and processes, not all data is relevant.
It is critical for an organization to identify relevant sources of information depending on the outcome expected from the effort. For example, if you want to improve customer experience on the website, relevant data would include log details about the errors users encounter while connecting to your website; you may not want to store or process the log details of successful connections. Only when there is relevant data can it be processed and organized in a way that provides meaningful insights to management for informed decisions.

For any successful Big Data effort there has to be a team of people with the right skills. As I pointed out earlier, Big Data is not a technology initiative, and the skill sets required are not limited to technology. A successful Big Data team should have the right mix of:

- Data scientists, whose skills and expertise help in deriving the right statistics and identifying patterns to correlate a variety of data and bring out meaningful insights.
- Technology experts, who bring the specific skill sets to drive the technology that forms the backbone of the Big Data initiative, and who can identify the right set of software tools and hardware infrastructure required.
- Business owners, who drive the Big Data effort by defining its expected outcome and then working with the technology and data science teams to achieve it.

Technology forms the backbone of any Big Data initiative. Technology components for Big Data include:

- Hardware infrastructure: Organizations need to identify their needs and plan for the hardware infrastructure required for their efforts. Cloud computing is also an option if you are not willing to invest in hardware.
- Software Tools: You need to invest in the right set of tools for collecting, processing, and storing data, and for deriving analytics and visualizations from it.

To gain value from a Big Data initiative and make it a success, a company must address all of these challenges together. The right technology for processing data is of no use without people with the right skills to derive insights. Leaving any of these challenges unanswered will keep the effort from becoming a strategic differentiator for the business. For more on Big Data, visit our blogs.
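The post's point about keeping only relevant data can be made concrete. Below is a minimal, hypothetical sketch (the log format and field positions are assumptions for illustration, not from the original post) that keeps only the error responses from a web-server access log before anything is stored or processed:

```python
# A minimal sketch of the "keep only relevant data" idea: filter raw
# web-server access-log lines down to error responses. The log format
# here is a simplified, hypothetical Common Log Format tail, with the
# HTTP status code in the second-to-last field.

def error_entries(log_lines):
    """Yield only lines whose HTTP status code indicates an error (4xx/5xx)."""
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip malformed lines rather than crash on them
        status = fields[-2]
        if status.isdigit() and int(status) >= 400:
            yield line

raw_log = [
    '10.0.0.1 - - [20/Oct/2013] "GET / HTTP/1.1" 200 512',
    '10.0.0.2 - - [20/Oct/2013] "GET /login HTTP/1.1" 500 87',
    '10.0.0.3 - - [20/Oct/2013] "GET /missing HTTP/1.1" 404 0',
]
errors = list(error_entries(raw_log))
print(len(errors))  # only the two failed requests survive the filter
```

Discarding the successful-connection entries up front, as the author suggests, shrinks the volume of data that every later stage of the pipeline has to store and process.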
- Scientists are taking advantage of a network of hundreds of GPS stations in Southern California to predict hazardous events such as earthquakes and flash floods.
- These days, analysts conduct traffic studies without getting in the way of drivers by using microsimulation software.
- RTM Dx is a free tool developed by researchers at Rutgers University's School of Criminal Justice to help police predict where crimes are going to occur.
- MIMO technology, which uses multiple transmitters to send more data at once, could help speed response times for military and first responders.
- Researchers have used smartphones, laptops, wireless routers and wired networks to track building occupancy and manage lighting, environmental controls and other services.
- With AT&T's Toggle, two virtual smartphones can exist within the same device: one is open and unsecured for the user; the other is locked down for government service.
- How competition among smartphone makers might pose problems for government mobile device management.
- Social Media Monitor helps police use social media for investigations and to look for signs of criminal activity such as gang violence, drug dealing, crimes against children and human trafficking.
- With its comic video depicting an unlucky lawyer running laptop endurance tests, Dell hopes to inspire its users to share their stories of rugged gear.
- BlueLine, created by William Bratton, is a closed social media platform for law enforcement that lets officers interact and share best practices.
File Transfer Protocol (FTP) celebrated its 40th birthday last month … time flies. The protocol now known as FTP has come a long way since Abhay Bhushan, an MIT student, created its original specification. The standard has grown from a simple protocol for copying files over a TCP-based network into the integrated model it is today. FTP provides control, compliance and security in a range of environments, including the cloud; in fact, FTP forms the link to many cloud-based services and applications. Today, newcomers such as peer-to-peer (P2P) networks are available. However, FTP is deemed more secure than P2P, which matters for transactions such as online banking. P2P technology, for its part, is growing fast, driven by the explosion of mobile devices, and is appreciated for its file sharing and download speed. Are you still using FTP? What are you using FTP for in your organization? Let us know.
Most cyberattacks involve criminals exploiting some sort of security weakness. That weakness could be down to a poorly chosen password, a user who falls for a fake login link, or an attachment that someone opened without thinking. However, in the field of computer security, the word exploit has a specific meaning: an exploit is a way of abusing a software bug to bypass one or more security protections that are in place. Software bugs that can be exploited in this way are known as vulnerabilities, for obvious reasons, and can take many forms. For example, a home router might have a password page with a secret "backdoor code" that a crook can use to log in, even if you deliberately set the official password to something unique. Or a software product might have a bug that causes it to crash if you feed it unexpected input such as a super-long username or an unusually sized image – and not all software bugs of this sort can be detected and handled safely by the operating system. Some software crashes can be orchestrated and controlled so that they do something dangerous before the operating system can intervene and protect you. When attackers outside your network exploit a vulnerability of this sort, they often do so by tricking one of the applications you are using, such as your browser or word processor, into running a program or program fragment that was sent in from outside. By using what's called a Remote Code Execution exploit, or RCE for short, an attacker can bypass any security popups or "Are you sure" download dialogs, so that even just looking at a web page could infect you silently with malware. Worst of all is a so-called zero-day exploit, where the hackers take advantage of a vulnerability that is not yet public knowledge, and for which no patch is currently available. (The name "zero-day" comes from the fact that there were zero days during which you could have patched in advance.) What to do?

Patch early, patch often!
Reputable vendors patch exploitable vulnerabilities as soon as they can. Many vulnerabilities never turn into zero-days because they are discovered responsibly through the vendor's own research, or thanks to bug bounty programs, and patched before the crooks find them out.

Use security software that blocks exploits proactively

Many vulnerabilities require an attacker to trigger a series of suspicious operations to line things up before they can be exploited. Good security software like Sophos Endpoint Security and Sophos Intercept X can detect, report and block these precursor operations and prevent exploits altogether, regardless of what malware those exploits were trying to implant.

Want more information? Register for our webcast on Oct 4th, "Stop Ransomware Before It Strikes."

Blog post originally appeared on Sophos.com.
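As an illustration of the "unexpected input" problem described above, here is a toy parser, not Sophos code; the record format, field sizes, and the MAX_NAME limit are all invented for this sketch. The point is that a parser which trusts an attacker-controlled length field will read or allocate whatever the input declares, so the defensive fix is to bounds-check the untrusted value first:

```python
import struct

# A toy record format: a 4-byte big-endian name length, then the name.
# Trusting name_len blindly is the kind of bug exploits abuse; checking
# it against sane limits before use is the defensive fix.

MAX_NAME = 256  # an assumed sane upper bound for this toy format

def parse_record(blob):
    """Parse a hypothetical record, rejecting attacker-controlled sizes."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (name_len,) = struct.unpack(">I", blob[:4])
    if name_len > MAX_NAME or name_len > len(blob) - 4:
        # Refuse absurd declared lengths instead of reading out of bounds.
        raise ValueError("declared length %d exceeds limits" % name_len)
    return blob[4:4 + name_len].decode("utf-8", errors="replace")

good = struct.pack(">I", 5) + b"alice"
evil = struct.pack(">I", 0xFFFFFFFF) + b"x"   # a "super-long" declared size
print(parse_record(good))  # the well-formed record parses normally
```

In a memory-unsafe language the same missing check can corrupt memory rather than merely raise an error, which is what turns a crash into an exploitable vulnerability.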
The Simple Network Management Protocol (SNMP) is the standard operations and maintenance protocol of the Internet, used to exchange management information between management console applications and managed devices. Management console applications include products such as IBM Tivoli NetView or Solstice SunNet Manager. Managed devices include hosts, routers, bridges, and hubs, as well as network applications like NetIQ eDirectory. This chapter describes SNMP services for NetIQ eDirectory 8.8. It contains the following topics:
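For background, SNMP names each managed object with an object identifier (OID), a dotted path into the MIB tree. The following is a minimal sketch, not NetIQ code, of how a manager-side tool might test whether a returned OID falls inside the subtree it is walking (the OIDs used are the standard MIB-2 system group and sysName/ifDescr objects):

```python
# SNMP identifies managed objects by OIDs, dotted paths into the MIB tree.
# A manager walking a subtree stops when a returned OID leaves that subtree.

def parse_oid(text):
    """Turn '1.3.6.1.2.1.1.5.0' into a tuple of integers."""
    return tuple(int(part) for part in text.split("."))

def in_subtree(oid, subtree):
    """True if oid lies at or below subtree in the MIB hierarchy."""
    return oid[:len(subtree)] == subtree

SYSTEM_GROUP = parse_oid("1.3.6.1.2.1.1")        # mib-2 "system" group
sys_name     = parse_oid("1.3.6.1.2.1.1.5.0")    # sysName.0
if_descr     = parse_oid("1.3.6.1.2.1.2.2.1.2")  # ifDescr (interfaces group)

print(in_subtree(sys_name, SYSTEM_GROUP))   # True: sysName is under system
print(in_subtree(if_descr, SYSTEM_GROUP))   # False: ifDescr is not
```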
A peripheral device is generally defined as any auxiliary device, such as a computer mouse or keyboard, which connects and works with the computer in some way. Other examples of peripherals are expansion cards, graphics cards, image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras. RAM - random access memory - straddles the line between peripheral and primary component; it is technically a storage peripheral, but is required for every key function of a modern computer and removing the RAM effectively disables any modern machine. Many new devices, such as digital watches, smartphones, and tablet computers have interfaces which allow them to be used as a peripheral by a full computer, though they are not host-dependent as other peripheral devices. In a system on a chip, peripherals are incorporated into the same integrated circuit as the central processing unit. They are still referred to as ‘peripherals’ despite being permanently attached to their host processor. The global peripheral system on chip (SoC) market report segments the global market by applications, components, and geographies. The report also profiles the leading companies that are active in the field of developing and manufacturing peripheral system on chip (SoC) along with their product strategy, financial details, developments, and competitive landscape. The market is segmented into four geographies, namely North America, Latin America, Europe, and Asia-Pacific. The current and future market trends for each geography, along with Porter’s five force model analysis, market share of leading players, and competitive landscaping has been analyzed in this report. Market share analysis, by revenue of the leading companies, is also included in this report. The market share analysis of these key players is arrived at, based on key facts, annual financial information, and interviews with key opinion leaders, such as CEOs, directors, and marketing executives. 
In order to present an in-depth understanding of the competitive landscape, the report on the global peripheral system on chip (SoC) market provides company profiles of the key market players. Along with the market data, you can also customize the MMM assessments to meet your company's specific needs. Customize to get comprehensive industry-standard and deep-dive analysis of the following parameters:

1. Data from Manufacturing Firms
- Fast turn-around analysis of manufacturing firms in response to recent market events and trends
- Opinions from various firms about different applications
- Qualitative inputs on macro-economic indicators, mergers, and acquisitions in each region

2. Shipment/Volume Data
- Value of components shipped annually in each geography tracked

3. Trend Analysis of Applications
- Application Matrix, which gives a detailed comparison of the application portfolio of each company, mapped in each geography

4. Competitive Benchmarking
- Value-chain evaluation using events, developments, and market data for vendors in the market ecosystem, across various industrial verticals and market segments
- Seek hidden opportunities by connecting related markets using cascaded value-chain analysis
Definition: A graph whose vertices and edges are subsets of another graph.

Formal Definition: A graph G' = (V', E') is a subgraph of another graph G = (V, E) iff V' ⊆ V and E' ⊆ E.

Note: In general, a subgraph need not have all possible edges. If a subgraph has every possible edge between its vertices, it is an induced subgraph.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004.

Cite this as: Paul E. Black and Alen Lovrencic, "subgraph", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/subgraph.html
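The formal definition translates almost directly into code. Here is a small sketch, with graphs represented as (vertices, edges) pairs and undirected edges as frozensets; that representation is a choice made for this example, not part of the dictionary entry:

```python
# G' = (V', E') is a subgraph of G = (V, E) iff V' ⊆ V and E' ⊆ E,
# with every edge of E' joining vertices that are actually in V'.

def is_subgraph(g_prime, g):
    """Check the subgraph condition for (vertices, edges) pairs."""
    v_prime, e_prime = g_prime
    v, e = g
    return (v_prime <= v and e_prime <= e
            and all(set(edge) <= v_prime for edge in e_prime))

def is_induced_subgraph(g_prime, g):
    """An induced subgraph keeps EVERY edge of G whose endpoints lie in V'."""
    v_prime, e_prime = g_prime
    _, e = g
    induced_edges = {edge for edge in e if set(edge) <= v_prime}
    return is_subgraph(g_prime, g) and e_prime == induced_edges

# A triangle on {1, 2, 3}:
G  = ({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
H1 = ({1, 2, 3}, {frozenset({1, 2})})   # a subgraph, but not induced
H2 = ({1, 2},    {frozenset({1, 2})})   # an induced subgraph

print(is_subgraph(H1, G), is_induced_subgraph(H1, G))  # True False
print(is_subgraph(H2, G), is_induced_subgraph(H2, G))  # True True
```

H1 illustrates the note above: it drops two triangle edges, so it is a subgraph without having all possible edges, while H2 keeps every edge among its vertices and is therefore induced.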
On September 22, individuals from around the globe celebrated World Car Free Day. The event, which began in 1997 under the name "Carbusters," aims to promote alternatives to car dependence and automobile-based planning, thereby improving quality of life for all. Though the event happened yesterday, its mission rings true on every day of the year. The world's dependence on cars is one of the main contributors to greenhouse gas emissions and pollution, and going car-free could significantly reduce our reliance on fuel.

Teleworking (or "telecommuting") is one easy way that modern knowledge workers could minimize our collective dependence on cars. Did you know that as much as 850 million gallons of fuel could be saved annually if everyone who could telework did so at least one day a week? And if those with compatible jobs and a desire to work from home did so just half the time, the greenhouse gas reduction would be the equivalent of taking the entire New York State workforce permanently off the road. That's pretty incredible, right?

Technologies such as desktop and mobile video conferencing make working from home as natural as working in an office. The high-definition image quality is so much like an in-person interaction that it is as if you are looking through a window to another place rather than at an HD monitor. The technology simply disappears, and you can interact naturally with colleagues, clients and partners. If your job is compatible with working from home and the lifestyle interests you, consider speaking to your manager about how to make the transition; a good starting point for that conversation is explaining the value of high-quality video communications.

Even though the official World Car Free Day was yesterday, we encourage you to examine your life (both personal and work) to see how you can cut out driving.
Even one or two trips saved per day can make a significant impact. Remember, we make the world we live in and we shape our own environment. It is our collective responsibility as citizens of this planet to keep it in good health. By examining your personal reliance on cars and making small changes in your life, we can make the world a better place for future generations.
Internet Protocol Version 6 (IPv6) is considered to be the next-generation protocol for the Internet. It is designed to support continued Internet growth in the number of users, along with greatly enhanced routing functionality. The current version, Internet Protocol Version 4 (IPv4), was developed in the 1970s and provides the basis for today's Internet functionality. However, IPv4 suffers from some serious limitations that now inhibit further growth of the Internet, which in turn inhibits additional integration of the Internet as a global business networking solution. IPv4 provides 2^32 (4,294,967,296) addresses. Although this appears to be a very large number, it is now insufficient to support the requirements of the maturing Internet. IPv6 has been under development by the Internet community for over ten years and is designed to overcome these limitations by greatly expanding the available IP address space, and by incorporating features such as end-to-end security, mobile communications, quality of service, and system-management burden reduction. The true transition of the global Internet from IPv4 to IPv6 is expected to span many years, and during this period of transition, many organizations introducing IPv6 into their infrastructure will support both IPv4 and IPv6 concurrently. There is no one-size-fits-all transition strategy for IPv6. The incremental, phased approach allows for a significant period where IPv4 and IPv6 can co-exist, using one or more transition mechanisms to ensure interoperability between the two protocol suites. The most often used methods of performing this transition are operating in a dual-stack environment, and the use of tunneling and translation between the two versions of the Internet Protocol (IP).
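The address-space gap, and one of the transition aids for dual-stack hosts, can be demonstrated with Python's standard ipaddress module. The specific addresses below are documentation-range examples (192.0.2.0/24 and 2001:db8::/32), not taken from the article:

```python
import ipaddress

# The scale difference computed directly: IPv4 offers 2**32 addresses,
# IPv6 offers 2**128.
print(2 ** 32)    # 4,294,967,296 IPv4 addresses
print(2 ** 128)   # roughly 3.4 x 10**38 IPv6 addresses

# The same factory function parses both families, which is convenient
# in dual-stack code paths.
v4 = ipaddress.ip_address("192.0.2.1")      # documentation-range IPv4
v6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6
print(v4.version, v6.version)               # 4 6

# An IPv4-mapped IPv6 address, one of the mechanisms that lets IPv6
# sockets interoperate with IPv4 peers during the transition period.
mapped = ipaddress.ip_address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)                   # 192.0.2.1
```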
Although there is seldom a viable dialogue between a company's technical decision makers (TDMs) and business decision makers (BDMs), a professional CCNA must always recognize that any addition to, or modification of, an existing or newly installed network will require an additional amount of money, commonly known as transition costs. Transition costs can be classified as either recurring or non-recurring, and can stem from several sources. However, they are most commonly associated with software and hardware acquisition, employee training, consulting services, and operational costs.

Transition to IPv6 can be phased into an organization's infrastructure and applications through a lifecycle management process. Organizations expect to acquire IPv6 capability while upgrading infrastructure as part of the normal technology amortization-replacement lifecycle. The availability of transition mechanisms will enable organizations to replace only that equipment deemed necessary to facilitate IPv6 integration. As existing equipment is replaced with newer equipment, native IPv6 capability will be part of the equipment's basic operating capability. Consequently, the cost of transition from equipment replacement should be significantly minimized.

Training will be an important part of the integration process. IPv6, while built on many of the fundamental principles of IPv4, is different enough that most IT personnel will require formalized training. The level of training required will vary, and will depend on the role a member of the organization's IT staff plays in developing, deploying, and supporting IPv6 integration. Companies will potentially need to make plans for training their staff. There are four main categories of training that are considered to be the most critical for any organization that is transitioning from IPv4 to IPv6.

- Awareness – This is generalized information about IPv6 and IPv6-related issues.
This type of training is most commonly delivered through workshops, seminars, conferences, and summits. These types of events typically provide overviews of IPv6 technologies, identify vendors that support IPv6, and provide participants with a rudimentary understanding of the IPv6 technology, as well as business drivers, deployment issues, and potential services and/or products enabled by IPv6.
- Architectural – Training in this category should be very detailed and oriented toward those individuals who will have primary responsibilities in designing the architecture and deploying IPv6. Although the type of subject matter will be quite broad, particular attention should be paid to the fundamentals of IPv6, DNS and DHCPv6, auto-configuration, IPv6 address allocation, transition mechanisms, security principles for IPv6 environments, and mobility. Additional topics covered should be routing, multicasting, and principles for connecting to the IPv6 Internet. These topics are the areas where participants will encounter the greatest number of new subjects relative to IPv6, and will have the greatest impact on the development of successful integration plans.
- Operational – Once IPv6 has been integrated into the network, it will need to be supported. Operational training will consist mostly of job-specific training targeted to a participant's job responsibilities. Core topics such as the fundamentals of IPv6, auto-configuration, and transition mechanisms will need to be covered. However, the bulk of operational training should focus on supporting applications or protocols that have IPv6 underneath them. One example is training for system administrators focusing on supporting IPv6-enabled e-mail and web servers. Operational training will often be hardware- or software-specific, generally produced by or for a particular vendor's product.
- Specialized – As IPv6 deployment advances and the base level of understanding becomes more pervasive, the need for specialized training will emerge. This type of training should focus less on IPv6 specifically and address greater technological topics where IPv6 plays an important role. All of my posts addressing the transition from IPv4 to IPv6 could be considered as falling into the Awareness category of training. And, my next one will specifically address the three most commonly used transition strategies, known as dual stack, tunneling, and translation between the two versions of IP. Author: David Stahl
News Article | January 9, 2016

The Kepler spacecraft is back in action, and NASA has confirmed that it has found over 100 planets orbiting stars. Ian Crossfield from the University of Arizona announced the mission's discoveries at an American Astronomical Society conference Tuesday, noting that the revamped Kepler mission, now known as K2, found planets different from what the original mission observed. Many of these orbit stars hotter and brighter than those from the original Kepler field, including several multi-planet systems. For instance, K2 spotted a system with three planets bigger than Earth, found a planet within the Hyades star cluster, and discovered a planet in the process of being ripped apart while orbiting a white dwarf star.

"It's probing different types of planets [than the original Kepler mission]. ... The idea here is to find the best systems, the most interesting systems," said Tom Barclay from NASA's Ames Research Center.

According to Crossfield, the first five K2 campaigns each observed a different part of the sky and turned up 7,000 transit-like signals, which then went through a validation process to narrow down and confirm the planet candidates. A $600-million mission, Kepler was launched in 2009 with the task of determining how commonly Earth-like planets occur in the Milky Way galaxy. Over the course of four years, the mission discovered more than 1,000 planets, more than half of all the exoplanets that had been discovered at the time. The spacecraft had been staring at the same patch of sky since launch, but it lost the ability to hold that pointing in 2013. It underwent a few tweaks to get it back in working order, resulting in K2, but it can no longer observe the same spot indefinitely; with K2, a patch of sky can only be observed for about 80 days at a time. Aside from observing planets as they orbit other stars, K2 is also on the lookout for supernovas and is studying planets in the solar system.
The mission logged a 70-day observation of Neptune in 2014 to study the planet's windy weather and is currently staring at Uranus. Afterwards, K2 is set to observe an asteroid population sharing Jupiter's orbit. The revamped Kepler mission is also looking at trying to spot planets wandering the galaxy without their own stars.

Agency: NSF | Branch: Standard Grant | Program: | Phase: SOLAR-TERRESTRIAL | Award Amount: 30.00K | Year: 2015
This award supports participation of students and early-career scientists at the Triennial Earth-Sun Summit (TESS), a joint meeting of the Space Physics and Aeronomy Section of the American Geophysical Union and the Solar Physics Division of the American Astronomical Society. TESS is intended to be a gathering of the entire Heliophysics community, including distinct sub-disciplines devoted to studies of the Sun, the near-Earth space environment, and their interactions with the Earth's atmosphere. The goal of this conference is to promote greater interaction and unity within this community. This increasingly interdisciplinary effort is necessary to understand, predict, and mitigate the effects of space weather. This program will be administered by the American Astronomical Society (AAS), the principal professional organization for US astronomers. They have very successfully and economically administered similar programs in the past, and the proposed procedures ensure that these funds will provide the maximum benefit to the astronomical and geospace communities at minimal administrative cost.

Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 150.00K | Year: 2015
This award supports the transition of the World Wide Telescope (WWT) project from Microsoft Research to the American Astronomical Society (AAS). Amateur astronomers support an industry of considerable value and are involved in research through citizen science efforts such as the Zooniverse. The WWT software provides a uniquely capable platform for involvement of amateur and professional astronomers alike in research, education, publishing, outreach, and communication. Keeping this package available is a valuable service for the country. The investment to ensure a smooth transition from its current state of being proprietary, albeit free, to an open source project led by the AAS is clearly very worthwhile and at the same time extremely cost-effective. Microsoft supported the WWT software system up until the fall of 2014, when it decided to release WWT to the open source community. The AAS is keen to assume a leadership role in a project of such evident value, but needs time to arrange for long-term support of WWT in aid of the US astronomical community. Microsoft funding ended on June 30, 2015, so this project will bridge the gap, although supporting only maintenance activities. This will ensure both that the WWT remains available to the AAS and that the system remains available for existing users. As noted, WWT is currently used widely by many constituencies. This software has unique capabilities, including the creation of video abstracts for publications. It is used in many schools, planetariums and museums across the country and the world. Under a community-driven open source model, these activities will be supported by experts from the respective constituencies, and connections between the various activities can leverage the work in one community for the benefit of all.

Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 107.94K | Year: 2012
The proposing professional scholarly organizations, the American Astronomical Society and the American Institute of Physics, will conduct a pilot project to deliver the digital data sets that underlie figures and tables in three of the journals that they publish in astronomy and plasma physics. The project will involve developing methods for identifying and acquiring those digital data, as well as for providing access to the actual data objects in the published literature. The proposers will (i) conduct surveys of authors to determine their willingness to share data and their interest in re-using data that other researchers might publish; (ii) convene expert stakeholders for focused workshops on metadata semantics, digital structures and formats, and on practices for peer review of data; (iii) develop and refine publishing production methods to acquire, validate, deliver, maintain, and curate data; and (iv) raise the awareness of scientists about the merits of and prospects for sharing data. The pilot will be assessed in part by quantitative metrics on the submission of data sets for publication and the use of these data sets by readers of the participating journals, and the outcomes will be disseminated through multiple forums to the scholarly publishing and research communities.

Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 190.00K | Year: 2014
Astronomy is a fully international endeavor; we all share the same sky. To participate in the international scientific community, American researchers must meet and collaborate with researchers from around the world. Every three years, the International Astronomical Union has a General Assembly accompanied by topical symposia, joint discussions, working groups, and business meetings. The next such meeting, in August 2015, will be held in Hawaii and hosted by the US (the last US-hosted meeting was in 1988). In order to ensure a vigorous participation by the US astronomical community, this award will fund small travel grants to approximately 200 US astronomers. Preference will be given to early-career astronomers, astronomers from less-endowed institutions, and astronomers playing an active role in the governance of the IAU and the conduct of the associated symposia.
Scientists supported by this program will come from a wide range of educational institutions, museums, planetariums, etc. The experiences they gain from this meeting will be shared with a broad segment of the US population, exposing them to the global nature of astronomy and the importance of international collaboration to the advancement of science and technology. The small travel grant program will be administered by the American Astronomical Society (AAS), the principal professional organization for US astronomers. They have very successfully and economically administered similar programs in the past, and the proposed procedures ensure that these funds will provide the maximum benefit to the astronomical community at minimal cost. The AAS will solicit proposals, select awardees, issue funds to be used for airfare only, collect receipts and meeting reports from the grantees, and prepare a final assessment and report on the entire program.
File corruption sometimes causes only a minor amount of information loss. Other times, it can prevent a file from functioning entirely. Files become corrupt when parts of them have been damaged. It's like tearing the pages out of a book: if you tear out enough pages, or even just one important page, the book stops making sense. Gillware Data Recovery offers financially risk-free corrupt file recovery services if you have lost a critical file to corruption.

How Do Files Become Corrupt?

Our data recovery engineers here at Gillware draw a line between two types of file corruption, a distinction based on the root cause of the corruption. One is what we refer to as "soft" corruption. The other is "hard" corruption. "Soft corruption" is what our corrupt file recovery engineers call corruption that appears as a symptom of a larger issue. Fixing the larger issue will make most, if not all, of the file corruption vanish. Fixing the larger issue may be extremely difficult, but the corruption itself is easy to fix; more or less, it goes away by itself once the problem is fixed.

Take, for example, a hard drive with failing read/write heads. The heads can read data intermittently. Maybe one or more of the heads in the headstack has failed, and the rest work fine. If one of those heads is dead, you end up with "bad sectors". These sectors aren't intrinsically bad; there's nothing physically wrong with the platters themselves. But large files are often split between different surfaces of the hard drive's platters, so it takes the combined efforts of several heads to read the whole file. If one of those heads is bad, the file will appear to be corrupt because parts of it are unreadable. After our engineers get into the hard drive and replace its heads, we can usually read most or all of those "bad" sectors. Suddenly, the file isn't corrupted anymore.

Soft corruption can also occur in RAID arrays. Take the example of a failing RAID-5 array.
On March 24, 2016, the RAID controller card notices that one of the hard drives in the array is lagging behind. It's still functional, but it seems likely to fail soon. Because RAID-5 has one hard drive's worth of fault tolerance, the controller takes the drive offline and goes about its business. But then another hard drive fails a few months later, on June 15, and the entire array crashes.

Your IT technician might look at the RAID, see that the first drive to fail can be brought back online, and force that hard drive back online. The problem is that the controller stopped writing data to that drive several months ago. We refer to the first drive to fail in a RAID-5 array as a "stale" drive. If the RAID array gets resuscitated like this, it will try to integrate all of the stale data into the array. This means that just about all of the data written to the array since the first hard drive failed will be corrupted. A file you haven't touched since January will be fine. Your main Outlook PST that's seen constant use since March won't be so lucky. This creates massive amounts of file corruption and makes RAID data recovery more difficult for our engineers. Repairing the second failed hard drive and using it to reconstruct the array without the stale drive can clear a lot of it up.

An illustration of how data can be corrupted in a failed RAID-5 array.

In file corruption data recovery scenarios, we call these types of corruption "soft". File corruption isn't the problem in and of itself; the real problem is usually a much more serious issue. Clearing up that issue tends to fix the vast majority of the corruption.

"Hard corruption" is a different matter for our corrupt file recovery engineers. This type of file corruption has a root cause, but addressing the root cause will not undo the corruption. The corruption itself has to be addressed. Take a hard drive with damage to its magnetic data storage platters.
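The fault tolerance mentioned above comes from XOR parity: each stripe stores a parity block equal to the XOR of its data blocks, so any single missing block can be recomputed from the survivors. It also shows why a stale drive silently poisons reconstruction. A minimal sketch in Python (illustrative only; this is not Gillware's tooling):

```python
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together (the RAID-5 parity math)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe spread across three data "drives", plus its parity block.
d0, d1, d2 = b"FILEPART", b"MOREDATA", b"LASTBITS"
parity = xor_blocks(d0, d1, d2)

# Drive 1 dies: its block is recoverable from the survivors plus parity.
assert xor_blocks(d0, d2, parity) == d1

# But if drive 1 is stale (holds months-old data) and is forced back
# online, reconstructing another failed drive yields silent garbage,
# because parity no longer matches the stale block.
stale_d1 = b"OLD_DATA"
assert xor_blocks(stale_d1, d2, parity) != d0  # corrupted reconstruction
```

This is why files untouched since before the first failure survive a forced rebuild, while anything written afterward comes back corrupted.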
The magnetic coating on its platters stores all of the data on the drive. If parts of that coating are damaged, those sectors are gone forever. Sectors can also go bad as a result of old age. Even if the heads are fine, those sectors are still lost. Files that have lost sectors become corrupted, and may be completely nonfunctional.

Hard corruption can also appear in files after deletion or drive reformats. When you delete files from your computer or reformat your hard drive, that once-occupied space is flagged as "unused". As you continue to use the hard drive, more data is written to it, and this data will eventually start overwriting the old data. There is no way to "roll back" a sector on a hard drive's platters once it has been overwritten. Hard drives only keep backup copies of extremely important things, such as firmware modules and partition superblocks.

A software crash can also cause hard file corruption. Perhaps you've had Microsoft Word freeze on you in the middle of a document. You tell Windows to force shut down the program, and then try again. But when you open the file again, Word spits out an error message: when the program crashed, it accidentally wrote some data where it shouldn't have and garbled up your document. Other culprits for hard corruption include virus attacks and sudden, improper computer shutdowns. If system-critical files become corrupt, your operating system may fail to start up altogether.

The Corrupt File Recovery Process

Hard file corruption seems like it would be impossible to recover from, but there are many occasions where corrupt file recovery is possible for our data recovery engineers. To recover a corrupted file, our engineers must find a way to circumvent the corrupted sectors and get the file working again. When your Outlook PST, QuickBooks QBW, and SQL database files become corrupt, it can leave your business dead in the water.
Our corrupt file recovery engineers are well acquainted with the internal geometry of these file types, and many more. We know how to repair these files when a few bad sectors prevent them from working. File corruption could prevent you from opening your Outlook PST email archive, but usually only a few bad sectors are to blame. Our data recovery engineers can work around these sectors and get your PST functional again. There may be some minor data loss, but that is better than having a file that won't open.

Why Choose Gillware for Corrupt File Recovery?

At Gillware, we understand that it isn't always possible, or within our clients' budgets, to repair corrupted files. We don't like our clients throwing money at us and getting nothing in return, and we're sure they wouldn't like it either. That is why we keep our entire data recovery process financially risk-free. We start with a free evaluation, and for clients living in the continental United States, we even offer a prepaid UPS shipping label at no charge. After we complete the evaluation, we present you with a price quote and a probability of success. We don't send you a bill until we've recovered everything we can, and you only pay the bill if we've successfully recovered your data at a price that makes sense for you.

Our data recovery engineers are world-class. After logging countless man-hours of data recovery work, there's very little they haven't seen or done. You can trust that your data is in good hands when you choose Gillware Data Recovery.

Are You Ready to Have Gillware Assist You with Your Corrupt File Recovery Needs?
- Best-in-class engineering and software development staff: Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions.
- Strategic partnerships with leading technology companies: Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering data from these devices.
- RAID array / NAS / SAN data recovery: Using advanced engineering techniques, we can recover data from large-capacity, enterprise-grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices.
- Virtual machine data recovery: Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success.
- SOC 2 Type II audited: Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined to be secure.
- Facility and staff: Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company.
- GSA contract holder: We meet the criteria to be approved for use by government agencies (GSA Contract No.: GS-35F-0547W). Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI.
- No obligation, no up-front fees, free inbound shipping and no-cost evaluations: Gillware's data recovery process is 100% financially risk-free. We only charge if the data you want is successfully recovered.
- Pricing 40-50% less than our competition: By using cutting-edge engineering techniques, we are able to control costs and keep data recovery prices low.
- Instant online estimates: By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery.
- We only charge for successful data recovery efforts: We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting those goals and recovering the data that is most important to you.
- Trusted, reviewed and certified: Gillware has the seal of approval from a number of independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they're getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
When you double-click a file on your Mac, the operating system will automatically open the file using the program assigned to that type of file. It is possible, though, to open the file using another program if you wish.

To open a file on your Mac using a different program, navigate to the file you wish to open and right-click on it to open the file menu. When the file menu opens, click on the Open With option. This opens the Open With submenu, which contains other programs that Mac OS thinks could properly open the file and manipulate it in some way. If the program you wish to use is listed, simply select it from this submenu and the file will open using that program.

On the other hand, if the program that you wish to use is not listed, click on the Other... menu option. This opens the Choose Application dialog, where you can select a different application to open the file with. The Choose Application dialog displays a list of applications that you can choose to open this file with. By default, this dialog will only show Recommended Applications, which means that you will only be able to select the applications that are in bold. If you wish to select an application other than a recommended one, change the Enable option to All Applications. This will allow you to select any application you wish.

Once you have determined the application you wish to use, select it by left-clicking on it once. If you want this application to always open this particular file, also put a check mark in the Always Open With check box. Then click on the Open button, and the file will open with the selected application.

If you have any questions about this process please feel free to post them in our Mac OS Forum.
I have to create a table in which one of the fields is CITY of datatype CHAR(15), and I have to use some sort of compression on the alphabetic field CITY. How do I compress the field while creating it? Please help me on this.

OK, I will explain what I require. I need to create an employee table named "EMPLOYEEDET" with the following characteristics:

COLUMNNAME  DATATYPE    LENGTH     REMARKS
dept_id     numeric     4          foreign key which refers to the corresponding column in the parent table "DEPARTMENT"
emp_no      numeric     4          primary key
firstname   alphabetic  15
lastname    alphabetic  15
city        alphabetic  15         use some sort of compressing the alphabetic field city
gender      alphabetic  1          gender should be M or F; it should not allow any other character to be entered
age         numeric     4
salary      numeric     integer:8
doj         date        10

Now I just have to create the table with the above remarks being followed, but I am not able to understand the remark on the city field. What does it mean? What I did next was to first create the table with the city field's datatype as CHAR, and later alter the table to change the datatype from CHAR to VARCHAR, which I think will occupy less space than CHAR. Suggest me if I am wrong.
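For the gender remark, a standard-SQL CHECK constraint is the usual way to restrict a column to 'M' or 'F'. The CITY "compression" remark is a separate question; switching from fixed-width CHAR to VARCHAR, as the poster tried, does store only the characters actually used. A rough sketch of the DDL, shown here through Python's sqlite3 purely for illustration; mainframe DB2 syntax will differ in details:

```python
import sqlite3

# In-memory database just to demonstrate the constraints.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE employeedet (
        dept_id   INTEGER REFERENCES department(dept_id),
        emp_no    INTEGER PRIMARY KEY,
        firstname VARCHAR(15),
        lastname  VARCHAR(15),
        city      VARCHAR(15),    -- VARCHAR stores only the used length
        gender    CHAR(1) CHECK (gender IN ('M', 'F')),
        age       INTEGER,
        salary    INTEGER,
        doj       DATE
    )
""")
conn.execute("INSERT INTO department VALUES (10)")
conn.execute(
    "INSERT INTO employeedet VALUES "
    "(10, 1, 'Ada', 'Lovelace', 'London', 'F', 36, 5000, '1852-11-27')"
)

# The CHECK constraint rejects any gender other than M or F.
try:
    conn.execute(
        "INSERT INTO employeedet VALUES "
        "(10, 2, 'Bob', 'Smith', 'Paris', 'X', 30, 4000, '2000-01-01')"
    )
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The names and sample rows here are hypothetical; on DB2 the gender rule could equally be enforced with the same CHECK clause in the CREATE TABLE statement.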
Questions derived from the CISSP – CISSP ISC2 Self-Test Software Practice Test.

Objective: Access Controls
SubObjective: Control access by applying concepts/methodology/techniques
Item Number: CISSP.1.1.28
Single Answer, Multiple Choice

Which technology allows users to freely access all systems to which their account has been granted access after the initial authentication?

- Smart cards
- Single sign-on
- Biometric device

Answer: D. Single sign-on

Single sign-on allows users to freely access all systems to which their account has been granted access after the initial authentication. This is considered both an advantage and a disadvantage. It is an advantage because the user only has to log in once and does not have to constantly re-authenticate when accessing other systems. It is a disadvantage because the maximum authorized access is possible if a user account and its password are compromised.

Discretionary access control (DAC) and mandatory access control (MAC) are access control models that help companies design their access control structure. They provide no authentication mechanism by themselves. Smart cards are authentication devices that can provide increased security by requiring insertion of a valid smart card to log on to the system. They do not determine the level of access allowed to a system. A biometric device can provide increased security by requiring verification of a personal attribute, such as a fingerprint, for authentication. It does not determine the level of access allowed to a system.

Single sign-on was created to eliminate the need to maintain multiple user accounts and passwords to access multiple systems. With single sign-on, a user is given one account and password that logs on to the system and grants the user access to all systems to which the account has been granted access.

CISSP All-in-One Exam Guide, Chapter 4: Access Control, Single Sign-On, pp. 149-151.
Solutions to this problem are not new. Microsoft tried to implement a Web single sign-on with Passport (now called Windows Live ID). To compete with Microsoft, Sun Microsystems launched the Liberty Alliance, with the goal of creating a de facto standard for Internet Web applications. Unfortunately, both initiatives had limited adoption, and both are now nearly dead. A few years later, at the RSA conference in 2006, Bill Gates gave a keynote address on the end of passwords for the Internet by using CardSpace (i.e., Information Cards), which was introduced with Microsoft Windows Vista. However, five years later, just a few services on the Internet have adopted the "standard." In fact, it is really hard to change user behavior. Nowadays, users access their services from several devices, such as PCs at the office and home, smartphones, tablets, TVs, etc.; a trend that contributed to a low rate of adoption for CardSpace.

Passport, Liberty Alliance and CardSpace were designed for user convenience, but in reality didn't increase the security level. Service providers have valid concerns about these technologies, which can lead to low adoption rates. That's why most Internet banking systems around the globe never adopted them. Instead, banking systems added mechanisms to confirm user identity while at the same time providing ways for people to use Web-based services.

Usually, a user has a login and password for authentication, but that's not enough to guarantee the user's identity, since his or her credentials could have been stolen. Some efforts have been made to protect users against this kind of attack. For example, today many financial institutions use virtual keyboards that change the position of the numbers and letters with each new session. However, attackers can potentially circumvent this process by adding the capability of taking screen snapshots at every mouse click.
An improvement on this basic approach is to group two characters on a single button. This increases security, but not for long: the more someone uses the interface and the more the character clusters change, the more data the attacker can gather, and the more clues the attacker obtains about the user's password. Therefore, adding a second factor for authentication (i.e., two-factor authentication) can improve security and mitigate attacks that use a stolen login and password. To make it work, the system should go beyond what the user knows (login and password) and incorporate what the user has (e.g., a one-time password token, or OTP). However, giving something to a user is not an inexpensive approach; there are many logistics involved in deploying and maintaining a solution like this.

There are many technologies out there that companies can use. One of the cheapest is the token table, a rudimentary OTP challenge/response scheme where the service asks not only for a login and password, but also, for example, for code 10 from the user's token table. I can't say that this method is ineffective, but of course it has its limitations: the number of codes is limited, the table is easy to scan, and so on.

Some Internet banking services use OTP tokens, which generate six-digit, time-based codes; each code is valid for some period of time (usually one minute). As you can imagine, it's not a cheap solution, and from a user's perspective it doesn't scale. Take my own example: I have accounts at two different banks, and each bank offered me one of these tokens. Can you imagine one for each bank, one for Facebook, one for Twitter, one for Amazon, etc.? In the end, I would carry dozens of these tokens. It's an ineffective approach that's inconvenient for users. There are a variety of possible solutions.
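The six-digit, time-based codes described above are typically produced by the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238); the article doesn't say which scheme these banking tokens actually use, so treat this as an illustrative sketch with a shared secret taken from the RFC test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the offset
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def totp(secret, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    now = time.time() if timestamp is None else timestamp
    return hotp(secret, int(now // step), digits)

secret = b"12345678901234567890"  # RFC test-vector secret, not a real key
print(totp(secret, timestamp=59))  # prints "287082" (RFC 4226 vector for counter 1)
```

Because both sides derive the code from the shared secret and the clock, nothing secret crosses the wire; a verifier usually also accepts the adjacent time window to tolerate clock drift.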
Facebook and Google have adopted an approach that uses mobile phones to retrieve a password or to unlock an account, and some banks even use a similar approach to authorize a transaction. This approach relies on a third-party device to attest to the user's identity, but it does not use a reliable medium; SMS, for example, is not very reliable (at least not globally).

To unify and simplify this process, in 2011 Intel launched an initiative called Identity Protection Technology (IPT), an umbrella term for a number of building-block components, such as OTP authentication embedded into the chipset. By centralizing the technology in a single device, Intel lets users access their services while also decreasing concerns about man-in-the-middle- or man-in-the-browser-style attacks.

There are many solutions to the identity issue. From a service provider's standpoint, the most pragmatic approach may be to adopt many technologies to support authentication, thereby providing the path of least resistance and hassle for the user. For example, I hate the idea of carrying an OTP token.

Bruno Domingues is a senior enterprise solution architect at Intel Corporation, focused on capacity planning and performance tuning, mission-critical planning and design, large-scale vPro deployment, and alternative computer models and infrastructures. Prior to joining Intel in 2007, Bruno worked for Microsoft and has more than 10 years of experience in the industry. Bruno holds a mathematics degree and has attained various technical certifications with Novell and Microsoft.

The above cloud computing insights were provided to InformationWeek by Intel Corporation as part of a sponsored content program. The information and opinions expressed in this content are those of Intel Corporation and its partners and not InformationWeek or its parent, UBM TechWeb.
Fiber optic technology has been one of the world's most effective innovations in wired communication. It has changed the world and made today's internet practical as a platform for worldwide access to data and information. And it is not only the internet: other kinds of communication have undergone a sea change owing to the deployment of networks driven by an optical fiber backbone. When it comes to local communication, multimode fiber optic cable plays a significant role in ensuring high data transmission rates at high speed and low attenuation within a network with multi-user support. There are many significant reasons people use these cables. Below is a list of the main benefits.

The biggest feature is the ability of multimode optical fiber to carry multiple signals at the same time over the same line. The network user can send more than one packet through the cable at the same time, and all of the information arrives intact at its destination; the channels do not mix or distort one another.

High power signal transmission capacity
Multimode cables are excellent when it comes to carrying a high amount of total power inside the signals. Almost no power is lost, so the information is easily delivered at the other end of the line without any intermediate amplification.

Real-time data transmission
The high speed of an optical network derives from the fact that data travels as light rather than as a traditional electromagnetic signal over copper, so soft real-time systems over the network are feasible in some scenarios.

High bandwidth and transfer rate
The multi-channel factor contributes to high bandwidth and a high rate of data transfer.

Data security
The optical signal relies on total internal reflection, a physical property of light reflecting at a boundary surface. It is therefore extremely difficult to tap into a fiber optic network, so multimode fiber optic cable enjoys a high level of data security.

Support of multiple protocols
These networks can support many data transfer protocols, including Ethernet, ATM, InfiniBand and the Internet protocols. One can therefore use the cable as the backbone for a range of high-value services.

Obviously, a multimode fiber optic cable can serve as a high-performance backbone for cabled communication needs. Using these cables will improve your experience if your equipment depends on network performance, and using multimode fiber instead of inferior cable can greatly improve bandwidth and noise suppression. When choosing fiber optic network cable, you must have the correct information to determine the right solution.
Despite its reputation, hacking isn't always about cracking code, installing malicious software or maneuvering past security configurations. Believe it or not, many hackers' most relied-upon methods are better described as non-technical. Social engineering, a non-technical approach that exploits human error to access confidential information, is a serious threat to organizations and individuals everywhere.

For years, cyber criminals have launched social engineering attacks in the form of phishing emails to dupe recipients into sending personal data. But now, as social media platforms continue their rapid growth, hackers have taken the opportunity to cast a much wider phishing net. Some common phishing tools on social media platforms now include fake login pages that intercept credentials and fabricated profiles that hackers use to connect with other user accounts. Once a user accepts a connection request from one of these accounts, a hacker can access privileged information, such as email addresses, phone numbers and other personal information.

Since social engineering attacks are capable of undermining otherwise sound security operations, they are of particular concern for an organization's security. Email and social media are universal tools for business, so it's natural to be nervous about the potential for these breach events. Fortunately, there are a number of ways that you can fight back against social engineering attacks.

- Education: Chief among social engineering countermeasures is a robust security and privacy awareness program that explains these hacking methods across all potential channels and helps employees understand what to watch for.
- Policy: Security systems may not block some social engineering attacks, but a thoroughly defined policy containing routine measures against social engineering remains a strong defense.
For instance, help-desk employees can be required to ask callers for a unique corporate identifier, which will thwart attempts to gain confidential information by phone. - Identity management: Limiting access to certain critical information makes it much easier for organizations to protect that information. Authorization rules defining access privileges are a key competency for achieving this. Lunarline assists organizations in the areas of security policy management, as well as identity management, and the Lunarline School of Cyber Security now offers training programs specifically focused on social engineering risks.
As stated in a recent U.S. News article, "For the United States to remain the global innovation leader, we must make the most of all of the potential STEM talent this country has to offer." However, according to an ASQ survey, today's young people seem reluctant to pursue STEM careers and education programs. The ASQ study also notes 51 percent of students say they "spend more time after school on the computer, browsing the Internet or playing games, than on schoolwork, such as studying and reading."

To help bridge that gap, Cable in the Classroom (CIC) has harnessed broadband's learning potential to create an online game aimed at middle school and older students called Coaster Crafter: Build. Ride. Scream! Combining computer gaming and STEM education concepts, Coaster Crafter embeds the activities of designing, building, testing, and then taking a virtual ride into a personalized roller coaster. Set in a virtual amusement park, these custom-designed roller coasters help stimulate kids' interest and engagement in science and math. And really, who doesn't love a roller coaster?

Coaster Crafter is an excellent example of the catalytic power of cable broadband when it's combined with great digital content and curious kids. CIC's emphasis on STEM also reflects the cable industry's overarching commitment to STEM education. Cable industry companies such as Time Warner Cable and Discovery Communications have created national initiatives to increase the number of kids interested in STEM careers. And as an industry serving consumers through innovative technology, cable itself has a vested interest in STEM success in the U.S.

This is CIC's fourth online learning game for students. Previous games have included eLECTIONS: Your Adventure in Politics (now updated for 2012), Shakespeare: Subject to Change, and the weather game WindWard.

The author, Carson Ward, is a summer intern at NCTA.
She will be a rising junior this fall at the University of Maryland at College Park.
A group of researchers from the Chinese University of Hong Kong have demonstrated that even applications with zero permissions can be used to launch attacks that allow attackers to forge text and email messages, access private information, receive sensitive data, and even gain remote control of the targeted device.

Tested on a Samsung Galaxy S3, a Meizu MX2 and a Motorola A953, their "GVS-Attack" was successful regardless of whether the device was running the vendor's official Android version or CyanogenMod OS.

"GVS-Attack utilizes an Android system built-in voice assistant module – Google Voice Search," they explain in a paper, and invokes the device's speaker. "Through Android Intent mechanism, VoicEmployer (their prototype attack app) triggers Google Voice Search to the foreground, and then plays prepared audio files (like "call number 1234 5678") in the background. Google Voice Search can recognize this voice command and execute corresponding operations."

The researchers have also discovered a status-checking vulnerability in the Google Search app, which the GVS-Attack can exploit to make the device call arbitrary malicious numbers. This can be executed even when the device is locked and secured with a password, ideally in the early hours of the morning, when the device owner is more likely to be asleep.

In order to execute the attack, a malicious app – in this case their own VoicEmployer – has to be installed on the target's phone and run. Users who don't lock their phones are in even more danger, as the data contained on their device can be transmitted to the attacker, who can also gain control of the victim's Android phone remotely. The malicious app is able to do all this while bypassing a number of Android permissions (Read Contacts, Write SMS, Send SMS, Internet, Set Alarm, Get Accounts, and so on).
“GVS-Attack can dial a malicious number through playing “call …”, when this call is answered by an auto audio record machine, actually the data transmission channel has been built. Any audio type of data can be transferred through this channel instead of commonly used Internet connection,” they explained. It’s also interesting to note that a number of popular mobile apps weren’t able to detect VoicEmployer as malicious. “Through experiments, the feasibility of our attack schemes has been demonstrated in the real world,” they concluded, adding that they hope that their research will “inspire application developers and researchers rethink that zero permission doesn’t mean safety and the speaker can be treated as a new attack surface.”
https://www.helpnetsecurity.com/2014/07/29/researchers-successfully-attack-android-through-devices-speaker/
According to the Financial Times, "Google is phasing out the internal use of Microsoft’s ubiquitous Windows operating system because of security concerns, according to several Google employees. The directive to move to other operating systems began in earnest in January, after Google’s Chinese operations were hacked, and could effectively end the use of Windows at Google, which employs more than 10,000 workers internationally." “We’re not doing any more Windows. It is a security effort,” said one Google employee. “Many people have been moved away from [Windows] PCs, mostly towards Mac OS, following the China hacking attacks,” said another. New hires are now given the option of using Apple’s Mac computers or PCs running the Linux operating system; employees wanting to stay on Windows require clearance from “quite senior levels.” Although Windows remains the most popular operating system in the world by a large margin, with various versions accounting for more than 80 per cent of installations, Windows is known for being more vulnerable to attacks by hackers and more susceptible to computer viruses than other operating systems. The greater number of attacks on Windows has much to do with its prevalence, which has made it a bigger target for attackers. But would it make any difference if the victims were running Linux or any other operating system when an attacker can conduct such a sophisticated attack? It is not a matter of OS; the target is the user. Linux, Windows, Mac, whatever: everything has weaknesses, especially the users of those systems. Besides Windows, we have other choices, such as Mac OS and Linux. Which one is your next OS? Change your OS, or educate the users? Reposted from Topsight
http://www.infosecisland.com/blogview/4219-Its-time-to-ditch-Windows-which-one-is-your-next-OS.html
A quick guide to trojans - what they are, how they work and the potential consequences of having a trojan unwittingly installed on your computer or smartphone. What is a trojan? Much like the wooden horse of Greek mythology, a trojan horse program (usually called a trojan) appears to be desirable or harmless, but also silently performs actions that are harmful to the user's device, data or privacy. A legitimate program that also performs a harmful action because of a bug in its coding or flaw in its design may also be considered a trojan, at least until the problem is fixed. Tricking the user Trojans first and foremost rely on tricking the user into believing that the program is legitimate, so that they will willingly install the malware themselves. Malware authors will often go to great lengths to make their trojans look authentic, often disguising them as movie or sound files, documents, popular games, product updates for legitimate programs and so on. If a user can successfully see through this disguise, avoiding the threat is much easier. Trojans are distributed in many ways – via malicious or compromised websites, in email messages as file attachments, through social media or file sharing networks, even on removable media such as USB sticks. Usually, these distribution channels involve some sort of trickery, such as promising the user a video or image if they click on a link, which gives them the malware instead. More malicious attacks involve using vulnerabilities in the user's device to forcibly download and install the trojan without the user's permission, in an attack known as a drive-by download. If a trojan succeeds in getting installed onto a system, it is often very difficult for users to realize it is performing any harmful actions, as these are usually well camouflaged to keep the system from triggering any notification messages that might arouse the user's suspicions.
Very generally, trojans can be divided into two groups based on the actions they perform: data-dealers and control-stealers. Data-dealers will look for and steal information from the device or data about the user, such as credit card numbers, passwords or documents, keystrokes entered or the user's web browsing history; control-stealers meanwhile focus on taking control of the computer, for example by installing a backdoor to control the system or programs, or opening a channel to a remote site where an attacker can give commands to the infected device. More technically, most antivirus vendors will give a specific classification to a trojan based on the specific type of action it performs. You can read more about the Types F-Secure uses to classify trojans. Recognizing and avoiding trojans Avoiding trojans often comes down to keeping any installed programs on a computer updated to block any attempts at drive-by downloads, and exercising some vigilance when downloading files from the Internet, even when offered by a trusted contact. Reputable antivirus programs (especially those with a website security indicator such as Browsing Protection) also provide a layer of defense against such malware. One advantage mobile devices enjoy over their computer counterparts is that before any program can be installed, a notification message is displayed and the user must manually click ‘OK' before it can be installed. This prevents the silent installation of malware onto a mobile device. User vigilance however is still strongly recommended.
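The signature-matching layer that such antivirus products provide can be sketched in a few lines. This is a minimal illustration only: real engines combine heuristics, behavioral analysis and cloud reputation, and the digest set below is a placeholder (the SHA-256 of empty content), not real malware data.

```python
import hashlib

# Placeholder set of SHA-256 digests standing in for known trojan samples.
# (The digest below is that of empty content, used purely for illustration.)
KNOWN_BAD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def looks_malicious(data: bytes) -> bool:
    """Flag content whose SHA-256 digest matches a known-bad signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_DIGESTS
```

A scanner built this way catches only exact copies of known samples, which is one reason the update habits and user vigilance described above still matter.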
https://www.f-secure.com/en/web/labs_global/trojans
In part 1 of this series, we discussed the cultural challenges of operating IT according to technology silos. In today’s post, we’ll contemplate why technology silos are a legacy way to organize IT. What does a data center really do? If you’re a technologist, then your answer to this question is likely that the data center does many different things—and what those tasks may be probably depends on your area of expertise. For instance, a data center delivers lots of packets to many hosts very quickly. A data center stores and retrieves information. A data center houses databases. Energy-conscious folks might note that a data center consumes energy in the forms of cooling and electrical draw. However accurate each of those perspectives might be, to my way of thinking they all miss the point: A data center is an engine for business. Who’s Driving What? We technologists can’t be faulted for thinking of data centers in terms of technology—that’s what we love. But data centers don’t exist for technology alone. They exist to drive the business. If there were no business needs, there would be no data center. Let’s pause for a moment to envision a data center in terms of an internal combustion engine. In an engine, there are many parts. There is an engine block with cylinders, pistons, cylinder heads, connecting rods, a crankshaft, exhaust headers, a starter motor, spark plugs, a fuel injection system, an oil pump, and more. Connected to that engine are accessories such as the cabin air conditioner, the alternator, and the belt tensioner pulley. Some engines might feature a turbo or supercharger. Modern engines are equipped with several computer systems that constantly tweak engine inputs and monitor engine outputs. As car buyers and drivers, do we think about the many systems and individual parts that make up the engine? No—to most of us, it is “that thing under the hood that makes the car go.” So it is with data centers.
In business, few people think about the storage systems, virtual servers, hypervisor hosts, Ethernet network, and security infrastructure that are all integral parts of the data center. They think of the data center like most of us think of an engine—that thing behind the secure doors that makes the business go. Now, perhaps “making the business go” is vague—fair enough. Let’s modify it to agree with business stakeholders and say that the data center “engine” exists to deliver business applications. When all the parts of the data center hum along together, that is precisely what they are doing—delivering applications to business users in such a way that they can get their work done. Your Favorite Memory Now, let’s shift gears for a moment and think about the most successful IT project in which you ever participated. My “most successful” project was a data center migration for a payment processing company. During the migration, several teams of people collaborated to forklift hardware and migrate data and applications from one facility to another—all without interrupting service to all the customers relying on us to process financial transactions for them. Why was this project memorably successful? In my opinion, the project succeeded because individual IT silos worked together to accomplish each project milestone. We had to in order to continue delivering card transactions, facilitate reporting, keep payment gateways available, and just generally keep the engine running. All silos were interdependent. The network team was dependent on the cabling team. The application team was dependent on the infrastructure teams. The server team was dependent on the storage team. We all had to learn a bit about what the other teams did and needed from one another in order to successfully complete each move as it happened. We all worked in the context of application delivery. 
An interesting takeaway is that data centers operate as unified wholes, whether we think of them that way or not. To bring this back around to the monitoring of IT infrastructures, our focus should be on monitoring application delivery rather than on siloed infrastructure systems. Monitoring infrastructure is important, to be sure, but the true goal is correlating the data to explain how applications are being delivered. To loop back to the engine analogy, we should never lose sight of our goal—to drive success—because that’s the reason we’re here. In part 3 of this series, we’ll consider how an IT team that works across technology silos thinks about and monitors applications.
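The shift from silo-by-silo monitoring to application-delivery monitoring can be sketched as a simple correlation of per-silo metrics into one verdict. The metric names and thresholds below are hypothetical, chosen only to illustrate the idea, not taken from any real monitoring product:

```python
# Hypothetical per-silo metrics rolled up into an application-delivery verdict.
SLO_CHECKS = {
    "network_latency_ms": lambda v: v < 100,   # network silo
    "storage_iops":       lambda v: v > 500,   # storage silo
    "app_error_rate":     lambda v: v < 0.01,  # application silo
}

def app_delivery_healthy(metrics: dict) -> bool:
    """The application counts as 'delivered' only if every silo meets its target."""
    return all(name in metrics and check(metrics[name])
               for name, check in SLO_CHECKS.items())
```

The design point is the correlation itself: no single silo's dashboard answers "is the application being delivered?", but the rollup does.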
http://www.bmc.com/blogs/it-silos-why-theyre-a-bad-way-to-organize-it/
The South America microbial food culture market is expected to grow at a CAGR of XX% during the forecast period of 2017–2022. The market, estimated at USD XX billion as of 2017, is projected to reach USD XX billion by 2022. Microbial food cultures (MFC) are the viable bacteria, yeasts, and molds used in food production. These are used as food ingredients in fermented foods and as probiotics in the food industry. The viability of a microbial culture is important at the time of food consumption. MFC have been used for centuries for the conversion of substrates into fermented food products with improved sensory properties. The increased alcoholic beverage consumption, especially in the production of beer, is the primary driving factor for the South American market. MFC have a wide application market, from dairy to bakery, which has increased their consumption demand. Lactic acid bacteria and yeasts have long been used in dairy and bakery applications, capturing a huge market. Moreover, the probiotic market, which has grown at a fast rate in recent years, has triggered the market growth. Consumers are looking for live food cultures that have positive health benefits. Despite the growing market, there is a need for innovative, different strains of cultures with high potential value in food processing. Biotechnological innovations are playing a great role in the advancement of microbial cultures. However, the stringent regulatory framework that places emphasis on documented use is always a restraint for the market. MFC require a strict growth environment, which is sometimes hard to maintain in processing plants, and this hinders market growth. The market has been segmented by type of culture, strain type, application and geography. The MFC market, by type, includes bacteria, yeast, and mold. Bacterial cultures such as lactic acid bacteria hold the largest share in the market due to their significant use in the dairy industry.
However, yeast consumption in beer and other applications is growing in the market. Mold is a fast-growing segment supported by huge cheese consumption. The various microbial culture strains available in the market are single-strain culture, multi-strain culture, and multi-strain mixed culture. By application type, the market is segmented into beverages, dairy, bakery, cereals and others. The beverages market is sub-segmented into alcoholic and non-alcoholic beverages. Beverages hold the largest share in the market due to the massive consumption of dairy and alcoholic beverages. Yeast has a dominant presence in the baked food industry with its vast consumption in sourdough and bread manufacturing. By geography, the South American market has been segmented into Brazil, Argentina and other countries. Brazil is the key player among South American countries. Some of the major players capturing the market include - The focus on reducing production costs through innovative cultures will be the future spotlight for the market. MFC are difficult to preserve for long periods until food consumption, due to the adverse processing environment, which can be an attractive point for research. Key Deliverables in the Study Global Microbial Food Culture Market - Segmented by Type, Growth, Trends and Forecasts (2017 - 2022)
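For readers who want to reproduce the growth arithmetic behind such projections, a CAGR is computed as (end/start)^(1/years) - 1. The figures below are invented for illustration only, since the report's actual values are withheld (XX):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical example: a market growing from USD 1.0 billion to
# USD 1.61051 billion over the 5-year 2017-2022 window implies a 10% CAGR.
```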
https://www.mordorintelligence.com/industry-reports/south-america-microbial-food-culture-market
The CobiT Security Baseline The Control Objectives for Information and Related Technology (CobiT) is a comprehensive set of resources that contains the information organizations need to adopt an IT governance and control framework. The scope of CobiT includes security, in addition to other risks that can occur on an IT infrastructure. CobiT identifies critical steps for information security. The CobiT framework process model consists of 34 generic IT processes grouped into four domains: plan and organize; acquire and implement; deliver and support; and monitor and evaluate. CobiT provides more than 300 detailed control objectives that contain policies, procedures, practices, organizational responsibilities and audit guidelines that enable the review of IT processes against these control objectives. The CobiT framework of four domains and 34 generic IT processes includes important security objectives. These objectives are identified as the CobiT Security Baseline and are organized into 39 essential steps to help organizations plan their information security: 1: Based on a business impact analysis (BIA) for critical business processes, identify data that must not be misused or lost, services that need to be available and transactions that must be trusted. The business must consider the security requirements for: - Who may access and modify data. - What data retention and backup are needed. - What availability is required. - What authorization and verification are needed for electronic transactions. 2: Define specific responsibilities for the management of security and ensure that they are assigned, communicated and properly understood. Be aware of the dangers of delegating too many security roles and responsibilities to one person. Provide the resources required to exercise responsibilities effectively. 3: Consistently communicate and regularly discuss the basic rules for implementing security requirements and responding to security incidents.
Establish minimum dos and don’ts, and regularly remind people of security risks and their personal responsibilities. 4: When hiring, verify with reference checks. 5: Obtain the skills needed to support the enterprise security requirements through hiring or training. Verify annually whether skills are up-to-date. 6: Ensure that no key security task is critically dependent on a single resource. 7: Identify what, if anything, needs to be done with respect to security obligations to comply with privacy, intellectual property rights and other legal, regulatory, contractual and insurance requirements. 8: Discuss with key staff what can go wrong with IT security that could significantly impact the business objectives. Consider how best to secure services, data and transactions that are critical for the success of the business. 9: Establish staff understanding of the need for responsiveness and consider cost-effective means to manage the identified security risks through security practices and insurance coverage. 10: Consider how automated solutions may introduce security risks. Ensure that the solution is functional and that operational security requirements are specified and compatible with current systems. Obtain comfort regarding the trustworthiness of the solution through references, external advice, contractual arrangements, etc. 11: Ensure that the technology infrastructure properly supports automated security practices. 12: Consider what additional security requirements are needed to protect the technology infrastructure itself. 13: Identify and monitor sources for keeping up-to-date with security patches and implement those appropriate for the enterprise infrastructure. 14: Ensure that staff knows how to implement security in day-to-day procedures. 15: Test the system, or major changes, against functional and operational security requirements in a representative environment so the results are reliable. 
Consider testing how the security functions integrate with existing systems. 16: Perform final security acceptance by evaluating all test results against business goals and security requirements involving key staff. 17: Evaluate all changes, including patches, to establish the impact on the integrity, exposure or loss of sensitive data, availability of critical services and validity of important transactions. Based on this impact, perform adequate tests prior to making the change. 18: Record and authorize all changes, including patches (possibly emergency changes after the fact). 19: Ensure that management establishes security requirements and regularly reviews compliance of internal service-level agreements and contracts with third-party service providers. 20: Ensure that third parties provide an adequate contact with the authority to act on security requirements and concerns. 21: Consider the dependence on third-party suppliers for security requirements, and mitigate continuity, confidentiality and intellectual property risk. 22: Identify critical business functions and information, and those resources (e.g., applications, third-party services, supplies and data files) that are critical to support them. Provide for the availability of these resources in the event of a security incident to maintain continuous service. Ensure that significant incidents are identified and resolved in a timely manner. 23: Establish basic principles for safeguarding and reconstructing IT services, including alternative processing procedures, how to obtain supplies and services in an emergency, how to return to normal processing after the security incident and how to communicate with customers and suppliers. 24: Together with key employees, define what needs to be backed up and stored off-site to support recovery of the business (e.g., critical data files, documentation and other IT resources), and secure it appropriately.
At regular intervals, ensure that the backup resources are usable and complete. 25: Implement rules to control access to services based on the individual’s need to view, add, change or delete information and transactions. Especially, consider access rights of service providers, suppliers and customers. 26: Ensure that responsibility is allocated to manage all user accounts and security tokens to control devices, tokens and media with financial value. Periodically review the actions and authority of those who manage user accounts. Ensure that these responsibilities are not assigned to the same person. 27: Detect and log important security violations. Ensure that they are reported immediately and acted upon in a timely manner. 28: To ensure that counterparties can be trusted and transactions are authentic when using electronic transaction systems, ensure that the security instructions are adequate and compliant with contractual obligations. 29: Enforce the use of virus-protection software throughout the enterprise’s infrastructure and maintain up-to-date virus definitions. Use only legal software. 30: Define policy for what information can come into and go out of the organization, and configure the network security systems (e.g., firewall), accordingly. Consider how to protect physically transportable storage devices. Monitor exceptions and follow up on significant incidents. 31: Ensure that there is a regularly updated and complete inventory of the IT hardware and software configuration. 32: Regularly review whether all installed software is authorized and properly licensed. 33: Subject data to a variety of controls to check integrity (accuracy, completeness and validity) during input, processing, storage
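Step 25's need-based access rules can be sketched as a minimal role-permission check. The roles and rights below are hypothetical, chosen only to illustrate the view/add/change/delete model; real deployments express this through directory services and ACL systems:

```python
# Hypothetical role -> permitted-action sets, per step 25's
# need-to-know model (view, add, change, delete).
PERMISSIONS = {
    "clerk":   {"view", "add"},
    "manager": {"view", "add", "change", "delete"},
    "auditor": {"view"},
}

def is_allowed(role: str, action: str) -> bool:
    """Permit an action only if the role's need-to-know set includes it."""
    return action in PERMISSIONS.get(role, set())
```

Unknown roles default to no access, matching the baseline's bias toward explicit, reviewable grants.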
http://certmag.com/the-cobit-security-baseline/
Disclaimer: I am not a statistician. A particular style of telephone company directory allows callers to “dial by name” to reach a person, after playing the matching contacts’ names. In the example used here, input must be given as surname + given name with a minimum of three digits using the telephone keypad (e.g. Smith = 764). To cover all possible combinations, you’d calculate 8^3, or 512 combinations. With a directory that allowed repeated searches in the same call, it would take about seven hours of dialing to cover all possible combinations. Let’s use available data to try and reduce the complexity of the problem while increasing the return on effort - like the giant nerds we are. The 2000 U.S. Census provided raw data on over 150,000 surnames occurring 100 or more times in the population. This puts the lowest occurrence of a surname in the data at 1 in 2,500,000. The uncounted surnames represent 10.25% of people counted in the 2000 Census. This means our data only cover 89.75%* of the U.S. population, but we can safely assume† that the remaining names closely follow the patterns established in the data we do have available. In this analysis, the first three characters of each surname in the Census data were converted into a three-digit combination using a telephone keypad conversion function. The resulting data were manipulated using an Excel pivot table to group matching combinations and sum the percentage of occurrence. This resulted in a table that ranked each combination. To facilitate the creation of interactive charts, this data was then imported into a Google Spreadsheet. Unsurprisingly, the distribution of surnames for the patterns is non-uniform, with favorable spikes. Sorting by rank, we find the best pattern - 227 - should return 2% of the surnames for the average U.S. company. What’s more exciting is that we can use a smaller amount of effort to achieve a larger than expected amount of results. 
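The keypad conversion step described above can be sketched in a few lines (a minimal Python rendering; the original analysis was done in Excel):

```python
# Standard telephone keypad letter groups for digits 2-9.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {ch: digit for digit, letters in KEYPAD.items()
                   for ch in letters}

def surname_pattern(surname: str) -> str:
    """Map the first three letters of a surname to its three-digit pattern."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in surname.upper()[:3])
```

For example, surname_pattern("Smith") yields "764", matching the example above; and since only digits 2 through 9 carry letters, there are 8**3 = 512 possible patterns.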
Searching by ascending rank to return 50% of the surnames, you only need to search 67 patterns, which is 13% of all possible combinations. To return 90% of the surnames you only need to search 241 patterns, which is 47% of all possible combinations. Some milestones are listed in the chart below. The following chart shows the curvilinear relationship of the expected returns versus the effort expended. A test case was performed against an actual U.S. company phone directory, with a medium-sized population that happened to be highly biased to Polish surnames. Approximately 120 names were “randomly” selected based on a known list of employees and the patterns for each were searched. In spite of the bias, the test case correlated well with the expected results. The highest number of surnames (6) was returned by pattern 627 (3rd Rank), the second highest number of surnames (5) was returned by pattern 227 (1st Rank) and the fourth highest number of surnames (3) was returned by pattern 726 (5th Rank). These three data points average to estimate a total population of 300, which is close to the expected size of the company. The U.S. Census includes racial data, which may be helpful in tailoring to certain populations, but surname data by state, which does not appear to be available, would be more helpful. A geographic breakdown could improve results in the test case. · Three patterns do not appear in this data: 577, 957, 959. · Sorted by rank, the last 10% of surnames require 53% of the effort. · Surname data from the 2010 Census was not compiled and is not available. · Unlike the U.S., Canada has a large population of 2-letter surnames. · Canada’s government does not release surname data. Get The Full List Thanks to Nick Roberts of Foundstone for supplying a Canadian point of view on the subject. * Two-letter surnames were excluded. This reduces the coverage of the analysis by 0.25% to 89.50% of the total population, a negligible change.
Since entering these surnames would require the first letter of the given name, these should be analyzed separately for the distribution of given names, with some consideration to the biases of ethnicity. The U.S. Census does not consider surnames with one character valid. † Some references in this document extrapolate the Census data to include 100% of the population for clarity. The spreadsheet available lists percentages of both the sample data and the population as a whole for accuracy.
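The rank-versus-coverage milestones above can be reproduced from any ranked frequency table. A minimal sketch, using made-up pattern shares rather than the Census-derived data:

```python
def patterns_for_coverage(pattern_shares: dict, target: float) -> int:
    """Count how many top-ranked patterns are needed to cover `target`
    (a fraction of all surnames), searching the best patterns first."""
    covered = 0.0
    for count, share in enumerate(
            sorted(pattern_shares.values(), reverse=True), start=1):
        covered += share
        if covered >= target:
            return count
    return len(pattern_shares)
```

With the real frequency table this reproduces figures like "67 patterns cover 50% of surnames".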
http://blog.ioactive.com/2012/09/completely-unnecessary-statistical.html
On Thanksgiving morning in 1993, a group of hunters strode through tall trees and dense undergrowth near Odessa, Del. In a clearing, just 150 feet off Route 9, they stumbled upon a small human skeleton with grass growing through it. Twenty years later, the skeleton remains a mystery. She was female, just a hair above five feet tall, white, between 20 and 45 years old, and apparently without anyone looking for her. Police surmised that the woman had been killed, stripped and dumped in that field some three months prior. With no clothing, jewelry or obvious report of a missing person to go by, who she was or where she came from were beyond simple reckoning. That was the end of a woman’s story and the beginning of the story of Unidentified Person (UP) #2212 of the National Missing and Unidentified Persons System. She is among 40,000 others whose identity is unknown. Now, an innovative marriage of art and technology offers new hope for giving names back to these individuals — but it also raises important questions around science versus interpretation. An information artist named Heather Dewey-Hagborg has in her lab a DNA sample of UP #2212. She’s going to use it to make a bust of what the deceased woman may have looked like. There are already sketches and clay models of UP #2212 that were rendered by artists and anthropologists who made scientific guesses based on the victim’s bones. But a 3-D model informed by genetic traits could be the extra piece of data that finally reveals the woman’s identity. Dewey-Hagborg doesn’t usually work with the Delaware Office of the Chief Medical Examiner, but Deputy Director Hal Brown saw her art and asked her to apply it to his field as an experiment. Brown had read about a project called Stranger Visions, in which Dewey-Hagborg explored the concept of how much information could be obtained from a single strand of hair or other objects, such as a piece of discarded chewing gum or a cigarette butt, that contain DNA. 
Dewey-Hagborg collected these samples — or as some would call them “forensic evidence” — from around New York City, used DNA analysis to identify the physical traits of each person to whom the objects belonged, and used 3-D modeling software to create her vision of what those people may have looked like. Finally, she used a 3-D printer to create physical models of the heads she had created to go alongside the evidence. Photo: Information artist Heather Dewey-Hagborg uses DNA analysis to identify physical traits. Photo by Matt Greenslade Photography. “It’s similar to me being a sketch artist except instead of being a sketch artist, I work with code and create models and physical models,” Dewey-Hagborg said. With unlimited funding, Dewey-Hagborg said she would have access to about 50 identifying traits taken from DNA analysis, which could lead to a fairly distinct portrait. As it is, she has access to eye color, gender, size of the nose, tendency toward obesity, ethnicity and maternal ancestry. When combined, these traits give her the ability to create a face that bears a “family resemblance” to the real individual. The final detail: She paints the eyes by hand. Exactly what this technology means and where it’s headed are questions Dewey-Hagborg has put much thought into but hasn’t yet reached any definite conclusions about. “I’m mainly just hoping to point to that and start a conversation about it. There is no regulation about the use of this technology in law enforcement, and there is a whole slew of potential problems with it.” The portraits she creates are artistic interpretations based on data that was found scientifically, but often, she said, people hear “DNA” and assume that what they’re looking at is the end-all, be-all. “It has this added aura of objectivity because it comes from science,” she said. And that could be abused. 
Misconceptions about science and mathematical probability, along with public belief in the infallible integrity of DNA evidence, have led to the conviction of innocent people. The FBI announced in July that it will review more than 2,000 cases in which DNA testing of hair samples led to convictions, on death row and elsewhere. The FBI didn’t report that any scientist had used flawed techniques, but rather the bureau will examine whether the reports and testimony connected to DNA testing accurately reflected the science. Image: Artist Heather Dewey-Hagborg's "Stranger Visions" project began as an attempt to see how much she could learn about a person from a single strand of hair. With direct-to-consumer genetic testing becoming available for relatively low prices through companies like 23andMe, there is a trend toward genetic information becoming more accessible. What was just decades ago a relative mystery is now becoming a matter of public record. It’s possible that within 100 years, DNA scanners will be just another smartphone feature. As the Human Genome Project proceeds and advances are made in forensic DNA technology, more information can be obtained from smaller samples and the scientific community’s general understanding of DNA is improving, said John Butler, a fellow with the National Institute of Standards and Technology. However, some areas of knowledge are growing faster than others. “The weak link right now is the genetic-to-phenotype information,” Butler said. “You can get the DNA from the sample, but interpreting that data and making sense of that data is something that’s very much in the infancy of the abilities of what we can do right now.” Butler agreed with Dewey-Hagborg’s assessment that DNA technology isn’t the 100 percent accurate science that it’s often portrayed as in TV shows and movies.
“There’s a test called IrisPlex right now that will do six different points in the genome, and it can tell you brown eyes versus blue eyes accurately about 90 percent of the time,” he said. “Still, that’s not even close to what you would want to do if you’re going to spend all this time making a 3-D model of somebody.” A company called DNAPrint Genomics, which went out of business in 2009, offered a service for law enforcement called DNAWitness. The company used DNA ancestry markers to inform customers of a suspect’s skin color, based on the sample it received. Famously, Louisiana law enforcement used the service while searching for the person who murdered Pamela Kinamore in 2002, along with a string of other victims. Initially police believed the serial killer was white, based on eyewitness accounts and the tendency of serial killers to kill within their own ethnic groups (Kinamore was white). But DNAPrint Genomics revealed that the person police were looking for was black, and that information eventually tied the murder of Kinamore to Derrick Todd Lee, now known as the Baton Rouge Serial Killer. Although the limited information offered by DNAPrint Genomics was useful in this case, Butler said the technology didn’t have enough specificity to warrant widespread use in law enforcement. A suspect’s skin color is often already known anyway, and even if it’s not, many areas are too racially diverse for information about skin color to be very useful, he said. For phenotyping, or using DNA analysis to determine someone’s physical traits, to be useful, the profile provided by DNA analysis needs to be more comprehensive, specific and accurate than what science has access to today. If a law enforcement agency uses DNA technology today, it’s almost always for matching samples against known DNA or fingerprints in a national database, not phenotyping, Butler said. 
“Right now, there’s not a single police lab in the United States, to my knowledge, that’s doing anything with this type of technology,” he said. But if what Dewey-Hagborg is doing were to be developed further and a close-to-real portrait of someone could be made from DNA alone, he said, there isn’t a law enforcement agency in the country that wouldn’t want to use it. Photo: 3-D models have a "family resemblance" to the real individual. Photo by Matt Greenslade Photography. Forensic artists at the National Center for Missing and Exploited Children (NCMEC) use their own techniques to chip away at the roughly 40,000 cases of missing and unidentified people open today. Using software developed by SensAble Technologies, called FreeForm, forensic artists and anthropologists try to re-create the faces of the deceased by taking cues from CT scans of skeletal remains. Joe Mullins, a forensic artist with NCMEC, along with anthropologist David Hunt, used the software to generate faces for two 2,000-year-old mummies: an 8-year-old boy and a 3-year-old boy. 3-D busts of the faces were displayed as part of the Eternal Life in Ancient Egypt exhibit at the Smithsonian’s National Museum of Natural History. The facial models taught historians that the mummies were of West Asian or Middle Eastern origin, and that the mummified children’s facial features were more refined than originally thought, a historical breakthrough. Computer modeling uses many of the same techniques as clay modeling, Mullins said. Artists place tissue depth markers and layers of muscle and skin to build a likeness, but using computers eliminates the need to put a fragile old skull in the mail or risk breaking a skull while molding clay on top of it. But unlike Dewey-Hagborg’s work, Mullins said, forensic portraits should leave no room for artistic interpretation — that’s why their models are in black and white. 
“If you see a facial reconstruction from skeletal remains and it is in bright vivid color with blue eyes, olive skin and beautiful blond hair, that person is psychic or has some fancy DNA testing that nobody else has,” Mullins said, adding that as an artist himself, he finds it a constant struggle to not take artistic license while creating a model and reminds his students to not fall into that trap. Photo: Dewey-Hagborg describes herself as a high-tech sketch artist. Photo by Matt Greenslade Photography. The work of Mullins and NCMEC has led to the identification of children and adults, including one victim of Gary Ridgway, the so-called Green River Killer, who is estimated to have killed more than 90 people. NCMEC used clay models until about 2007, but now uses digital modeling exclusively. Mullins and his team of forensic anthropologists can get a lot of information just by looking at a skull. They can discern what the person’s smile may have looked like by looking at the teeth, identify whether the victim’s ear lobes were attached or hanging, and identify facial characteristics such as nose width, lip thickness and eye shape. Mullins would be grateful if DNA forensics could add enough information to identify all the skulls and put him out of a job. “That would be awesome,” he said. “Having that extra information for us takes the ambiguity out of the equation. That narrows the field of when we’re putting a face together and it’s going to be more accurate. That would be an incredible resource for us to use.” NCMEC publishes images of the facial models, hoping they will spur recognition with the public. If someone comes forward with a lead, NCMEC can use DNA matching to see if the lead is correct, but beyond that, DNA isn’t used in its modeling process, Mullins said. “With the facial reconstructions, it’s already a horrible, tragic story by the time it gets here because you open up a box and it’s an 11-year-old little girl, or what’s left of her,” Mullins said. 
They don’t call it “closure,” he said — being able to finally solve a case just gives them the opportunity to provide a family with answers. “We have these coined expressions that we say during media interviews,” Mullins said, “but there are no words to explain how rewarding it is.”
Mourtzas N.D., 16-18 Kefallinias Str.
Quaternary International | Year: 2012
Palaeogeographical reconstruction of the seafront of the ancient city of Delos is based on the recording and study of all available indicators of sea level change. Contemporary submerged beachrock formations and presently submerged areas of ancient human activity, including ancient coastal constructions, indicate the phases, vertical direction, extent and time frame of the changes. The sea level along the coasts of Delos has risen by a total of 2.15 m since the end of the Hellenistic period. This change occurred during two successive distinct phases of submersion, initially by 1.35 m and then by another 0.80 m. The sea's transgression into the ancient coastal zone by a width of at least 30 m radically altered parts of its geomorphology and resulted in the submersion of the ancient sea defences. © 2011 Elsevier Ltd and INQUA.

Mourtzas N.D., 16-18 Kefallinias Str. | Kissas C., 42 Chaimanta Str. | Kolaiti E., 16-18 Kefallinias Str.
Quaternary International | Year: 2014
Study of the architectural, morphological and constructional features of the coastal harbour installations of the ancient foreharbour of Lechaion indicates that they were built or rebuilt during the period of the Roman domination of Corinth, and has facilitated the reconstruction of the vertical movements and the palaeogeography of the coast. On the basis of the current position of the sea level indicators, including beachrocks, fossilized uplifted and submerged marine notches, and ancient coastal harbour installations, and the relationship between them, the sea level during the Roman operation of the harbour was determined to be 0.90 m lower than at present. Furthermore, the subsequent abandonment of the harbour and the siltation of its constructions were determined. During two successive tectonic subsidence co-seismic events, the sea level rose by 2.0 m in total, 1.60 m during the first event and 0.40 m during the second one. A strong uplift tectonic event followed and the sea level dropped by 1.10 m. This regression of the sea was responsible for the present shoreline morphology. Determination of the sea level fluctuation at the shore of the ancient harbour of Lechaion allowed the palaeogeographical reconstruction of the coast in different stages related to these changes. © 2013 Elsevier Ltd and INQUA.

Kolaiti E., 16-18 Kefallinias Str. | Mourtzas N.D., 16-18 Kefallinias Str.
Quaternary International | Year: 2015
Along the Peloponnesian coast of the Saronic Gulf and on the coasts of Aegina and Poros islands, submerged coastal geomorphological features related directly to submerged ancient coastal constructions indicate three distinct sea levels. Submerged tidal notches incised in the carbonate basement, beachrocks formed in the intertidal zone and archaeological indicators, such as the ancient harbour installations in Kenchreai and Epidaurus and on Aegina island, the extended coastal buildings and constructions in Agios Vlasis, Psifta and Palaiokastro-Methana, and Vagionia on Poros island, are used to determine the age and magnitude of submersion and the extent of the Upper Holocene marine transgression. By the correlation of geomorphological, historical and archaeological indications, three distinct sea levels were identified, at -3.30 ± 0.15 m, -0.90 ± 0.15 m and -0.55 ± 0.05 m. The initial change in sea level occurred definitely after AD 400 ± 100. The intermediate change is dated between AD 1586 and 1839, and the most recent change after 1839. The sea transgression followed a long period of sea level stability, which lasted at least 2,200 years, from the Middle Bronze Age to the Late Roman period. © 2015 Elsevier Ltd and INQUA.
The cc command invokes the Cray C compiler. The cc and c99 commands accept C source files that have the .c and .i suffixes; object files with the .o suffix; library files with the .a suffix; and assembler source files with the .s suffix. The format of the cc and c99 commands is as follows:

cc or c99 [-c] [-C] [-d string] [-D macro[=def]] [-E] [-g] [-G level] [-h arg] [-I incldir] [-l libfile] [-L libdir] [-M] [-nostdinc] [-o outfile] [-O level] [-P] [-s] [-S] [-U macro] [-V] [-Wphase,"opt..."] [-X npes] [-Yphase,dirname] [-#] [-##] [-###] files [file] ...

See Section 2.5 for an explanation of the command line options.
The backbone network, also called the network backbone, is an important architectural element of a network that carries the bulk of the network traffic. It provides the highest-speed transmission paths and the longest distances for the exchange of data by interconnecting different local area networks (LANs) or subnetworks. At the local level, a backbone ties together diverse LANs in offices, colleges or office buildings. When several LANs are interconnected on a large scale, the result is a metropolitan area network (MAN) or other wide area network (WAN), or even the Internet. The first Internet backbone link was made between UCLA and SRI on October 29, 1969. Today, in the United States, most backbones are run by telecommunications companies such as AT&T, BellSouth, Cogent, Qwest, Level 3, MCI/WorldCom, Sprint, and Time Warner. Backbones are primarily used in medium to large-sized networks, such as a building or a group of buildings on a campus. These backbones generally fall into two basic categories – distributed backbone and collapsed backbone. In addition, parallel backbones and serial backbones are applied in some networks. These four types of backbone are as follows:
Distributed backbone – A distributed backbone has a core consisting of multiple switches or routers chained together, typically in a ring. It allows for simple expansion and limited capital outlay for growth, because more layers of devices can be added to existing layers.
Collapsed backbone – A collapsed backbone has a central device at the hub of a star network. In medium to large networks, this central device is a chassis switch. It greatly facilitates the provisioning of appropriate bandwidth to the connected nodes. Each connected distribution node has its own dedicated connection into the core.
Parallel backbone – A parallel backbone consists of two cables routed between the routers and switches. 
While there are additional initial costs to installing a parallel backbone, the benefits can quickly outweigh these costs.
Serial backbone – A serial backbone is the simplest kind of backbone network, consisting of two or more internetworking devices connected to each other by a single cable in a daisy-chain fashion. A serial backbone topology could be used for enterprise-wide networks.
It seems that as soon as we get settled with our 720p and 1080p HDTV displays, talk starts up about even higher resolution images. Demonstrations of 2K and 4K display technologies have knocked viewers’ virtual socks off. Is there a limit to how high “high definition” can become? One problem that doesn’t get discussed much, however, is how to move all that high-resolution data to the television screen. A 3,840 by 2,160 pixel image (sometimes called “Quad Full HD”) requires four times as much data as a 1080p frame. How can you squeeze that through a broadband pipe, let alone broadcast it over the airwaves? The problem is that bit-mapped images can only be compressed so much before you start to get noticeable artifacts when the image is decompressed back to its original size. You may have noticed this problem with JPEG photos that have been compressed too aggressively. Now there is hope from an unexpected direction. The Society of Motion Picture and Television Engineers (SMPTE) has been quietly working on the problem, and has come up with a solution. In a back-to-the-future move, the organization’s latest standard abandons the rasterized approach that has been with us since the first days of broadcast television, and instead adopts a vector-based design. A rasterized image is scanned dot by dot, row by row. On a 1080p display, this means 1,080 rows of 1,920 dots each. That’s a lot of data. A vector image takes a different route. Instead of scanning the image, it defines the image as a series of lines that can be described by mathematical formulas. It can be as simple as drawing a straight line that starts here and ends there, or it can take on complex curves. These lines can form the boundaries of areas that are then filled with a texture that is mapped from the original image. These vectors can be assigned a variety of characteristics such as color and thickness, and can be used to recreate the original image. Now here’s where it gets interesting. 
Because each element of the image is actually a geometrical definition, it can be infinitely scaled to match the resolution of the display on which it is shown. Each display will show as much detail as it can, based on its actual resolution. And because all the information exists as formulas, it can be condensed into a much smaller data stream than raster-graphics can. Some estimates say that a typical movie image can be defined as a vector image that takes up less than 10% of the space required for a compressed raster image, but with no loss of the original image content. It gets even better. Because these vector definitions can be defined as objects, their behavior can be tracked from frame to frame. Instead of sending all the vector information for each frame, the object definition can be sent just once along with instructions about how it moves in subsequent frames. This makes it possible to reduce the data stream by another order of magnitude, so that it is just 1% the size of the equivalent compressed raster image. The beauty of this new approach is that no changes need to be made in the display panels already in use. The controllers simply need to be modified to interpret this vector data and convert it into a rasterized image in the native resolution for the display. Existing HDTVs will be able to use a small external box with an HDMI connection to take advantage of this new technology, while new displays with even higher resolutions will be able to take full advantage of the same data stream. The Brooklyn Bridge Video Lab is one of the first companies to develop the code for video processing chips required to support this new standard. The company has also announced that it will go public shortly, so you can buy a piece of this technology when the IPO happens.
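The raster-versus-vector idea described above can be sketched with a classic line-rasterization routine. This is only a toy Python illustration (Bresenham's line algorithm), not the scheme from any actual SMPTE standard: the point is that the vector definition stays four numbers while the pixel count grows with the target display.

```python
# Toy illustration: a vector primitive is resolution-independent;
# rasterization expands it into however many pixels the display needs.

def rasterize_line(x0, y0, x1, y1):
    """Bresenham's line algorithm: expand a 2-endpoint vector into pixels."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        e2 = 2 * err
        if e2 >= dy:
            if x0 == x1:
                break
            err += dy
            x0 += sx
        if e2 <= dx:
            if y0 == y1:
                break
            err += dx
            y0 += sy
    return pixels

# The same 4-number vector definition, scaled to two resolutions:
low = rasterize_line(0, 0, 3, 2)      # coarse display: 4 pixels
high = rasterize_line(0, 0, 30, 20)   # 10x the resolution: 31 pixels
```

The vector form is the same four numbers either way; only the rasterized output grows with resolution, which is the intuition behind the claimed bandwidth savings.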
How to Write Clean Code

Even bad code can function. But if code isn't clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. In this article we will take the key concepts and points around writing clean code, referenced from the amazing book The Clean Coder: A Code of Conduct for Professional Programmers by Robert Cecil Martin (aka Uncle Bob).

How should we name our objects, functions, variables and methods?
- Names should be chosen carefully and thoughtfully.
- Communicate your intent. If you require a comment to describe your name, you have chosen a bad name.
- Use pronounceable names. People will need to talk about your code, so make it easy for them.
- Code should read like well-written English: write each line of code so it reads like a sentence. The names you choose should lend themselves to this, i.e.:
  - Classes are named with a noun.
  - Variables are named with a noun, e.g. user_account_id = 124816
  - Methods/functions are named with a verb.
  - Predicates should return a Boolean, e.g. if x == 2:
- Ensure you stick to the rule of naming within the realms of your scope:
  - Variable names should be extremely short if their scope is extremely small.
  - Variable names should be long if they are in a big, long scope, such as a global variable.
  - Function names should be short if they have a long scope.
  - Function names should be long if they have a short scope.
  - Class names should be short, as in the realms of Python they can be considered public.

You mentioned nouns and verbs - can you explain what they are?
- Noun: a word that refers to a person, place, thing, event, substance or quality, e.g. 'nurse', 'cat', 'party', 'oil' and 'poverty'.
- Verb: a word or phrase that describes an action, condition or experience, e.g. 'run', 'look' and 'feel'.

How should I construct my functions and how should they operate?
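To make the naming rules above concrete, here is a hedged Python sketch; every name in it is invented purely for the example.

```python
class UserAccount:                      # a class is named with a noun
    """Toy example of the naming rules; nothing here is production code."""

    def __init__(self, user_account_id):
        self.user_account_id = user_account_id  # a variable is a noun
        self.balance = 0

    def deposit(self, amount):          # a method is named with a verb
        self.balance += amount

    def is_overdrawn(self):             # a predicate reads like a question...
        return self.balance < 0         # ...and returns a Boolean

account = UserAccount(user_account_id=124816)
account.deposit(50)
```

Read aloud, account.deposit(50) and if account.is_overdrawn(): come close to well-written English sentences, which is the test the rules above aim at.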
As a general rule of thumb, functions should:
- be small, very small
- do just ONE thing

How do you ensure your function is only doing ONE thing? Ensure you can extract no further functions from the original function. Once you can extract no more, you can be sure the function is doing one thing, and one thing only.

- You should not pass Boolean or None types into your function.
- No more than 3 arguments should be passed into your function.
- It is better to raise an exception than to return an error code.
- Custom exceptions are better than returning error codes.
- Custom exceptions should be scoped to the class.

Command Query Separation (CQS)

This is one of my favorite disciplines within functions, as it is a great way to create clean functions that only do ONE thing:
- Functions that change state should not return values, but can throw an exception.
- Functions that return values should not change state.
- It is bad for a single function to know the entire structure of the system.
- Each function should only have limited knowledge of the system.

If you want your code to be clean, then TDD should be adopted. Let's look at why, along with the laws of TDD and the TDD process.

The benefits of TDD are that it:
- promotes the creation of more efficient code
- improves code quality
- ensures the minimum amount of code is used
- prevents code regression

There are 3 laws of TDD:
- Write no production code until you have created a failing unit test.
- Write only enough of a test to demonstrate a failure.
- Write only enough production code to pass the test.

The process for TDD is:
- RED - Create a test and ensure it fails
- GREEN - Write production code to ensure the test passes
- REFACTOR - Refactor your code and ensure the tests still pass

Good architectures are NOT composed of tools and frameworks. Good architectures allow you to defer the decisions about tools and frameworks, such as UIs and databases. But how is this achieved? 
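The command/query split described above can be sketched in a few lines of Python; the Account class and its custom exception are invented for the example.

```python
class InsufficientFunds(Exception):
    """A custom exception scoped to the class, instead of an error code."""

class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        # Command: changes state, returns nothing.
        self._balance += amount

    def withdraw(self, amount):
        # Command: changes state, returns nothing, but may raise.
        if amount > self._balance:
            raise InsufficientFunds(amount)
        self._balance -= amount

    def balance(self):
        # Query: returns a value, changes nothing.
        return self._balance
```

Because commands never return values and queries never mutate, a caller can never accidentally ask a question that changes the answer; each function does one thing.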
This is achieved by building an architecture that decouples you from them - by building your application not on your software environment, but around your use cases. Decoupling your application allows changes to be made far more easily than with a single monolithic application. Decoupling also allows clear business decisions to be made about each part of your application, i.e. the time/money spent on the UI, the API and the use case, i.e. the core application.

What is a Use Case?
A use case is a list of actions or event steps to achieve a goal. It is important to note that within our use case no mention is made of databases or UIs. Below is an example:

-- Create Order --
- Data: customer-id, customer-contact-id, payment-information, shipment-mechanism
- Primary Course:
  1. Order clerk issues Create Order command with above data
  2. System validates all data
  3. System creates order and determines order-id
  4. System delivers order-id to clerk
- Exception Course: Validation Error - System delivers error message to clerk

As more use cases are defined, partitioning is required to allow clear separation within your system. This design is also known as EBI (Entity, Boundary, and Interactor).
- Business Objects (Entities) - Entities are application-independent business rules. The methods within the object should NOT be specific to any of the systems. An example would be a Product object.
- Controllers (Interactors) - Use cases are application-specific and are implemented by interactor objects; it is the goal of the interactor to know how to call the entities to reach the goal of the use case. An example for our use case would be CreateOrder.
- User Interfaces (Boundaries) - A boundary is the interface that translates information from the outside into the format the application uses, as well as translating it back when the information is going out.

Clean Code - Robert Cecil Martin
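The EBI partitioning and the Create Order use case described above can be sketched as follows. Only the CreateOrder name comes from the text; the other names and the dict-based request/response format are assumptions made for this illustration. Note that, like the use case itself, nothing in it mentions a database or a UI.

```python
import itertools

class Order:
    """Entity: an application-independent business object."""
    def __init__(self, order_id, customer_id):
        self.order_id = order_id
        self.customer_id = customer_id

class CreateOrder:
    """Interactor: implements the use case by driving the entities."""
    _ids = itertools.count(1)  # stand-in for "system determines order-id"

    def execute(self, request):
        if not request.get("customer_id"):
            raise ValueError("validation error")     # exception course
        order = Order(next(self._ids), request["customer_id"])
        return {"order_id": order.order_id}          # delivered to clerk

def handle_create_order(form_data, interactor=None):
    """Boundary: translates outside input into the application's format."""
    interactor = interactor or CreateOrder()
    try:
        return interactor.execute({"customer_id": form_data.get("customer")})
    except ValueError:
        return {"error": "order data failed validation"}
```

Because the boundary owns all translation, the interactor and entity could be reused unchanged behind a web UI, a CLI, or a test harness.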
Elliptic Curve Cryptography (ECC)
Security providers work continuously to innovate technology to upend hackers who are diligent in their efforts to craft clever new ways to steal data. Advanced ECC, while not new, uses a different approach than standard RSA. RSA draws its strength from increasingly large keys, which take more time to process. ECC, on the other hand, relies on the difficulty of finding a discrete logarithm on a random elliptic curve. The larger the elliptic curve, the greater the security. The elliptic curve approach improves encryption strength while the smaller key increases the performance of digital certificates. ECC uses a different algorithm to generate keys that are, bit for bit, exponentially stronger than RSA keys. The smaller keys mean less data is being exchanged between the server and the client, which translates to increased network performance. This is especially important for websites that experience a high level of traffic.
Is ECC Right for You? Entrust SSL Certificates using ECC technology are ideal for scenarios where server-load performance is critical, and site visitors and the Web/app server are known to be compatible with ECC keys.
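The key-size difference can be made concrete with the comparable-strength figures commonly cited from NIST SP 800-57. The table below is a general reference illustration, not Entrust-specific data:

```python
# Commonly cited comparable key strengths (NIST SP 800-57):
# the same security level needs a far smaller ECC key than RSA modulus.
comparable = [
    # (symmetric-equivalent security bits, RSA modulus bits, ECC key bits)
    (112, 2048, 224),
    (128, 3072, 256),
    (192, 7680, 384),
    (256, 15360, 521),
]

for security, rsa_bits, ecc_bits in comparable:
    print(f"{security}-bit security: RSA {rsa_bits:>5} bits "
          f"vs ECC {ecc_bits:>3} bits (~{rsa_bits / ecc_bits:.0f}x smaller)")
```

The gap also widens as the security level rises, which is the sense in which ECC keys are "exponentially stronger" bit for bit.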
Virus:Boot/Ripper infects floppy disk boot records and hard disk Master Boot Records (MBRs). The virus is encrypted with a variable key, which is quite rare among boot sector viruses. F-PROT for DOS v3.0, 3.01, 3.02 and 3.03 have a bug which causes the disinfection of Ripper to fail. This might cause a machine to become unbootable. Do not use these versions of F-PROT to disinfect this virus; contact Support instead. Ripper contains two encrypted strings:
- "FUCK 'EM UP"
- "(C)1992 Jack Ripper"
Ripper was found in November 1993 in Norway; however, it is believed to be of Bulgarian origin. The virus will only infect a hard drive when an attempt is made to boot from an infected diskette. Once the virus has infected the hard drive, all non-write-protected floppies used in the machine will be infected. Ripper is two sectors long, and it stores the original boot sector in the last sector of the root directory. It also reserves one sector before that for its own code. Ripper has stealth capabilities; the virus code cannot be seen in boot records while the virus is active in memory. Ripper contains a destructive activation routine. It corrupts disk writes at random - approximately one disk write in 1,000 is corrupted. The virus swaps two words in the write buffer, causing slow and in some cases difficult-to-notice corruption of the hard disk.
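The corruption mechanism described above can be modeled with a short sketch. The real virus is x86 boot-sector code; this Python version only illustrates the described behavior (swap two 16-bit words in roughly one write buffer out of 1,000):

```python
import random

def corrupt_write(buffer, rate=1 / 1000, rng=random):
    """Model of Ripper's payload: maybe swap two random 16-bit words
    in a sector buffer (a bytearray of at least 4 bytes) before it is
    'written' to disk."""
    if rng.random() < rate:
        n_words = len(buffer) // 2
        i, j = rng.sample(range(n_words), 2)  # two distinct word indices
        buffer[2*i:2*i + 2], buffer[2*j:2*j + 2] = \
            buffer[2*j:2*j + 2], buffer[2*i:2*i + 2]
    return buffer
```

At a 1-in-1,000 rate most writes pass through untouched, which is why the damage accumulates slowly and, as noted above, can be difficult to notice.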
We've all been there. Following an injury, you or a family member gets an X-ray or MRI, but when you follow up with a specialist a few weeks later, he or she can't access the study (unless, of course, you made a special trip to pick up a CD from the other care provider). In this age of rapid information sharing, it's hard to understand why this still happens. As a radiologist, I’ve closely watched the movement among my colleagues that is being referred to as “accountable imaging.” The term refers to the idea that all those involved in diagnostic imaging should be accountable for both the effect of the study on patient outcome and the cost of imaging services. This began as a response to a steep increase in imaging seen between 2000 and 2006, when Medicare imaging costs doubled, from $5.92 billion to $11.91 billion. Several studies indicated that too much of the testing was done inappropriately, exposing patients to unnecessary radiation and follow-up procedures. Since then, a nationwide effort to educate physicians (Choosing Wisely, a multispecialty initiative) on appropriate use of imaging studies helped drive a decrease in Medicare imaging spending, which dropped to $9.45 billion by 2010. There are still areas of medicine where imaging use is problematic, but we're seeing signs of improvement.
Lack of access = redundancy
One area of concern, for which there is little hard data, is redundant imaging done because the ordering physician cannot access the original study. This was more common before the use of digital imaging systems, when the only copy of a study was a film residing in a physician or hospital medical record archive. The use of digital imaging made it easier to provide a copy (like the CD cited in the example above), and online access to reports has further reduced the problem. But it can still happen. If a patient loses the CD, or sees multiple physicians who are not part of the same health system, access to prior studies may be difficult. 
Or when a patient sees a new physician after moving from one city to another and does not have his or her medical records moved to the new physician. One reason that this matters is that many imaging studies involve radiation – sometimes, a lot of radiation. CT scans are at the center of concern, because they can expose patients to radiation levels that can be as much as 10 times that of a simple chest x-ray. While they can provide critical information, done too often over a lifetime, CT scans can raise the risk of cancer and other complications. The second reason of concern is cost. While an MRI doesn’t carry the same radiation risk as a CT scan, it may carry a high price tag. If you are concerned about cost as well as quality, unnecessarily repeating an MRI is a problem. For the welfare of the patient and the financial viability of the healthcare system, solving the access issue is both a practical and an ethical imperative. As physicians, we subscribe to the foundational tenet of “first, do no harm.” So if we have the ability to solve this problem and prevent harm to patients, we have an ethical duty to pursue it. The question is: Do we have the technology to solve this problem? Yes and no. Unifying the patient record The ultimate solution is the creation of a unified, digital patient record, containing the full history of all healthcare encounters, including diagnostic images and reports. Stored in the cloud and quickly accessible over the internet by authorized caregivers, such a record could virtually eliminate the need for redundant imaging studies. It could also give a much better understanding of a patient’s condition, because the image often tells a more complete story than the report that accompanies it. Ideally, the patient (or guardian) would have an authorization code, which he or she would give to healthcare providers. 
The doctor (or other authorized professional) could access the complete record, update it as necessary, and send it back to the cloud for storage. Recent advances in interoperability are making it possible to integrate diagnostic images with electronic medical records (EMRs), and we should see significant growth in this area in the near future. And the use of cloud storage for EMRs is increasing, which will lead to wider access for all authorized healthcare providers. Work remains to be done, however. While there is a uniform standard for diagnostic image formats (although it is not always precisely applied), there is no uniform standard for EMR applications. So, while you can integrate an image with an EMR, universal sharing is still a ways off. But we are seeing progress. Vendor-neutral archives, which translate proprietary image formats to a universal format (DICOM), are becoming more widespread, which will make images easier to integrate with EMRs. And EMR developers have begun a project, The CommonWell Health Alliance, to create standards that would allow easy interoperability, though the work is still in the early stages. Cloud-based imaging archives, possibly linked to the patient’s own personal health record, could also help to deal with this issue. The question of how and where a universal patient record would be stored, and who would pay for that infrastructure, remains to be answered. So the answer is yes, we have the technology, but no, we don’t yet have an infrastructure to make it work. But based on recent developments, it seems that the practical and ethical imperatives to cut costs and improve health are moving the industry in the right direction.
Source: http://www.computerworld.com/article/2475430/healthcare-it/why-integrating-emrs-and-digital-images-is-an-ethical-and-practical-imperative.html
This is the first in a series of Market Updates on the creation of accessible documents. It concentrates on the creation of accessible PDF files from word processing and desktop publishing systems. Document creation is one of the major parts of personal productivity. An accessible electronic document is one that can be read easily by a person with a disability. The possible disabilities include various degrees of vision impairment, musculoskeletal disorders (which limit the ability to use traditional controls such as a mouse), dyslexia, and learning difficulties. Documents in a language other than the reader's native language can also be difficult to access, and with the internationalisation of the web this is becoming a more common problem. Some of the issues and solutions are shared with access for people with disabilities, so it will also be considered in this report.
Source: http://www.bloorresearch.com/research/market-update/creation-of-accessible-documents/
This article is an introduction to different default gateway redundancy solutions. These technologies enable devices on IPv4 local subnets to have more than one default gateway configured, or at least a configuration that comes partway toward an ideal redundant setup. The idea behind this article is to introduce a set of articles that will explain different redundancy solutions based on IPv6; some of those technologies will come into use in the future, while others already exist and are recommended from day one of an IPv6 implementation. The default gateway is the next-hop address of the device that leads packets out of the local LAN segment. If packets are destined to an IP address outside the local subnet, the PC forwards them, usually to a router, which knows where to forward them next so that they reach their destination.
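The forwarding decision just described can be sketched with the standard library's `ipaddress` module; the addresses below are purely illustrative:

```python
# Sketch of a host's forwarding decision: if the destination is on the
# local subnet, deliver directly; otherwise hand the packet to the
# default gateway. Addresses are illustrative examples only.
import ipaddress

def next_hop(destination: str, local_subnet: str, default_gateway: str) -> str:
    """Return the next-hop address a host would use for `destination`."""
    dst = ipaddress.ip_address(destination)
    subnet = ipaddress.ip_network(local_subnet)
    if dst in subnet:
        return destination        # on-link: deliver directly
    return default_gateway        # off-link: forward to the gateway

# A host on 192.168.1.0/24 whose default gateway is 192.168.1.1:
print(next_hop("192.168.1.50", "192.168.1.0/24", "192.168.1.1"))  # 192.168.1.50
print(next_hop("8.8.8.8", "192.168.1.0/24", "192.168.1.1"))       # 192.168.1.1
```

Real hosts consult a routing table with longest-prefix matching; this two-branch version captures only the single-subnet, single-gateway case the article describes.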
Source: https://howdoesinternetwork.com/tag/default-gateway
Women in technology who wish to blow the whistle on malfeasance they witness but who fear the consequences of reporting such information have good reason to be cautious. New research shows that female whistleblowers experience more retaliation than male whistleblowers. The study, Antecedents and Outcomes of Retaliation Against Whistleblowers: Gender Differences and Power Relationships, sought to identify factors that determine whether a whistleblower would face reprisal. In particular, the study examined whether the whistleblower's gender and level of power in the organization increased or decreased the likelihood that they'd face retaliation. A group of academic researchers from Georgetown University, Indiana University and Louisiana State University conducted the study on a U.S. Air Force base in the Midwest. The researchers mailed a confidential, 25-page survey to all 9,900 employees on the base. Marcia Miceli, a professor in Georgetown University's McDonough School of Business who co-authored the study, said the Air Force base was in some ways an ideal environment for the research because it was such a large employer. "There are very few large studies of whistle blowing in the U.S. or in the world, and you need to have a large sample of employees to get enough possible whistleblowers to answer questions about retaliation," she says. The survey contained more than 200 questions. 
Respondents were asked about their positions on the base; whether, over the past year, they had observed on the base any wrongdoing that they considered serious; the type of wrongdoing (they could choose from a list of 17 forms of wrongdoing that included stealing, accepting bribes, waste, mismanagement, sexual harassment and illegal discrimination); whether they reported the malfeasance; if they hadn't reported the wrongdoing, why they hadn't; and whether they were threatened with or had experienced any of a variety of consequences after reporting the incident, such as a demotion, a poor performance review, verbal harassment, intimidation or tighter scrutiny of their daily activities. Of the 3,288 base employees who responded to the survey, the majority—63 percent—indicated that they hadn't witnessed any malfeasance. The remainder, 37 percent, reported that they had observed wrongdoing. Of that 37 percent, 26 percent reported the wrongdoing; the rest did not. In all, 125 male whistleblowers and 78 female whistleblowers answered all of the survey questions concerning retaliation, its predictors and its consequences. Miceli says whistleblowers who skipped any of those questions had to be excluded from the statistical analyses due to missing data. Of the whistleblowers, 37 percent reported experiencing some form of retaliation. The study, which was published in the March/April 2008 issue of the journal Organization Science, found that more women reported experiencing consequences perceived as retaliation (such as poor performance reviews, verbal harassment, intimidation or tighter scrutiny of their daily activities) after disclosing wrongdoing than did men. The study also found that a woman's level of power and authority on the base didn't protect her from retaliation.
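The survey percentages above imply approximate headcounts, which are easy to sanity-check. The article reports only percentages, so the rounded figures in this back-of-the-envelope sketch are estimates rather than numbers from the study itself:

```python
# Approximate headcounts implied by the percentages reported above.
# The article gives only percentages, so these rounded figures are
# estimates, not counts taken from the study.
respondents  = 3288
observed     = round(respondents * 0.37)  # said they witnessed wrongdoing
reported     = round(observed * 0.26)     # of those, reported it
retaliated   = round(reported * 0.37)     # of reporters, faced retaliation
answered_all = 125 + 78                   # whistleblowers with complete surveys
```

The roughly 200 whistleblowers with complete retaliation data explain why, as Miceli notes later, the sample was too small to link retaliation severity to wrongdoing severity.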
"In organizations, the theory is, the more power you have, the more likely you'll escape retaliation because the organization thinks you're more credible or because they don't want to alienate you," says Miceli, who has studied whistleblowers for more than 20 years and whose research culminated in the book Blowing the Whistle, with her research partner from Indiana University, Janet Near. Miceli says the theory on power proved "somewhat true" for male whistleblowers in the study, but not for female whistleblowers. "In the male sample, there was a small but significant correlation in that the more powerful they were, the less retaliation they said they experienced. For the female sample, there was no relationship," says Miceli. Notably, the factor that turned out to be the biggest indicator of whether a whistleblower would face retaliation was the amount of support the whistleblower perceives she or he has in the organization. "If there are a lot of people in the company who support what you are saying [about the malfeasance] and who support you, that gives you more protection against retaliation," says Miceli. The survey did not detect a relationship between the type or severity of retaliation and type or severity of wrongdoing. Miceli says the sample size wasn't big enough. "We'd probably need 100,000 people in the sample." In spite of the odds stacked against female whistleblowers, they're more likely than men to report the original wrongdoing to an outside organization after they've experienced retaliation. In other words, the retaliation isn't successful in silencing them. "The more retaliation they faced, the more likely women were to keep fighting the battle over what they felt was wrong," says Miceli. By contrast, the amount of retaliation men faced didn't affect whether they took external measures to report the malfeasance. 
Miceli thinks women may be more likely to bring wrongdoing to the attention of external parties because more channels exist for them, such as the EEOC. She adds that men may be less likely to escalate incidents because they're more cognizant of the threat to other men and of the loyalty issues among them.
Ramifications for the Private Sector
Critics of this research may question the applicability of findings from a military base to the experiences of corporate workers. Miceli concedes that there's no way for her to know for sure how "generalizable" any specific finding from the Air Force base would be to any private sector organization. However, she notes that research on whistle-blowing conducted in other settings has offered similar findings. Therefore, managers in all organizations need to be careful not to retaliate against whistleblowers, she says. "For an organization that has high integrity and wants to succeed, ignoring or retaliating against whistleblowers, whether male or female, makes no sense," says Miceli. She recommends that companies encourage employees to give them valid information about perceived problems so that they can correct them before the problems get too big. When asked if she had any sense of whether retaliation against female whistleblowers was more common in male-dominated environments like the Air Force base (63 percent of employees on the base are men) and IT departments, Miceli answered that it was a great empirical question and one that needed to be answered. "It's really hard to do this kind of research in the private sector because the topic is so sensitive," says Miceli. If you are interested in conducting this kind of research in your organization, contact Miceli.
Source: http://www.cio.com/article/2436408/morale/whistleblowers--women-experience-more-retaliation-than-men--study-reports.html
An Authentication Factor is a piece of information used to authenticate or verify a person's identity for security purposes. The three main types are:
- Something you know (e.g., a password or PIN)
- Something you have (e.g., a credit card or hardware token)
- Something you are (e.g., a fingerprint or retinal pattern)
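A toy sketch of how these factor types combine in multi-factor authentication. Every function here is a made-up stub for illustration, not any real product's API; real systems would verify against secure, salted stores rather than plain comparisons:

```python
# Toy stubs for the three factor types listed above. These illustrate
# the concept only; real verification never compares plaintext secrets.
def knows_password(supplied: str, expected: str) -> bool:
    return supplied == expected                 # something you know

def has_token(token_code: str, valid_codes: set) -> bool:
    return token_code in valid_codes            # something you have

def matches_fingerprint(scan_id: str, enrolled_id: str) -> bool:
    return scan_id == enrolled_id               # something you are

def two_factor_ok(password_ok: bool, token_ok: bool) -> bool:
    # Multi-factor authentication requires factors of *different* types:
    # two passwords are still only one factor.
    return password_ok and token_ok
```

The key design point is the final comment: combining two checks of the same type (two passwords, say) does not count as two factors.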
Source: http://hitachi-id.com/concepts/authentication_factor.html
Walker W., Woods Hole Oceanographic Institution | Baccini A., Woods Hole Oceanographic Institution | Schwartzman S., Environmental Defense Fund (EDF) | Rios S., Instituto Del Bien Comun (IBC) | and 9 more authors. Carbon Management | Year: 2014. Carbon sequestration is a widely acknowledged and increasingly valued function of tropical forest ecosystems; however, until recently, the information needed to assess the carbon storage capacity of Amazonian indigenous territories (ITs) and protected natural areas (PNAs) in a global context remained either lacking or out of reach. Here, as part of a novel north-south collaboration among Amazonian indigenous and non-governmental organization (NGO) networks, scientists and policy experts, we show that the nine-nation network of nearly 3000 ITs and PNAs stores more carbon above ground than all of the Democratic Republic of the Congo and Indonesia combined, and, despite the ostensibly secure status of these cornerstones of Amazon conservation, a conservative risk assessment considering only ongoing and planned development projects puts nearly 20% of this carbon at risk, encompassing an area of tropical forest larger than that found in Colombia, Ecuador and Peru combined. International recognition of and renewed investment in these globally vital landscapes are therefore critical to ensuring their continued contribution to maintaining cultural identity, ecosystem integrity and climate stability. © 2015 Taylor & Francis.
Source: https://www.linknovate.com/affiliation/instituto-del-bien-comun-ibc-1779014/all/
The study investigated the core home ranges of 86 bull, great hammerhead and tiger sharks tagged in waters off south Florida and the northern Bahamas to understand whether these highly mobile shark species might benefit from spatial protection, such as marine protected areas (MPAs). The team examined shark movements in core habitat use areas, or CHUAs, where the sharks were spending the majority of their time, in relation to zones within the U.S. and Bahamas exclusive economic zones (EEZs) that prohibited fishing or where these sharks were already fully protected. "There are concerns that spatial protections may not benefit large sharks since they are highly mobile and likely to regularly move in and out of MPAs," said study co-author Neil Hammerschlag, a research assistant professor at the UM Rosenstiel Marine School and UM Abess Center for Ecosystem Science and Policy. "While it's not feasible to protect highly mobile species wherever they go, our findings suggest that significant conservation benefits can be achieved if they are protected in areas where they spend the majority of their time, such as their core habitat use areas." The results show that none of the tracked bull sharks' regional CHUAs were in areas fully protected from fishing, and for the great hammerhead and tiger sharks tracked, only 18 percent and 35 percent, respectively, of their core use areas were currently protected. The study also found that the majority of the CHUAs utilized by all three shark species were within the U.S. EEZ. "Our results will help enable policy makers to make more informed decisions when developing conservation plans for these species, particularly when considering a place-based management approach," said UM Rosenstiel School alumna Fiona Graham, the lead author of the study. In 2011 the Bahamas declared a ban on all commercial shark fishing in the more than 650,000 square kilometers (251,000 square miles) of waters under its federal EEZ.
The state of Florida enacted new measures in 2012 to fully protect four shark species, including tiger and great hammerhead sharks, by prohibiting their harvest and possession in state waters. These new findings have important implications for marine conservation and spatial planning, such as to better evaluate the effectiveness of current, and placement of future, MPAs, according to the researchers. Current research has shown that waters off Florida and the Bahamas are important pupping and feeding grounds for several sharks, providing them with the critical habitat required for the conservation of these slow-to-mature ocean animals. Many shark populations are threatened worldwide due to overfishing, a trend largely driven by the shark fin trade as well as by accidental bycatch from fishing operations. Populations of hammerhead sharks in the northwest Atlantic and other areas have declined more than 80 percent over the last two decades, according to some research reports, which has resulted in great hammerheads being listed as globally endangered on the International Union for the Conservation of Nature (IUCN) Red List. Both bull sharks and tiger sharks are listed as near threatened by the IUCN. "This is of particular importance for hammerhead sharks since they are experiencing the greatest declines in the region and are of high conservation concern," said Hammerschlag. "However, this species is susceptible to death from capture stress, so effective conservation strategies would also need to prevent great hammerheads from capture in the first place." More information: Fiona Graham et al. Use of marine protected areas and exclusive economic zones in the subtropical western North Atlantic Ocean by large highly mobile sharks, Diversity and Distributions (2016). DOI: 10.1111/ddi.12425

China and South Korea have scheduled talks for 22 December to address a decades-long boundary dispute that has hampered research and exploration in the Yellow Sea.
This northern part of the East China Sea, between mainland China and the Korean peninsula, is home to a rich ecosystem that is under intense environmental strain from human activities. Confrontations over fishing rights in the disputed region have turned deadly — and research is not immune to the tension. South Korean scientists report that the Chinese coastguard has intercepted research vessels in the Yellow Sea and East China Sea on at least ten occasions, threatening their activities and forcing them to move east. At other times, the Chinese navy has shadowed South Korean research vessels. “The confrontations are happening all the time,” says marine sedimentologist Kyung-Sik Choi of Seoul National University. The friction in the Yellow Sea is one of many marine territorial disputes in east Asia: over the past two years, China has captured the world’s attention with its construction of artificial islands in the South China Sea and a series of alleged rammings of local fishing boats by its coastguard and navy vessels. A spat with Japan over islands and gas fields in the East China Sea is also escalating, as China boosts its military presence and extraction efforts there. In this particular case, both parties seem ready — at least publicly — to seek a solution. Chinese President Xi Jinping and South Korean President Park Geun-hye pledged in July 2014 to begin talks by the end of 2015. “If the maritime boundary is fixed in some way, it will be good for scientists because we will know exactly where our playground is,” says Hyun-Chul Han, a marine geologist at the Korea Institute of Geoscience and Mineral Resources in Daejeon. “It will be a great relief and secure scientists’ safety.” Few expect South Korea and China to fully resolve their dispute in this first round of talks. But some analysts say that boosting scientific ties between the nations in the Yellow Sea would be a feasible — and politically valuable — initial step. 
“Maybe this could be an area of low-hanging fruit that these talks could address, to at least point to some level of utility and productiveness,” says James Schoff, a senior associate at the Carnegie Endowment for International Peace in Washington DC. Under the 1982 United Nations Convention on the Law of the Sea, nations can claim exclusive rights to exploit resources in an exclusive economic zone (EEZ) within 200 nautical miles (370 kilometres) of their coasts. But because the Yellow Sea is less than 400 nautical miles in breadth, China and South Korea’s EEZs overlap, and they have never agreed to a boundary (see ‘Troubled waters’). Research vessels from both countries avoid straying across a line of longitude about halfway between Seoul and Qingdao, effectively dividing the Chinese and South Korean marine-science communities. The law does not in principle restrict purely scientific activities in another nation’s EEZ, but in practice, countries can quickly set these zones off-limits to others. Chinese data covering the Yellow Sea look “cut in half” because of the dispute, says Zuosheng Yang, a marine geologist at the Ocean University of China in Qingdao. In the past, China has rejected simply drawing a line that is equidistant from the two nations’ coasts. Instead, it claimed rights to about two-thirds of the Yellow Sea, based on the extent to which sediments billowing out from China’s Huang He and Yangtze rivers blanket the sea floor. This ‘silt line’ was met with howls of protest from South Korean scholars and received little international support. But the silt line has a practical significance: Chinese boats motor across it to escape the turgid, fish-poor sediment plumes, sometimes leading to fatal clashes with South Korea’s coastguard. In 2011, a Chinese fisherman stabbed a Korean coastguard to death with a shard of broken window glass; in a separate 2014 skirmish, the Korean coastguard shot and killed a Chinese fisherman. 
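The geometry behind the overlapping claims is simple arithmetic: each coastal state may claim up to 200 nautical miles, so any sea narrower than 400 nautical miles between opposing coasts forces the claims to overlap. A sketch of that calculation (the breadth figures below are illustrative round numbers, not survey data):

```python
# Why EEZ claims overlap in narrow seas: two opposing 200-nautical-mile
# claims collide whenever the sea between the coasts is under 400 nm.
# Breadth values used here are illustrative, not surveyed figures.
EEZ_LIMIT_NM = 200  # maximum EEZ breadth under UNCLOS

def eez_overlap_nm(sea_breadth_nm: float) -> float:
    """Nautical miles of overlap between two opposing full EEZ claims."""
    return max(0.0, 2 * EEZ_LIMIT_NM - sea_breadth_nm)

print(eez_overlap_nm(380))  # a Yellow-Sea-like breadth: claims overlap
print(eez_overlap_nm(450))  # wide enough: no overlap, claims coexist
```

This is why the dispute cannot be resolved by each side simply claiming its full entitlement; some boundary rule (equidistance, the silt line, or a negotiated compromise) has to divide the overlapping band.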
The dispute has also prevented cooperation in assessing the deterioration of the Yellow Sea’s marine ecosystem. Dams in Chinese rivers have interrupted the once-steady flow of sediment and nutrients into the waters, and pollution has created enormous algal blooms. Urbanization has also claimed most of the tidal flats that once ringed the Yellow Sea basin, threatening key habitats for migratory birds. Monitoring and management of the basin requires collaboration, says Paul Liu, an oceanographer at North Carolina State University in Raleigh. South Korean and Chinese ocean researchers do share some data through a joint marine-research centre in Qingdao, which has held workshops and coordinated some work since 1995. But when asked about the boundary dispute, Wei Zheng, the centre’s vice-director, said: “It still is a problem.” She declined to comment further, citing the sensitivity of the issue. Choi, for example, says that he and his colleagues would like to conduct a deep seismic survey transecting the entire Yellow Sea. But he says that the project would need permission and protection from China’s coastguard to prevent passing fishing boats causing any damage to the kilometres-long cables and attached equipment. Both Liu and Yang say that an agreement would similarly foster collaborations to look at how sediments have swirled across the Yellow Sea in the past, and how new dams on China’s rivers have changed that process. “The Chinese cannot only study the western side, or Koreans cannot only study the eastern side,” Liu says. 
“They have to work together to know the whole picture of the area.”

The Advanced Research Projects Agency-Energy (ARPA-E) intends to issue a new Funding Opportunity Announcement (FOA) in November 2016 for the development of advanced cultivation technologies that enable profitable and energy-efficient production of macroalgal biomass (seaweeds) in the ocean. ARPA-E held a workshop on this topic in February 2016. These technologies are expected to be deployed to support cultivation of macroalgal-biomass feedstocks at a scale relevant to the production of commodity fuels and chemicals. The primary challenge is to reduce the capital and operating cost of macroalgae cultivation dramatically, while significantly increasing the range of deployment by expanding into more exposed, offshore environments. ARPA-E is interested in new designs and approaches to macroalgae cultivation and production with integrated harvesting solutions. These systems may leverage new material and engineering solutions, autonomous and/or robotic operations, and advanced sensing and monitoring capabilities. In addition to field-type cultivation, ARPA-E is also interested in unconventional approaches, for example ranching, where free-floating macroalgae are harvested at locations predicted or determined by satellite imaging and current/drift modeling. Given the enormous size and geographic diversity of the US marine Exclusive Economic Zone (EEZ), the agency expects that there will be different system solutions based on the intended area of deployment, the macroalgal species to be cultivated, and downstream processing.
To support and accelerate the development of these advanced cultivation systems, ARPA-E is also interested in hydrodynamic and ocean-current models that can predict the mechanical stresses on a cultivation system as well as the flow and distribution of nutrients through a macroalgae field. Furthermore, to validate the performance of macroalgae cultivation systems, appropriate sensors to measure in situ biomass production and composition as well as nutrient concentrations will be required. Finally, to complement the new system design approaches, ARPA-E is also looking for advanced breeding tools that can help in the development of new, highly productive macroalgae cultivars. ARPA-E has determined that, at this time, biomass conversion is not a limiting factor for profitable and widespread production of fuels and chemicals from macroalgae, and consequently it will not support work in that area at this time. However, an understanding of macroalgae conversion processes is expected to inform and guide the development of cultivation and harvest strategies and other tools. Overall, the program will address marine system design/engineering and integration with biomass production, hydrodynamic and ocean modeling, marine spatial planning, sensor technology development, macroalgae breeding tools, and field testing of cultivation systems and sensor technologies. The program will also address emerging markets necessary as stepping stones to a thriving marine macroalgae-to-fuels-and-chemicals industry. ARPA-E anticipates that this program will have four areas of interest. ARPA-E is now compiling a teaming partner list to facilitate the formation of new project teams. ARPA-E intends to make the list available on ARPA-E eXCHANGE, ARPA-E's online application portal, this month. Once posted, the Teaming Partner List will be updated periodically, until the close of the Full Application period, to reflect new Teaming Partners who have provided their information.
News Article | November 13, 2015
The noose finally appears to be closing on China, at least a little. First, the Philippines scored a major geopolitical victory last month when The Hague agreed to hear its claim against Beijing's recent land grabbing and subsequent land reclamation activities in Scarborough Shoal and the Spratly Islands (claimed by both China and the Philippines) in the South China Sea. Under the UN-mandated 200-nautical-mile Exclusive Economic Zone (EEZ), much of the disputed area does in fact lie within Manila's EEZ. Undeterred, and with few options given its limited military and economic muscle, Manila now has the world watching its apparently legitimate claims more closely; Indonesia, too, has called China's bluff.
News Article | April 15, 2016
Nord Stream 2, the controversial Russian-German pipeline project, is generating fierce opposition in Central and Eastern Europe as well as from the European Parliament and the European Commission. But could the opponents of the pipeline, owned 50% by Gazprom and 50% by some of the largest Western European companies, stop the project? They may be able to follow a complex legal route that could place formidable obstacles in the way of the pipeline. There is also an even more complex political route that could result in a blocking of the project, but this would involve a high-stakes battle at the highest political level. Energy Post editor-in-chief Karel Beckman reports. At a debate in the European Parliament on 6 April, the mood towards Nord Stream 2 could not have been more hostile. Petras Auštrevičius, MEP from Lithuania and Vice-Chair of the ALDE faction (Alliance of Liberals and Democrats for Europe) in the European Parliament, called Nord Stream 2 a "killer project" that "would kill much of what the Energy Union was intended to achieve".
MEP (and former Polish Prime Minister) Jerzy Buzek of the European People's Party (Christian Democrats), and Chair of the important ITRE Committee (Industry, Research and Energy) in the Parliament, said that "Nord Stream 2 and Energy Union cannot co-exist". He also stressed that "the majority of the European Parliament opposes Nord Stream 2." By pitting Nord Stream 2 against "the Energy Union" the opponents have turned their opposition to the pipeline into a high-stakes game. The Energy Union is one of the top priorities of the current European Commission. But their stance is not a surprise. Earlier, on 17 March, Prime Ministers and leaders of 9 EU member states (Czech Republic, Hungary, Poland, Slovak Republic, Romania, Estonia, Latvia, Lithuania, Croatia) had sent a letter to Jean-Claude Juncker, President of the European Commission, speaking out against Nord Stream 2. They pointed out, among other things, that Nord Stream 2 poses "risks for energy security in the region of Central and Eastern Europe, which is still highly dependent on a single source of energy". And the European Commission itself has also taken a highly critical stance towards Nord Stream 2. In a speech in October last year, Miguel Arias Cañete, Commissioner of Climate Action and Energy, said there were "serious doubts" whether Nord Stream 2 was compatible with the EU's "strategy of security of supply". He said diversification of routes and sources is key to this strategy and "Nord Stream 2 does not follow this core policy objective. On the contrary, if constructed, it would not only increase Europe's dependence on one supplier, but it will also increase Europe's dependence on one route." At the event on 6 April in Brussels, Maros Šefčovič, who as Vice-President of the European Commission is in charge of the Energy Union project, voiced equally strong criticism of Nord Stream 2. "Let me be very clear", he said.
“The impact of the Nord Stream 2 project goes clearly beyond the legal discussions. Nord Stream 2 could alter the landscape of the EU’s gas market while not giving access to a new source of supply or a new supplier, and further increasing excess capacity from Russia to the EU. This raises concerns, and I am pretty sure this will play a role in this debate.” The European Parliament and European Commission seem bent on stopping the Nord Stream 2 project. But can they? The EU is not directly involved in the decision-making process around Nord Stream 2: it is the national permitting authorities of the countries whose waters the pipeline will cross that must grant approval for the project. In this case, these are the permitting authorities of Russia, Finland, Sweden, Denmark and Germany. Currently, the Nord Stream 2 consortium is “communicating” with the permitting authorities in the five countries, says Ulrich Lissek, Head of Communications of the company. If they give approval, the pipeline can go ahead, according to Lissek: “There is an established approval process based on the rule of law that we are going through. The outcome does not depend on a political decision in Brussels.” Lissek, who was one of the speakers at the debate in Brussels, said in a separate interview with Energy Post that, although he regarded the political debate in Brussels as “important”, “its outcome will not have a direct impact on the project”. Nor does Nord Stream 2 wait on an FID (final investment decision), Lissek explained. In effect, the FID has already been taken: the shareholders agreement that was signed in Vladivostok on 4 September last year was the start of the project, said Lissek. (At the time of the signing, Gazprom had a 51 per cent share in the joint project company, called New European Pipeline AG, while Eon, Shell, OMV and BASF/Wintershall each had 10 per cent and Engie 9 per cent. Later Gazprom’s share was reduced to 50 per cent and Engie’s increased to 10 per cent.) 
The implementation of the agreement requires a number of separate investment decisions, such as the selection of the pipeline supplier, which took place last month, and getting the required permits from the authorities. Lissek said he does not expect problems with the environmental permits: after all, Nord Stream 1 was built in the same location and is already operational. Indeed, Nord Stream 1 also encountered a great deal of opposition, yet it has been fully approved and is functioning today. So, Lissek implied, why should Nord Stream 2 be treated differently? Does this mean case closed, Nord Stream 2 will be built if the shareholders want it to? Well, not quite. As Alan Riley, professor at City Law School in London and nonresident senior fellow with the Atlantic Council, pointed out in Brussels: Nord Stream 2 may well face both legal and political obstacles that Nord Stream 1 did not encounter. (See also his in-depth article on Energy Post.) “The world has changed since the first Nord Stream was launched in 2008”, he said. Riley noted that firstly, EU energy law has progressed (the Third Energy Package and Third Gas Directive were adopted in 2009) and secondly, Ukraine was invaded by Russia in 2014 and is still in a state of war, which creates a new political context. According to Riley, the European Commission and individual member states do have legal options to block Nord Stream 2. The key issue here is whether EU law – specifically, the Third Energy Package – applies to the pipeline or not. Under the Third Energy Package, the owners of major gas pipelines must be independent of the suppliers of gas, and they must allow equal access to all suppliers who want to make use of the pipeline. Nord Stream 2 clearly does not conform to these requirements: it is 50% owned by Gazprom, which is a supplier, and also partly by companies like Shell, OMV and Engie, also suppliers. 
If the Nord Stream 2 consortium were forced to submit to the unbundling and third-party access rules of the Third Energy Package, it seems clear that the business case for the project would be destroyed. However, the Nord Stream 2 consortium takes the view that since the pipeline runs offshore, the Third Energy Package does not apply to it. The rules of the Third Energy Package, says Lissek, do apply to the connecting parts of the pipeline inside Germany (owned by different companies), but not to Nord Stream itself, which runs offshore and is an “import pipeline”. He said there are ample precedents for this view: “The five gas pipelines in the Mediterranean that run from Africa to Europe are not subject to the Third Energy Package either. Nor is Nord Stream 1. So Nord Stream 2 cannot be treated differently.” Riley disagrees. He says EU law applies to the territorial waters of the relevant member states (the 12-mile limits) and to their 200-mile Exclusive Economic Zones (EEZ). For Nord Stream 2 this means that “at least 100 kilometres of the offshore route fall under the Third Energy Package”. If the Third Energy Package has not been applied to existing “import pipelines”, said Riley, it’s because it was not yet in force when they were built. Since it has come into force, there is no reason why it should not apply. If he is right, that would mean that the offshore pipe would need to be certified as a TSO (transmission system operator) by the national regulators of the five countries involved. And under Article 11 of the Third Gas Directive, the regulators must refuse to grant certification to a project if it a) does not comply with the “unbundling” and third-party access requirements of the Third Energy Package, or b) puts at risk “the security of energy supply of the Member State and the European Union”. Maroš Šefčovič also alluded to this point in his speech in Brussels.
“Let me underline again that EU law applies in principle also to off-shore infrastructure under the jurisdiction of Member States including their exclusive economic zones”, he said. But he added, “What exactly within EU sectorial legislation applies has to be assessed in regard to their specific provisions.” In other words, the Commission Vice-President is not entirely sure yet of the applicability of the Third Energy Package to Nord Stream 2. A spokeswoman for the Commission did not want to make any further comments. She said “the Commission is in touch with the German Energy Regulator to find out more about the details of the project. On that basis the Commission will draw its conclusions on the extent to which EU law (internal energy market, environment, competition etc.) applies to the Nord Stream 2 project and the next steps to be taken by the Commission.” On 19 November last year, the European Commission sought the advice of its Legal Service on this question. To the disappointment of the Commission, the Legal Service replied that the Gas Directive does not apply to the part of the project running through the exclusive economic zones and territorial waters of the member states concerned. This was reported on 7 February by the website Politico on the basis of an analysis of the Legal Service reply written by DG Energy, which Energy Post also has a copy of. In this analysis, DG Energy concedes that “if the opinion of the Legal Service is applied, this would mean that the EU cannot claim any applicability of its energy legislation to any part of Nord Stream 2.” However, the author of the analysis then goes on to present a contrary opinion, arguing, in a detailed Annex, why, according to DG Energy, the Legal Service opinion is wrong and EU law should apply to Nord Stream 2. Thus, the issue is not settled yet. That’s as far as the Third Energy Package is concerned. 
But as Riley pointed out, the certification process by the national regulators also involves a “supply security test”, and he wondered, “how on earth can Nord Stream 2 ever pass such a test?” The issue of “energy security” is a critical one in the debate around Nord Stream 2. The opponents invariably argue that the pipeline has an extremely negative impact on Europe’s energy security. This is enough reason, they say, for EU leaders to block the project. One of the critical strategic objectives of the EU’s Energy Union is after all “energy security, solidarity and trust”, as it was phrased by the European Commission when it launched the Energy Union in 2014. This phrase also appears in the official Energy Union “package” that was published on 25 February 2015. Šefčovič repeated it in Brussels: “Energy security, solidarity and trust constitute a key dimension of our framework strategy of 25 February 2015”, he said. But what exactly is “energy security”? And what is “solidarity”? From an economic perspective, energy security in the gas market is defined by the European Commission as “diversification of energy sources, suppliers and routes”. As Šefčovič put it: “diversification of energy sources, suppliers and routes are crucial for ensuring secure and resilient energy supplies to European citizens and companies.” The critics of Nord Stream 2 have no doubt that the pipeline fails to meet the energy security test on these grounds. Šefčovič said: “Nord Stream 2 could alter the landscape of the EU gas market while not giving access to a new source of supply, and further increasing excess capacity from Russia to the EU.” He added that “Nord Stream 2 could lead to decreasing gas transportation corridors from three to two, abandoning the route through Ukraine. Also the Yamal route via Poland could be endangered.” Buzek too said that “Nord Stream 2 is against the EU strategy of energy diversification. 
It would further strengthen the position of a dominant supplier who is already under investigation for abusing its position.” Yet this argument seems debatable. Even if Nord Stream 2 does not bring additional supplies (the consortium says it will, opponents deny this), all it does is change the route by which Russian gas is transported to Europe. This may not increase suppliers or bring additional routes (since there already is a Nord Stream 1), but neither does it reduce the number of routes or supplies. So how can it reduce competition or energy security? The answer is: from an EU perspective it clearly doesn’t. From an Eastern European perspective, however, things look rather different. Once Nord Stream 2 becomes operational, Gazprom may close down its gas transit through Ukraine (this is not certain yet), and through countries like Poland and Slovakia, either completely or partially. This will lead to huge losses for these countries. Ukraine stands to lose $2 billion in annual transit fees, but a country like Slovakia also makes $800 million annually on its gas transit contract. This puts the opposition from Central and East European countries in a somewhat different light. For many years they have made profits from their transit capabilities. Now they are afraid of being bypassed. This is painful for them, but why would Russia and its Western European partners not be allowed to decide on a more profitable arrangement for themselves? Isn’t this the whole idea of the “well-functioning gas market” the EU has managed to create – against Russian opposition – over the last seven years, since the last supply crisis in 2009, when Gazprom cut off gas transit through Ukraine? Péter Kaderják, Director of the Regional Centre for Energy Policy Research (REKK) in Budapest – an opponent of Nord Stream 2 – said in Brussels that the diversification strategy of the EU has worked very well so far. 
He pointed to the new LNG terminals being built or planned in Lithuania and Poland, the new interconnectors that have been built, and the reverse flow capabilities that have been created since 2009. As a result, he said, “supply security risk has significantly decreased” and Eastern European gas markets have increasingly integrated with the west. The result of this diversification effort is that Eastern European countries have become much less dependent on Russia. They can now easily import gas from Western Europe. True, their market position will clearly worsen, once Nord Stream 2 is built. As Kaderják said, there will be “a widening price gap between West and East”. But one could argue that this is what competition is about. The market is not about “solidarity”. It may also be noted that it is rather misleading to argue, as the 9 prime ministers do in their letter to Juncker, that Eastern European countries are “still highly dependent on a single source of energy”. They may still be “highly dependent” on Russia for their gas supplies, but gas is only part of their energy needs. Many countries in Eastern Europe, like Poland, Hungary and Bulgaria, actually do not use much gas in their energy mix at all. And they can choose to develop alternative sources of energy. There seems to be an element of choice involved when it comes to being “dependent on Russian gas”. None of this is meant to imply that Russia does not have geopolitical motivations in building Nord Stream 2. No doubt for Moscow Nord Stream 2 is also seen as a weapon in its conflict with Ukraine. That is what makes the issue so complex. The EU will have to decide whether it should regard Nord Stream 2 as simply an economic project, or also as a geopolitical threat. MEP Rebecca Harms, co-chair of the Group of the Greens in the European Parliament, had no doubts on this account. 
She said in Brussels that Ukraine is at war, and asked: “Why should we allow a further weakening of the country?” The problem with this argument is that it could be applied to any project that affects the Ukrainian economy negatively. As Lissek of the Nord Stream 2 consortium said: “Do we have to stop every economic exchange with Russia?” He complained: “Why is so much importance attached to this project? That’s not very fair.” For Ukraine the ending of Russian gas transit would clearly be an economic blow. Yet the country also has itself to blame. For years it has allowed tremendous corruption in the gas sector, thereby putting the energy security of Western European countries at risk. The Russian decisions in 2006 and 2009 to cut gas transit through Ukraine had a lot to do with that. Opponents of Nord Stream 2 argue that this past is not relevant anymore: Ukraine has firmly started on a process of reform. Šefčovič said: “Ukraine continues to be a reliable gas partner and transit country….” He added that “The completion of energy sector reforms in Ukraine [is] of utmost importance and should be further implemented. There is no better reassurance for Ukraine to remain a transit country and to attract investors to its gas assets than by completing ownership unbundling of Naftogaz and setting up an independent energy regulator in line with the 3rd Energy Package. Ukraine has all our support to make these necessary game-changing steps.” The question is, does the energy market reform in Ukraine imply that Nord Stream 2 should not be built and the EU should help the country retain its dominant position in Russian gas transit to Western Europe? Wouldn’t this be an incentive for the country not to pursue its market reforms? In an article published by the Atlantic Council in April, the well-known Russia expert Anders Åslund, a fierce critic of Nord Stream 2, writes that Ukraine has made tremendous progress in its energy reforms. 
One effect has been an “extraordinary” decline in gas consumption: “As late as 2011, consumption amounted to 59.3 billion cubic metres. In 2015, it had fallen to merely 36 bcm, of which Ukraine itself produced 20 bcm.” The expectation is that in 2016 gas consumption will fall to a mere 29 bcm, “of which Ukraine itself will produce 20 bcm”. In 2015, Ukraine imported a mere 6.1 bcm from Russia. Åslund writes that “Energy is the linchpin of Ukraine’s independence from Russia”, and recommends that Ukraine should “stop buying gas from Russia altogether”, yet he also wants Nord Stream 2 to be stopped, as it is “instigated by Gazprom and certain German interests to weaken the EU and its energy policy to the disadvantage of Ukraine”. This does not seem consistent. Åslund has a broader perspective, though, since he also wants Gazprom to be investigated as “a criminal organization”. He may have a point, but if Gazprom really is a criminal organization, then it seems reasonable to argue that this is what the debate in Europe should be about: not about Nord Stream 2 but about Gazprom. After all, Gazprom is dealing with Europe in many ways, supplying large amounts of gas through various channels. In the end, a political process to halt Nord Stream 2 could be even more complicated than a legal process. The only way this could happen is if the European Commission and European Parliament received full support from the European Council. But that would require the agreement of Germany and other Western European member states, like the Netherlands and Austria, which stand to benefit from Nord Stream 2. So far, the European Council has been fairly critical of the project. After a meeting in December 2015, its President Donald Tusk wrote: “Talking about the Energy Union, leaders had an exchange on the Nord Stream II project, some of them were very critical, and we also discussed the conditions that need to be met by major energy infrastructure projects. 
We reiterated that any new infrastructure should be fully in line with the Energy Union objectives. Not to mention the obvious obligation that all projects have to comply with all EU laws, including the third Energy Package. These are clear conditions for receiving support from the EU institutions or any Member State – political, legal or financial. Now the ball is in the court of the European Commission. But the political message of the European Council is clear and goes in a similar direction as the position expressed by the European Parliament.” Didier Seeuws, Director of Transport, Telecommunications and Energy at the European Council, said in Brussels that the Council had “put forward not only legal but also political conditionality”. But Seeuws declined to speculate about what exactly a political decision from EU leaders to block Nord Stream 2 would look like. Any such move would run into one formidable problem: Russia would regard it as an extremely hostile act. Perhaps Péter Kaderják had the wisest advice to offer at the event in Brussels. He said that if Nord Stream 2 gets built, “Regulatory measures should ensure that competition is maintained in the Central and South East European region even if Gazprom transits gas along new routes. The EU and local regulators should prevent Gazprom from blocking interconnection capacities and make it release significant gas volumes in Germany.” This kind of compromise might be the most pragmatic outcome of the battle for Nord Stream 2. A similar solution is hinted at in the “analysis of the Legal Service reply” written by DG Energy, which refers to the possibility of a “case-specific regulatory framework for the project”. It is probably the maximum that the opponents of Nord Stream 2 will be able to achieve.
The next big technological shift affecting all three accelerators is the photonics revolution: using lasers and crystal holography to store information -- think Superman's Fortress of Solitude. We've already been rummaging around in the foothills of this particular mountain of change. Magnetic drives give way to optical drives, and copper cable is joined by fiber optics enabling the transmission of data in the form of light rather than magnetic charge. Just as laying down steel tracks for the railroads transformed the economic and social landscape in the late 19th Century, our crisscrossing the oceans and continents with fiber optics in the late 1990s transformed the modern landscape -- only to a far greater degree and in a fraction of the time. These three accelerators describe a curve that starts out almost imperceptibly slow. In the '80s, it started curving upward to the point where we could almost start to feel the change. By the end of the '90s it was impossible not to feel it. Yet today, oddly, people often seem to feel the technology revolution is over, that the biggest changes are behind us. This is a grave mistake. As radical a change as we've seen with fiber optics, we've barely scratched the surface of the photonics revolution. Crystal holography is yet another technology that will give us inconceivably vast amounts of data, all stored in three-dimensional and instantly retrievable form. Information about virtually everything, at your fingertips: just add light and stir. And that's just storage. Our microchips get faster and faster, as well. We've already stepped into the next frontier in processing: nanotechnology and quantum computing. Researchers have charted the workings of soon-to-be-constructed nano-computers that store infinitesimal bits of information (called qubits, for quantum bits) on single atoms. And what about bandwidth? That may be the greatest shift of all. 
We have already stepped off our copper wires and onto fiber optic cables, and now we're stepping off those translucent filaments onto thin air. We have become the high-wire artist without the net and without the wire; flying through the air with the greatest of ease. Fiber optics will continue to provide the backbone of communications but, with advances in wireless transmission, our capacity to increase bandwidth, both wired and wireless, has virtually no upward limit. Photonics, crystal holography, nanotechnology, quantum computing, and infinitely extensible wireless transmission will all accelerate the virtualization of business processes using many innovative iterations of cloud computing. The rate of change ahead will make the days of 1999's Internet boom seem like a quiet autumn afternoon sitting on the front porch rocking chair watching the leaves turn. What's critical to remember here is that none of this is a maybe. This accelerating rate of change is as certain as the sun rising in the East tomorrow morning, and it's going to sweep across our landscape like the technological tsunami it is. This is going to happen whether we want it to or not. From education to healthcare, agriculture to energy to manufacturing, it will burst through every industry and every institution, metamorphosing everything and leaving nothing untouched in its wake. It will be deeply disruptive to every aspect of every industry and every aspect of human activity ... except for those who see it coming. Daniel Burrus is considered one of the world's leading technology forecasters and business strategists, and is the founder and CEO of Burrus Research, a research and consulting firm that monitors global advancements in technology driven trends to help clients better understand how technological, social and business forces are converging to create enormous, untapped opportunities. 
He is the author of six books, including The New York Times and The Wall Street Journal best seller Flash Foresight: How To See the Invisible and Do the Impossible as well as the highly acclaimed Technotrends.
“Beware the sleeping dragon. For when she awakes the Earth will shake.” — Winston Churchill, speaking about China For two decades, TOP500.org has published a twice-yearly list of the world’s fastest computing machines. For the first 17 years, the competition for the number one spot was a back-and-forth match between the United States and Japan. But in November 2010, China cracked the coveted pole position with the Tianhe-1A supercomputer. Tianhe-1, which translates into Milky Way, was developed by China’s National University of Defense Technology (NUDT) in Changsha, Hunan. When it was unveiled in October 2009, it was immediately ranked as the world’s fifth fastest supercomputer in the TOP500 list released at SC09. The upgraded Tianhe-1A, equipped with 14,336 Xeon processors and 7,168 NVIDIA Tesla GPUs, brought the machine’s top LINPACK speed from 563 teraflops to 2.56 petaflops. The boost rocketed the system to the number one spot in November 2010, beating out Oak Ridge National Laboratory’s Jaguar supercomputer and giving China bragging rights as a technology superpower. It was the first time that a non-US system held the number one spot in six years. The US came back again in June 2012 with the IBM Sequoia Blue Gene/Q, which had a LINPACK performance of 16.32 petaflops. In November 2012, the title changed hands again, claimed by an upgraded Jaguar – renamed Titan and packing 17.59 petaflops LINPACK. Despite the impressive benchmark, Titan’s reign was short-lived. Seven months later, in June 2013, China reestablished list dominance with its upgraded system, Tianhe-2. With a remarkable 33.86 petaflops LINPACK, the Chinese system beat out second-place finisher Titan by nearly a 2-to-1 margin. China’s Tianhe-2 remains the fastest supercomputer in the world. What’s more, the Tianhe-2 project is two years ahead of schedule. 
The supercomputer was originally scheduled to be completed in 2015, but the latest reports say that it is expected to be fully operational by the end of 2013. Consider the technical specifications of this phenomenal computing machine: 16,000 computer nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessors for a total of 3,100,000 cores; 1.4 petabytes of RAM; and a proprietary high-speed interconnect, called TH Express-2, that was designed by NUDT. Tianhe-2 has a maximum power draw of 17.6 megawatts, rising to around 24 megawatts once cooling is included. A recent Guardian article explores what China’s still-emerging supercomputing prowess tells us about the country’s absorptive state. The United States is still the world’s leading supercomputer power with 252 top 500 systems, but China is catching up – with 66 of the top 500 supercomputers. The Institute of Electrical and Electronics Engineers asserts that Tianhe-2’s win “symbolizes China’s unflinching commitment to the supercomputing arms race.” The race to build the first exascale supercomputer is still in progress and the US, EU, Japan, India, Russia and China have all expressed their intentions to reach this goal. But most experts, according to the Guardian piece, say the odds are in China’s favor. It’s been two years since Obama called on Americans to come together for “our generation’s Sputnik moment” during his 2011 State of the Union address, but the response from funding bodies has been lackluster. An exascale plan was only recently submitted to Congress and no new funds have been granted yet. China, by contrast, has maintained a targeted investment strategy, spending approximately $163 billion on R&D in 2012. Since 2008, it has increased funding by 18 percent each year at the same time as other countries’ budgets were flatlining. Even though China can claim the leading system, it has had to rely on US technology to do so. 
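The headline figures quoted above can be sanity-checked with a few lines of arithmetic. A minimal sketch, using only the numbers given in the text (the rounded total of 3,100,000 cores, the 33.86 and 17.59 petaflops LINPACK scores, and the 17.6-megawatt power draw; per-chip core counts are deliberately left out, since the article does not give them):

```python
# Sanity-check Tianhe-2's aggregate figures; all inputs are taken from the text.
NODES = 16_000
TOTAL_CORES = 3_100_000          # reported total (a rounded figure)
LINPACK_PFLOPS = 33.86           # Tianhe-2, June 2013 TOP500 list
TITAN_PFLOPS = 17.59             # second-place Titan
TIANHE1A_PFLOPS = 2.56           # upgraded Tianhe-1A, November 2010
POWER_MW = 17.6                  # maximum power draw, excluding cooling

cores_per_node = TOTAL_CORES / NODES
margin_over_titan = LINPACK_PFLOPS / TITAN_PFLOPS
speedup_vs_tianhe1a = LINPACK_PFLOPS / TIANHE1A_PFLOPS
# petaflops -> gigaflops is x1e6; megawatts -> watts is x1e6, so the factors cancel
gflops_per_watt = LINPACK_PFLOPS / POWER_MW

print(f"cores per node  ~ {cores_per_node:.0f}")        # ~194: two Xeons plus three Xeon Phis
print(f"lead over Titan ~ {margin_over_titan:.2f}x")    # the "nearly 2-to-1 margin" in the text
print(f"vs Tianhe-1A    ~ {speedup_vs_tianhe1a:.1f}x")
print(f"efficiency      ~ {gflops_per_watt:.2f} GF/W")  # LINPACK flops per watt, cooling excluded
```

The lead over Titan works out to about 1.92x, consistent with the article's "nearly a 2-to-1 margin", and the roughly 13x jump over Tianhe-1A in under three years illustrates how quickly NUDT iterated.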
This is a key point of the Guardian article, which was penned by James Wilsdon, professor of science and democracy at the University of Sussex; Kirsten Bound, head of international innovation at the independent UK charity Nesta; and Tom Saunders, a policy and research analyst at Nesta. “In one sense, Tianhe-2 is an achievement that the Americans should be every bit as proud of as the Chinese,” write the authors. But China is hard at work designing and manufacturing its own technologies and most experts agree that it won’t be long before China produces its first 100 percent home-grown supercomputer. China is particularly adept at absorbing, adapting and improving on foreign-developed technologies. Supercomputing is one of the main sectors in which this kind of absorptive process is taking place, but it’s also occurred in other high-profile cases, for example high-speed rail networks, advanced nuclear reactors and space exploration. Note the authors: “These examples suggest that what China’s President Xi Jinping has termed ‘innovation with Chinese characteristics’ will not be a straightforward path from imported to home-grown innovation, but a messier process in which the lines between Chinese and non-Chinese ideas, technologies and capabilities are harder to draw.” The Nesta report, China’s Absorptive State: research, innovation and the prospects for China-UK collaboration, will be available next week, scheduled to coincide with the first high-level UK government delegation to Beijing in over a year.
After a year on the market, the iPad is still the hottest tablet around. And students in Chicago Public Schools (CPS) have been lucky enough to use them in the classroom for an entire school year. Teachers at various CPS institutions are using the iPad to heighten student learning at all grade levels. Whether it’s helping special education students “speak” to grocery store clerks on field trips, assisting high school physics students in “building” roller coasters to understand motion and energy, or conducting daily formative assessments to improve student performance, the iPad engages students — and according to experts, that’s the most rewarding part. “What we’ve found with the iPads as we’ve rolled this out is that having kids with a device such as the iPad in the classroom — within the curriculum — is very powerful,” said CPS Technology Education Director John Connolly. “Our feedback from our teachers and students is that this is something they’re using every day. It’s embedded in all of their subjects, even if they were originally targeting one subject, and we’re seeing some really cool things happening with those students.” CPS is testing the device in more than 20 schools to see whether it could eventually become a permanent learning tool for the entire school district. Since the trial launched last August, other school districts around the country have followed suit. The expansion of technology in education — and in government at large — is widespread. Over the last several years, many colleges, universities and K-12 school districts, not to mention local and state agencies, have incorporated emerging technology like Apple iPhones and Amazon Kindles into their daily lives. Adding the iPad is just an extension of this. While some contend that such technology incorporated into the classroom can be more of a distraction than a learning tool, CPS executives, educators and students are proving otherwise. 
At Chicago’s Burley Elementary School, Technology Coordinator Carolyn Skibba said iPads allow for easy collaboration among teachers and students. The administration, she said, was excited about the potential of a device that’s small, flexible, portable, visual and hands-on, especially when working with younger students. “It really seemed like something that could integrate more seamlessly into the learning experience for the kids,” she said. “We felt that other technology initiatives in the district had to some extent underserved or overlooked our youngest learners, and we felt that the iPad was a tool that, because of its visual and hands-on design, would really be a natural fit for our youngest learners.” The kids have taken to the technology, navigating the iPad’s apps with ease and using the touchscreen like pros, she said. The second-graders in teacher Begoña Cowan’s class learn about spelling and pronunciation without having to share a pile of traditional magnetic letters. Instead, each student uses the ABC — Magnetic Alphabet app on his or her iPad to spell “-oom” and “-oop” words. When it’s time to put the iPads away, they each return the device to the cart with two hands held up against their chests to keep it safe. First-grade students have used apps like Pages, Simplenote and smartNote to help with basic word processing. For one assignment, the kids copied a photo of a totem pole from the Web, pasted it in the app and wrote a few sentences about the meaning of the totem pole, which shows honor when a tribe chief has died. “We’ve done a lot of explicit instruction on how to use the iPad and basic word-processing skills for young children, and the iPad allows us to take a virtual field trip every day by searching Web content in a way that’s user-friendly for early childhood students,” said teacher Kristin Ziemke-Fastabend. 
At the Chicago High School for the Arts, physics students work in small groups to use the Coaster Physics app to create roller coasters while incorporating traditional learning methods, said CPS Technology Integration Specialist Margaret Murphy. “They start out with sheets to do the mathematics — the physics calculations — and another person is drawing a roller coaster on a large sheet of paper,” she said. “Another is designing it on the iPad, and they’re all sharing with each other, making sure the way their math worked out is working on the iPad, and it is matching what they’ve drawn on their paper.” When teacher Kevin Cram taught the roller coaster lesson plan pre-iPad, he said several students didn’t have the opportunity to design their own roller coasters, which involved physical materials like pipe insulation that students cut up and glued together to make three-dimensional projects. “Not everyone was able to create as much as I wanted,” he said. “The iPad allowed easy access and manipulation and creation because of this app. So every group got to design and put their ideas into an actual model.” Also using a blended approach in the classroom is Jenny Cho-Magiera, whose fourth-grade class at the National Teachers Academy used iPads to follow along with a voice-recorded lesson about the anatomy of a flower. The students saw the pages in the book from which Cho-Magiera was reading. Any student who missed the lesson could review that exact lesson at a later time. Where does traditional teaching enter the picture? Her teacher’s assistant moved from table to table with a real lily to show students what they were learning about. For Cho-Magiera, the most revolutionary thing about the iPad is how fast she can respond to students’ assessments of the day’s lesson. Before the iPad, the children would scribble something on a half-sheet of paper and turn it in, sometimes forgetting to write their names. 
Cho-Magiera wasn’t able to react or answer questions until at least the next day. Now Cho-Magiera said she uses Google Forms, a survey development interface. In about 30 seconds, she can put three or four questions in the form, and the students use the iPads to answer. The results are formulated into a Google spreadsheet in real time, and she can immediately sort through them and form work groups based on which students need help with different topics. “Just like that, I have my differentiated groups for that day,” she said. “I don’t need to wait 24 hours to put them into a group — when they forgot what they were learning about yesterday. As a result of that, their proficiency has gone up because my teaching has become more efficient.” Back at the School for the Arts, Cram also uses iPads for formative assessments, utilizing what he calls a “WebQuest.” Cram likes to include both a pre-quiz and a post-quiz, and during a lesson, students investigate different websites on their iPads to research and answer questions. “We can see what areas of growth they have after they’ve done the research,” he said, noting that the answers to both pre- and post-quizzes are submitted via Google docs. Cram says he hasn’t seen any dramatic improvements in learning since incorporating the iPad, but he anticipates that there will be soon. “The students are much more engaged and interested in the material. And because of that, maybe I’m pushing them a bit more and asking more challenging questions,” he said. “Through practice and more work with challenging questions, and being exposed to that with the engagement at 90 to 100 percent with the iPad versus much lower with a lecture or even hands-on labs.” Incorporating new, up-to-the-minute technology, especially in education, sounds great. It’s been said time and again that students should be taught in ways that they’re comfortable — and they’re quite comfortable with technology. 
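The workflow Cho-Magiera describes — quiz answers flowing into a spreadsheet, then students sorted into small groups by the topic they missed — can be sketched in a few lines. This is only an illustration of the grouping logic, not the Google Forms product itself; the student names, questions and answers below are invented for the example:

```python
# Illustrative sketch of the formative-assessment workflow described above:
# quiz responses land in a spreadsheet-like table of rows, and students are
# grouped by the question they missed. All names and answers are invented.
from collections import defaultdict

ANSWER_KEY = {
    "q1_petals": "attract pollinators",
    "q2_stamen": "male part",
    "q3_pistil": "female part",
}

# Each row mirrors one submitted form response.
responses = [
    {"name": "Ava",  "q1_petals": "attract pollinators", "q2_stamen": "male part",   "q3_pistil": "stem"},
    {"name": "Ben",  "q1_petals": "protect seeds",       "q2_stamen": "male part",   "q3_pistil": "female part"},
    {"name": "Cara", "q1_petals": "attract pollinators", "q2_stamen": "female part", "q3_pistil": "female part"},
]

groups = defaultdict(list)  # question -> students who missed it
for row in responses:
    for question, correct in ANSWER_KEY.items():
        if row[question] != correct:
            groups[question].append(row["name"])

for topic, students in sorted(groups.items()):
    print(f"{topic}: reteach with {', '.join(students)}")
```

In practice the responses would come out of the live results spreadsheet rather than a hard-coded list, but the point stands: once answers arrive as structured rows, forming the day's differentiated groups is a thirty-second sort rather than a next-day chore.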
But to critics, technology might hurt more than help the ability to learn. One person questioning the impact of some new technologies on students is President Barack Obama, who at Virginia’s Hampton University commencement, said that with iPods, iPads, Microsoft Xboxes and Sony PlayStations, “information becomes a distraction, a diversion, a form of entertainment, rather than a tool of empowerment, rather than the means of emancipation.” In Chicago, Connolly said, the way CPS rolled out the iPad trial has helped conquer this challenge. The school district asked its schools to submit applications, which a committee reviewed and then determined which schools would test the technology. “Not only is that the fair way to do it,” Connolly said, “but it also allowed schools and teachers who were interested in using technology to step to the forefront.” Two hundred schools applied for grants that were valued at more than $20,000. Each grant includes 32 iPads, one MacBook Pro for syncing purposes, $200 in iTunes credit for applications and a storage cart for the hardware. Professional development has also been a huge part of the trial’s success. CPS partnered with Apple to provide professional development and create a cohort of collaboration across the schools to share best practices and ideas. Teachers train every other month for one day. The morning is dedicated to learning new applications or new ways to incorporate the iPad into the classroom, and the afternoon is geared toward collaboration. “What we’ve found in the feedback is that teachers love the time of trading stories of how they’re using and implementing the iPad with other colleagues from other schools, in addition to learning something in the front half of the day,” Connolly said. Trainers also provide onsite training in the classroom, so teachers don’t have to be pulled out of class. Preparation was another factor in the trial’s success. 
Each teacher devised a blueprint for incorporating the iPad into his or her lesson plans well in advance of receiving the technology. “So they could expand what they were already comfortable doing,” he said. “All of that together, it’s kind of the ground-up approach.” As teachers become more comfortable using the iPad, demand is growing. “Other teachers are peeking in and saying, ‘We want to use that too,’ which is pretty exciting for us, but now we’re running into an issue of people saying, ‘We need that technology,’” said Connolly. CPS CIO Arshele Stevens said she believes that knowing how to implement and supervise iPad use in the classroom is key to making sure the device doesn’t become a distraction to learning. Part of the original intent of the iPad trial was to ensure that the district served as a guide for all schools to implement the technology. “We’ve always planned, at the end of the trials, to assess, and then if we see a project that’s really transformed a student’s knowledge of a subject matter, to elevate that,” she said. “We want to create a model.” CPS has three categories of teachers as far as computers go, Stevens said: those who are proficient, those who are fairly comfortable, and then there’s the larger population, which doesn’t even want to use e-mail. “Those are the teachers who really don’t know how to integrate technology in the classroom. It’s not because they’re reluctant; it’s that they don’t know how,” she said, adding that if the district can, based on a successful trial, create a step-by-step process to incorporate iPads in the classroom for a teacher who’s uncomfortable with technology — a process they’re able to execute — that’s beneficial for the entire district. The plan, she said, is to expand the program next school year not only to additional schools, but also to users in the central office. “We’re hoping to extend its use,” she said, “because we find that most people are really excited about it.”
Multidimensional analysis, the basis for OLAP, is not new. In fact, it goes back to 1962, with the publication of Ken Iverson’s book, A Programming Language. The first computer implementation of the APL language was in the late 1960s, by IBM. APL is a mathematically defined language with multidimensional variables and elegant, if rather abstract, processing operators. It was originally intended more as a way of defining multidimensional transformations than as a practical programming language, so it did not pay attention to mundane concepts like files and printers. In the interests of a succinct notation, the operators were Greek symbols. In fact, the resulting programs were so succinct that few could predict what an APL program would do. It became known as a ‘Write Only Language’ (WOL), because it was easier to rewrite a program that needed maintenance than to fix it. Unfortunately, this was all long before the days of high resolution GUI screens and laser printers, so APL’s Greek symbols needed special screens, keyboards and printers. Later, English words were sometimes used as substitutes for the Greek operators, but purists took a dim view of this attempted popularization of their elite language. APL also devoured machine resources, partly because early APL systems were interpreted rather than being compiled. This was in the days of very costly, under-powered mainframes, so applications that did APL justice were slow to process and very expensive to run. APL also had a, perhaps undeserved, reputation for being particularly hungry for memory, as arrays were processed in RAM. However, in spite of these inauspicious beginnings, APL did not go away. It was used in many 1970s and 1980s business applications that had similar functions to today’s OLAP systems. Indeed, IBM developed an entire mainframe operating system for APL, called VSPC, and some people regarded it as the personal productivity environment of choice long before the spreadsheet made an appearance. 
Even today, there are OLAP products that use APL internally. However, APL was simply too elitist to catch on with a larger audience, even if the hardware problems were eventually to be solved or become irrelevant. It did make an appearance on PCs in the 1980s (and is still used, sometimes in a revamped form called “J”) but it ceased to have any market significance after about 1980. Although it was possible to program multidimensional applications using arrays in other languages, it was too hard for any but professional programmers to do so, and even technical end-users had to wait for a new generation of multidimensional products. By 1970, a more application-oriented multidimensional product, with academic origins, had made its first appearance: Express. This, in a completely rewritten form and with a modern code-base, has become a widely used contemporary OLAP offering, but the original 1970s concepts still lie just below the surface. Even after 30 years, Express remains one of the major OLAP technologies, although Oracle has struggled and failed to keep it up-to-date with the many newer client/server products. Oracle announced in late 2000 that it would build OLAP server capabilities into Oracle9i starting in mid-2001. The Oracle9i OLAP Services includes both a version of the Express engine, called the analytic workspace, and a new ROLAP engine. This means the Express engine will live on well into its fourth decade, and perhaps even its fifth. More multidimensional products appeared in the 1980s. Early in the decade, Stratagem appeared, and in its eventual guise of Acumate (now owned by Lucent), this too was still marketed to a limited extent until the mid-1990s. However, although it is a distant cousin of Express, it has never had Express’ market share, and is now little used. Along the way, Stratagem was owned by CA, which was later to acquire two ROLAPs, the former Prodea Beacon and the former Information Advantage DecisionSuite, both of which soon died. 
Comshare’s System W was a different style of multidimensional product. Introduced in 1981, it was the first to have a hypercube approach and was much more oriented to end-user development of financial applications. It brought in many concepts that are still not widely adopted, like full non-procedural rules, full-screen multidimensional viewing and data editing, automatic recalculation and (batch) integration with relational data. However, it too was heavy on hardware and was less programmable than the other products of its day, and so was less popular with IT professionals. It is also still used, but is no longer sold and no enhancements are likely. Although it is now available on Unix, it is not a client/server product and was never promoted by the vendor as an OLAP offering. In the late 1980s, Comshare’s DOS One-Up and, later, Windows-based Commander Prism (now called Comshare Planning) products used similar concepts to the host-based System W. Hyperion Solutions’ Essbase product, though not a direct descendant of System W, was also clearly influenced by its financially oriented, fully pre-calculated hypercube approach. Ironically, Comshare subsequently licensed Essbase (rather than using any of its own engines) for the engine in some of its modern OLAP products. Another creative product of the early 1980s was Metaphor. This was aimed at marketing professionals in consumer goods companies. This too introduced many new concepts that only became popular in the 1990s, like client/server computing, multidimensional processing on relational data, workgroup processing and object-oriented development. Unfortunately, the standard PC hardware of the day was not capable of delivering the response and human factors that Metaphor required, so the vendor was forced to create totally proprietary PCs and network technology. Subsequently, Metaphor struggled to get the product to work successfully on non-proprietary hardware and, right to the end, it never used a standard GUI. 
Eventually, Metaphor formed a marketing alliance with IBM, which went on to acquire the company. By mid-1994, IBM had decided to integrate Metaphor’s unique technology (renamed DIS) with future IBM technology and to disband the subsidiary, although customer protests led to continuing support for the product. The product continues to be supported for its remaining loyal customers, and IBM relaunched it under the IDS name but hardly promoted it. However, Metaphor’s creative concepts have not gone away, and the former Information Advantage, Brio, Sagent, MicroStrategy and Gentia are examples of vendors covered in The OLAP Report that have obviously been influenced by it. One other surviving Metaphor tradition is the unprofitability of independent ROLAP vendors: no ROLAP vendor has ever made a cumulative profit, as demonstrated by Metaphor, MicroStrategy, MineShare, WhiteLight, STG, IA and Prodea. The natural market for ROLAPs seems to be just too small, and the deployments too labor-intensive, for there to be a sustainable business model. By the mid-1980s, the term EIS (Executive Information System) had been born. The first explicit EIS product was Pilot’s Command Center, though there had been EIS applications implemented by IRI and Comshare earlier in the decade. This was a cooperative processing product, an architecture that would now be called client/server. Because of the limited power of mid-1980s PCs, it was very server-centric, but that approach came back into fashion with products like EUREKA:Strategy and Holos and the Web. Command Center is no longer on sale, but it introduced many concepts that are recognizable in today’s OLAP products, including automatic time-series handling, multidimensional client/server processing and simplified human factors (suitable for touch screen or mouse use). 
Some of these concepts were re-implemented in Pilot’s Analysis Server product, which is now also at the end of its life, as is Pilot, which changed hands in August 2000, almost certainly for the last time, when it was bought by Accrue Software. By the late 1980s, the spreadsheet was already becoming dominant in end-user analysis, so the first multidimensional spreadsheet appeared in the form of Compete. This was originally marketed as a very expensive specialist tool, but the vendor could not generate the volumes to stay in business, and Computer Associates acquired it, along with a number of other spreadsheet products including SuperCalc and 20/20. The main effect of CA’s acquisition of Compete was that the price was slashed, the copy protection removed and the product was heavily promoted. However, it was still not a success, a trend that was to be repeated with CA’s other OLAP acquisitions. For a few years, the old Compete was still occasionally found, bundled into a heavily discounted bargain pack. Later, Compete formed the basis for CA’s version 5 of SuperCalc, but the multidimensionality aspect of it was not promoted. Lotus was the next to attempt to enter the multidimensional spreadsheet market with Improv. Bravely, this was launched on the NeXT machine. This at least guaranteed that it could not take sales away from 1-2-3, but when it was eventually ported to Windows, Excel was already too big a threat to 1-2-3 for Improv’s sales to make any difference. Lotus, like CA with Compete, moved Improv down market, but this was still not enough for market success, and new development was soon discontinued. It seems that personal computer users liked their spreadsheets to be supersets of the original 1-2-3, and were not interested in new multidimensional replacements if these were not also fully compatible with their old, macro driven worksheets. 
Also, the concept of a small multidimensional spreadsheet, sold as a personal productivity application, clearly does not fit in with the real business world. Microsoft went this way, by adding PivotTables to Excel. Although only a small minority of Excel users take advantage of the feature, this is probably the single most widely used multidimensional analysis capability in the world, simply because there are so many users of Excel. Excel 2000 includes a more sophisticated version of PivotTables, capable of acting as both a desktop OLAP, and as a client to Microsoft Analysis Services. However, the OLAP features in Excel 2000 are inferior to those in OLAP add-ins, so there is a good opportunity for third-party options as well. By the late 1980s, Sinper had entered the multidimensional spreadsheet world, originally with a proprietary DOS spreadsheet, and then by linking to DOS 1-2-3. It entered the Windows era by turning its (then named) TM/1 product into a multidimensional back-end server for standard Excel and 1-2-3. Slightly later, Arbor did the same thing, although its new Essbase product could then only work in client/server mode, whereas Sinper’s could also work on a stand-alone PC. This approach to bringing multidimensionality to spreadsheet users has been far more popular with users. So much so, in fact, that traditional vendors of proprietary front-ends have been forced to follow suit, and products like Express, Holos, Gentia, MineShare, PowerPlay, MetaCube and WhiteLight now proudly offer highly integrated spreadsheet access to their application servers. Ironically, for its first six months, Microsoft OLAP Services was one of the few OLAP servers not to have a vendor-developed spreadsheet client, as Microsoft’s (very basic) offering only appeared in June 1999 in Excel 2000. 
However, the (then) OLAP@Work Excel add-in filled the gap, and still (under its new snappy name, BusinessQuery MD for Excel) provides much better exploitation of the server than does Microsoft’s own Excel interface. A few users demanded multidimensional applications that were much too large to be handled in multidimensional databases, and the relational OLAP tools evolved to meet this need. These presented the usual multidimensional view to users, sometimes even including a spreadsheet front-end, even though all the data was stored in an RDBMS. These have a much higher cost per user, and lower performance than specialized multidimensional tools, but they are a way of providing this popular form of analysis even to data not stored in a multidimensional structure. Other vendors expanded into what is now called desktop OLAP (even though, in Web implementations, the cubes are usually resident on the server): small cubes, generated from large databases, but downloaded to PCs for processing. These have proved very successful indeed, and the one vendor that sells both a relational query tool and a multidimensional analysis tool (Cognos, with Impromptu and PowerPlay) reports that the latter is much more popular with end-users than is the former. Now, even the relational database vendors have embraced multidimensional analysis, with Oracle, IBM, Microsoft, Informix, CA and Sybase all developing or marketing products in this area. Ironically, having largely ignored multidimensionality for so many years, it now looks like Oracle, Microsoft and IBM might be the new ‘OLAP triad’, with large OLAP market shares, based on selling multidimensional products they did not invent. So, what lessons can we draw from this 35-year history? You can contact Nigel Pendse, the author of this paper, by e-mail at NigelP@OLAPReport.com if you have any comments or observations. 
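The common thread running through all of these products, from APL to Essbase to PivotTables, is the idea of treating data as a cube of dimensions and aggregating along whichever dimensions the analyst chooses. As a purely illustrative sketch (not from the article, with invented toy data), that core idea fits in a few lines of plain Python:

```python
from collections import defaultdict

# Toy fact table: (product, region, quarter, sales)
facts = [
    ("widgets", "east", "Q1", 100),
    ("widgets", "west", "Q1", 150),
    ("gadgets", "east", "Q1", 80),
    ("gadgets", "west", "Q2", 120),
]

DIMS = {"product": 0, "region": 1, "quarter": 2}

def rollup(facts, dims):
    """Total sales grouped by the chosen dimensions -- one 'view' of the cube."""
    totals = defaultdict(int)
    for row in facts:
        key = tuple(row[DIMS[d]] for d in dims)
        totals[key] += row[3]
    return dict(totals)

# The same cube, sliced along different dimensions:
print(rollup(facts, ["region"]))             # {('east',): 180, ('west',): 270}
print(rollup(facts, ["product", "quarter"])) # widgets/Q1: 250, gadgets/Q1: 80, gadgets/Q2: 120
```

A MOLAP server essentially pre-computes and stores many such rollups; a ROLAP tool generates the equivalent GROUP BY queries against a relational store on demand.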
Nigel Pendse is principal of OLAP Solutions and has had over twenty-five years’ experience as a user, vendor and independent consultant in the areas now known as OLAP and Business Intelligence. He is a frequent speaker at conferences, and as a consultant, advises both users and vendors on OLAP product issues. He has degrees in mechanical and nuclear engineering, and was international marketing director of a well-known software firm until 1994. He is based in London, England. Nigel Pendse provided permission on behalf of OLAPReport.com to archive this article and feature it at DSSResources.COM on Saturday, 20 July 2002. This article, "The origins of today's OLAP products", was posted at DSSResources.COM on October 6, 2002. All information copyright © 2002, Business Intelligence Ltd, all rights reserved. This version was last updated on July 20, 2002. The current version should be online at http://olapreport.com/origins.htm.
At the end of March, the Department of Energy’s National Energy Research Scientific Computing (NERSC) Center accepted the first phase of its new Cray Cascade system. Named Edison, the new machine is a Cray XC30 with 664 compute nodes and 10,624 cores. Each node has two eight-core Intel “Sandy Bridge” processors running at 2.6 GHz (16 cores per node), and has 64 GB of memory. While the user environment on Edison is remarkably similar to that on Hopper, NERSC’s current main system, a number of new features and technologies are available on the Edison Phase I system, including the Cray Aries high speed interconnect, Hyper-Threading technology, the Sonexion storage system, and an external batch server. Once fully installed later in 2013, Edison will have a peak performance of more than 2 petaflop/s. The integrated storage system will have more than 6 petabytes (PB) of storage with an I/O bandwidth of 140 gigabytes (GB) per second. Bucking the trend toward GPUs and hybrid architectures, Edison will use Intel processors exclusively. To find out the reasoning behind the design and deployment of Edison and what it means to NERSC’s 4,500 users, Jon Bashor of Berkeley Lab Computing Sciences spoke with NERSC Division Director Sudip Dosanjh, NERSC Systems Department Head Jeff Broughton and Advanced Technologies Group Leader Nick Wright. Jon Bashor: First of all, how did the system come to be named Edison? Jeff Broughton: At NERSC, we name our computers after famous scientists, like Hopper for Grace Hopper and Franklin for Benjamin Franklin. In this case, we were looking for someone iconic, someone who represented American team science. Thomas Edison was the obvious choice, especially for his work in the field of energy. His work had practical applications, and while NERSC supports mainly basic research, naming the system after Edison will be a constant reminder to consider the applied connotations of the science we support. 
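The Phase I hardware figures quoted above can be sanity-checked with a little arithmetic. In this sketch, the flops-per-cycle value is an assumption on my part (Sandy Bridge cores with 4-wide double-precision AVX and separate add and multiply units), not a number from the article:

```python
# Back-of-envelope check of the Edison Phase I configuration.
nodes = 664
cores_per_node = 2 * 8          # two eight-core Sandy Bridge sockets per node
clock_hz = 2.6e9                # 2.6 GHz
flops_per_cycle = 8             # assumed per-core AVX double-precision throughput

total_cores = nodes * cores_per_node
peak_tflops = total_cores * clock_hz * flops_per_cycle / 1e12

print(total_cores)              # 10624, matching the article's core count
print(round(peak_tflops))       # ~221 teraflop/s for Phase I
```

That roughly 221 teraflop/s figure is for Phase I only; the full system, once installed, is the one specified at more than 2 petaflop/s peak.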
Bashor: When the procurement was announced in June 2012, the new system was described as delivering more than 2 petaflop/s peak performance, about twice that of Hopper. When other centers announce procurements, the new systems often provide an order-of-magnitude increase in performance. Is NERSC exiting the peak performance race? JB: Although NERSC systems in the past have been highly ranked, NERSC has never been part of the peak performance “race.” We have always focused on the sustained performance of scientific applications as the basis for our procurements. For the most part, over the past 10 years, sustained performance has been a relatively constant fraction of peak performance. But with the move to GPUs and manycore systems, it has become increasingly difficult to maintain the traditional sustained/peak ratio. Getting users to move to these new architectures requires a major effort to port their applications. As we transition to the path to exascale over the next decade, we think we can ask our 4,500 users to make the move to an entirely new class of architecture only once. We made a conscious decision that this is not the time for our users to make the transition and to let the architectures get sorted out before making that move. Our next system, now referred to as NERSC-8, will adopt one of the new energy-efficient architectures. We are also seeing in our user community a big increase in data-intensive computing, with a focus on high throughput and single-node performance, so another goal was to provide a system that could meet the needs of data-intensive applications while continuing to support conventional HPC. And we will continue to apply one of our unofficial benchmarks – the number of research articles published by our users based on calculations performed at NERSC. We’ve averaged 1,500 per year for the past five years – another example of the sustained performance we are most interested in. 
Bashor: Edison will be one of the first Cascade systems delivered by Cray. Is there any risk associated with this? Why did NERSC take this route? Nick Wright: Over the years NERSC has taken calculated risks on numerous occasions by deploying a low serial number version of a system. This goes as far back as 1978, when a Cray-1 with serial number 6 was deployed, and 1985, when NERSC installed the first Cray-2 machine. Other examples include the T3E-900 installed in 1997, which was the largest I/O system built to date, and Hopper, one of the first Cray XE6 machines, deployed in 2009. There are significant benefits for the NERSC user community in terms of increased scientific output from having access to the latest, highest-performing technology. Also, there are significant benefits to NERSC and its sponsors because utilizing the latest technology maximizes the useful lifetime of a system and the return on investment. In this case, the Cray Cascade system has both the latest Intel processors and the highly scalable Aries interconnect, both of which are already delivering exceptional performance to our users. JB: Additionally, Edison has a novel cooling scheme that will allow us to move it to the Computational Research and Theory (CRT) Center, our new facility now under construction. CRT will use “free” cooling using only outside air and cooling towers. Thanks to our Bay Area climate, we can run this way year round without the added cost of mechanical chillers. Cascade is designed to operate with the warmer air and water temperatures that we can expect during certain parts of the year. Edison has proven to be very reliable right out of the starting gate. Phase 1 of the system was installed in late 2012, and has already proven very popular with our users. Bashor: Not only is this the first Cray machine to use Intel processors, but it also incorporates the new Aries interconnect and the Cray Sonexion storage system. What will this mean for NERSC’s 4,500 users? 
Sudip Dosanjh: We believe Edison will be a very productive system for our users. It has very high memory bandwidth, memory capacity and interconnect bandwidth relative to its compute capability. Data movement is the limiting factor for a large fraction of the 600 codes that run at NERSC. Floating point units are often idle waiting for data to arrive. Many codes spend a few percent of their total runtime performing floating point operations, the rest of the time is spent accessing memory or calculating memory addresses. Edison will be a very effective platform for running the very broad range of science codes at NERSC. JB: In short, Edison will be a very scalable and robust system for the broad range of users that NERSC has. We support more than 700 projects across all the program offices of the Department of Energy’s Office of Science: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics and Nuclear Physics. The Intel processors and Aries interconnect have delivered good performance on a diverse class of algorithms. Since the programming and operating environment is essentially the same as Hopper, it’s been extremely easy for users to move codes from one system to another, even on a day-to-day basis. It just takes a simple recompile to optimize performance. NW: NERSC users are frequently performing science runs at scale, both to generate science results as well as explore the capabilities and limitations of their codes. The capabilities of the new highly scalable Aries interconnect will enable greater performance at scale and will deliver significant sustained performance. Bashor: The system also provides 64 GB of memory per node. How is this different from other systems and what’s the benefit to users? NW: We could have purchased Edison configured with 32 GB per node and spent the money saved on boosting the peak flops of the system. 
As Jeff said, though, our focus is on enabling scientific productivity, and we felt the benefits to users from extra memory far outweighed those that a slightly higher peak flop rating would have delivered. The 64 GB per node is twice the amount of memory we have per node in Hopper. This will allow those codes that are more data-intensive to process more quickly and to solve larger problems. Bashor: NERSC’s future home, the CRT center currently under construction in Berkeley, will use the Bay Area’s “natural air conditioning” to cool the machine room and improve efficiency. What’s being done in the meantime? JB: At our current facility in Oakland, we have essentially the same climate, and we are prototyping the CRT cooling infrastructure using Edison. We removed 1,100 tons of chillers and replaced them with heat exchangers. Cooling comes from evaporation in cooling towers exclusively. This gives us an opportunity to test the principles we are implementing and adjust the CRT design as needed. The expected PUE (power usage effectiveness) for Edison is approximately 1.1, which represents a two-thirds savings in the energy costs for cooling compared to similar systems cooled with mechanical chillers. Bashor: Looking farther ahead to exascale, NERSC’s 4,500 users and their 600 applications will face some real challenges to maintain their current scientific productivity of 1,500 papers each year. How will you manage the transition to exascale systems from the users’ perspective? SD: Exascale computing will impact all scales of computing from supercomputers to racks because the fundamental building blocks, processors and memory, are changing dramatically. Clock speeds are expected to remain near a gigahertz for the foreseeable future, and the concurrency in processors is expected to increase at a Moore’s Law pace. All of the codes running at NERSC will need to transition to energy-efficient architectures during the next few years. 
If we can’t make this leap, users will be stuck at today’s performance levels and we will miss many opportunities for scientific discovery. We will work with our users to make this transition as smooth as possible. We have started an application readiness effort to begin transitioning our codes in the NERSC-8 time frame. JB: As I mentioned, our plan is to start transitioning users to exascale programming models – which will involve increased parallelism and non-transparent memory hierarchies – with our procurement of the NERSC-8 system, which will arrive in late 2015. We expect this system to be some variety of a GPU or manycore architecture. Of the 600 applications running on NERSC systems, we know that just 10 of them account for about 50 percent of the time used on the machines. We will make a concerted effort to port those codes to NERSC-8. There will still be a number of applications that will be difficult or not cost-effective to immediately transition to the new machine, so they will stay on Edison and be given more time to transition. We see Edison as a kind of safety net for users of those applications. Bashor: Last question. NERSC recently completed its 10-year strategic plan. Can you give us a short summary of where the center expects to be in 10 years? SD: The aggregate computing needs of Office of Science research teams at NERSC will be well into the exascale regime by the end of the decade. Science teams need to run simulations at hundreds of petaflop/s, and they need to run thousands to millions of petascale simulations. We will deploy pre-exascale systems in 2016 and 2019. We anticipate deploying our first exascale system, NERSC-10, in 2022. We are also seeing the growing importance of big data and many recent scientific breakthroughs enabled by NERSC involve large data sets. For the past four years, our users have imported more data to NERSC to analyze than they have exported. Many months, we import more than a petabyte of data! 
We will begin enhancing our data capabilities starting in 2014, and we will deploy data systems, storage, advanced networking, and enhanced user services so that current users and DOE experimental facilities can move and process exabytes of data early in the next decade. While we have talked a lot about systems in this interview, I want to close by talking about the most critical component of NERSC’s very successful 39-year history – our staff. We run a very effective operation, operating Edison, Hopper, three large clusters, a 20-petabyte HPSS data archive and more, all stitched together seamlessly by the NERSC Global Filesystem. On the services side, we provide extensive technical support to our 4,500 users as well as the intellectual infrastructure needed to advance scientific discovery across DOE’s research mission areas. Our users are also very complimentary of our staff – consistently giving us excellent scores in our annual survey. Achieving this requires both extensive expertise and true dedication on the part of our staff. Although I only joined NERSC last fall, I am very proud to be part of such an accomplished organization. About NERSC and Berkeley Lab The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 4,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. 
For more information about computing sciences at Berkeley Lab, please visit www.lbl.gov/cs.
President Obama faces a dilemma in deciding whether to prohibit the National Security Agency from tinkering with encryption as one way to collect intelligence data from adversaries who threaten to harm America. The origins of the NSA date back to World War I, when the Army created a unit to decipher enemy code. In the 21st century, the NSA still breaks secret codes used by our adversaries to identify threats, and that could mean tampering with encryption. Don't we want the NSA to break encryption to find out how our enemies seek to harm us? What's different today than nearly 100 years ago is that our enemies don't necessarily write the code that protects their secrets. But at the same time, as a panel of experts last month told President Obama: "Encryption is an essential basis for trust on the Internet; without such trust, valuable communications would not be possible. For the entire system to work, encryption software itself must be trustworthy. Users of encryption must be confident, and justifiably confident, that only those people they designate can decrypt their data." Here's how Eugene Spafford, executive director of the Center for Education and Research in Information Assurance and Security at Purdue University, explains the predicament the White House faces in deciding how far to limit the NSA's ability to exploit encryption: The good guys and bad guys use the same technologies and Internet sites protected by commercial encryption.

Balancing Safety with Rights

"The fundamental hard problem behind all of this is how do we show respect for the privacy rights of all the people and organizations using things appropriately while at the same time giving ourselves reasonable opportunities to know about and possibly counter moves by bad actors who are using the same things?" Spafford asks.
"That's a tough, policy-type of issue that's been debated back and forth for as long as there have been intelligence agencies and as long as there are parties with different goals." In a speech Obama delivered Jan. 17 revealing new limits on the way intelligence agencies collect telephone metadata (see Obama Orders Review on Use of Big Data), the president did not tackle most of the 46 recommendations submitted last month by the panel of experts, including one to prevent the NSA from subverting initiatives to create secure encryption to safeguard confidential communications and data (see Panel Recommends Limits on NSA Surveillance). Though the president didn't mention encryption in his speech, an administration spokeswoman - Caitlin Hayden - said the president has asked Cybersecurity Coordinator Michael Daniel and the Office of Science and Technology Policy to jointly lead a study on encryption safeguards and report the results within 60 days. "We support the recommendation's aim to protect the integrity of standards for commercial encryption," Hayden said after the president's speech. Where did the NSA possibly go too far? NSA's critics, including noted cryptographer Bruce Schneier, suggest that a cryptographic random-number standard promoted in guidance from the National Institute of Standards and Technology might contain a backdoor to allow the NSA to spy on organizations employing the random bit generator. NIST has withdrawn the standard pending further review (see NIST Review Won't Disrupt Work with NSA). In December, reports surfaced that security vendor RSA received $10 million to set an NSA formula as the default method for number generation in RSA's BSAFE software. But RSA denied allowing NSA to provide a backdoor to compromise its security software (see NSA Reports Sullying Vendors' Standings?).
Allan Friedman, co-author of the just-published book "Cybersecurity and Cyberwar," says it's the NSA's job to break encryption for the intelligence community, but the agency pushed organizational boundaries "a little bit further than most people are comfortable with." Friedman says no one at the NSA took the responsibility for balancing its actions on encryption with the risk of portraying the United States government as insensitive to the economic impact of those actions. "If someone [else] had been in the room when some of these programs had been proposed, at least the NSA leadership would have been forced to deal with the observations of how it is injuring the other American interests and would have had to justify why it was in America's interests [to do what it did]," he says. Purdue's Spafford says there are times when it could be appropriate to tamper with IT products, such as products known to be used only by our adversaries. But he sees tampering as inappropriate if it would interfere with standards, such as those published by NIST. "We have to decide on those things that are so important - so fundamental, foundational to international communication, trade and trust - that we don't mess with them," Spafford says. Yet, at the same time, we need to allow the NSA to use its know-how to crack the secret codes our enemies use to keep their nefarious plots hidden, without jeopardizing individuals' privacy, civil liberties and the economic vitality the Internet offers. I don't envy Daniel and his White House colleagues who are working on coming up with a balanced plan, but no one said their jobs would be easy. What would you do?
NASA Discovers Disaster Simulation Falls Short
By David F. Carr | Posted 2005-09-01

Insulating foam that won't stay put has grounded the space shuttle. Now NASA has to rethink how it uses computer models to predict safety hazards.

When the space shuttle Discovery came back to Earth on Aug. 9, the crew owed their safe return partly to luck. After Discovery's July 26 launch, NASA was forced to admit that two years of work had not solved the problem of how to keep insulating foam from flaking off the shuttle's external tank, the cause of the Columbia disaster in 2003. Now the shuttle is grounded. NASA predicted the tank would not shed anything larger than 0.03 pound, compared with the 1.67-pound piece of foam believed to have punched a hole in the heat shield on Columbia's wing. But one of the four large chunks observed during Discovery's launch was about 0.9 pound, 30 times the predicted maximum and big enough to be deadly. As in 111 prior successful shuttle missions, debris either missed Discovery or hit a less vulnerable area. Computer models played a role in deciding the Discovery was safe to fly. But models are only as good as the underlying assumptions. These assumptions can change, such as when the predictions of a computer model of foam impacts on a wing panel are verified with ballistic testing, firing chunks of foam from high-speed air guns. Still, human judgment ultimately played a large role in weighing the results of the simulations and tests. So how did the assessment wind up being too optimistic? At a press conference after Discovery's launch, former astronaut Richard Covey, who co-chaired the Return to Flight Task Group, an advisory panel that reviewed NASA's preparations for the mission, dismissed the notion that there was too much reliance on computer models. Simulations are the logical option when full-scale testing is impractical, such as trying to fit the entire external tank into a wind tunnel, Covey says.
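The gap between the predicted and observed debris masses can be put in energy terms: at any fixed relative velocity, impact kinetic energy scales linearly with mass, so a chunk 30 times heavier carries 30 times the energy. A minimal sketch; the velocity below is an assumed round number for illustration, not a NASA figure:

```python
# Illustrative only: compare impact energy of the 0.03 lb predicted maximum
# against the ~0.9 lb chunk observed during Discovery's launch.
LB_TO_KG = 0.453592

def impact_energy_joules(mass_lb, rel_velocity_ms):
    """Kinetic energy E = 0.5 * m * v^2 of a debris chunk."""
    return 0.5 * mass_lb * LB_TO_KG * rel_velocity_ms ** 2

V = 200.0  # assumed relative velocity in m/s, purely illustrative
predicted_max = impact_energy_joules(0.03, V)
observed = impact_energy_joules(0.9, V)
print(f"ratio: {observed / predicted_max:.0f}x")  # 30x, independent of V
```

Because the ratio depends only on the masses, the 30x figure in the article holds regardless of what the actual strike velocity was.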
However, seven members of the task group added a dissenting opinion to its final report, specifically critiquing NASA's use of computer models "without the attention to the interdependencies between the models necessary for a complete understanding of the end-to-end result." To draw the line between tests and simulations, NASA had to decide how many millions of dollars of shuttle components it could deliberately ruin. In ballistic tests, the agency managed to stretch dollars by firing several shots at each wing panel, says Darwin Moon, manager of the orbiter stress group at Boeing. Computer models then extrapolated the effect of other impacts at various speeds and angles. Now, the detailed imagery that showed where and when foam shed from Discovery's tank can help improve the next round of models. "We're still learning how foam sheds," Moon says. Already, one fix has been suggested by a computer model of the foam created by AlphaStar Corp., working with Boeing and Lockheed Martin. The fix: spraying the foam over a fishnet structure to stop it from shedding, "or if it sheds, it sheds in smaller pieces," says AlphaStar CEO Frank Abdi. AlphaStar has worked on several other models related to the shuttle program, including a step-by-step thermal simulation of how the heat of reentry burned through Columbia's wing. After that accident, an investigation board skewered NASA for having grown complacent. Previous shuttles survived foam damage to heat-resistant tiles, but Columbia suffered a puncture to one of the reinforced carbon heat-shield panels used on the shuttle's nose and wings to withstand extreme heat approaching 3,000 degrees Fahrenheit. The board said mission planners were analyzing the danger to shuttle launches based on statistics about the damage caused by foam strikes on the tiles, not the panels.
Today, most shuttle impact analysis is done with Livermore Software Technology Corp.'s LS-Dyna software and "material models" of the behavior of foam, tile, heat shields and ice. Kelly Carney, a NASA aerospace engineer, says modeling ice, water and vapor took eight months and rework of the LS-Dyna software. The payoff: The model showed ice posed a fatal risk, prompting NASA to use heated shuttle tanks. And although the foam debris recorded on Discovery was an unpleasant surprise, "the response to it was a lot different this time," Carney says. His hope: The data recorded during Discovery's flight will allow NASA to build better models and correct problems for future flights.
In the 10 Kentucky counties surrounding the Blue Grass Army Depot -- where chemical weapons, such as mustard gas, VX nerve agent and GB (sarin), dating back to the 1940s are stored until their destruction -- officials responsible for emergency preparedness once relied on enlarged and laminated county highway maps to prepare for a possible release incident. Using a phased approach, the Kentucky Chemical Stockpile Emergency Preparedness Program (KY CSEPP) affordably implemented a GIS that allows officers to quickly and easily obtain information such as evacuation routes and nearby medical facilities. In late 2001, with an investment of less than $40,000 from FEMA, officials from the KY CSEPP, part of the Kentucky Division of Emergency Management, started developing computerized maps of the almost 3,000 square-mile area around the depot. Like many small- to moderate-sized communities, the KY CSEPP had little if any digital data and limited funding, so it chose to build its GIS in four phases. In the first phase, the KY CSEPP chose PlanGraphics, a geospatial consulting firm based in Frankfort, Ky., to build the GIS, and then acquired computer and printing hardware, and ESRI's ArcView GIS software. The KY CSEPP then asked PlanGraphics to assess data availability and gather necessary geographic data from various sources to develop base maps for each county. The individual counties' thematic, one-meter aerial photos, 10-meter SPOT satellite imagery and topographic base maps were assembled into a consistent regional base. Fortunately the Kentucky Office of Geographic Information already had most of the raster data PlanGraphics needed to prepare imagery and topographic base maps. The last step of phase one was a comprehensive survey conducted by PlanGraphics to determine if digital data on important resources like schools, hospitals, shelters and public safety assets existed in the 10 individual counties. 
To no one's surprise, the data was virtually nonexistent, especially in the region's predominantly rural counties.

Pooling the Data

Data is always the most important -- and almost always the most expensive -- component of any GIS development project, and the KY CSEPP project didn't disprove those time-tested facts. During the second phase, the KY CSEPP used PlanGraphics to design a detailed database for 65 non-base-map layers in 10 categories including evacuation origins, such as schools; public safety, medical and evacuation resources; utility and transportation networks; and political and administrative boundaries. If an incident occurs, officials must decide to shelter residents in place or evacuate a potentially large number of people to safe shelter sites. Therefore, the KY CSEPP needs many detailed data sets. As PlanGraphics began building the KY CSEPP's GIS, it uncovered several novel data sources. The Kentucky Board of Medical Licensure keeps records of all licensed physicians and physician assistants, which were obtained and used to populate attribute databases. They also helped locate approximately 2,000 MDs and physician assistants on the GIS maps. Similar data sets were found for pharmacies, veterinarians, daycare centers, assisted living and long-term care centers, and group foster-care facilities, among others. A lot of necessary information had to be converted from hard copy. Because the state didn't have a master address database for its 120 counties, the KY CSEPP purchased addressing software from Geographic Data Technology Inc. (GDT) to geo-code locations with situs addresses. In addition, county CSEPP and local public safety staff identified known locations on one-meter digital ortho photography for digitizing, and provided GPS coordinates for a number of other features, such as emergency landing zones. Local public safety and emergency management personnel were incredibly knowledgeable and helpful in building the GIS database.
Local participation in the development effort is essential to build ownership and trust among individuals who will provide future data updates and use the GIS locally. After collecting and processing the data, and populating the GIS database, PlanGraphics developed an ArcView Project -- a file for organizing work -- that gave users an interface designed specifically for non-GIS professionals. This GIS had to be user-friendly. The two opening application screens let users pick the type of data and counties to display by answering several basic questions and clicking a few buttons.

Expanding the GIS

As we gained experience and became comfortable with the GIS, we wanted to use it to do more, including making it available to others. For the third phase, PlanGraphics added customized functionality to the ArcView Project and developed a secure Internet site for access to the GIS from the field, the Blue Grass Army Depot, 10 county emergency operations centers, FEMA Region IV in Atlanta, and the U.S. Army's CSEPP Office in Washington, D.C. PlanGraphics was asked to augment the ArcView Project to enable the KY CSEPP to more quickly conduct analyses typically performed for planning and used during incident response. First, a customized search capability was developed to search the database by facility name, type or location within a boundary feature, such as immediate response zones, cities, counties or census tracts. Second, several different buffering routines that find features, such as highways and rivers, by proximity were programmed. Finally the ability to search for an XY location, buffer the coordinate position, and find and identify features within the selected area was added. The GIS was so popular the KY CSEPP was producing and sending out dozens of maps each week. Sometimes they even went to non-CSEPP agencies that just wanted access to the map information.
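The buffer-a-coordinate-and-identify-features routine described above boils down to a radius query against located facilities. A toy sketch of the pattern; the facility names and planar coordinates are made up for illustration, and a real deployment would use the ArcView/ArcIMS spatial queries and projected coordinates mentioned in the article:

```python
import math

# Hypothetical facility records with made-up planar coordinates (meters).
facilities = [
    {"name": "County Hospital", "type": "medical", "x": 1500.0, "y": 2300.0},
    {"name": "Madison Elementary", "type": "school", "x": 400.0, "y": 900.0},
    {"name": "Fire Station 3", "type": "public_safety", "x": 1450.0, "y": 2100.0},
]

def buffer_search(x, y, radius, feature_type=None):
    """Buffer the coordinate (x, y) by `radius` and return the names of
    facilities that fall inside, optionally filtered by feature type."""
    hits = []
    for f in facilities:
        if feature_type and f["type"] != feature_type:
            continue
        if math.hypot(f["x"] - x, f["y"] - y) <= radius:
            hits.append(f["name"])
    return hits

print(buffer_search(1400.0, 2200.0, 250.0))  # features within 250 m of the point
```

The same shape of query supports the article's other cases: filtering by type gives the search-by-facility-type behavior, and replacing the circular buffer with a polygon test gives the search-within-boundary behavior.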
The KY CSEPP intended to install the GIS mapping project in all 10 counties from the start, as well as in other state and federal agency locations, so due to cost, ESRI's ArcIMS solution was chosen. This solution would make the data and maps available over the Internet. Using ArcIMS also kept the GIS database synchronized. After purchasing and installing a dedicated server and the ArcIMS software for the KY CSEPP, PlanGraphics converted the ArcView Project to an Internet mapping application that provides all the functionality that was previously developed. The secure application is being placed online and tested by all users before full rollout. PlanGraphics also is preparing a database maintenance strategy for keeping the GIS current. The KY CSEPP has moved well beyond black-and-white highway maps, but there is still much more to accomplish. The next step will be to integrate the GIS with functions such as chemical sensor monitoring, plume modeling, evacuation routing, alert call notification and traffic video surveillance. PlanGraphics developed an interoperability architecture called STEPs (Spatial Templates for Emergency Preparedness), and we are jointly pursuing funding for it. All users can also just as easily apply the GIS to other types of emergency preparedness and response hazards. Even if the KY CSEPP mapping project were to progress no further, there is dramatic improvement in planning for and responding to an incident at the depot, no matter how unlikely. Bill Hilling is the planning project supervisor for the Kentucky Division of Emergency Management's Chemical Stockpile Emergency Preparedness Program.
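The plume modeling named as a next step is classically done with the textbook Gaussian plume equation. The sketch below implements that standard formula with the usual ground-reflection term; it is a generic illustration, not the CSEPP or STEPs implementation. Real dispersion models derive the sigma parameters from downwind distance and atmospheric stability class, whereas here they are passed in directly:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at crosswind offset y and
    height z, for a continuous point source of strength Q (g/s), mean wind
    speed u (m/s), and effective release height H (m). sigma_y and sigma_z
    are the crosswind/vertical dispersion parameters (m)."""
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level concentration on the plume centerline vs. 100 m off-axis (toy numbers)
c_axis = gaussian_plume(Q=10.0, u=5.0, y=0.0, z=0.0, H=20.0, sigma_y=80.0, sigma_z=40.0)
c_off = gaussian_plume(Q=10.0, u=5.0, y=100.0, z=0.0, H=20.0, sigma_y=80.0, sigma_z=40.0)
print(c_axis > c_off)  # True: concentration falls off away from the centerline
```

Feeding such a model with live wind data is exactly why the article pairs plume modeling with sensor monitoring and weather feeds.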
Importance of Log Analysis

All network systems and devices like Windows/Linux desktops & servers, routers, switches, firewalls, proxy servers, VPNs, IDSs and other network resources generate logs by the second. And these logs contain information on all the system, device, and user activities that took place within these network infrastructures. Log files are important forensic tools for investigating an organization's security posture. Analysis of these log files provides a plethora of information on user-level activities like logon success or failure, object access, website visits; system & device level activities like file read, write or delete, host session status, account management, network bandwidth consumed, protocol & traffic distribution; and network security activities like identifying virus or attack signatures and network anomalies.

What is Security Information Event Management?

Security Information Event Management (SIEM) refers to the concept of collecting, archiving, analyzing, correlating, and reporting on information obtained from all the heterogeneous network resources. SIEM technology is an intersection of two closely related technologies, namely Security Information Management (SIM) and Security Event Management (SEM). According to Wikipedia, "Security Information Management (SIM), is the industry-specific term in computer security referring to the collection of data (typically log files; e.g. eventlogs) into a central repository for trend analysis. This is a basic introductory mandate in any computer security system. The terminology can easily be mistaken as a reference to the whole aspect of protecting one's infrastructure from any computer security breach. Due to historic reasons of terminology evolution; SIM refers to just the part of information security which consists of discovery of 'bad behavior' by using data collection techniques..."
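The kind of log mining described above, such as counting failed logons per host, can be sketched at toy scale. The log lines, hostnames, and regular expression below are made-up illustrations of a syslog-style format, not EventLog Analyzer's actual parser:

```python
import re
from collections import Counter

# Made-up syslog-style lines; real formats vary by device and OS.
logs = """\
Jan 12 08:01:10 host-a sshd[411]: Failed password for root from 10.0.0.9
Jan 12 08:01:12 host-a sshd[411]: Failed password for root from 10.0.0.9
Jan 12 08:02:30 host-b sshd[712]: Accepted password for alice from 10.0.0.7
Jan 12 08:03:44 host-a sshd[413]: Failed password for admin from 10.0.0.9
""".splitlines()

# Capture the host, user, and source address from failed-logon lines.
FAILED = re.compile(r"^\S+ \d+ \S+ (\S+) sshd\[\d+\]: Failed password for (\S+) from (\S+)")

failures_per_host = Counter()
for line in logs:
    m = FAILED.match(line)
    if m:
        failures_per_host[m.group(1)] += 1

print(failures_per_host.most_common())
```

Scaling this idea up (central collection, many formats, correlation across sources, thresholds that raise alerts) is essentially what the SIM products described below automate.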
So, to a large extent SIM is concerned with network systems, like Windows/Linux systems, and applications. As a technology, SIM is used by system administrators for internal network threat management and regulatory compliance audits. SEM, on the other hand, is concerned with the "real time" activities of network perimeter devices, like firewalls, proxy servers, VPNs, IDSs, etc. Security administrators use SEM technology for improving the incident response capabilities of the perimeter/edge devices through network behavioral analysis. The target audience for SEM technology is NOC administrators, Managed Security Service Providers (MSSP), and of course the Enterprise Security Administrators (ESA).

Introducing ManageEngine® EventLog Analyzer for SIM

ManageEngine® EventLog Analyzer (www.eventloganalyzer.com) is a web-based, agent-less syslog and Windows event log management solution for security information management that collects, analyses, archives, and reports on event logs from distributed Windows hosts and syslogs from UNIX hosts, routers & switches, and other syslog devices. EventLog Analyzer is used for internal threat management & regulatory compliance, like Sarbanes-Oxley, HIPAA, GLBA, PCI, and others.

EventLog Analyzer is used to:
- Provide a centralized repository for all the collected resource logs
- Mine through the collected system logs and generate pre-defined and custom reports
- Zero in on applications causing performance and security problems
- Determine unauthorized access attempts and other policy violations
- Identify trends in user activity, server activity, peak usage times, etc.
- Obtain useful event, trend, compliance and user activity reports
- Understand security risks in your network
- Monitor critical servers exclusively and set alerts
- Understand server and network activity in real-time
- Alert on hosts generating large amounts of log events indicating potential virus activity
- Schedule custom reports to be generated and delivered to your inbox
- Generate reports for regulatory compliance audits
- Identify applications and system hardware that may not be functioning optimally
- Centralized archival of all collected logs for meeting regulatory compliance requirements
- And more.

Introducing ManageEngine® Firewall Analyzer for SEM

ManageEngine® Firewall Analyzer (www.fwanalyzer.com) is a firewall log analysis tool for security event management that collects, analyses, and reports on enterprise-wide firewalls, proxy servers, and VPNs to measure bandwidth usage, manage user/employee internet access, audit traffic, detect network security holes, and improve incident response.

Firewall Analyzer helps you to:
- Manage heterogeneous perimeter devices
- Provide a centralized repository for all the collected device logs
- Mine through the collected device logs and generate pre-defined and custom reports
- Analyze incoming and outgoing traffic/bandwidth patterns
- Identify top Web users, and top websites accessed
- Project trends in user activity and network activity
- Identify potential virus attacks and hack attempts
- Determine bandwidth utilization by host, protocol, and destination
- Detect anomalies through network behavioral analysis
- Analyze efficiency of firewall rules
- Determine the complete security posture of the enterprise
- Provide user specific firewall views to manage authorized perimeter device
- Generate instant reports for bandwidth usage, traffic statistics, user activities, and more
- Manage remote/customer premises firewalls and generate customized reports
- And more.

About ZOHO Corp.

Founded in 1996, ZOHO Corp.
is a software company with a broad portfolio of elegantly designed, affordable products and web services. ZOHO Corp. offerings span a spectrum of vertical areas, including network & systems management (ManageEngine.com), security (SecureCentral.com), collaboration, CRM & office productivity applications (Zoho.com), database search and migration (SQLOne.com), and test automation tools (QEngine.com) ZOHO Corp. and its global network of partners provide solutions to multiple market segments including: OEMs, global enterprises, government, education, small and medium-sized businesses and to a growing base of management service providers. www.manageengine.com, www.zoho.com
In one of the older posts, I talked about how the Prefetch file names are created. Today I was looking at program execution from network shares, i.e. originating from UNC paths, and realized that I have not included these in the original article. To test what happens, I launched WinXP under windbg, put a breakpoint on the hashing function and then executed a test file from a shared VM folder – the screenshot shows the mapping between the drive and the UNC path where the executable is placed:

Once executed, windbg popped up and I could trace the full path to a file in a Memory window:
- z:\test.exe is executed
- it is mapped to its UNC path \\vmware-host\Shared Folders\X\test.exe
- which is then prepended with a device name responsible for the HGFS file system (used internally by VM) to form a final string used in the hash calculation - \DEVICE\HGFS\VMWARE-HOST\SHARED FOLDERS\X\TEST.EXE

Now, that was the case with a 'fake' share created by the VM software. What about a real share? Following the same procedure:
- I mapped a host \\H\C$ drive as N: inside the guest system with 'net use'
- and then executed N:\test.exe

The result shown below is not very surprising either, as now the path refers to LANMANREDIRECTOR:

And in case you are curious what happens to drives created with subst… For drives mapped locally using 'subst drive: path' e.g. subst g: . there is no difference, as the device will refer to HARDDISKVOLUME### (where ### is the hard drive's number) – I don't include a screenshot here as I hope this example doesn't need one. However, using subst in a slightly different way, i.e. referring to the target path via localhost's IP, e.g.
subst g: \\127.0.0.1\c$ will make the Prefetch file name be created using the following path:

As you can see, each of the test files created a different hash. In other words, there are plenty of ways to abuse the file naming creation of the prefetch file and it's quite hard to write a universal hash calculator to cover all these cases – it really depends on the environment and there are lots of tricks to confuse the system + I bet there are a few more that wait to be uncovered.
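For reference, one widely circulated reconstruction of the XP-era prefetch path hash (documented publicly, e.g. in the libscca format notes) can be applied to device paths like the ones traced above. Treat the constants as that reconstruction's claim rather than something verified here, and note that it is the full device path, not the drive-letter path, that gets hashed, which is exactly why the three tests above produced different hashes:

```python
def xp_prefetch_hash(device_path):
    """Reconstruction of the Windows XP prefetch path hash, computed over
    the full uppercase device path (as per the libscca format notes)."""
    h = 0
    for ch in device_path.upper():
        h = (h * 37 + ord(ch)) & 0xFFFFFFFF
    h = (h * 314159269) & 0xFFFFFFFF
    if h & 0x80000000:          # fold negative 32-bit values
        h = 0x100000000 - h
    return h % 1000000007

# Same executable, three different device paths, three different .pf names
for p in (r"\DEVICE\HGFS\VMWARE-HOST\SHARED FOLDERS\X\TEST.EXE",
          r"\DEVICE\LANMANREDIRECTOR\H\C$\TEST.EXE",
          r"\DEVICE\HARDDISKVOLUME1\TEST.EXE"):
    print(f"TEST.EXE-{xp_prefetch_hash(p):08X}.pf")
```

The hash is deterministic and case-insensitive (the path is uppercased first), so any trick that changes the resolved device path, HGFS, LANMANREDIRECTOR, a subst alias via 127.0.0.1, changes the resulting .pf name.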
By John A. Davis, VMware Certified Instructor (VCI)
VCP5-DV, VCP5-DT, VCAP5-DCA, VCAP5-DCD, VCP-vCloud, VCAP-CID

One of the most important steps in mastering a new technology is learning the associated terminology or vocabulary. In the Information Technology (IT) field, this can be a very challenging step, as much of the terminology is often used inconsistently. This article defines many of the most commonly used terms in the virtualization vocabulary. These are considered core, high-level terms with straightforward, commonly accepted definitions.

Virtual Machine (VM)
A set of virtual hardware devices, including virtual CPU, virtual RAM, virtual I/O devices, and other virtual hardware devices. It resembles and behaves like a traditional, physical server and runs a traditional operating system (OS), such as Windows or Linux.

Many products and technologies today provide a platform on which VMs can be built and run. Although these technologies may have many fundamental differences, they tend to share these characteristics:
- Many VMs can run on each physical host concurrently.
- VMs running on the same host are isolated from one another.
- The OS installed on the VM is unaware that it is running in a VM.
- Administrators and users in one VM cannot access the underlying host OS or the guest OS of other VMs running on the same host.

Virtual Server
A VM running a server OS such as a Windows Server or a Red Hat Enterprise Linux Server. A virtual server typically runs one server-based application.

Virtual Desktop
A VM that is running a desktop OS such as Windows 7 or Red Hat Enterprise Desktop. A virtual desktop typically has one direct, concurrent user.

Template
An object that represents the "gold standard" of a particular virtual server build or virtual desktop build, typically including a well-configured OS and applications. Administrators can quickly deploy new VMs by automatically copying the template to create the new VM.

VM Guest OS
The OS that runs in a VM.
Virtual Hardware Device (Virtual Device)
A software component that resembles and behaves like a specific hardware device. The guest OS and software applications in the VM behave as though the virtual hardware device is actually a physical hardware device. A VM is a set of virtual hardware devices that correspond to the set of devices found in traditional, physical servers, such as virtual CPUs, virtual RAM, virtual storage adapters, and virtual Ethernet adapters.

Virtual Network Interface Card (vNIC)
Software that resembles and behaves like a traditional Ethernet adapter. It has a MAC address, and it receives and sends Ethernet packets.

Virtual SCSI Adapter
Software that resembles and behaves like a traditional SCSI adapter. It can generate SCSI commands and attach multiple virtual disks.

Virtual CPU (vCPU)
Software that resembles and behaves like a traditional, physical CPU. Depending on the underlying technology, vCPUs could be software-emulated or software-modified:
- Software Emulated - A process that resembles and behaves like a specific model of a physical CPU, which, in some cases, could be different than the model of the underlying physical CPU in the host hardware.
- Software Modified - A process that provides a filtered, indirect connection to the underlying host CPU. Typically, the vCPU provides subsets of the instruction set and feature set that are available on the host CPU. The vCPU traps and modifies privileged commands but sends other commands directly to the hardware.

Virtual Disk
Resembles and behaves like a physical disk. It may be a file, a set of files, software, or some other entity, but to a VM, it appears to be a SCSI disk. For example, in Microsoft Hyper-V, virtual disks are referred to as VHD files with the file extension vhd.

Virtual Ethernet Switch (vSwitch)
Software that resembles and behaves like a physical Ethernet switch. It allows vNICs from multiple VMs to connect to virtual ports.
It allows physical NICs to connect to virtual ports and serve as uplinks to the physical network. A vSwitch maintains a MAC address table and routes traffic to specific ports, rather than repeating traffic to all ports. It may include other features commonly found in physical Ethernet switches, such as VLANs.

Virtual Network
A network provided by virtual switches. It may be an extension of a traditional network that is built on physical switches and VLANs, or it may be an isolated network formed strictly from virtual switches.

Virtual Infrastructure
A collection of VMs, virtual networks and storage, and other virtual items that can deploy and run business applications, as an alternative to running applications directly on physical infrastructure. It allows IT personnel to install software applications in traditional OSs, such as Windows and Linux, without needing to know details of the underlying physical infrastructure. The OSs and applications run in VMs, in virtual networks, and on virtual storage.

Virtual Desktop Infrastructure (VDI)
A set of virtual desktops running on virtual infrastructure. VDI often involves detailed optimization at the physical infrastructure, virtual infrastructure, desktop OS, and application levels to allow close to native performance. VDI management software automatically brokers and connects users to their virtual desktops. VDI management software also automatically provisions virtual desktop pools from VM templates.

Cloud
A complex system that provides a set of services to consumers, without requiring the consumer to understand any of the underlying complexities of the system. Although simple, this is a highly accepted definition of the term, even when used to describe non-IT clouds. For example, some people consider electricity, water, and cable television services to be provided by clouds. Clouds provide some IT-based service, often utilizing virtual infrastructure.
Businesses can use privately owned clouds, externally owned clouds, or both external and private clouds (hybrid clouds). Types of IT-based clouds include:
- Infrastructure as a Service (IaaS) - IaaS provides virtual infrastructure as a service where consumers can easily implement and utilize VMs without needing to understand, manage, or own the underlying physical infrastructure. Examples of public IaaS providers are Hosting.com (http://hosting.com) and RackSpace (http://rackspace.com).
- Software as a Service (SaaS) - SaaS provides software applications as a service where consumers can easily use applications without needing to understand, manage, or own the underlying server OSs, software applications, databases, or infrastructure. Examples of public SaaS are Google Apps (http://www.google.com/apps) and Salesforce CRM (http://www.salesforce.com/crm/).
- Platform as a Service (PaaS) - PaaS provides a software development platform as a service where consumers can easily build applications on a provided platform without any need to understand, manage, or own the underlying infrastructure. It allows developers to create applications that are easily portable. Examples of public PaaS are Microsoft Azure (http://www.windowsazure.com/en-us/) and Force.com (http://force.com).

Hypervisor
A thin OS designed solely to provide virtualization. It drives physical hardware, executes VMs, and dynamically shares the underlying hardware with the associated virtual hardware. It is not intended to serve directly as a general-purpose OS; instead, it provides the platform on which VMs can run.

Paravirtualized Software
A software component that is aware that it is running in a VM. For example, a paravirtualized virtual device driver runs in a VM and communicates with the underlying host OS. Typically, a paravirtualized driver is optimized to share queues, buffers, or other data items with the underlying host OS to improve throughput and reduce latency.
As another example, Citrix XenServer runs paravirtualized OSs in VMs where the guest OS is modified to work very efficiently with the underlying hypervisor.

P2V (Physical to Virtual)
The migration of a traditional server, such as a specific Windows 2008 server, from physical server hardware to a VM.

Clone
Typically refers to the action of copying one VM or VM template to create a new VM. During a clone operation, the VM files are typically copied, renamed, and modified to customize the new VM.

Snapshot
A point-in-time capture of the state of a VM. Snapshots allow the user to revert the VM to a previously captured state. A primary use is to undo changes that were made in a VM but are no longer wanted.

VM Migration
The movement of VMs from one resource to another, such as from host to host or datastore to datastore.

Live VM Migration
Live VM migrations occur while the VM is running.

Cold VM Migration
Cold VM migrations occur while the VM is shut down.

Resource Contention
A state where a VM is competing for a scarce resource with other VMs or overhead processes. For example, if the memory capacity of a host is currently fully utilized, and some VMs attempt to demand more memory, then memory contention occurs and some VMs may begin to swap to disk.

Highly Available (HA)
A system or component that has some automatic protection in case of disruption. The protection may allow a small amount of unplanned downtime, but it will automatically correct the issue within a predetermined time interval.

VM High Availability (VM HA)
Ensures that a VM is automatically made available even if the host on which it runs fails. VM HA may require an automatic reboot of the VM on another host.

Fault Tolerant (FT)
A system or component that has automatic, stateful protection in case of failure. For example, some software applications are designed to replicate state to multiple servers and databases to provide a fault-tolerant application. The failure of a server does not result in any loss of application state or any disruption to the end user.
Fault-Tolerant VM
A VM that continues to run statefully even if host hardware fails. This may be achieved by synchronizing the execution and state of multiple VMs running on multiple hosts.

Over-Commitment
A measurement, usually in percentages, by which the amount of provisioned virtual hardware is greater than the actual physical resources. For example, if a set of thin-provisioned virtual disks is configured for a total of 3 TB, but the datastore where they reside is only 2 TB, then the over-commitment is 150 percent.

Over-Committed
Refers to a state where the actual, attempted resource usage exceeds the capacity of the actual hardware resources. For example, if a set of VMs stored in the same datastore generate more I/O than the underlying LUN can accommodate, then the datastore is over-committed.

Local VMs
VMs executed directly on a client system, such as the user's PC. Some virtualization products, such as VMware Workstation and Microsoft Virtual PC, are designed solely for running local VMs. Some VDI products allow virtual desktops to run remotely in the datacenter, but also allow the user to check out and execute the virtual desktop locally on client systems.

Thin Client
A client device that has a very lean implementation of Windows or Linux and is mainly intended to allow the user to connect to a remote virtual desktop rather than to run applications natively.

Zero Client
A client device that is even leaner than a thin client. Typically, a zero client runs an embedded, proprietary OS and has no local disk. It is used to connect to remote virtual desktops.

Virtual Machine Manager (VMM)
Also called a virtual machine monitor, a process that controls the execution of a VM and brokers its use of virtual hardware with the underlying host. It notifies the host when the VM needs to access the physical resources.

CPU Hardware-Assisted Virtualization
These features are commonly provided on modern CPUs, allowing the host to offload some of the virtualization work to the CPU to improve performance.
- Intel VT and AMD-V - These features provide hardware assist for the virtual CPUs by allowing the VMM to execute on the CPU at a level just below Ring 0, making its execution more efficient.
- Intel EPT and AMD RVI - These features provide hardware assist for the virtual CPUs by allowing the translation of guest OS virtual memory pages to be cached on the CPU. These features improve the translation time and minimize the frequency with which the VM's guest OS must perform translations.

Virtualized Application
A packaged software application that runs in a virtualized runtime environment, where the application perceives that it is natively installed. For example, a virtualized Windows-based application accesses a virtual Windows registry and virtual file system that are created at runtime by the runtime environment by overlaying modifications in the package on the native registry and file system.

Virtual Appliance
A pre-built VM containing pre-installed software that can be easily implemented. Typically, the appliance is downloaded from a website as an OVF file, deployed into the virtual infrastructure, and easily configured using the console of the VM and a web browser. Most virtual appliances allow very simple implementation, relieving the customer of a complex installation and configuration.

Open Virtualization Format (OVF)
A specification that can be used to export and import VMs from one virtual environment to another. Typically, virtual appliances are stored in the OVF format.

From the Global Knowledge white paper vTerminology: A Guide to Key Virtualization Terminology.
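As a quick check on the over-commitment example in the glossary above (3 TB of thin-provisioned virtual disks on a 2 TB datastore is 150 percent over-commitment), the arithmetic can be sketched in a few lines. The function name is illustrative only, not drawn from any hypervisor's API.

```python
def over_commitment_percent(provisioned: float, physical: float) -> float:
    """Return provisioned capacity as a percentage of physical capacity.

    Values over 100 indicate over-commitment, e.g. thin-provisioned
    virtual disks whose combined configured size exceeds the datastore.
    """
    if physical <= 0:
        raise ValueError("physical capacity must be positive")
    return provisioned / physical * 100

# The glossary's example: 3 TB of thin-provisioned disks on a 2 TB datastore.
print(over_commitment_percent(3.0, 2.0))  # 150.0
```

The same ratio applies to any resource (vCPUs, RAM, storage); whether over-commitment becomes a problem depends on actual usage, which is what the separate "over-committed" term describes.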
Of all the virtual currencies out there, BitCoin is the most interesting from a technical perspective - and the least interesting from the business point of view. BitCoin is a peer-to-peer virtual currency that uses cryptography to control the creation and transfer of money. Unlike all other currencies, BitCoin is completely independent. "It is company or organization agnostic," says Ajay Vinze, a professor at the W. P. Carey School of Business at Arizona State University, who is studying BitCoin. All other currencies are either based on some standard of value, or are backed by an issuing entity such as a government or a corporation. Gold, for example, is a metal which has both practical and decorative uses, though many gold coins also have additional value as historical artifacts. Many BitCoin enthusiasts are attracted by BitCoin's independence, and the fact that its value comes directly from its network of users. But aside from its status as a technical marvel, it has little practical benefit for business users or consumers. Here are seven reasons why.

1. Nobody has to accept it.
Traditional government-backed currencies have local monopolies on things like paying taxes or utility bills. Merchants may decide to accept live chickens as payment, but they usually have no choice about whether or not to accept their national currency. Companies that issue virtual currencies typically offer goods or services in return. "You want it to have inherent value, you want it to have someone backing it up and exchange it to other forms if you choose," Vinze says. Now, this isn't a guarantee. Governments fall. Companies go bankrupt - or may simply decide to discontinue their virtual currencies. If a currency is only as strong as its backers, BitCoin isn't backed by anyone.

2. No critical mass.
For a currency to have any practical value, it has to have a critical mass of buyers and sellers.
"Technically BitCoin has no reason why it shouldn't be successful, but it certainly has no following that you would want," Vinze says.

3. No switching costs.
Say you are a U.S. business that accepts U.S. currency, and you decide to stop accepting it, and, say, accept only live chickens. You will lose all potential customers who don't have access to live chickens. You will have problems paying suppliers and employees. And you won't be able to pay taxes - in the U.S., even barter-only transactions are taxable. These are all high switching costs. If you are a player of a popular online game and decide to stop using its in-world currency, your game experience will suffer significantly, and you may have to stop playing the game altogether. This is a switching cost. If you decide to sell off your existing BitCoins and stop using them, there are no switching costs. All your existing suppliers and employees will happily take other forms of payment. Your only loss would be the marketing value of accepting BitCoin, which is likely more than offset by the additional costs of processing BitCoin payments. And if a newer, cooler virtual currency comes along, one that has even more buzz surrounding it, there is no downside at all to leaving BitCoin.

4. There's nobody to police it.
If someone breaks into your bank and steals your money, the bank would cover the loss, or it will be covered by FDIC deposit insurance. If someone points a gun at you and takes your wallet, you can call the police and have them arrested. You might get your money back, and they'll go to jail. Maybe not the first time they rob someone, but eventually. If a thief steals virtual currency from a company in order to defraud the company, there might be legal repercussions as well. In addition, companies carefully police their virtual currencies, banning hackers from their platforms and constantly improving security measures.
And if their users suffer from a virtual theft, a company might make good on the loss in order to maintain good customer relations. If your BitCoin money is stolen, there is nobody to turn to for redress. If someone steals your laptop - or hacks into it - they get all the BitCoins stored there. If you keep your BitCoins with an online exchange, and it is hacked, there is no government-mandated insurance to cover your loss, and nothing protecting your account against the exchange closing down. There are no jails for BitCoin crooks. This may change if governments start passing laws treating virtual currencies like real money, and forcing virtual currency exchanges to get financial services licenses, audits, and deposit insurance. Until then, nobody should be keeping more money in BitCoin than they can afford to lose. In fact, last year a BitCoin user woke up to find his haul of virtual currency had been plundered. A user with the handle allinvain found 25,000 BitCoins had been stolen.

5. There's no real need for it.
What actual purpose does BitCoin serve that isn't already being met by other payment channels? Companies already have wire transfers, checks, prepaid cards, credit card payments, PayPal, Google Checkout, Western Union, and, of course, cash. BitCoin offers completely anonymous payments, which can be useful for tax avoidance, money laundering, gambling, and other illegal activities. A company that does a large amount of business in BitCoin, beyond what could be accounted for with the marketing coolness factor, would thus draw attention from regulators - same as a company that does a lot of its business in cash, which has the same benefit of anonymity.

6. It's volatile.
According to Vinze, BitCoin has fluctuated greatly during its short history, up to a high of around $30, and down to its current value of around $5. This is a significantly higher volatility than almost any national currency.
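One common way to quantify this kind of volatility is the standard deviation of day-over-day returns. The sketch below uses Python's standard library; the price series are hypothetical, chosen only to echo the ranges quoted in the article, not real market data.

```python
import statistics

def daily_volatility(prices):
    """Standard deviation of day-over-day percentage returns."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

# Hypothetical series: a currency swinging between $30 and $5 versus one
# drifting between $1.20 and $1.60 over the same number of observations.
bitcoin_like = [30, 24, 17, 9, 5, 7, 6, 5]
euro_like = [1.20, 1.25, 1.32, 1.40, 1.48, 1.55, 1.60, 1.58]

print(daily_volatility(bitcoin_like) > daily_volatility(euro_like))  # True
```

For a merchant, a high standard deviation of daily returns translates directly into the repricing problem the article describes: the local-currency value of a listed price can move substantially between setting it and getting paid.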
For example, the Euro has vacillated between $1.20 and $1.60 over the past five years. The fact that BitCoin has survived despite these fluctuations is a sign of its resilience, Vinze says. But it's also bad news for companies that want to do business in it, since these fluctuations make it difficult to set prices.

7. BitCoin is unfair.
Traditional, government-backed currencies are created when banks loan out money for new homes, business expansion, or to pay for college educations - actions which increase the money supply but also grow the economy. Virtual currencies backed by companies or organizations are issued to reward players for in-game performance or to reward customers for purchases, or are sold directly to users. BitCoins are created when users run complex algorithms on their computers, with fewer and fewer BitCoins generated as time goes on. "Anyone who got into BitCoin at the very beginning can be theoretically rich with very little effort," says Benjamin Joffe, head of digital strategy consultancy Plus Eight Star. "In addition, some people might be able to create wealth out of thin air, out of other people's efforts, by way of botnet networks."

Read more about software in Network World's Software section.
It's not unusual for scientists to work for years untangling one of the more tangled problems nature leaves under rocks in the hope no one will find the problem, let alone a solution. In commercial technology development, companies are praised for funding "research" that goes on for more than six months, even when it's entirely focused on building a single product. DARPA has both those records beat for long-term optimism. The Eureka of the real world announced this week it has given $500,000 in seed money to a non-profit organization that will dedicate itself for the next 100 years to building a working, practical starship on which humans can travel outside our solar system for the first time. The project is called 100YSS (for 100-Year Star Ship). The project was proposed by physician and former astronaut Mae Jemison as part of the mission of the Dorothy Jemison Foundation for Excellence, a non-profit organization Jemison formed to honor the principles of her late mother, who taught in the Chicago Public Schools for more than 25 years. The money and, presumably, technical participation of DARPA are to begin a project that could very well take 100 years to develop its keystone product – an actual starship fueled by actual antimatter that is capable of travelling to planets orbiting distant stars.

It's a long trip
Using the chemical rockets that are the best we can do right now, it would take about nine months to get a ship with a human crew to Mars. (Unmanned missions can take longer and use less fuel because they don't have to keep anyone alive inside the capsule, especially the Mars Dirt Mission Specialist who keeps asking "Are we there yet?" every day no matter how often he's told it's not funny.) That's three times as long as it took Christopher Columbus or the Pilgrims to travel to the new world, though Columbus thought he was making great time to the Philippines instead. And Mars is only about 35 million miles away.
The nearest star (not the nearest one with planets we want to visit, just the closest one) is Proxima Centauri, 4.3 light-years or 25.3 trillion miles away. It may be remedial for anyone reading this blog to talk about distance in space. But the scale of distances in space is so insane I doubt even most astrophysicists really grasp them as anything but numbers.

Are Mars and the future too far away for us to plan to go?

The real distances are simply too large to really be understood by brains designed to calculate whether falling out of a tree from this height would be fatal (assuming number of hungry prowling jaguars X and distance to next climbing point Y for proto-humans with an aptitude for calculus. Or actuarial statistics.) At 93 million miles, the distance from the Sun to the Earth is several zeroes shorter than the distance to the closest star. If you wanted to roll the Earth to the Sun (across the surprisingly cheap and holey fabric of the space-time continuum, according to Star Trek officers who are always falling through it), it would take 3,720 complete rotations of the Earth to do it. (You might suggest rolling it to Mars would be easier because it's closer, but that would be stupid. You're way too small to roll a whole planet, there's nowhere to stand and nothing to roll it on. Besides, there's a whole band of asteroids in the way that would get the Earth dirty and gritty, like an apple you drop in the sand, and then who would even want it?) So: roughly 4,000 rotations of the Earth to get to the Sun. That's a lot. Proxima Centauri is 270,000 times farther away than the Sun. 270,000 times. Not 270 million more miles. 93 million miles times 270,000. That's still meaningless in real terms, but it sounds pretty impressive, if only because it's clear anyone who multiplies millions by thousands of anything is going to spend a long time hip deep in problems. No gas for the trip.
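The rolling-the-Earth arithmetic above checks out, give or take the rounding. A quick sketch using standard approximations (Earth's equatorial circumference of about 24,901 miles, one light-year of about 5.879 trillion miles):

```python
EARTH_CIRCUMFERENCE_MI = 24_901     # equatorial circumference, roughly
SUN_DISTANCE_MI = 93_000_000        # one astronomical unit, roughly
MILES_PER_LIGHT_YEAR = 5.879e12     # roughly
PROXIMA_LY = 4.3                    # distance to Proxima Centauri

# How many full rotations to "roll" the Earth to the Sun.
rotations_to_sun = SUN_DISTANCE_MI / EARTH_CIRCUMFERENCE_MI

# How many times farther away Proxima Centauri is than the Sun.
times_farther = (PROXIMA_LY * MILES_PER_LIGHT_YEAR) / SUN_DISTANCE_MI

print(round(rotations_to_sun))    # close to the article's 3,720 rotations
print(round(times_farther, -4))   # close to the article's 270,000 figure
```

The small gap between 3,720 and the value this computes comes from which circumference figure you plug in; either way, "roughly 4,000 rotations" holds.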
Not even antigas. Despite a confident assumption that antimatter will provide enough power to drive starships powered by rockets across the trillions of miles they will have to go between rest stops, antimatter is still a pretty unlikely propellant. It's powerful enough. Antimatter engines, as we currently think of them, involve the injection of streams of antimatter onto a core of nuclear fuel. The anti-protons making up the antimatter are annihilated in a blast that can be shaped by powerful magnetic fields until it is focused like a laser, making it even more efficient as a propellant. The only problem, aside from our not knowing how to contain, store or use antimatter in such a mundane way, is that we don't have any. Well, not much. The volume of matter and antimatter in the universe should be about equal, according to most theories about the formation of the universe. Due to a process known as "baryogenesis," the universe is made up of piles and piles of matter, but so little antimatter it's almost undetectable. Reasons vary from "we're working on it" to "who the hell knows." (Baryogenesis is the physics version of the medical "idiopathic," which sounds incisive but actually means "you just don't know," but in Latin. Assuming the most likely theory – that baryogenesis is a misspelling of Barry O'Genesis, a giant Gaelic space monster that devours antimatter because it's the low-calorie diet food of those who bestride the cosmos and snack on them – it doesn't matter why there's no antimatter around.) We can make it even when we can't find it. At the current rate of production of all the world's supercolliders combined, it would take 1,000 years to make a microgram of antimatter, according to TechStew.

Can hope overcome the laws of physics? Why not? It's worked for us so far.
That's a predictable rate of progression, at least, though it doesn't really compare to the 80 supertankers full of antimatter TechStew estimates would be needed to fuel a real trip to Proxima Centauri. Clearly fuel efficiency will be an issue. So far, Prius is not involved in either the real space program or Jemison's aspirational, motivational project. The elements that will make 100YSS or something like it successful are persistence and determination – and time, according to Jemison. "We're embarking on a journey across time and space," Jemison said. "If my language is dramatic, it is because the project is monumental. This is global aspiration. And each step of the way, its progress will benefit life on Earth. Our team is both invigorated and sobered by the confidence DARPA has in us to start an independent, private initiative to help make interstellar travel a reality." This is an engineering project, however, not sociology or education. So regular conferences in odd locations are called for to allow engineers to publicly scoff at one another's work and complain about how Microsoft is trying to dominate the universe with Microsoft Con-Matter rather than open-source anti-matter. The next one will be held in Houston Sept. 13 of this year and will continue until the space ship is done or everyone runs out of clean socks (Sept. 16). However farfetched the goal, both 100YSS and Jemison are serious about it. So are DARPA, with its $500,000 in seed money, and both Icarus Interstellar and the Foundation for Enterprise Development, non-profits dedicated to interstellar travel and employee-owned entrepreneurship, respectively. Luckily, Jemison and the foundation leading the century-long effort to build an antimatter starship understand that their newest unrealistically optimistic project is unrealistically optimistic. Time, technical development and the virtuous pursuit of pure knowledge will bring down barriers such as distance and time, according to Jemison.
(Distance and time are almost the same thing in space, though on Earth there is a much closer correlation between time and boredom.) "I recognize that the concept of humans travelling to other star systems may appear fantastical—but no more so than the fantasy of reaching the moon was in the days of H. G. Wells," Jemison wrote in announcing the project.

Methods can change if the goal remains the same
That was because Wells foresaw the day people would travel to the Moon inside the shells of giant cannon, arriving as freeze-dried layers of jam. All the major laws of motion and most of thermodynamics had been defined by Wells' time; rockets were invented three-quarters of a century earlier (much earlier in China, but Wells wasn't writing from there). Wells, who was no dummy, could very well have connected the two and realized it would take sustained thrust to get to and from the moon safely, consistency that is not available from most cannon after the first blast. Instead, what thin brushes humans have had with real space were made possible by people with little interest in fantastical apparatuses, few literary aspirations and the temerity to ignore technological predictions from the man who invented fictional versions of the time machine, the invisible man, the invading extraterrestrial, the flat-panel display, the laser, the joystick, poison gas (smoke), biological warfare, the atom bomb, the automatic door, antigravity, the conveyor belt, the answering machine, the wireless router, the cell phone, the parallel universe, the networked world and radioactive ruin. That must have taken guts. Or ignorance. By not knowing they were supposed to be inventing the real-world versions of things Wells had already blue-skyed about, Einstein, von Braun, Goddard, Oberth and Laika helped prove a law of practical technology, if not of real science: if you set yourself to wait for the perfection of a technology you're sure will solve all your problems, you'll die of old age or go broke before it's finished.
And, by focusing on just one way of solving a whole series of complex problems, you'll miss half a dozen simpler, cheaper solutions built bit by jury-rigged bit and then turned into real technology. Trying to imagine the kind of vessel we might be able to build in 100 years – down to predicting the kind of fuel it will use – is a sure path to someone's wall of fools, not the annals of science. On the other hand, the odd, detail-free charter of the 100YSS is also a strength; Jemison said she intends to build a real starship, but only as the end product of a century's worth of education, promotion of education, and broad-based scientific and technical development of the kind the government should promote through schools and corporations should promote in open-source development labs and workshops. Jemison hasn't started a starship manufactory; she's started a skunk works designed to crank out not starships, but educated, curious, inventive, practical minds with the skills to turn over rocks to discover secrets the universe has hidden there. She has started a movement to pick the best ideas from the best minds to overcome the gravity of Earth, the distance of space and the self-defeat of her fellow primates, to clear a path not to a single starship, but to the future itself. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
Censorship seeks to suppress the free exchange of ideas and information deemed unacceptable or threatening by a party in power. The internet has become the world’s largest platform for free speech. Unrestricted access to information empowers individuals like no generation before, giving a voice to those who might not otherwise be heard, and eyes to those who might not otherwise see. But censorship threatens the open nature of the internet, inhibiting the world’s free market of ideas. Governments and corporations can silence free speech, limit access to information, and restrict the use of communication tools. Such actions serve the interests of those in power and undermine the civil liberties of everyone else. For this reason, it is vital for everyone to remain vigilant and act swiftly when threatened by censorship.

Table of contents
- Who censors?
- What is censored?
- How is the web censored?
- What types of content should be censored?
- How do children factor into the censorship debate?
- How can I fight online censorship?
- Which countries censor the most?
- Free speech and IP

Who censors?
The most obvious occurrences of censorship are those put in place by law, particularly national governments. Governments of autocratic regimes often censor the web to stifle dissent. Perhaps the most famous example is China, where the ruling Communist Party has instituted a complex, nationwide censorship system and a well-staffed internet police force. Google famously exited the Chinese market because it refused to comply with the government’s censorship requirements for search results. Hundreds, if not thousands, of websites, social networks, and apps are blocked by a blacklisting system commonly referred to as the Great Firewall. The Great Firewall notably blocks access to western social networks like Facebook and Twitter, news sources, messaging apps, and even Comparitech.
In China’s case, the reason for this type of censorship is twofold: it suppresses dissent on mediums not under the government’s thumb, and it protects domestic internet companies from international competition. The void that Google left in China gave rise to Baidu, the country’s largest search engine, for example. The censorship goes further than just blocking content, however. Those who speak out even on domestic public forums can be jailed or worse. Certain key words and phrases cannot be sent to friends or followers on Chinese social media. Censorship mechanisms put in place by governments often serve as a means of not only preventing discourse, but of identifying and punishing those who engage in what authorities perceive as the wrong discourse. Whether it be banning a book in schools or jailing a dissident, governments often argue that doing so is in the public interest. Note that in China and other countries, much of the censorship is actually performed by ISPs and internet companies at the request of the government. A corporation, such as an ISP or internet company, might censor content at the behest of a government authority. In China, ISPs are responsible for blocking websites, while social media companies are tasked with filtering messages and posts containing sensitive keywords.

Corporations sometimes engage in censorship to hinder competition or otherwise protect their own assets. Deleting negative reviews and comments on their product or service is one example of this. Another is when the brand logos of certain products are blurred out of videos in order to protect advertisers in competition with those brands. Corporate censorship often refers to a different practice: threatening staff with termination, monetary loss, or loss of access to the marketplace. A company employee who witnesses an ethics violation could get fired if he tells anyone outside of the company, for instance. Corporations censor content that they think will damage their public image.
Rapper Ice-T altered the lyrics of Cop Killer when pressured to do so by Time Warner. An episode of South Park was censored by Comedy Central – a TV channel under the Viacom umbrella – because it depicted the Muslim prophet Muhammed. YouTube routinely scrubs search results of videos that contain pornographic content, abuse, and hate speech. Perhaps the most contentious debate around censorship today is that of net neutrality. Net neutrality argues that the internet should be treated like a utility: all websites and apps receive equal treatment in terms of access. But telecommunication corporations, which have been buying up content creation companies, want to funnel people toward the content that makes them money. To do this, they throttle traffic to competitors such as Netflix, while connections to their own entertainment offerings are unfettered. This practice was banned under an FCC order in the United States, but Republicans, who recently took over the presidency and maintained a majority in the House of Representatives and Senate, may soon overrule it. Corporations can also censor their own content to protect business interests. To use Netflix as an example again, it does not allow users to view the catalogs of other countries. A Netflix subscriber in the UK cannot watch shows exclusive to subscribers in the United States, for example, unless they use a VPN to spoof their location. This is because Netflix is required by copyright holders to honor content licensing restrictions that apply to individual countries. Censorship can even take place on an individual level. Social networks like Facebook and Twitter allow users to block content from certain users and sources. Censorship is an issue of individual liberty, so there’s nothing inherently wrong about this when it comes to civil rights. But weeding out opposing views and only seeing self-affirming posts that validate what a person already thinks probably isn’t a healthy practice.

What is censored?
Websites and apps

Autocratic regimes often censor websites that publish opposing views that threaten their power or public image. Notably, social media and news sources are frequent victims of state-sponsored online censorship. In Turkey, major social media sites including Facebook and Twitter were blocked from public view after the president arrested a dozen political opponents.

The Great Firewall has taken this a step further by blocking websites that instruct netizens how to evade censorship. This is the case with Comparitech, which has published tutorials and VPN recommendations for bypassing the Great Firewall. Without a VPN or some other type of proxy, this website cannot be viewed from mainland China.

Apps can also be blocked or outlawed. WhatsApp has been blocked, permanently or temporarily, in multiple countries including China, Turkey, and Brazil. Dating apps in ultra-conservative Muslim nations are off the table as well.

People, events, and organizations

Sometimes censorship targets particular people and organizations deemed a threat to those doing the censoring. In China, all websites, text messages, advertisements, and social media posts that even mention the Falun Gong spiritual movement, which authorities began persecuting in the 1990s, are scrubbed from sight. Similarly, anything relating to the June 4, 1989 Tiananmen Square massacre is heavily censored.

Social media and messaging apps like WhatsApp, Facebook Messenger, and Snapchat pose a threat to autocratic governments because, although correspondence is relatively private, it is more difficult to control. Authorities are especially wary of WhatsApp because chats are encrypted, meaning only the intended users can view the contents of their messages. Third parties that try to intercept communications will only see jumbled text due to the encryption.
For this reason, authorities in Brazil argue that WhatsApp could be used in drug deals and terrorist attacks, which justifies temporary service outages. China’s banishment of western chat apps gave way to the rise of WeChat, the country’s largest chat app, made by domestic tech giant Tencent. Tencent cooperates with the government by using sophisticated censorship measures such as keyword filtering: blocking messages or links containing sensitive content like “falun gong”. Temporary and permanent account bans are placed on repeat offenders.

The deep web

Most people find what they are looking for on the internet using Google. But Google and most other popular search engines only index a tiny fraction of the content on the internet. The indexed portion is known as the “surface web”. There’s good reason for this; most of the rest either can’t be indexed or isn’t useful for general search: old web pages, social media content, private files stored in the cloud, the contents of apps, court records, and academic journals, to name a few. This unindexed content is known as the “deep web”, and experts estimate it is about 500 times larger than what can be turned up in search results. In a sense, everything beyond the surface web is censored, though not necessarily for the same motives as state or corporate censorship.

A small sliver of the deep web is known as DarkNet, which contains websites that can only be accessed using Tor. Tor is anonymity software that can be used to access the hidden .onion websites that make up DarkNet: sites that don’t want to be found. They include marketplaces for illicit goods and services, secret blogs, forums and chat rooms, and private gaming servers. If you’re interested in uncovering the deep web and DarkNet, see our guide to accessing the deep web and darknet.

How is the web censored?

Many methods of blocking content on the web exist for those with the power to do so. The most common bottleneck where authorities can efficiently censor large swathes of the population is at the ISP level.
ISPs, or internet service providers, act as gateways for everyone connected to the internet. Governments can order ISPs to block the IP addresses and domain names of specific websites and apps. Every device on the internet, be it the one you’re reading this article on now or a server hosting a website or app, is assigned a unique IP address. When someone tries to access a web page, a request is sent to the ISP, which resolves the request by finding the corresponding IP address. The ISP then connects the two devices, such as a laptop and a website, so traffic can flow freely between them. ISPs have the power to selectively block such requests and traffic using a firewall.

IP blocking is the presumed method used by Netflix to prevent users from accessing content from outside of their country of residence. Netflix routinely blacklists the IP addresses of proxy servers, such as VPNs and smart DNS providers.

As mentioned above, keyword filtering identifies and blocks content containing keywords deemed inappropriate by an authority. This takes place on the client, website, and application levels as well as the ISP level. WeChat, for instance, will block messages containing sensitive keywords that undermine party rule in China. Search engines can also limit the results returned when certain keywords are searched.

ISPs complicit in censorship use deep packet inspection to mine the contents of internet traffic for sensitive keywords. On a small scale, this could be done by routing all traffic through a proxy server that inspects traffic and blocks anything containing blacklisted keywords. On a country-wide scale, such as in China, this requires a more complex intrusion detection system (IDS). In such a system, copies of packets are created and passed to filtering devices so that the traffic flow isn’t interrupted. If banned content is detected, the ISP sends connection reset requests to the server until the connection is abandoned altogether.
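The keyword-filtering idea is simple enough to sketch in a few lines of Python. This is only a toy illustration of the concept, not how any real ISP middlebox or chat-app filter is implemented; the blacklist entries and the sample requests below are invented for the example.

```python
# Toy illustration of keyword filtering: inspect each payload and
# decide whether the request should be blocked. A real deep-packet-
# inspection system acts on copies of raw packets so that normal
# traffic flow is not interrupted.
BLACKLIST = {"falun gong", "tiananmen"}

def is_blocked(payload: str) -> bool:
    """Return True if the payload contains any blacklisted keyword."""
    text = payload.lower()
    return any(keyword in text for keyword in BLACKLIST)

requests = [
    "GET /news/weather HTTP/1.1",
    "POST /search q=falun+gong",
]
for req in requests:
    normalized = req.replace("+", " ")
    print(req, "->", "BLOCKED" if is_blocked(normalized) else "allowed")
```

In practice the matching is more sophisticated (and applied to encrypted traffic only where the filter can see plaintext), but the core decision is this kind of substring check against a maintained blacklist.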
DNS poisoning, also known as DNS spoofing, hijacking, or tampering, occurs when corrupt DNS data causes traffic to be diverted to the wrong IP address. The attacker, or in some cases the government or ISP, poisons the resolver cache on a nameserver, where web page requests are sent. In China, the DNS entries for Facebook and other websites were poisoned so that anyone who tried to go to those sites would be redirected to a dead end. Some experts say these requests were sent to other sites that authorities disapproved of, resulting in a distributed denial-of-service (DDoS) attack. Changing a domain name is not as simple as changing an IP address, so this method can be more effective than IP blocking. Sometimes DNS hijacking and keyword filtering are implemented in unison: routers can prevent unwanted communication by hijacking DNS requests containing sensitive keywords and injecting altered DNS replies.

When all of the automated methods above fail, there’s good old-fashioned human labor to do the dirty work. China’s internet police force is estimated at 50,000 strong. They order the entities hosting sensitive content to remove it or face punishment. In addition to the internet police, countries like China and Russia pay commentators on social media to support ruling parties and disparage opponents online.

What types of content should be censored?

The First Amendment of the US Constitution does not protect all forms of speech and expression. Obscenity, child pornography, defamation, and speech that incites “imminent and immediate” lawless action (yelling “fire!” in a crowded movie theater when there is none) are a few examples. Hate speech overlaps with some of these categories as well. Obscene content is defined as something an average person would find objectionable and that has no serious literary, artistic, political, or scientific value.

How do children factor into the censorship debate?
The responsibility for protecting children from obscene, vulgar, or pornographic content generally lies with parents. That being said, governments have instituted mechanisms like movie and video game rating systems to help parents judge whether material is appropriate for their children. While this is a common practice in modern society, where the line should be drawn between what is and is not acceptable for children is still debated today. Under the guise of protecting children, overreaching policies have been challenged all the way to the US Supreme Court, such as in Reno v. ACLU. In that case, the anti-indecency provisions of the Communications Decency Act were struck down, affirming that online speech deserves full First Amendment protection.

How can I fight online censorship?

Support free speech advocacy organizations

A handful of great organizations fight for a free and open internet across the world. They raise awareness, educate the public, hold events, lobby legislatures, engage in campaigns, and call out anti-free speech practices. Some of their key issues include net neutrality, freeing imprisoned journalists, advocating for greater government transparency, supporting encryption, and generally keeping the internet free and open. You can support these organizations by signing up for their newsletters, supporting them on social media, joining campaigns, contacting lawmakers through their websites, and, of course, donating money. You can find a list of organizations below.

Know your rights and the organizations that fight censorship

In the US, free speech is covered by the First Amendment. But every country is different. Learn about your rights and stay vigilant.
If you feel your rights are being infringed upon, here are some organizations that guard whistleblowers, protect free speech, and/or actively fight against censorship:

- ALA Office of Intellectual Freedom
- Amnesty International
- American Civil Liberties Union
- Article 19
- Berkman Center for Internet and Society at Harvard University
- Center for Democracy and Technology
- Committee to Protect Journalists
- Derechos Human Rights
- Electronic Frontier Foundation (EFF)
- Electronic Privacy Information Center
- Global Internet Liberty Campaign
- Human Rights Internet
- Human Rights Watch
- International Federation for Human Rights
- International Federation of Journalists
- International Freedom of Expression Exchange (IFEX)
- International PEN
- Internet Education Foundation
- Internet Free Expression Alliance
- National Coalition Against Censorship (NCAC)
- Reporters Without Borders
- Sociedad Interamericana de Prensa

US court cases

In addition to the First Amendment, dozens of important court cases lay out the bounds of free speech and expression in the US. Here are a few that deal specifically with technology and censorship:

- American Civil Liberties Union et al. v. Janet Reno
- CompuServe Incorporated v. Patterson
- Stratton-Oakmont and Porush v. Prodigy
- Dial Information v. Thornburgh, 938 F.2d 1535
- Information Provider’s Coalition v. FCC
- Miller v. California
- U.S. v. Thomas

Use encryption and anonymity tools

The best way to evade state-sponsored censorship and spying is to utilize encryption and/or anonymity software.
You can do this in a number of ways:

- Set up encrypted email
- Use a VPN to encrypt internet traffic (see our top VPN lists for China, UAE, and Turkey)
- Use Tor to remain anonymous online
- Opt for encrypted chat apps like Signal and Telegram
- Encrypt your files both on your hard drive and in the cloud
- Use public DNS or smart DNS servers in lieu of your ISP’s default DNS servers

Encryption ensures no one can snoop on your files and communications except those whom you want to see them. Modern encryption algorithms cannot yet be cracked by brute force. Encryption and anonymity tools are especially important for whistleblowers, who should be able to speak up about violations of free speech without being reprimanded.

Which countries censor the most?

There are a number of indexes that rank countries by their right to free expression, including the Freedom House Freedom of the Press report, the Reporters Without Borders Press Freedom Index, and the OpenNet Initiative. While this article often cites China due to the enormity and sophistication of its censorship system, countries in Africa and the Middle East, along with North Korea, are often cited as the most heavily censored. Western European countries typically rank the highest for press and internet freedom.

Free Speech and IP

Free speech sometimes conflicts with the principle of intellectual property, in which the creator of material controls how, when, and where that material may be used by other parties. Intellectual property rights are important to protect content creators, but those rights must be balanced so as not to infringe on the best interests of the public. Quoting a politician’s speech in an essay or including excerpts from scientific journals in a report, for example, should not be censored. In the United States, this sort of speech is protected under Fair Use.
Fair Use is a legal doctrine that states copyrighted material may be used without the copyright owner’s consent for limited and “transformative” purposes, such as commentary, criticism, and parody.

“Censorship” by Bill Kerr licensed under CC BY-SA 2.0
If you think the storage systems in your data center are out of control, imagine having 450 billion objects in your database or having to add 40 terabytes of data each week. The challenges of managing massive amounts of data involve storing huge files, creating long-term archives and, of course, making the data accessible.

While data management has always been a key function in IT, "the current frenzy has taken market activity to a whole new level," says Richard Winter, an analyst at WinterCorp Consulting Services, which analyzes big data trends. New products appear regularly from established companies and startups alike. Whether it's Hadoop, MapReduce, NoSQL or one of several dozen data warehousing appliances, file systems and new architectures, the segment is booming, Winter says.

Some IT shops know all too well about the challenges inherent in managing big data. At the Library of Congress, Amazon and Mazda, the task requires innovative approaches for handling billions of objects and petabyte-scale storage media, tagging data for quick retrieval and rooting out errors.

1. Library of Congress

The Library of Congress processes 2.5 petabytes of data each year, which amounts to around 40TB each week. And Thomas Youkel, group chief of enterprise systems engineering at the library, estimates that the data load will quadruple in the next few years, thanks to the library's dual mandates to serve up data for historians and to preserve information in all its forms.

The library stores information on 15,000 to 18,000 spinning disks attached to 600 servers in two data centers. More than 90% of the data, or over 3PB, is stored on a fiber-attached SAN, and the rest is stored on network-attached storage drives. The Library of Congress has an "interesting model" in that part of the information stored is metadata, or data about the data that's stored, while the other part is the actual content, says Greg Schulz, an analyst at consulting firm StorageIO.
Plenty of organizations use metadata, but what makes the library unique is the sheer size of its data store and the fact that it tags absolutely everything in its collection, including vintage audio recordings, videos, photos and other media, Schulz explains. The actual content, which is seldom accessed, is ideally kept offline and on tape, Schulz says, with perhaps a thumbnail or low-resolution copy on disk.

Today, the library holds around 500 million objects per database, but Youkel expects that number to grow to as many as 5 billion. To prepare, Youkel's team has started rethinking the library's namespace system. "We're looking at new file systems that can handle that many objects," he says.

Gene Ruth, a storage analyst at Gartner, says that scaling up and out correctly is critical. When a data store grows beyond 10PB, the time and expense of backing up and otherwise handling that much data go quickly skyward. One approach, he says, is to have infrastructure in a primary location that handles most of the data and another facility for secondary, long-term archival storage.

2. Amazon

E-commerce giant Amazon.com is quickly becoming one of the largest holders of data in the world, with around 450 billion objects stored in its cloud for its customers' and its own storage needs. Alyssa Henry, vice president of storage services at Amazon Web Services, says that translates into about 1,500 objects for every person in the U.S. and one object for every star in the Milky Way galaxy.

Some of the objects in the database are fairly massive, up to 5TB each, and could be databases in their own right. Henry expects single-object size to get as high as 500TB by 2016. The secret to dealing with massive data, she says, is to split the objects into chunks so they can be handled in parallel, a technique called parallelization. In its S3 storage service, Amazon uses its own custom code to split files into 1,000MB pieces.
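The chunking step can be sketched generically. Amazon's production code is proprietary, so the Python below only illustrates the idea of splitting an object into fixed-size pieces that can be uploaded or verified in parallel; the sizes are scaled down from the 1,000MB pieces described above.

```python
def chunk(data: bytes, chunk_size: int) -> list[bytes]:
    """Split data into consecutive pieces of at most chunk_size bytes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Stand-in for a large object; real chunks would be 1,000MB each.
blob = b"x" * 2500
pieces = chunk(blob, 1000)   # pieces of 1000, 1000, and 500 bytes

# Each piece can now be handled independently, and reassembly
# of the pieces in order is lossless.
assert b"".join(pieces) == blob
print(len(pieces))  # 3
```

The last chunk is simply shorter than the rest, which is why reassembly only needs the original ordering, not any padding bookkeeping.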
This is a common practice, but what makes Amazon's approach unique is that the file-splitting process occurs in real time. "This always-available storage architecture is a contrast with some storage systems which move data between what are known as 'archived' and 'live' states, creating a potential delay for data retrieval," Henry explains.

Another problem in handling massive data is corrupt files. Most companies don't worry about the occasional corrupt file, yet when dealing with almost 450 billion objects, even low failure rates become challenging to manage. Amazon's custom software analyzes every piece of data for bad memory allocations, calculates checksums, and analyzes how fast an error can be repaired to deliver the throughput needed for cloud storage.

3. Mazda

Mazda Motor Corp., with 900 dealers and 800 employees in the U.S., manages around 90TB of data. Barry Blakeley, infrastructure architect at Mazda's North American operations, says business units and dealers are generating ever-increasing amounts of data: analytics files, marketing materials, business intelligence databases, Microsoft SharePoint data and more. "We have virtualized everything, including storage," says Blakeley. The company uses tools from Compellent, now part of Dell, for storage virtualization and Dell PowerVault NX3100 as its SAN, along with VMware systems to host the virtual servers.

The key, says Blakeley, is to migrate "stale" data quickly onto tape. He says 80% of Mazda's stored data becomes stale within months, meaning those blocks of data are not accessed at all. To accommodate these usage patterns, the virtual storage is set up in a tiered structure. Fast solid-state disks connected by Fibre Channel switches make up the first tier, which handles 20% of the company's data needs. The rest of the data is archived to slower disks: 15,000-rpm drives on Fibre Channel in a second tier and 7,200-rpm drives connected by serial-attached SCSI in a third tier.
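An age-based tiering policy like the one just described can be sketched with a simple rule: data untouched for months is "stale" and migrates to slower tiers. The thresholds, file names, and dates below are invented for illustration; Mazda's actual tooling operates on storage blocks inside its Compellent virtualization stack, not on named files.

```python
import datetime

# Hypothetical "today" for a reproducible example.
NOW = datetime.date(2012, 6, 1)

def tier_for(last_access: datetime.date) -> int:
    """Pick a storage tier from days since last access."""
    age = (NOW - last_access).days
    if age <= 30:
        return 1   # fast SSD tier
    if age <= 180:
        return 2   # 15,000-rpm disk tier
    return 3       # 7,200-rpm disk tier (or tape candidate)

blocks = {
    "bi_report.db":    datetime.date(2012, 5, 20),   # recently read
    "marketing.ppt":   datetime.date(2012, 2, 1),    # a few months old
    "old_archive.zip": datetime.date(2011, 1, 15),   # stale
}
for name, accessed in blocks.items():
    print(name, "-> tier", tier_for(accessed))
```

The point of the sketch is only the shape of the policy: last-access age drives placement, so the 80% of data that goes stale drains automatically off the expensive fast tier.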
Blakeley says Mazda is putting less and less data on tape, about 17TB today, as it continues to virtualize storage. Overall, the company is moving to a "business continuance model" as opposed to a pure disaster recovery model, he explains. Instead of having backup and offsite storage that would be available to retrieve and restore data in a disaster recovery scenario, "we will instead replicate both live and backed-up data to a colocation facility." In this scenario, Tier 1 applications will be brought online almost immediately in the event of a primary site failure. Other tiers will be restored from backup data that has been replicated to the colocation facility.

Adapting the Techniques

These organizations are a proving ground for handling tremendous amounts of data. StorageIO's Schulz says other companies can mimic some of their processes, including running checksums against files, monitoring disk failures with an alert system for IT staff, incorporating metadata and using replication to make sure data is always available. However, the critical decision about massive data is to choose the technology that matches the needs of the organization, not the system that is cheapest or just happens to be popular at the moment, he says.

In the end, the biggest lesson may be that while big data poses many challenges, there are also many avenues to success.

This story, "Storage tips from heavy-duty users" was originally published by Computerworld.
Are You Who You Say You Are? How to Fix Identity Problems

Consider this: A stranger walks into a room and introduces himself as “John.” His behavior seems normal. No one questions his identity or asks for proof that he is who he says he is, and other people address him as “John” for the rest of the day.

Now, if “John” wants to get on an airplane, the procedure changes. He must present identification to prove who he is. It is validated against his photo and boarding pass. In this scenario, by identifying himself and having his identity confirmed, John has been authenticated and authorized to board the plane (excluding outlier conditions like the No Fly List, of course).

Your identity is unique to you and not something you generally share. In an electronic world, we use credentials, biometrics, and even two-factor authentication to prove our identity. In this model, however, there are inherent problems that make audits a nightmare.

Identity Problem #1 – Sharing Credentials

If we share credentials (username, password, and/or PIN), there is typically no way to establish a user’s identity, since the methodology for authentication has been shared. Unfortunately, many devices, assets, and applications do not have good security concepts around identities, so administrators and users must share credentials to perform various tasks. This creates an auditing issue when the resource is accessed, because the person or technology behind the access cannot be identified.

Identity Problem #2 – No Link Between Aliases and Real Identity

A second problem occurs with your identity if you have an alias. It is quite common for people to have a variety of aliases, from email addresses to usernames. Many times they are formed from letters in your name, but for privacy or personal humor they could be anything else. What complicates the concept of aliases is when there is no obvious link to whom the real identity belongs.
Correlating users for an audit across multiple aliases creates an exponential problem based on the number of users and their number of aliases.

Identity Problem #3 – Identity Changes

A third problem with identities arises if your identity changes (yes, it can) or you have a personal alias. A typical identity change happens if you marry and change your last name, whether hyphenated or completely changed. Thinking back to our original analogy, this is the same issue as if “John” walked into another room and introduced himself as “Fred.” Finally, some people have aliases for their own reasons: some they created, others are nicknames, and others are given to them without their consent.

A Personal, Real World Example of These Identity Problems

As a security professional, I deal with the latter problem all the time and have been branded as John Titor. This is an alias that I cannot falsely embrace, but it could lead me to introduce myself as John, and people would actually believe me. Unfortunately, the nature of the internet, conspiracy theories, and hoaxes has led my identity to be linked to someone else (real or not). This is commonly known, regardless of how it happens, as a form of identity theft or identity impersonation. I cannot confirm I am this alias (because I am not) or deny it (because the accusers will not believe me) without potentially participating in identity theft myself. The only thing I can do is ignore the situation and hope it stays manageable.

It is a very weird problem to have, and people with common names like John Smith deal with it all the time. Parents who give a child their own name commonly deal with this situation as well. Having a unique name is a benefit, unless you have a well-known alias. Your name, or alias, is not unique enough to confirm your identity without an additional method such as a photo, password, PIN, or biometrics, which makes stopping identity theft or impersonation an even bigger problem.
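The alias-correlation problem can be made concrete with a small sketch: given a map of known aliases to identities, audit events collapse into per-person trails, and any alias without a known owner surfaces immediately as an audit gap. The aliases and events below are invented for illustration.

```python
# Map every known alias back to a canonical identity.
ALIASES = {
    "jsmith": "John Smith",
    "john.s": "John Smith",
    "jdoe":   "Jane Doe",
}

# Hypothetical audit log entries: (alias, action).
events = [
    ("jsmith", "login"),
    ("john.s", "file_read"),
    ("jdoe",   "login"),
    ("ghost",  "login"),   # alias with no known owner
]

by_identity = {}
for alias, action in events:
    identity = ALIASES.get(alias, "UNKNOWN")   # the audit gap
    by_identity.setdefault(identity, []).append(action)

print(by_identity)
# John Smith's two aliases collapse into one audit trail, while
# "ghost" surfaces as UNKNOWN: exactly the correlation problem
# described above, just at toy scale.
```

Real deployments face this across thousands of users, each with many aliases, which is why the correlation effort grows so quickly.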
Best Practices to Secure Your Identity

With these known flaws in the identification of a person’s identity, here are a few best practices to secure your identity and make sure it remains solely with you:

- Never share your identity authentication mechanisms with anyone else. These include credentials, ID cards, PINs, or any other form of electronic or physical resource that confirms your identity. For example, you would never share your passport, right? So why share the username and password to your workstation or bank account?
- Minimize your aliases. This is primarily important for work; have all necessary aliases linked coherently back to your identity. At home, having goofy or non-identifiable aliases for email is acceptable, but the more you have, the harder they become to manage. I personally recommend three: one for official business with banks and financial institutions, a second for correspondence, and a third for spam, junk mail, and everything else.
- Keep all access unique. Your identity itself is unique, and every link to a resource should be unique as well. This means you may use the same or a similar alias (username), but the password should be unique per resource. That way, a password stolen from one resource cannot be reused by someone else to impersonate you elsewhere. In addition, it allows for auditing and reporting on your identity to positively confirm or refute access.

At BeyondTrust, we specialize in keeping your identity unique and your privileges secure. With advanced capabilities in PowerBroker Password Safe to link aliases, randomize passwords, and positively identify an identity, we help maintain secure enterprise computing environments. For more information on how to secure your accounts and identities, contact us today.
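The "keep all access unique" practice can be automated with a per-resource password generator. The sketch below uses Python's standard secrets module; it only illustrates the idea and is not how PowerBroker Password Safe or any particular product works internally.

```python
import secrets
import string

# Generate a strong, cryptographically random password for each
# resource instead of reusing one password everywhere.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 20) -> str:
    """Return a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical set of resources, each getting its own secret.
vault = {res: new_password() for res in ("bank", "email", "workstation")}

for resource, password in vault.items():
    print(resource, "->", password)

# One stolen password cannot be replayed against the others.
assert len(set(vault.values())) == len(vault)
```

In practice the vault itself must of course be protected (encrypted at rest, unlocked with a single strong credential plus a second factor), which is what dedicated password managers provide.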