Bonasoni P., CNR Institute of Neuroscience | Laj P., CNRS Laboratory for Glaciology and Environmental Geophysics | Marinoni A., CNR Institute of Neuroscience | Sprenger M., ETH Zurich | And 22 more authors. Atmospheric Chemistry and Physics | Year: 2010

This paper provides a detailed description of the atmospheric conditions characterizing the high Himalayas, thanks to continuous observations begun in March 2006 at the Nepal Climate Observatory-Pyramid (NCO-P), located at 5079 m a.s.l. on the southern foothills of Mt. Everest, in the framework of the ABC-UNEP and SHARE-Ev-K2-CNR projects. The work presents a characterization of meteorological conditions and air-mass circulation at NCO-P during the first two years of activity. The mean values of atmospheric pressure, temperature and wind speed recorded at the site were 551 hPa, −3.0 °C and 4.7 m s⁻¹, respectively. The highest seasonal values of temperature (1.7 °C) and relative humidity (94%) were registered during the monsoon season, which was also characterized by thick clouds, present in about 80% of the afternoon hours, and by a frequency of cloud-free sky of less than 10%. The lowest seasonal values of temperature and relative humidity were registered during winter, −6.3 °C and 22% respectively, the season being characterised by mainly cloud-free sky conditions and rare thick clouds. The summer monsoon influenced precipitation (seasonal mean: 237 mm), while wind was dominated by flows from the bottom of the valley (S-SW) and the upper mountain (N-NE). The atmospheric composition at NCO-P has been studied thanks to measurements of black carbon (BC), aerosol scattering coefficient, PM1, coarse particles and ozone. The annual behaviour of the measured parameters shows the highest seasonal values during the pre-monsoon (BC: 316.9 ng m⁻³, PM1: 3.9 μg m⁻³, scattering coefficient: 11.9 Mm⁻¹, coarse particles: 0.37 cm⁻³ and O3: 60.9 ppbv), while the lowest concentrations occurred during the monsoon (BC: 49.6 ng m⁻³, PM1: 0.6 μg m⁻³, scattering coefficient: 2.2 Mm⁻¹, and O3: 38.9 ppbv) and, for coarse particles, during the post-monsoon (0.07 cm⁻³). At NCO-P, the synoptic-scale circulation regimes present three principal contributions: Westerly, South-Westerly and Regional, as shown by the analysis of in-situ meteorological parameters and 5-day LAGRANTO back-trajectories. The influence of the brown cloud (AOD > 0.4) extending over the Indo-Gangetic Plains up to the Himalayan foothills has been evaluated by analysing the in-situ concentrations of the ABC constituents. This analysis revealed that brown cloud hot spots mainly influence the South Himalayas during the pre-monsoon, in the presence of very high levels of atmospheric compounds (BC: 1974.1 ng m⁻³, PM1: 23.5 μg m⁻³, scattering coefficient: 57.7 Mm⁻¹, coarse particles: 0.64 cm⁻³, O3: 69.2 ppbv). During this season 20% of the days were characterised by a strong brown cloud influence during the afternoon, leading to a 5-fold increase in the BC and PM1 values in comparison with the seasonal means. Our investigations provide clear evidence that, especially during the pre-monsoon, the southern side of the high Himalayan valleys represents a "direct channel" able to transport brown cloud pollutants up to 5000 m a.s.l., where the pristine atmospheric composition can be strongly influenced. © 2010 Author(s).
Bertotti L., Institute of Marine Science | Cavaleri L., Institute of Marine Science | Loffredo L., Catholic University of Leuven | Monthly Weather Review | Year: 2013

Nettuno is a wind and wave forecast system for the Mediterranean Sea. It has been operational since 2009, producing twice-daily high-resolution forecasts for the next 72 h. The authors have carried out a detailed analysis of the results, both in space and time, using scatterometer and altimeter data from four different satellites. The findings suggest that there are appreciable differences in the measurements from the different instruments. Within the overall positive results, there is also evidence of differences in Nettuno performance for the various subbasins. The related geographical distributions in Nettuno performance are consistent with the various satellite instruments used in the comparisons. The extensive system of buoys around Italy is used to highlight the difficulties involved in correctly modelling wave heights in Italy's coastal areas. © 2013 American Meteorological Society.

Chiggiato J., Undersea Research Center | Jarosz E., U.S. Navy | Book J.W., U.S. Navy | Dykes J., U.S. Navy | And 5 more authors. Ocean Dynamics | Year: 2012

During September 2008 and February 2009, the NR/V Alliance extensively sampled the waters of the Sea of Marmara within the framework of the Turkish Straits System (TSS) experiment coordinated by the NATO Undersea Research Centre. The observational effort provided an opportunity to set up realistic numerical experiments for modeling the observed variability of the Marmara Sea upper layer circulation at mesoscale resolution over the entire basin during the trial period, complementing relevant features and forcing factors revealed by numerical model results with information acquired from in situ and remote sensing datasets. Numerical model solutions from realistic runs using the Regional Ocean Modeling System (ROMS) produce a general circulation in the Sea of Marmara that is consistent with previous knowledge of the circulation drawn from past hydrographic measurements, with a westward meandering current associated with a recurrent large anticyclone. Additional idealized numerical experiments illuminate the role various dynamics play in determining the Sea of Marmara circulation and pycnocline structure. Both the wind curl and the strait flows are found to strongly influence the strength and location of the main mesoscale features. Large displacements of the pycnocline depth were observed during the sea trials. These displacements can be interpreted as storm-driven upwelling/downwelling dynamics associated with northeasterly winds; however, lateral advection associated with flow from the Straits also played a role in some displacements. © 2011 Springer Science+Business Media, LLC.

Cavaleri L., CNR Institute of Neuroscience | Bertotti L., CNR Institute of Neuroscience | Torrisi L., CNMCA | Bitner-Gregersen E., DNV GL | And 3 more authors. Journal of Geophysical Research: Oceans | Year: 2012

We analyze the sea state conditions during which the accident of the cruise ship Louis Majesty took place. The ship was hit by a large wave that destroyed some windows on deck number five and caused two fatalities. Using the wave model (WAM), driven by the Consortium for Small-Scale Modelling (COSMO-ME) winds, we perform a detailed hindcast of the local wave conditions. The results reveal the presence of two comparable wave systems characterized by almost the same frequency.
We discuss such sea state conditions in the framework of a system of two coupled Nonlinear Schrödinger (CNLS) equations, each of which describes the dynamics of a single spectral peak. For some specific parameters, we discuss the breather solutions of the CNLS equations and estimate the maximum wave amplitude. Even though, due to the lack of measurements, it is impossible to establish the nature of the wave that caused the accident, we show that the angle between the two wave systems during the accident was close to the condition for which the maximum amplitude of the breather solution is observed. Copyright 2012 by the American Geophysical Union.

Bertotti L., CNR Institute of Neuroscience | Bidlot J.-R., European Center for Medium Range Weather Forecasts | Bunney C., UK Met Office | Cavaleri L., CNR Institute of Neuroscience | And 6 more authors. Quarterly Journal of the Royal Meteorological Society | Year: 2012

We consider an exceptional storm, 'Klaus' (January 2009), its evolution over the Western Mediterranean Sea, and how the associated wind and wave conditions were modelled by seven of the major systems presently operational in this area. We intercompare the model results and then verify them, and the related model ensemble, against the available measured data. Working with short-term forecasts (24 h) only, each model, as expected, correctly anticipates the arrival of an exceptional storm. However, even at such limited range, we have found substantial differences among the results of the different models. The differences concern the time the storm should have entered the Western Mediterranean Sea, the peak values of wind speed and significant wave height, the general distribution of the fields, and the locations where the maxima were achieved. We have compared the model results against the available measured data: wind from scatterometers, waves from altimeters, plus a few buoy data. We have found some inconsistencies in the results, the model wind data being on average larger than the measured data, while the opposite was true for wave heights. However, the limited amount of data available, and its different times and positions at and off the centre of the storm, impedes the drawing of any definite conclusion in this respect. On the whole we feel that our results, although related to a single storm, cast doubt on the ability of a single forecast system to provide sufficiently reliable and accurate forecasts in the case of an incoming exceptional storm. The results, both for wind and waves, improved when using an ensemble of the seven considered models. This suggests that there is no relevant systematic error in the models used except, as possibly suggested by our results, in the case of wave generation under very strong wind and very young sea conditions. © 2011 Royal Meteorological Society and British Crown, the Met Office.
In SQL Server, statistics can be created using the CREATE STATISTICS command or the CREATE INDEX command. At the feature level, the statistical information created using the CREATE STATISTICS command is equivalent to the statistics built by a CREATE INDEX command on the same columns. The only difference is that the CREATE STATISTICS command uses sampling by default, while the CREATE INDEX command gathers the statistics with a full scan, since it has to process all rows for the index anyway. A typical command will look like: CREATE STATISTICS [IX_Stats_City] ON Person.Address (City) WITH SAMPLE 50 PERCENT; In this command, we are sampling 50% of the rows. For bigger tables, a random sampling may not produce accurate statistics. Therefore, for bigger tables, you may need to use the RESAMPLE option on UPDATE STATISTICS. The RESAMPLE option will maintain the fullscan statistics for the indexes and sampled statistics for the rest of the columns.

Statistical information is updated when approximately 20 percent of the data rows have changed. Though there are some exceptions to this rule, we will keep this guideline as generic. We can also manually update statistics using UPDATE STATISTICS. The Query Optimizer needs up-to-date statistics to make smart query optimization decisions. It is generally best to leave the "AUTO UPDATE STATISTICS" database option ON (the default setting). This helps to ensure that the optimizer statistics are valid, so that queries are properly optimized when they are run. Additionally, SQL Server uses AUTO_CREATE_STATISTICS, which causes the server to automatically generate all statistics required for the accurate optimization of a specific query.

From the SQL Server 2005 version onwards, SQL Server maintains modification counters on a per-column basis rather than a per-row basis as was done in earlier versions. Therefore, sysindexes.rowmodctr is an approximation of what earlier versions of SQL Server would have shown, but the column is not used to determine when auto-update statistics occurs. In SQL Server 2000, sp_updatestats would iterate over all the objects in the database and update statistics for every object, regardless of whether there had been any changes to the table (that is, even if rowmodctr was zero). This has changed from SQL Server 2005, so that if the sysindexes.rowmodctr value is zero, the index/statistic is skipped because its statistics are already fully up to date. Running sp_updatestats on a database with objects requiring no update will send a message like: 0 index(es)/statistic(s) have been updated, 0 did not require update. Though sp_updatestats still works in the latest version of SQL Server, it is recommended to move to UPDATE STATISTICS, which serves the same purpose but can be targeted at individual tables and statistics.

Updating statistics ensures that any query that runs gets up-to-date statistics to satisfy its needs. A typical command would look like: UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN, ALL This command computes statistics by scanning all rows in the Sales.SalesOrderDetail table. FULLSCAN and SAMPLE 100 PERCENT produce the same results. Use caution when using FULLSCAN on large tables, as it can take time and affect the performance of the system; it is best to run it during off-peak hours or maintenance windows. FULLSCAN cannot be combined with the SAMPLE option.
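Taken together, a minimal sketch of creating a sampled statistic and later refreshing it with its original sampling ratio might look like the following. The Person.Address table and City column here are illustrative, borrowed from the AdventureWorks sample database used in the examples in this post:

-- Create a column statistic, sampling 50 percent of the rows
-- (table and column names are illustrative, from AdventureWorks).
CREATE STATISTICS [IX_Stats_City]
ON Person.Address (City)
WITH SAMPLE 50 PERCENT;

-- Later, refresh the statistic reusing the sampling ratio it was created with.
UPDATE STATISTICS Person.Address [IX_Stats_City]
WITH RESAMPLE;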
There are some conditions in which it might be appropriate to turn off auto-update statistics or disable it for a particular table. For example, when a SQL Server database is under very heavy load, the auto update statistics feature can sometimes update the statistics on large tables at inappropriate times, such as the busiest time of the day. In such cases, you may want to turn autostats off and manually update the statistics (using UPDATE STATISTICS) when the database is under a comparatively lighter load. UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN, NORECOMPUTE The above command forces a full scan of all the rows in the Sales.SalesOrderDetail table and turns off automatic statistics updates for the table. To re-enable the AUTO_UPDATE_STATISTICS option behavior, run UPDATE STATISTICS again without the NORECOMPUTE option.

To know when the statistics were last updated, use the STATS_DATE function (a short query sketch using it appears at the end of this post). Similar to the STATS_DATE function, we can also use the DBCC SHOW_STATISTICS command to view the same data for a specific table and index, like: DBCC SHOW_STATISTICS ('[HumanResources].[Shift]', 'PK_Shift_ShiftID')

At the same time, you need to analyze what will happen if you turn off the auto update statistics feature. While turning this feature off may reduce some stress on your server by not running at inappropriate times of the day, it could also cause some of your queries to be improperly optimized, which could put extra stress on your server during busy times. It is a fine trade-off that only an experienced DBA can make, based on the application workload and query patterns. As with many other optimization issues, you will need to test to see whether turning this option on or off is more effective for your environment.

Statistics are an important concept inside SQL Server, and keeping them up to date is essential. In this blog post, we have dealt with the basics of statistics and how to update them. In future posts we will expand on these topics.
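As promised above, here is a short query sketch that uses STATS_DATE together with the sys.stats catalog view to list every statistic on a table and when it was last refreshed (the table name assumes the AdventureWorks sample database used in the examples):

-- List each statistic on Sales.SalesOrderDetail and its last update time.
SELECT s.name AS statistic_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');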
John Crawford, an Intel Fellow, provided an example of what a four-core, Itanium-based processor might look like, although Intel is not expected to deliver the processor until 2005 or 2006. Putting four processor cores on a single chip could provide a large performance boost for Itanium-based servers and alleviate problems such as heat dissipation.

Crawford showed how Intel could package four of its Itanium 2 processors to share one large cache. Intel has promoted the use of a large cache in its Itanium chips as a way to provide a high-speed bridge between the processor cores and memory. Crawford would not say when the chips would come out or whether Intel would offer a four-core design before a two-core design, but did confirm that Intel is working on this type of technology. "This is imminently possible," Crawford said during his keynote presentation. "You can expect things of this nature coming out."

IBM has already started shipping a dual-core Power4 processor in servers based on AIX, its version of Unix. Hewlett-Packard and Sun are also expected to start shipping dual-core chips in systems based on their own flavours of Unix, respectively HP-UX and Solaris, next year. Intel, however, has said in the past that it would not produce a multiple-core Itanium chip until the middle of the decade.

By spreading out four processor cores around a shared cache, Intel could reduce the amount of heat generated by its chips, Crawford said. The demonstration of the four-core processor showed two cores on each side of the cache, which would place the hottest parts of each processor at some distance from each other.

Intel is also looking to bring its hyperthreading technology to the Itanium chips. Hyperthreading makes one processor appear as multiple processors to software and can provide a performance boost for applications that are written to take advantage of the technology.
About SSL Certificates and SSL Encryption

What Is SSL?

SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser, or a mail server and a mail client (e.g., Outlook). SSL allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. Normally, data sent between browsers and web servers is sent in plain text, leaving you vulnerable to eavesdropping: if an attacker is able to intercept all data being sent between a browser and a web server, they can see and use that information.

More specifically, SSL is a security protocol. Protocols describe how algorithms should be used. In this case, the SSL protocol determines the variables of the encryption for both the link and the data being transmitted.

SSL secures millions of people's data on the Internet every day, especially during online transactions or when transmitting confidential information. Internet users have come to associate their online security with the lock icon that comes with an SSL-secured website, or the green address bar that comes with an extended validation SSL-secured website. SSL-secured websites also begin with https rather than http.

Is My Certificate SSL or TLS?

The SSL protocol has always been used to encrypt and secure transmitted data. Each time a new and more secure version was released, only the version number was altered to reflect the change (e.g., SSLv2.0). However, when the time came to update from SSLv3.0, instead of calling the new version SSLv4.0, it was renamed TLSv1.0. We are currently on TLSv1.2. Because SSL is still the better known, more commonly used term, DigiCert uses SSL when referring to certificates or describing how transmitted data is secured. When you purchase an SSL Certificate from us (e.g., SSL Plus, Extended Validation SSL Plus, etc.), you are actually getting a TLS Certificate (RSA or ECC).

Where Do Certificates Come In?

All browsers have the capability to interact with secured web servers using the SSL protocol. However, the browser and the server need what is called an SSL Certificate to be able to establish a secure connection.

What is an SSL Certificate and How Does it Work?

SSL Certificates have a key pair: a public and a private key. These keys work together to establish an encrypted connection. The certificate also contains what is called the "subject," which is the identity of the certificate/website owner.

To get a certificate, you must create a Certificate Signing Request (CSR) on your server. This process creates a private key and public key on your server. The CSR data file that you send to the SSL Certificate issuer (called a Certificate Authority, or CA) contains the public key. The CA uses the CSR data file to create a data structure to match your private key without compromising the key itself; the CA never sees the private key.

Once you receive the SSL Certificate, you install it on your server. You also install an intermediate certificate that establishes the credibility of your SSL Certificate by tying it to your CA's root certificate.
The instructions for installing and testing your certificate will be different depending on your server. The resulting certificate chain connects your server certificate to your CA's (in this case DigiCert's) root certificate through an intermediate certificate.

The most important part of an SSL Certificate is that it is digitally signed by a trusted CA like DigiCert. Anyone can create a certificate, but browsers only trust certificates that come from an organization on their list of trusted CAs. Browsers come with a pre-installed list of trusted CAs, known as the Trusted Root CA store. In order to be added to the Trusted Root CA store and thus become a Certificate Authority, a company must comply with and be audited against security and authentication standards established by the browsers.

An SSL Certificate issued by a CA to an organization and its domain/website verifies that a trusted third party has authenticated that organization's identity. Since the browser trusts the CA, the browser now trusts that organization's identity too. The browser lets the user know that the website is secure, and the user can feel safe browsing the site and even entering their confidential information.

How Does the SSL Certificate Create a Secure Connection?

When a browser attempts to access a website that is secured by SSL, the browser and the web server establish an SSL connection using a process called an "SSL Handshake". Note that the SSL Handshake is invisible to the user and happens instantaneously.

Essentially, three keys are used to set up the SSL connection: the public, private, and session keys. Anything encrypted with the public key can only be decrypted with the private key, and vice versa. Because encrypting and decrypting with the private and public keys takes a lot of processing power, they are only used during the SSL Handshake to create a symmetric session key. After the secure connection is made, the session key is used to encrypt all transmitted data.

- Browser connects to a web server (website) secured with SSL (https) and requests that the server identify itself.
- Server sends a copy of its SSL Certificate, including the server's public key.
- Browser checks the certificate root against a list of trusted CAs, and checks that the certificate is unexpired, unrevoked, and that its common name is valid for the website it is connecting to. If the browser trusts the certificate, it creates, encrypts, and sends back a symmetric session key using the server's public key.
- Server decrypts the symmetric session key using its private key and sends back an acknowledgement encrypted with the session key to start the encrypted session.
- Server and browser now encrypt all transmitted data with the session key.

Why Do I Need SSL?

One of the most important components of online business is creating a trusted environment where potential customers feel confident in making purchases. Browsers give visual cues, such as a lock icon or a green bar, to help visitors know when their connection is secured; extended validation (EV) SSL Certificates, for example, turn the browser's address bar green.

If your site collects credit card information, you are required by the Payment Card Industry (PCI) to have an SSL Certificate. If your site has a login section or sends/receives other private information (street address, phone number, health records, etc.), you should use SSL Certificates to protect that data.
Your customers want to know that you value their security and are serious about protecting their information. More and more customers are becoming savvy online shoppers and reward the brands that they trust with increased business.
The most obvious difference is that hubs operate at Layer 1 of the OSI model, while bridges and switches work with MAC addresses at Layer 2 of the OSI model.

Hubs are really just multi-port repeaters. They ignore the content of an Ethernet frame and simply resend every frame they receive out every interface on the hub. The challenge is that the Ethernet frames show up at every device attached to a hub instead of just the intended destination (a security gap), and inbound frames often collide with outbound frames (a performance issue).

In the physical world, a bridge connects roads on separate sides of a river or railroad tracks. In the technical world, bridges connect two physical network segments. Each network bridge kept track of the MAC addresses on the network attached to each of its interfaces. When network traffic arrived at the bridge and its target address was local to that side of the bridge, the bridge filtered that Ethernet frame so it stayed on the local side of the bridge only. If the bridge was unable to find the target address on the side that received the traffic, it forwarded the frame across the bridge, hoping the destination would be on the other network segment. At times there were multiple bridges to cross to get to the destination system. The big challenge is that broadcast and multicast traffic has to be forwarded across each bridge so every device has an opportunity to read those messages. If the network manager builds redundant circuits, it often results in a flood of broadcast or multicast traffic, preventing unicast traffic flow.

Switches combine the best of hubs and bridges while adding more abilities. They use the multi-port ability of the hub with the filtering of a bridge, allowing only the destination to see the unicast traffic. Switches allow redundant links and, thanks to the Spanning Tree Protocol (STP) developed for bridges, broadcasts and multicasts run without causing storms. Switches keep track of the MAC addresses reachable through each interface so they can rapidly send traffic only to the frame's destination. The other benefits of using switches are:

- Switches are plug-and-play devices. They begin learning the interface or port to reach the desired address as soon as the first packet arrives.
- Switches improve security by sending traffic only to the addressed device.
- Switches provide an easy way to connect segments that run at different speeds, such as 10 Mbps, 100 Mbps, 1 Gigabit, and 10 Gigabit networks.
- Switches use special chips to make their forwarding decisions in hardware, giving low processing delays and faster performance.
- Switches are replacing routers inside networks because they are more than 10 times faster at forwarding frames on Ethernet networks.
Agile software development has become a major craze in many IT and engineering circles. At their best, agile processes provide faster development times with fewer bugs. At worst, such processes produce software that cannot be maintained or documented and that doesn't meet client needs.

Agile development processes focus on short sprints of development and testing, as well as more distributed planning, customer involvement, and constant review. The general outline of the process, from the top down, includes:

- Business strategy alignment.
- Market and general product definition.
- Product-specific direction creation.
- Release planning.
- Development sprints.
- Feedback and evaluation.

Each of these broad categories includes different activities, and all involve organizations and stakeholders. When treated as a flexible and dynamic process, agile can benefit the development cycle. When motivated by philosophical rather than pragmatic reasons, it can consume resources without producing positive results.
Department of Energy Using Warm Water to Cool New Data Center

The U.S. government is currently in the process of building one of the most efficient data centers in the world. The data center is being built by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in partnership with HP and Intel and will house a new High-Performance Computing (HPC) system. The facility is known as the Energy Systems Integration Facility and is located in Golden, Colorado.

Steve Hammond, NREL Computational Science Director, told Enterprise Networking Planet that the goal of the new data center is to have a Power Usage Effectiveness (PUE) of 1.06, which is substantially better than the industry average of 1.91. "The compute resources will support the breadth of research at NREL with increased efficiency and lower cost for research into cleaner energy technologies," Hammond said. When it comes to NREL's own data center, Hammond stressed that his organization has taken a holistic approach. "We have taken a chips to bricks approach measuring both the bytes and the btu's," Hammond said.

Warm Water Cooling

The PUE reduction for the NREL data center is being achieved by way of a number of innovations. One of the primary ones is the use of warm water to cool the data center and the server rows. Ed Turkel, Manager, Worldwide HPC Marketing at HP, told Enterprise Networking Planet that when looking at power usage in the data center, a lot of it is in the infrastructure used to cool the data center itself, typically by way of large air conditioning units.

HP's new warm water approach is a more efficient method than air conditioning for a number of reasons. For one, the water pumps use less power than typical air conditioning unit fans. Turkel added that water is a better conductor of heat than air, and as such, a data center needs less of it to get similar levels of cooling. The warm water system runs through the floor of the data center as well as through the server racks. "The thermal exchange is directly to water inside the rack so it's not exchanging heat with air inside the rack or anything like that, we're bringing the warm water to the servers themselves," Turkel said.

Going a step further, the new NREL data center will reclaim the heat from the data center servers for other purposes, including heating the building. "The classic data center has lots of cold air that is approximately 60 degrees supplied to the front of the racks, in an effort to help keep your chips from getting hotter than 150 degrees," Hammond said. "Then you get 80 degree hot air out of the back of the racks and you try to eject that heat and declare victory." Hammond explained that, in contrast, the NREL approach will supply water at approximately 75 degrees and, after it runs through the servers, will return water at 95 degrees. That return water will then be the primary heating source for the building.

Intel Xeon Phi

While server cooling is a key source of power efficiency, NREL is going a step further by taking advantage of a new generation of Intel HPC chips with the Xeon Phi. Steven Wheat, General Manager, High Performance Computing at Intel, explained to Enterprise Networking Planet that the Xeon Phi has a similar instruction set to Intel's Xeon E5, though there are a few key differences. For one, whereas an Intel Xeon E5 typically packs about 8 CPU cores, the Xeon Phi can have 50 or more CPU cores. "We have been able to demonstrate a teraflop of sustained performance on a single Xeon Phi processor," Wheat said.

From the networking interface perspective, NREL is using a flexible LAN On Motherboard (LOM) design in which not all Network Interface Cards (NICs) are placed on all boards. The flexible LOM design also saves on power.

Sitting at the core of NREL's new HPC data center deployment is the open source Linux operating system. HP's Turkel explained that his company has gone to great lengths over the years to fully optimize for power usage on Linux. "With our latest generation systems we've taken many of the Linux process daemons and offloaded them from the system and we have them running in a management processor that is run on a node," Turkel said.
Last week, the Folding@home team reported that they achieved five petaflops of processing power for their popular protein folding research project, which relies on processor cycles contributed by hundreds of thousands of people. That's more processing power than can be found at any single US DOE lab or supercomputing center. What's more, the five petaflops corresponds to real application performance for the project's protein simulation software, so we're not just talking peak hardware performance.

Perhaps even more impressive is that the project crossed the one petaflop mark only 18 months ago, on Sept. 16, 2007. At this rate, they'll hit an exaflop in about five years. But it's doubtful whether the Folders will really be so fortunate. Most of the performance increase over the last year and a half was the result of the GPGPU revolution. In September 2007, the project had a mere 42 teraflops of GPUs working for them. Today that number stands at 3,295 teraflops (3 petaflops). Two thirds of those are NVIDIA GPUs; one third are ATI (AMD) GPUs. The remainder of the performance increase over the last year and a half was gleaned from Cell BE-based PlayStation 3 consoles and CPU-based PCs and workstations. While more GPU-based systems will surely be added to the Folding@home infrastructure in the future, the increased performance will likely follow more of a Moore's Law type curve (albeit an accelerated one that corresponds to the faster evolution of GPUs).

In sheer computing power, the five petaflop Folding@home infrastructure represents more than four times the Linpack performance of the 1.1 petaflop IBM Roadrunner supercomputer at Los Alamos. Of course, that's an apples-to-oranges comparison, since supercomputers are monolithic machines built for tightly-coupled applications. Folding@home works more like a typical distributed computing system, where an application is divvied up over a large number of machines and then the results are aggregated. Some purists wouldn't call Folding@home a supercomputer at all, since it doesn't exist as a stand-alone system. Nevertheless, the Folders are doing real HPC work and are pushing the envelope in both high-end computing and protein modeling.

Whether Folding@home can stay ahead of the supercomputing performance curve remains to be seen. Depending upon the kindness of strangers may turn out to be a precarious model for ultra-scale computing. The advent of cloud computing means over-provisioned PCs may end up morphing into thin clients with much less processing power to share. On the other hand, if our client systems become the visual computing platform of choice — as Intel, AMD and NVIDIA seem to be angling for — our PCs and even televisions will be chock-full of GPUs, multicore CPUs or some hybrid of the two. In that case, the Folders will continue to have a large reservoir of machines to tap into.

Getting to an exaflop on a distributed computing platform shouldn't be all that difficult. When you consider that a high-end gaming GPU today offers more than a teraflop of peak performance, you would need only a million or so client machines to get an aggregated peak exaflop. Only some fraction of that will translate into application performance, but considering that GPUs continue to get more powerful and the software is getting better at extracting performance, exaflop protein folding is certainly within reach.
Research from Avast has been published detailing the operation of the XOR.DDoS trojan, which is designed to infect Linux systems.

Will 2015 be the year of DDoS extortion?

As the name suggests, the purpose of the trojan is to support a DDoS (Distributed Denial of Service) network. DDoS is still one of the most difficult attacks to defend against: by definition, the attack is perpetrated simultaneously from large numbers of devices, including home and business users, wherever the trojan has been deployed. This makes the standard countermeasure for DDoS, blocking or blacklisting the associated IP addresses, extremely hard.

During the Christmas holidays we saw how devastatingly effective DDoS attacks can be: both Microsoft Xbox Live and Sony PlayStation servers were crippled by a prolonged DDoS attack from the Lizard Squad hacking group. The Lizard Squad have now claimed that these attacks were simply a 'marketing campaign' to demonstrate their capabilities, and that their DDoS service is now available for hire. DDoS attacks have been used to extort ransoms in the past, with Vimeo, Shutterstock, MailChimp and Bit.ly all having been subject to DDoS coercion.

Defending against malware like the XOR.DDoS trojan requires a layered approach. The increased use of zero-day malware makes anti-virus systems less effective, and therefore system hardening measures need to be used to stop or disrupt the deployment of the trojan and its rootkit. Real-time file integrity monitoring is also essential, to at least detect breach activity if an infection succeeds, allowing remediation work to take place before a more widespread infection takes hold and inflicts damage.
The idea first came to light when the Shanghai Municipal Education Commission said it is already experimenting with electronic textbooks. The commission was responding to a suggestion from Huang Shanming, an advisor to the city government. It plans to first hold trials in a limited number of schools and subjects. After it develops a framework for classes based on e-textbooks, it would like to make the new teaching material widespread.

A number of schools around the world have already begun to use e-textbooks. Since 2004, publishers have been producing digital versions of textbooks approved by Singapore's Ministry of Education for use in the island nation. Last year, California Governor Arnold Schwarzenegger made headlines when he said he would like to replace some high school science and math texts with free, open source digital versions as a way of combating the state's budget woes.

At least one other school in China is experimenting with e-textbooks. Starting in the second half of last year, students at Xujiang Elementary School in Zhejiang's Yiwu City have taken a class once a week on the history of industrial arts using an e-textbook. The class started out using a relatively simple e-textbook described by a teacher at the school as "flipbook animation." But teachers at the school, working in conjunction with the city's education department, later developed a multimedia version that is more interactive and includes video.

One of the main concerns about Shanghai's plan is cost. Not all students have or can even afford a laptop or an electronic reader, and most school districts do not have the budget to supply them. But the plan could start out small, and when the price of e-readers goes down, it should save money. Huang said that from the first year of primary school to high school graduation, students in Shanghai use a total of 213 textbooks costing $256. The East China Normal University Press, which has done research into e-textbooks, said that if mass produced, the total cost of the books could be brought down to around $146.
There are plenty of malware and cybersecurity threat reports out there, but not many from the one Internet player whose central role and unmatched reach give it the potential for the clearest view of what's really happening on the Internet: Google.

Five years ago Google launched its Safe Browsing initiative, under which it collects lists of suspected phishing and malware sites and provides an API that gives developers an easy way to have their apps check Google blacklists before opening a new site. The one consistent risk Google can't do anything about directly is the habit of many users of ignoring warnings about malware on sites with which they're familiar. Even when they haven't been redirected to an attack site, Google warnings mean a known site may have been infected by malware that makes it an involuntary participant in a malware distribution network.

Yesterday, on the fifth anniversary of that project, Google published stats showing some trends in the risks it has spotted and what Google is doing about them. "We have very few false positives," the post said. Among the findings:

- Google warns users of dangerous sites 12 million to 14 million times per day, and warns users about 300,000 times per day that they may be downloading malware;
- Google finds about 9,500 new malicious web sites every day;
- Flagged sites fall into two categories: those infected by malware that forces them to distribute it, and "attack sites" built explicitly to distribute malware. The latter are increasing rapidly;
- Attack sites try to avoid blacklists by changing their web hosts and DNS records and by frequently regenerating domain names;
- "Drive-by downloads" of malware most typically come from legitimate sites that have been compromised with malicious content or redirects to an attack site;
- As built-in malware detection gets better, malware distributors increasingly rely on social engineering: convincing the user to install fake anti-virus or other software rather than trying to install malware covertly;
- Socially engineered attacks still trail drive-by downloads, but are catching up quickly;
- The number of phishing sites is increasing fast, but many phishing sites stay online for as little as an hour to avoid detection;
- Phishing sites disguise themselves as popular sites and may ask users to install "browser extensions" (malware) to enable fake content;
- Google continues to invest "heavily" in Safe Browsing, most notably by adding instant phishing and download protection in Chrome, adding malware scans for Chrome extensions, and adding protection for Android apps in the Google Play store (not always successfully).
Nearly 1 of 2 people (46%) lose data every year, according to a study of Backblaze customers. Shocking? It surprised me at first glance, but then I dug in deeper.

According to a Google study of hard drive failures, disk drives over a year old have about a 1 in 10 chance of failure each year. At this rate, about 1 of every 2 drives will fail within 5 years (a given drive has a 1 − 0.9⁵ ≈ 41% chance of dying in that span). A person that has a hard drive in their computer and an external drive for that period stands a very good chance of having one die.

Computer Theft and Computer Loss

15% of households annually experience burglary or theft according to the Bureau of Justice. While statistics are not available for what was stolen, when a home is burglarized, a computer is a likely target. According to the Ponemon Institute, 637,000 laptops are lost at airports across the country every year. How many more in taxi cabs, coffee shops, and at vacation destinations?

Viruses and Software Corruption

Various surveys across the web have shown that viruses cause 4% to 7% of all data loss. Add software corruption to the mix (boot sector issues, registry issues, etc.) and this starts becoming significant.

Flooding, Fire, Earthquakes, and Other Disasters

According to FEMA, about 1 in 10 households that have flood insurance suffer a loss each year due to flood damage. Half a million buildings catch fire every year, based on USFA statistics. Nearly 200 earthquakes with a magnitude of 6.0 or greater occur worldwide annually, according to the USGS. Computers are sensitive devices and don't like to be wet, hot, or shaken.

"Oops" is the #2 most common cause of data loss (after hardware failure), according to data recovery specialist Ontrack. Is the delete key too big on the keyboard? Blame it on that…but all of us have done it and wished there were an undo key that was just as big. Losing data doesn't always mean a hard drive crash. Sometimes it just means we deleted a folder (with our kids' photos) or our dog knocked over an external drive (with our music library). Whatever the cause, based on the actual needs of our customers, the various causes of data loss combine to require 1 of every 2 customers to restore data each year.
If you ever get the urge to build your own supercomputer, just take a look at what Chris Fenton has done. According to a blog post on his website, Fenton has managed to construct a working 1/10 scale version of the original Cray-1 machine. He's calling it the Cray-1A.

Built and sold by Cray Research in the 1970s, the Cray-1, with its iconic look and, at the time, unparalleled performance, jump-started the supercomputing industry. Today you can see the machines displayed in a handful of museums, but if you want to encounter a working version, you're going to have to drop by Fenton's house.

An electrical engineer by trade, Fenton is one of those guys who just gets a kick out of building stuff. His website chronicles his various homebrew electronic projects: GPS altimeters, IED detectors, gas guns — you know, everyday gadgets you'd use around the house. Thus the need for the Cray-1 to round out the collection.

Thanks to a Cray-1 hardware reference manual located online, Fenton was able to reverse engineer the design onto an FPGA device using Verilog as the hardware description language. The final design was implemented on a Xilinx Spartan-3E 1600 development board. "This is basically the biggest FPGA you can buy that doesn't cost thousands of dollars for a devkit," writes Fenton. "The Cray occupies about 75% of the logic resources, and all of the block RAM."

Not only did he reproduce a binary-compatible, cycle-accurate supercomputer, he also packaged it up to look like a miniature version of the original Cray-1, complete with doll-sized wraparound benching. The only thing missing is the software (oops). But not for lack of trying:

After searching the internet exhaustively, I contacted the Computer History Museum and they didn't have any either. They also informed me that apparently SGI destroyed Cray's old software archives before spinning them off again in the late 90's. I filed a couple of FOIA requests with scary government agencies that also came up dry. I wound up e-mailing back and forth with a bunch of former Cray employees and also came up *mostly* dry. My current best hope is a guy I was able to track down that happened to own an 80 MB 'disk pack' from a Cray-1 Maintenance Control Unit (the Cray-1 was so complicated, it required a dedicated mini-computer just to boot it!), although it still remains to be seen if I'll actually get a chance to try to recover it.

Fenton admits that without the software stack, the Cray-1A is not all that useful (unlike that gas gun!). Meanwhile he's rewriting the CAL assembler for the architecture. Once that's done, he could theoretically compose any software he wanted, although I imagine recoding the Fortran compiler and OS would chew up most of Fenton's remaining free time. Not that that would stop him.
There is a noise going about that cloud computing can cut costs, speed implementations, and scale quickly. However, the noise may be slightly off the mark—particularly in product pitches!

Just what is Cloud Computing? Search.com provides the following definition: "Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS)." The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.

Martin Banks, Associate Analyst at Bloor Research for Data Centres, told me, "I prefer the term Exostructure—an externally sourced (and theoretically limitless) seamless extension of an internal IT systems infrastructure that delivers information services on a fee-paying basis. This is looking at the issue from the users' point of view."

Infrastructure-as-a-Service, like Amazon Web Services, provides virtual server instances with unique IP addresses and blocks of storage on demand. Customers use the provider's application program interface to start, stop, access and configure their virtual servers and storage.

Platform-as-a-Service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and GoogleApps are examples of PaaS. Developers need to know that currently there are no standards for interoperability or data portability in the cloud.

In the Software-as-a-Service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

A cloud service has three distinct characteristics that differentiate it from traditional hosting:

- It is sold on demand, typically by the minute or the hour;
- A user can have as much or as little of a service as they want at any given time; and
- The service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access).

So what does this really mean to a business? Well, rather than running computer applications on an in-house computer, you run them on an external machine, which could be anywhere in the world, and access the application programs via the internet. It also means that the data associated with the application is held externally to your organisation. So the application is hosted on a server, with the associated data being stored in a database—all on a server run by a third party.

There is just one more piece that we need to understand, and that is that a cloud service can be either public or private. What does this mean? A public cloud sells services to anyone on the Internet. Amazon Web Services is the largest public cloud provider at the time of writing. A private cloud is a proprietary network or a data centre that supplies hosted services to a limited number of people.
Just one more term that you need to understand: a virtual private cloud is when a service provider uses public cloud resources to create their private cloud.

What makes cloud computing so appealing at the moment? In a recent article, Nigel Stanley, Bloor Research's Security Practice Leader, said the following: "In an economic downturn cloud computing oozes sexiness. The thoughts of off loading your data to a third party gets financial types excited as they start to see how much money can be saved." Cloud computing means that rather than purchasing software, which would go on your CAPEX, you pay for it when you use it, so it comes off your OPEX budget instead. Banks feels that cloud computing will also reduce your OPEX spend, as the implementation costs and associated consultancy costs will be lower as well. On one point that Banks made I am not so sure I agree: he felt the integration cost would also be smaller, but I would advocate budgeting the same as for an in-house implementation.

So how can cloud computing be used in manufacturing? CRM was one of the first areas covered, pioneered by salesforce.com with its launch in 2000. Salesforce.com's CRM solution is broken down into several modules: Sales, Service & Support, Partner Relationship Management, Marketing, Content, Ideas and Analytics. Salesforce.com's Platform-as-a-Service product (the Force.com Platform) allows external developers to create add-on applications that integrate into the main Salesforce application and are hosted on Salesforce.com's infrastructure. Salesforce.com currently has 55,400 customers and over 1,500,000 subscribers.

Why CRM? Well, the answer, in my view, lies in the need to support a mobile sales force that must be able to record information easily and quickly without necessarily always having contact with the centre. Couple this with the need for the centre to have control over this distributed workforce, and you create an ideal environment for a cloud computing solution.

A number of the large ERP vendors, such as SAP, provide cloud capabilities. SAP launched its Business ByDesign in September 2007. Over the past couple of years Business ByDesign has been plagued by some really bad press. In September 2009, SAP gave a briefing to the industry on how it was tackling a number of the issues. These included:

- Scalability issues: all customers run on their own blade servers.
- Overly "feature-rich": the suite was originally designed to meet all of the needs of its customer base instead of focusing on specific functionality.
- Lack of corporate commitment: SAP is cutting R&D funding and shifting resources to other products.
- Runs on NetWeaver: a full instance is too heavy for a SaaS application, and finding "cloud developers" who have full Java EE stack experience may be tough.

Infor entered the market in October 2008 with the launch of a SaaS version of ERP SyteLine. This is a very typical entry from an existing vendor in that it allows a user to move seamlessly between SaaS and on-premises deployment, or vice versa. Microsoft Dynamics entered the SaaS market in 2007 with the introduction of CRM Live. This is run at Microsoft data centres around the world, along with all the other "Live" products such as Live Small Business Office. Software-plus-Services for Microsoft Dynamics ERP is the new capability being offered.
This allows a user to choose to implement their Microsoft Dynamics software as a wholly-owned on-site solution, via online services, all or partly hosted, or in any combination.

Oracle entered the market last year with the introduction of an offering comprising its Oracle Sourcing and Oracle Sourcing Optimization products. Nagaraj Srinivasan, Oracle's vice president for EBS supply chain management, in an interview with Managing Automation in March 2009, described the primary focus as being on automating the transactional aspects of material procurement. The tool can be used to aggregate demand; determine whether an RFP, RFQ, or other sourcing process is needed; compile contract terms; notify and qualify suppliers; establish prices and discounts and conduct multi-round negotiations; and aggregate and award bids. In addition, Oracle is offering CRM as SaaS, called CRM On Demand.

Cloud-computing-based manufacturing solutions are emerging as viable competitors to products from established vendors. These cloud solutions are most commonly used for supply chain visibility, transportation management and supplier/contract negotiation. Vendors are rapidly creating cloud computing modules to address other manufacturing issues, such as supply chain execution, shop floor planning, demand planning and production scheduling.

But where else? Christian Verstraete, HP's Chief Technologist for Manufacturing and Distribution services, believes a couple of areas will quickly become the favourites of manufacturing companies, including:
- Cross-enterprise collaboration. Verstraete sees cross-enterprise collaboration as a current weak point in supply chain management. The required integrated environment would need to support the exchange of structured and unstructured data, and both synchronous and asynchronous communication. By integrating multiple concepts of social networking and providing them in an integrated, cloud-based environment, companies could use a variety of collaboration mechanisms to perform key business processes without having to manage the environment. Data can be contributed by the parties on request, limiting the sensitive data in the cloud. Mike Frichol, founder of Pragmatic Papers, stated: "Cloud computing provides a geographically dispersed network approach that is much better aligned to serve all these trading partners trying to communicate with each other through different systems. Supply chains are networks. Cloud computing comprises networks for delivering business applications anywhere, anytime—that should significantly improve supply chain capabilities, communication and coordination."
- High-performance computing. Verstraete foresees the need for additional computing power as companies increase the use of digital models to virtually test their products and/or to understand their business environment better through business intelligence and decision making. The models used are typically highly parallelizable and fit well in a cloud environment as long as the amount of data they need to be provided with is not large; otherwise, the network could become a bottleneck.

But cloud computing can get a business in hot water if it has not thought through the many consequences, and this particularly means data security. Stanley states, "Without assurances that organisational data will be totally secure in a remote site the whole concept of cloud computing is dead in the water." So securing the cloud is vital for its success.
With companies trusting their corporate data—their most important asset—to third-party organisations, what another of my Bloor colleagues, Peter Cooke, describes as the holy trinity of confidentiality, integrity and accessibility has to be assured. The infrastructure underpinning this is Identity Access Management (IAM). Without it, system access security is non-existent.

Another worry is the service provider's ability to still be around tomorrow. As Raimund Genes, CTO at Trend Micro, the global security company, put it in a recent eBook: "You need a provider that will be in business three years from now. When you give up your IT infrastructure, you need a reliable service provider." Banks stated that "with cloud computing you must realize that your business process is no longer in your complete control. It is wrapped into the cloud service and in the control of the provider." Therefore it is imperative that when choosing a cloud service provider, you choose one that is likely to be there for the long haul, or a supplier that has a strategy to manage the situation if they are not. Could we see escrow agreements for business processes locked into cloud services?

The goal of cloud computing is to provide easy, scalable access to computing resources and IT services. Cloud computing users gain some significant economic advantages. They have no capital expenses. They have reduced service costs because of a simplified IT infrastructure. They do not have to buy systems scaled to their worst-case use scenarios, and there is a reduction in large client applications. The primary disadvantages are the risks associated with Internet reliability, security and access of data, and the financial stability of the service provider.

References:
- Generating Maximum Value from your IT Security Spend: An Analyst's Perspective, Nigel Stanley, Bloor Research, 29 September 2009.
- The Cloud Computing Advantage for Companies that Outsource Manufacturing, Dr. Katherine Jones, Industry Week, 24 April 2009.
- What to Expect from Cloud Computing, internet.com.
- Three Steps to Secure Cloud Computing, Robert McGarvey, 2009.
What is Risk Management? Effective risk management can bring far-reaching benefits to all organizations, whether large or small, public or private sector, as well as individuals managing projects or programmes. Effective risk management is likely to improve performance against objectives by contributing to: - Fewer sudden shocks and unwelcome surprises - More efficient use of resources - Reduced waste - Reduced fraud - Better service delivery - Reduction in management time spent fire-fighting - Better management of contingent and maintenance activities - Lower cost of capital - Improved innovation - Increased likelihood of change initiatives being achieved - More focus internally on doing the right things properly - More focus externally to shape effective strategies.
The ability to access and process electronic information has become one of the most important factors in leading a full and productive life in today's knowledge-based society. This makes access to electronic information critical for people with disabilities who are seeking employment and other opportunities.

Significant progress has been made to improve the accessibility of content presented on Web sites, often in HTML format. However, the accessibility of other electronic formats, such as Microsoft Word documents and PDFs, still lags behind and is often added as an afterthought, if at all. Given the enormous volume of content created daily -- often in the form of documents authored by individuals who know little about accessibility -- this means far too much material is inaccessible to far too many people. Consequently, the potential of the Information Age to level the playing field in terms of employment opportunities and to contribute positively to the lives of people with disabilities hasn't been fully realized. For example, according to the U.S. Census Bureau, the poverty rate for 25- to 64-year-olds with no disability was eight percent, compared to 11 percent for those with a non-severe disability and 26 percent for people with a severe disability.

Accessibility of documents can be implemented at a number of levels, which can be pictured as a pyramid. At the top of the pyramid are enterprise verification and remediation personnel, who are responsible for verifying that content created and disseminated by the enterprise is accessible. This typically involves auditing Web sites and other repositories of information to verify compliance with accessibility legislation, regulations and enterprise policies. In the middle of the pyramid are quality-assurance and remediation personnel. They are typically responsible for testing documents before they are published and for correcting compliance errors. At the base of the pyramid are document authors. The authors' main interest is to create content. They typically are oblivious to accessibility and are rarely aware of what makes a document accessible.

There are a number of reasons why applying accessibility at this level can have the greatest impact. Authors know the content well. As a result, they can provide the most effective accessibility information. And authors are far more numerous than quality-assurance or enterprise testing personnel. Making authors responsible for the accessibility of their documents will take accessibility to the grassroots, thereby increasing the chances that documents are accessible. Also, it's far less expensive to add accessibility at the author level.

Broadly speaking, accessibility of electronic documents remains a highly specialized topic that's exclusive to accessibility experts. Most electronic documents are created without consideration for accessibility and are then made accessible at a later stage in the life cycle of the document. This is far from optimal because costs increase exponentially and quality decreases significantly the further accessibility is removed from the authoring stage of the document-management workflow. Such inefficient workflows have several outcomes, including the creation of fewer accessible documents, due to the significant cost and complexity associated with remediating documents at the later stages of the workflow, and a lower quality of accessibility data within the produced documents.
A compelling solution is to make accessibility a part of the authoring process, as opposed to a later-stage process that's often done, if at all, only as an afterthought. Author-level accessibility represents a significant breakthrough that will transform the accessibility of electronic documents by taking accessibility out of the realm of experts and bringing it into the mainstream. Using effective author-level tools, accessibility can be brought to the grassroots.

For example, a university professor who is creating course materials and distributing them to students will easily create fully accessible documents from her or his favorite authoring environment. The professor won't view this as an added burden but rather as an integral part of the authoring process, similar to spellchecking. Meanwhile, students with visual impairments will be able to easily read course materials because they will have been created with accessibility built in by the person most qualified to create this accessibility information. Thus, the student will not be at a disadvantage compared to sighted students. In another scenario, a job seeker with a visual impairment will be able to read job postings produced as PDF documents and fill out an online application form because the postings and forms will have been built with accessibility integrated into them by their authors. This will enable job seekers to more effectively locate a suitable job and apply for it.

Central to achieving this vision are software tools that integrate into the authoring environment and ensure documents are made accessible by the author. In order for such tools to be effective, they must meet several criteria:
- They should be integrated into the authoring environment so that the author does not have to exit the authoring application to run the accessibility tool.
- They should inherently verify documents against a well-defined standard, such as Section 508 or W3C WCAG 2.0. Once the tool finishes the verification, and assuming the author followed instructions, the document should be compliant with the specific standard.
- They should provide the ability to both verify and fix compliance problems.
- They should impose a minimal burden on authors in terms of their knowledge of accessibility or the amount of work that's required to make a document accessible.
- They should use basic, nontechnical language and provide clear explanations and examples. It cannot be assumed that document authors understand technology or have more than a basic level of familiarity with their tools.

It should be noted that for tools to support a specific standard for a given document format, the document format itself must support the accessibility structures that are required by the standard. For example, formats like HTML 4.0 or PDF 1.8 support all basic structures required for Section 508 and WCAG 2.0. The Microsoft Word 2007 format, on the other hand, does not; for example, it doesn't provide support for row headers. The degree of accessibility of a given format should be differentiated from how difficult it is to make the format accessible. For example, assuming tools are not used, it may be significantly harder to add accessibility features to a PDF document than to a Word document, yet a PDF document containing tables can be made accessible while a Word 2007 document cannot: PDF supports all the accessibility structures for tables while MS Word 2007 does not.
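To give a flavor of what such author-level verification automates, here is a deliberately tiny sketch of a single check: flagging images with no alt text in an HTML document. It is an illustrative assumption of how one rule might be coded, not any real product's implementation; standards such as WCAG 2.0 involve many more rules than this.

```python
# One-rule accessibility check: flag <img> tags with no alt text.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "img" and not dict(attrs).get("alt"):
            self.problems.append(
                f"<img> at line {self.getpos()[0]} has no alt text"
            )

checker = AltTextChecker()
checker.feed("<p>Intro</p><img src='chart.png'><img src='logo.png' alt='Logo'>")
for problem in checker.problems:
    print(problem)  # flags only the first image
```

An authoring-tool plug-in would run checks like this continuously as the author types, much as a spellchecker does, rather than as a separate batch step.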
Author-level tools can bring document accessibility to the grassroots, but they have to meet several criteria related to how easy they are to use and how fully they support specific standards. Effective author-level tools make it possible to implement more optimal workflows that enable content authors to create accessible content from the outset.

Deborah Kaplan is the director of the Accessible Technology Initiative at the California State University Chancellor's Office. She has several decades of experience in advocating for accessible technology and its implementation. She is the former executive director of the World Institute on Disability and a consultant to technology firms. As the director of the CSU's Accessible Technology Initiative, she oversees a comprehensive effort to implement accessible technology in the largest four-year higher-education system in the U.S. She has a law degree from the University of California, Berkeley and a bachelor's degree from the University of California, Santa Cruz.

Monir ElRayes is founder, president and CEO of NetCentric Technologies, a company that provides document-compliance solutions designed to enable government, educational institutions and corporations to ensure the accessibility of electronic documents and their compliance with a variety of standards. ElRayes holds a master of engineering (electrical) degree from Cornell University and a bachelor of science (electrical engineering) degree from the University of Iowa.
Bandwidth is one of the most fundamental, complex and overlooked aspects of video surveillance. Many simply assume it is a linear function of resolution and frame rate. Not only is that wrong, it misses a number of other critical elements, and failing to consider these issues could result in overloaded networks or shorter storage duration than expected. In this guide, we take a look at these factors, broken down into fundamental topics common between cameras, and practical performance/field issues which vary depending on camera performance, install location, and more.

Fundamentals:
- Resolution: Does doubling pixels double bandwidth?
- Framerate: Is 30 FPS triple the bandwidth of 10 FPS?
- Compression: How do compression levels impact bandwidth?
- CODEC: How does CODEC choice impact bandwidth?
- Smart CODECs: How do these new technologies impact bandwidth?

Practical Performance/Field Issues:
- Scene complexity: How much do objects in the FOV impact bitrate?
- Field of view: Do wider views mean more bandwidth?
- Low light: How do low lux levels impact bandwidth?
- WDR: Is bitrate higher with WDR on or off?
- Sharpness: How does this oft-forgotten setting impact bitrate?
- Color: How much does color impact bandwidth?
- Manufacturer model performance: Same manufacturer, same resolution, same FPS. Same bitrate?

The most basic commonly missed element is scene complexity. Contrast the 'simple' indoor room to the 'complex' parking lot: even if everything else is equal (same camera, same settings), the 'complex' parking lot routinely requires 300%+ more bandwidth than the 'simple' indoor room because there is more activity and more detail. Additionally, scene complexity may change by time of day, season of the year, weather, and other factors, making it even more difficult to fairly assess. We look at this issue in our Advanced Camera Bandwidth Test. Inside, we cover the 10+ other issues listed above.

[[Note: this guide was originally released in 2014, but has been restructured and updated with additional information, including Smart CODECs and H.265]]
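As a quick illustration of why getting bitrate wrong matters, here is a back-of-envelope storage-duration calculation. The bitrate figure is purely an assumption for the example; as this guide stresses, real bitrates vary widely with the factors above, so measured values should always be used.

```python
# Rough storage-duration estimate for a surveillance system.
# All inputs are illustrative assumptions, not measured values.
def storage_days(num_cameras, bitrate_mbps, capacity_tb):
    # Convert megabits/s per camera into bytes written per day.
    bytes_per_day = num_cameras * bitrate_mbps / 8 * 1e6 * 86400
    return capacity_tb * 1e12 / bytes_per_day

# 16 cameras at an assumed 4 Mb/s average, 20 TB of storage:
print(f"{storage_days(16, 4.0, 20):.1f} days")  # ~28.9 days
```

Note how sensitive the result is: if a complex scene pushes the real average to 12 Mb/s instead of 4, retention drops to roughly a third of the expected duration.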
F-Secure's annual Online Wellbeing survey uncovered Internet users' feelings of personal online security with regards to online banking, children's safety while surfing the web, and credit card information when shopping online. Overall, 50% of respondents were confident about their security when banking online. However, only 6% of respondents felt secure in making credit card purchases online.

Web surfing and phishing e-mails

Phishing can appear in the form of what looks like an e-mail from a well-known bank, which in reality is a scam seeking personal information. On average, 54% felt fairly or very confident they would not fall for a phishing email. However, 27% of respondents do not know whether or not they can spot phishing emails. In Hong Kong, 26% of respondents feel they cannot spot phishing emails, while in other countries, such as the UK (68%), Canada (60%) and Italy (67%), respondents are far more confident in their ability to spot such emails.

Children and the Internet

At the core of F-Secure's "Online Wellbeing" is family security when using the Internet. Parents are increasingly worried about their children not being protected from unsuitable content, including pornography and violent imagery. When asked to respond to the statement "My kids are safe when they are online", over a third of respondents across all countries could neither agree nor disagree. Parents and guardians do not know whether children are safe online or not. A majority (54%) of respondents did not agree that their children were safe online. Only 2% of respondents in India strongly agree that their children are safe. In Germany, 69% disagreed or strongly disagreed with the statement.

Surprisingly, respondents feel safer during online banking than when using their credit card for shopping online. In all eight surveyed countries, the majority agree that they are safe during online banking transactions. The countries with the most confidence are France (62%) and the US (63%), but in Germany, 39% still do not have confidence in online banking. On the whole, 31% of all respondents were still unsure of their safety.

The survey was carried out by a third party in December 2008 across 2,019 Internet users aged 20-40 in the USA, Canada, France, Germany, the UK, Italy, India and Hong Kong. There were approximately 200 respondents surveyed per country. F-Secure asked respondents a series of basic online security questions and, using a Likert scale, asked them to rate the extent to which they were confident in the security of given online activities.
Delivering the Data
By Mel Duvall | Posted 2005-04-06

How does Federal Reserve chairman Alan Greenspan decide to raise rates a quarter point? By analyzing a potent mixture of raw pecuniary data and computerized economic intelligence against first-hand reports from key hubs of U.S. financial activity and five ...

But, of course, Greenspan doesn't make decisions on anecdotal information alone. Inside the marble Fed building, 225 Ph.D. economists are dedicated to the single-minded task of understanding the U.S. economy and its relation to world economies. The Fed's research division captures and monitors information from hundreds of corporate, government and university sources. Housing starts. Jobs created. Gas prices. Measurements of mood, such as the University of Michigan Consumer Sentiment Survey, which examines consumers' confidence in the economy and their likelihood to spend money on cars, TVs and clothes. The Fed also buys mortgage and credit card information from banks and lenders to study how, and how much, consumer spending is being financed.

All the data must be consolidated, analyzed and delivered to Greenspan and the other members of the board. They meet formally eight times a year, but receive reports daily. Sandra Cannon, Greenspan's chief of economic information management, makes sure reports on the GDP, consumer prices and other economic indicators are gathered as soon as they're available and loaded into the Fed's Forecasting, Analysis and Modeling Environment (FAME) database system. A number of top-tier banks, investment houses and energy trading firms, such as Credit Suisse First Boston and Morgan Stanley, use FAME for financial analysis. Disaster recovery specialist SunGard Data Systems acquired the system in 2003 from FAME Information Services.

Once numbers are uploaded to FAME, economists at the Fed begin their calculations. Unlike a typical database, which stores data in categories such as date in one column and price in another, the system builds time into each piece of information. It can store, for example, that the rate on a five-year variable mortgage was 5% on Feb. 1, 2005, and 5.25% on Feb. 2, after Greenspan's tweak. The payoff for financial organizations, says SunGard FAME product specialist Kenneth Kunin, is that time-based calculations can be done faster and with less programming. Time is already attached to the information, making it easier to figure out, for example, how much interest might be earned on a bank's deposits from September 12 to March 16.
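FAME itself is proprietary, but the time-aware idea it pioneered is easy to illustrate. The toy sketch below uses Python's pandas library (not FAME) with the mortgage-rate example from the article; the dates and values beyond Feb. 2 are invented for the illustration.

```python
# Toy illustration of "time built into each piece of information":
# each observation carries its date, so date-bounded calculations
# need no separate bookkeeping. This is pandas, not FAME.
import pandas as pd

rates = pd.Series(
    [5.00, 5.25, 5.25, 5.50],  # values after Feb. 2 are invented
    index=pd.to_datetime(["2005-02-01", "2005-02-02",
                          "2005-03-01", "2005-03-16"]),
    name="five_year_variable_rate_pct",
)

# Average rate over a date window, in one line:
print(rates["2005-02-01":"2005-03-01"].mean())  # 5.1666...
```

In a conventional two-column layout, the same question would mean joining and filtering on a separate date column; with time attached to the data, the window is just part of the lookup.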
Federation enables applications in different domains to share information about users.
- Federated domains must have some pre-established relationship, bilaterally or in a group.
- Information about users is exchanged:
  - Identity: Who is this user?
  - Authentication: How/when did the user sign in?
  - Authorization: What is the user allowed to do?

Federation enables single sign-on between domains:
- The user attempts to access an application in domain A (the service provider).
- The application checks the user's browser to see if he has already been authenticated. He has not.
- The application redirects the user's client (browser) to an identity provider (IdP) in domain B.
- The IdP in domain B authenticates the user and redirects his browser back to domain A, along with a cryptographically signed assertion about his identity and entitlements.
- The application in domain A reads the assertion, validates the cryptographic signature and automatically signs the user in.
- The user is able to perform the same action against other applications in other domains. If they all trust the IdP at domain B, he may not be required to sign in again -- hence single sign-on.

In some deployment patterns, federation eliminates some user administration:
- Domain B trusts domain A to name its own users.
- Domain B does not create its own objects for domain A users.

There are multiple standards for federation, including the Security Assertion Markup Language (SAML -- v1 and v2), WS-Security (mostly Microsoft) and OAuth (used by consumer web sites like Facebook and Live.com). The most common language/protocol for federated authentication between enterprise applications is SAML 2.0. The most common language/protocol for consumer-facing web sites is OAuth v2.
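A deliberately simplified sketch of the trust step follows: the service provider in domain A verifies that an assertion really came from the IdP in domain B. Real SAML uses XML signatures and public-key cryptography; here an HMAC over an assumed pre-shared secret stands in for that machinery, purely to show the verify-before-trust shape of the flow.

```python
# Simplified stand-in for SAML assertion signing/verification.
# Assumption: the two domains exchanged SHARED_SECRET out of band
# as part of their pre-established relationship.
import hmac, hashlib, json

SHARED_SECRET = b"pre-established-trust"

def sign_assertion(assertion: dict) -> str:
    payload = json.dumps(assertion, sort_keys=True).encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_assertion(assertion: dict, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_assertion(assertion), signature)

assertion = {"subject": "alice", "idp": "domain-b", "entitlements": ["read"]}
sig = sign_assertion(assertion)          # produced by the IdP in domain B
print(verify_assertion(assertion, sig))  # checked in domain A -> True
```

The point is that domain A never sees the user's password; it only checks that the statement about the user carries a signature it has agreed to trust.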
If you could never draw a perfect circle without the help of a compass, a new table developed by Japanese researchers may be able to help out. In the latest video news report from DigInfo News, we see a demonstration of this table, which uses a computer to "control the XY position of a magnet under the surface of the table." While it still doesn't look as precise as something you'd get with a compass or ruler, the implications for use with other digital technology are fascinating.

Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
The transformative power of the internet is found in every corner of the world. It defines our generation, and is inspiring the next, in ways that we are only just beginning to imagine. That's why our vision for Digital Imagination is inspired by our deep belief in the liberating potential of technology. Our goal is to take people further, to fuel imagination, and to empower us all to realize our potential. We will harness connectivity to teach future makers, spark positive change and accelerate innovation that brings benefits to communities and society.

Our new flagship program, Digital Imagination, is a collective movement to create digital solutions that respond to pressing societal challenges. We focus on three main areas:
- Creating exciting ways to share the skills needed to thrive in the digital economy and create a positive social impact.
- Supporting and investing in innovators and entrepreneurs to use digital technology to inspire social change.
- Bringing people together to use digital technology to solve the most pressing issues facing society. This is a new focus for us that we will develop further in 2016.
You can't see it, so you probably take it for granted. But think for a moment about all the glass that you use in everyday life. From windows to windshields, light bulbs to flat panel screens, these variations of melted sand are a fundamental part of many technologies. One of the key applications is for LCD panels, which require glass that is especially smooth and uniform in thickness. Corning is the market leader in LCD glass, with Asahi Glass Company (AGC) providing a significant portion as well. Glass also works well at blocking air and water vapor, which is necessary for encapsulating OLED displays.

At present, LCD and OLED panels are made in "batch mode" production. This means that the glass substrates are cut into rectangles for processing, then cut into individual panels. In general, larger substrates translate into more efficient production, especially for large displays (though there are indications that we may be nearing some of the limits for those efficiencies). The dream is to be able to produce displays on long ribbons of glass that move continuously through the different production steps, much like a newspaper is printed on a giant printing press using rolls of paper. Roll-to-roll processing could be much more efficient than batch processing. Among the many problems, one stands out: have you ever tried to roll up a sheet of glass?

Corning and AGC have both managed this trick. At SID 2011, both companies demonstrated glass that is just 0.1 mm thick and can be rolled up. How thick is 0.1 mm? It's about the same thickness as a sheet of paper. Managing this material is tricky, as you might imagine. The Corning demo showed a loop of glass traveling over a series of three rollers, and plastic film was attached to the edges on both sides of the glass to protect the edges from damage as it rolled around.

Still, the advantages of this thin glass are plentiful. Even if you don't use roll-to-roll production, the glass is thinner and much lighter than standard LCD glass (which is typically 0.7 mm thick, or about the thickness of a credit card). While you can't wrap it around a pen, it does make it possible to create more flexible displays, which could lead to novel applications. The big bet, however, is that this could help lower production costs even further, and help meet consumers' continued expectations of larger displays for less money.
Increasing frozen meat supplies underscore need for more temperature monitoring
Wednesday, Mar 6th 2013

A recent report from the U.S. Department of Agriculture found that more meat and other perishable food items are being stored in refrigeration facilities now than in the past, underscoring the need for more temperature monitoring in the foodservice industry to prevent illness. The USDA study showed that more than 657 million pounds of chicken and more than 361 million pounds of turkey were in freezer units nationwide in January, up from more than 607 million pounds and more than 297 million pounds respectively 12 months earlier. In addition, total frozen pork supplies went from more than 585 million pounds in January 2012 to more than 605 million pounds earlier this year. According to Beef Magazine, total frozen meat supplies likely increased as restaurants and other foodservice providers purposefully took on additional inventory in preparation for the busier summer months. "So while inventories have grown in recent months (likely the result of strategic positioning), there's good reason to believe those stocks will be whittled away in the months to come as peak demand season begins to kick in," the news source said.

Why more frozen meat should equal more temperature monitoring

As food companies bring in more pork, chicken, turkey and other meats, they should also be utilizing temperature monitoring equipment to ensure that the stored meats remain fit for human consumption. Improper storage techniques increase the likelihood that bacteria and disease-causing pathogens appear in the food supply. Organizations should be leveraging all the tools at their disposal to combat foodborne illness, especially considering that a recent report from the Centers for Disease Control and Prevention found that approximately 46 percent of all annual food-related deaths were caused by land animals from 1998 through 2008. Considering that there are, on average, more than 9 million foodborne illnesses each year, this means that improperly cooked and handled meat caused the untimely demise of hundreds of Americans during that 10-year period.

Among all meats, U.S. residents were most likely to fall ill after eating poultry. According to the CDC, 19 percent of all deaths from a foodborne illness were caused primarily by chicken and turkey. In comparison, dairy products were the likely culprit in 10 percent of all fatalities, and leafy vegetables caused approximately 6 percent of all deaths related to a foodborne illness. "[O]ur outbreak-based method attributed most foodborne illnesses to food commodities that constitute a major portion of the U.S. diet," the report said. "When food commodities are consumed frequently, even those with a low risk for pathogen transmission per serving may result in a high number of illnesses. The attribution of foodborne-associated illnesses and deaths to specific commodities is useful for prioritizing public health activities."

While the CDC acknowledged that there are many factors responsible for the increase noted in the report, one way food service providers can reduce the likelihood of their supplies making patrons sick is by using temperature monitoring. According to the University of Missouri - St. Louis, the majority of disease-causing pathogens thrive at temperatures between 20 degrees and 50 degrees Celsius (68 degrees to 122 degrees Fahrenheit).
By making sure that refrigeration and freezer rooms never reach this range, food companies can better ensure that their supplies are safe for human consumption.
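The monitoring rule itself is simple enough to sketch in a few lines. The room names and readings below are purely illustrative; a real system would also log readings, debounce transient spikes and escalate alerts.

```python
# Minimal sketch of the danger-zone check described above.
DANGER_C = (20.0, 50.0)  # pathogen-friendly range, roughly 68-122 F

def check_reading(room: str, temp_c: float) -> None:
    if DANGER_C[0] <= temp_c <= DANGER_C[1]:
        print(f"ALERT: {room} at {temp_c:.1f} C is in the pathogen danger zone")
    else:
        print(f"OK: {room} at {temp_c:.1f} C")

# Illustrative readings from three refrigeration rooms:
for room, temp in [("freezer-1", -18.2), ("cooler-3", 4.1), ("cooler-7", 21.5)]:
    check_reading(room, temp)
```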
When it comes to security, the tech industry has a short memory. Lessons learned during the PC era are quickly forgotten when they stand in the way of making money in the post-PC world of mobile devices. For years, Microsoft added new features to Windows and Office without much thought given to security. When the Internet started taking off in the 1990s, all the vulnerabilities that had been ignored were suddenly accessible through the global network, leaving Microsoft scrambling for the next decade, plugging holes and adding technology to combat malware.

Now that we're in the mobile era, history is repeating itself. Hewlett-Packard analyzed 2,100 iOS apps from more than 600 Forbes Global 2000 companies and found that nine in 10 had vulnerabilities. The most common were unencrypted data storage on the device, the use of insecure protocols for transmitting data and failure to take simple steps during the development process to prevent reverse engineering. The latter is what hackers use to find vulnerabilities or to create counterfeit copies of popular apps. While HP studied only iOS apps, the company said its findings also apply to Android.

Why security is weak

The consensus among experts I talked to is that weak security is the result of developers being more interested in getting apps out to customers. There's also a general ignorance when it comes to good security practices that won't be addressed as long as speed is the priority. Developers can get away with sloppy security because there has never been a widespread malware infection on mobile devices. Just as in the days before the Internet forced Microsoft to rethink security, PC software makers didn't worry about hackers as long as infection rates were low.

The problem goes beyond just app developers. Ad networks that developers integrate into apps also introduce vulnerabilities. InMobi, which is used in many Android applications, was recently found to open a potential backdoor in a mobile device. Exploiting the flaw could enable a hacker to make phone calls, send text messages to premium-rate numbers and post on social networks.

The HP study shows that developers are building products that could one day provide an open door to the underlying operating system or the Web server that the app communicates with to send and receive data. This doesn't mean every vulnerability can be exploited. Google and Apple have built safeguards into their platforms that will stymie a lot of attacks. But those protections can only go so far. Hackers need to find only one exploitable vulnerability to break into a smartphone and steal sensitive data, such as website credentials or contact lists. The time to make security a fundamental part of the application development process is before cybercriminals develop effective hacking tools for smartphones and tablets. I'm sure if Microsoft had it to do over again, Windows would have been secure by design from day one.
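Closing the most common gap HP found, unencrypted data at rest, can be inexpensive at the code level. Here is a minimal sketch using the symmetric recipe from the widely used Python "cryptography" package; it is an illustration of the pattern, not a prescription for any particular mobile platform, and it deliberately glosses over key management, which is the genuinely hard part.

```python
# Sketch: never write sensitive data to storage in plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: kept in the OS keystore/keychain
f = Fernet(key)

# Encrypt before persisting; this token is what goes to disk.
token = f.encrypt(b"session_cookie=abc123")

# Decrypt only in memory, when the data is actually needed.
print(f.decrypt(token))       # b'session_cookie=abc123'
```

The discipline matters more than the library: the same encrypt-before-store, decrypt-in-memory pattern applies whatever crypto API a given mobile platform provides.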
If processing large HDAM or DEDB databases, then sorting the driving input into the same physical sequence as the DB will significantly improve performance. This process is often referred to as a RAPSORT and is simply an in-house utility that reads the input file, gets the DB key, invokes the appropriate IMS randomiser to get the physical offset in the DB, and then creates an output file with each record prefixed by the physical offset value. This file is then sorted in the order of the physical offset, stripping the offset. The new input file is now in the same order physically as the DB.

> RAPSORT ... is simply an inhouse utility

We would not have experience with this unless we were there. If you try to LOAD a keyed database out of sequence, the LOAD will not work. HDAM does not have a separate index, but it is keyed. I always just used SyncSort -- any external sort will work -- sorted the input file in key sequence and then ran the load.
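The RAPSORT recipe described in the first post is easy to sketch outside the mainframe. In the illustration below, a hash stands in for the IMS randomiser (the real utility would call the database's randomising module to obtain each key's physical location); record formats and key extraction are invented for the example.

```python
# Sketch of the RAPSORT idea: tag, sort by physical offset, strip.
import hashlib

def physical_offset(key: str) -> int:
    # Stand-in randomiser: maps a DB key to a pseudo physical block number.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 10_000

records = ["CUST0042|...", "CUST9901|...", "CUST0007|..."]

# 1) Prefix each record with its offset, 2) sort on the offset,
# 3) strip the offset, leaving input in the DB's physical order.
tagged = sorted((physical_offset(rec.split("|")[0]), rec) for rec in records)
rapsorted = [rec for _, rec in tagged]
```

The payoff is the same as described above: the driver now touches DB blocks sequentially instead of bouncing randomly across the dataset.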
Tiny chip provides precise GPS navigation without the GPS - By Kevin McCaney - Apr 19, 2013 The Global Positioning System has long been an invaluable tool for the military, as well as anyone else looking to get from Point A to Point B. But sometimes GPS signals aren’t available, whether because of structural interference, a system glitch or intentional jamming. So in 2010, the Defense Advanced Research Projects Agency set out to develop a “timing and inertial measurement unit” (TIMU) made of microscale components that could deliver precise location and navigation without GPS. DARPA and researchers at the University of Michigan built a prototype chip, about one-third the width of a penny and six human hairs thick, that does the job. The TIMU, DARPA recently announced, performs the three functions necessary for navigation, simultaneously measuring orientation, acceleration and time. The chip has a six-axis inertial measurement unit made of three gyroscopes and three accelerometers, along with a master clock, DARPA said. It’s built from groundbreaking designs from DARPA’s Micro-Technology for Positioning, Navigation and Timing (Micro-PNT) program, launched in January 2010 with the goal of developing microscale clocks and inertial sensors. Six microfabricated layers make up the TIMU, each one 50 microns thick (the thickness of a human hair) and each performing a different function, DARPA said. Altogether, the whole package takes up 10 cubic millimeters. “Both the structural layer of the sensors and the integrated package are made of silica,” Andrei Shkel, DARPA program manager, said in the agency’s announcement. “The hardness and the high-performance material properties of silica make it the material of choice for integrating all of these devices into a miniature package. The resulting TIMU is small enough and should be robust enough for applications (when GPS is unavailable or limited for a short period of time) such as personnel tracking, handheld navigation, small diameter munitions and small airborne platforms.” Non-GPS navigation tools, of course, have been around at least since the early days of seafaring vessels. But even modern versions of the tools can be costly and cumbersome. DARPA has noted that a gyroscope used as an inertial sensor for a precision missile can cost up to $1 million and take a month to assemble by hand. The Micro-PNT program set out to scale down the process, by, among other things, developing 3D microfabrication methods. Among the methods researchers developed were microscale processes for glass blowing, quartz blowing and atomic layering of diamond in creating sensors that work like a Foucault pendulum, DARPA said. The idea isn’t to replace GPS, but to have an alternative in case the signal goes out. And, as with much of its research over the years, DARPA’s work in this area could have an impact outside the military — possibly, for example, helping with indoor location methods for 911 calls. Kevin McCaney is a former editor of Defense Systems and GCN.
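One illustrative aside to the piece above: the reason a clock plus accelerometers (with gyroscopes keeping track of orientation) suffices for short-term navigation is plain dead reckoning, integrating acceleration through time. The one-dimensional sketch below is a deliberately stripped-down assumption of the principle; real TIMUs fuse all six axes and fight the sensor drift that makes long GPS-free stretches hard.

```python
# Dead reckoning in one dimension: integrate acceleration twice.
def dead_reckon(samples, dt):
    velocity = position = 0.0
    for accel in samples:        # accel in m/s^2, one reading every dt seconds
        velocity += accel * dt   # first integration: velocity
        position += velocity * dt  # second integration: position
    return position

# One second of constant 1 m/s^2 acceleration sampled at 100 Hz:
print(dead_reckon([1.0] * 100, 0.01))  # ~0.5 m, as physics predicts
```

Tiny timing or acceleration errors compound through the double integration, which is why the precision of the clock and sensors on the chip matters so much.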
Why Use R? - Page 4

Connections With R

So far the tools I've mentioned have been focused on the database portion of the problem -- gathering the data and performing some queries. This is a very important part of the Big Data process (if there is such a thing), but it's not everything. You must take the results of the queries and perform some computations, usually statistical, on them, such as: what is the average age of people buying a certain product in the middle of Kansas? What was the weather like when most socks were purchased (e.g., temperature, humidity and cloudiness all being factors)? What section of a genome is the most common between people in Texas and people in Germany?

Answering questions like these takes analytical computations. Moreover, much of this computation is statistical in nature (i.e., heavily math oriented). Without much of a doubt, the most popular statistical analysis package is called R. R is really a programming language and environment, particularly focused on statistical analysis. To add to the previous discussion of R, it has a wide variety of built-in capabilities, including linear and non-linear modeling, a huge library of classical statistical tests, time-series analysis, classification, clustering and a number of other analysis techniques. It also has a very good graphical capability, allowing you to visualize the results. R is an interpreted language, which means that you can run it interactively or write scripts that R processes. It is also very extensible, allowing you to write code in C, C++, Fortran, R itself or even Java.

For much of Big Data's existence, R has been adopted as the lingua franca for analysis, and the integration between R and database tools is a bit bumpy but getting smoother. A number of the tools mentioned in this article series have been integrated with R or have articles explaining how to get R and that tool to interact. Since this is an important topic, I have a list of links below giving a few pointers, but basically, if you Google for "R+[tool]" where [tool] is the tool you are interested in, you will likely find something.
- Column Stores + R
- Key-Value Store/Tuple Store + R
- CouchDB + R
- MongoDB + R
- Terrastore + R
- Article about Teradata add-on package for R

But R isn't the only analytical tool available or used. Matlab is also a commonly used tool, and there are some connections between Matlab and some of the databases. There are also some connections with SciPy, which is a scientific tool built with Python. A number of tools can also integrate with Python, so integration with SciPy is trivial.

Just a quick comment about programming languages for Big Data: if you look through a number of the tools mentioned, including Hadoop, you will see that the most common language is Java. Hadoop itself is written in Java, and a number of the database tools are either written in Java or have Java connectors. Some people view this as a benefit, while others view it as an issue. After Java, the most popular programming languages are C or C++ and Python.

All of these tools are really useful for analyzing data and can be used to convert data into information. However, one feature that is missing is good visualization tools. How do you visualize the information you create from the data? How do you visually tell which information is important and which isn't? How do you present this information easily? How do you visualize information that has more than three dimensions or three variables?
These are very important topics that must be addressed in the industry. Whether you realize it or not, visualization can have an impact on storage and data access. Do you store the information or data within the database tool or somewhere else? How can you recall the information and then process it for visualization? Questions such as these impact the design of your storage solution and its performance. Don't take storage lightly.
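To tie the database and analysis halves together, here is a small sketch of the "average age of buyers in Kansas" question from the top of this page: a document store supplies the query results and Python does the statistics. The connection string, collection and field names are invented for the example; pymongo is one of the Python connectors alluded to above.

```python
# Query a document store, then do the statistics on the results.
# All names here (shop, orders, customer_age, ...) are hypothetical.
from statistics import mean
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017").shop.orders

ages = [doc["customer_age"]
        for doc in orders.find({"product": "socks", "state": "KS"},
                               {"customer_age": 1})]
if ages:
    print(f"average buyer age: {mean(ages):.1f}")
```

In practice the same pattern scales up by pushing the aggregation into the database or into R, with Python (or R) only handling the final statistics and visualization.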
Scientists at the Defense Advanced Research Projects Agency (DARPA) will next month detail what technology they need to build a cluster of four wirelessly-interconnected satellites for a six-month demonstration mission to launch in early to mid-2015. The testbed is another component of DARPA's ambitious F6 program, which is intended to ultimately deploy what DARPA calls "fractionated modules": individual small satellites that can act together as a traditional large spacecraft. In such an environment, each module would support a unique capability, such as command and control, data handling, guidance, navigation and payload. Modules could replicate the functions of other modules as well. Such modules can be physically connected once in orbit or remain nearby to each other in a loose formation, or cluster, harnessed together as a virtual satellite, DARPA stated. Such an architecture has the potential to significantly enhance the adaptability and survivability of satellites, while also shortening development time for complex space systems and reducing the barrier to entry for participation in the national security space industry, DARPA says.

DARPA says the F6 On-Orbit Demo Testbed will be made up of four satellite buses with a number of technical requirements, including:
- Host a Government-furnished SwiftBroadband satellite transceiver, which utilizes the Broadband Global Area Network supported by the Inmarsat I-4 GEO constellation to provide persistent (near 24/7) on-demand broadband connectivity from the F6 demo cluster in low-earth orbit (LEO) to the ground network;
- Supply and host a high-speed space-to-ground downlink transmitter and provide associated data exfiltration capability;
- Supply and host a high-performance computing element that incorporates innovative and cost-effective processor architectures;
- Host a Government-furnished mission sensor payload, while maximizing size, weight, power, and field-of-view capabilities.

The testbed satellites will need to meet some important objectives, such as:
1. The capability to perform semi-autonomous long-duration maintenance of a cluster and cluster network, including the ability to add and remove modules;
2. The capability to securely share resources across the cluster network with real-time guarantees and among payloads or users in multiple security domains;
3. The capability to autonomously reconfigure the cluster to retain safety- and mission-critical functionality in the face of network degradation or component failures;
4. The capability to perform a defensive cluster scatter and re-gather maneuver to rapidly evade a debris-like threat.

The testbed program will culminate in a six-month on-orbit demo mission, with an estimated launch in early to mid-2015, DARPA said. Details of the System F6 On-Orbit Demonstration Testbed will be shared on Thursday, May 3, 2012 in Arlington, VA.

The testbed announcement is but one part of the F6 program. In November, DARPA issued a call for computer chip and electronics manufacturers to help it build the wireless communication system it will use to facilitate F6. From DARPA: "Essentially a network computing device, the F6 physically connects to and provides switching and routing functions between the spacecraft bus, the wireless inter-satellite transceivers, shared resource payloads (such as high-powered computing, data storage, and other communications links) and mission payloads such as sensors.
The F6 serves as the hardware platform running the software that enables cluster networking, including the network protocol stack, real-time resource sharing middleware, cluster flight applications and mission-specific applications. The F6 also provides cryptographic capability and other security features that enable a multi-level security environment."

Small, wirelessly-networked, energy-efficient systems with sophisticated security policies and powerful processors are commonplace in today's world. They are not, however, state of the art in space, DARPA said. "Today's space electronics are clunky," said Paul Eremenko, DARPA program manager, in a statement. "They provide limited processing speed and capability, they're bulky and power-hungry, and they are manufactured as bespoke, one-of-a-kind products."

DARPA has also detailed the system software that will let operators operate and control the F6 virtual satellite system. The F6 Developer's Kit (FDK) is a set of open source interface standards, protocols, behaviors, and reference implementations thereof, necessary for any party, without any contractual relationship to any System F6 performer, to develop a new module that can fully participate in a fractionated cluster.

A variety of companies have been participating in early F6 development, including Boeing, L3 Communications, Millennium Space Systems, Octant Technologies, Science Applications International Corp., Lockheed Martin, Northrop Grumman, Juniper Networks, IBM and Orbital Sciences.
State and local governments looking to improve efficiency and cut costs are casting their gaze skyward -- at streetlights -- for an answer. Some cities are modernizing their streetlights with light-emitting diodes (LEDs), others link them to centralized control systems, and some do a combination of both.

Replacing the high-pressure sodium (HPS) bulbs commonly used in streetlights with LEDs is a simple solution that can yield big benefits. According to a report from the Rensselaer Polytechnic Institute, Transcending the Replacement Paradigm of Solid-State Lighting, by Jong Kyu Kim and E. Fred Schubert, "Deployed on a large scale, LEDs have the potential to tremendously reduce pollution, save energy, save financial resources, and add new and unprecedented functionalities to photonic devices."

Another strategy used by some municipalities is implementing a centralized control system that alerts officials when a light goes out. Previously, a city worker or resident had to see a malfunctioning light and report it. A centralized system allows manpower to be used more efficiently and helps track energy consumption.

Anchorage, Alaska, is lighting up the northern sky as the city works toward converting all its 16,500 streetlights to LEDs. According to Michael Barber, the city's lighting program manager, Anchorage purchased 4,300 LEDs in August 2008 for $2.2 million. He said energy efficiency and cost savings drove the initiative. So far, 1,200 lights have been installed, and Barber said the remaining 3,100 of them would likely be set up by May 2009.

One of LEDs' main benefits -- besides using 50 percent less energy than traditional bulbs -- is that they can be connected to a centralized control system, which Anchorage has done. "Either over the power line or radio frequency, we have a light that's communicating with a server and telling it, 'I'm burning at this temperature,' or, 'For some reason, I'm sucking up way more energy than I should,'" Barber said. The system lets the city know in real time when a light should be replaced or needs warranty support. That's important because LED bulbs are significantly more expensive. In the past, when an HPS streetlight bulb failed within the warranty period, Barber said, the city would forgo the warranty and just replace it because those bulbs are cheap -- only $10 each. LEDs, however, cost $500 to $1,000 apiece, so it's important to have accurate information. When an LED loses 30 percent of its initial luminosity, it's considered to have failed. "With control systems we can have the light tell us when there's a warranty issue or if the light goes out," he said. "We'll see a surge and a change in the energy consumption on that circuit."

Another benefit of the centralized system is increased efficiency through the use of controls, which leads to more energy and money saved. LEDs have dimmable ballasts that allow officials to change the light's brightness, which is a big advantage over HPS bulbs. Barber said the city is planning to dim the streetlights in residential neighborhoods between 10 p.m. and 5 a.m. by 40 to 50 percent. He hoped that by May 2009, the city's next round of budgets would be completed and there would be funding to continue retrofitting the remaining 12,200 streetlights. "We estimate that when we do the whole city, it will be within $1.5 [million] and $1.7 million a year in savings," Barber said.
"We don't know what that would mean if we also implemented controls over the whole city, but it wouldn't be shocking to see 70 percent efficiency over the [HPS]." Centralized control systems are also benefiting cities that haven't converted to LED streetlights. About five years ago, Los Angeles began testing a remote-monitoring system on 5,000 of its more than 209,000 streetlights, according to Norma Isahakian, assistant director of the city's Bureau of Street Lighting. "I think the main benefit up to this point has been reporting on when the lights are out," Isahakian said. "We want to make sure the majority of lights are on, not just for the fact that we want the lights on, but there are also liability reasons." The city is attaching external computer boxes to its streetlights. Isahakian said the external units work best because Los Angeles uses more than one streetlight manufacturer. There's the cost of an external unit for each light and the base computer unit that information is transmitted to. "They use radio waves to get the information back to the main unit, and the main unit uses a cellular system to get it back to the main office," she explained. She said the project was initially launched in a convenient location where city-employed field workers were close enough to physically see the lights, which they then tracked online. The computer boxes are now installed on new streetlights in construction areas and on those that are replaced. Better workflow has been another improvement. "A lot of times when we go out to the unit, we know what's going wrong with it," Isahakian said. "Instead of making multiple trips, we'll make only one trip because we'll know the unit just needs to be changed." Los Angeles is also beginning to pilot the use of LED streetlights. According to the city's LED Street Lighting Energy Efficiency Program, the first phase involved retrofitting 100 streetlights between November 2008 and January 2009. According to a document from the program, "Based on preliminary analysis and evaluation of the development of the LED industry, the bureau is strongly considering a large-scale project to replace existing roadway fixtures into LED or any other high-efficiency light source." Isahakian said the bureau has been researching LEDs for the last couple of years, but only recently did the lights begin performing up to the standard it was looking for. "I think the remote-monitoring system and the LED fixtures together are really going to make more sense," she said, "because you're able to do more things with them, like dim the streetlights."
Stalnov O., University of Southampton | Ben-Gida H., Technion - Israel Institute of Technology | Kirchhefer A.J., Boundary Layer Wind Tunnel Laboratory | Guglielmo C.G., University of Western Ontario | And 3 more authors. PLoS ONE | Year: 2015

We study the role of unsteady lift in the context of flapping wing bird flight. Both aerodynamicists and biologists have attempted to address this subject, yet it seems that the contribution of unsteady lift still holds many open questions. The current study deals with the estimation of unsteady aerodynamic forces on a freely flying bird through analysis of wingbeat kinematics and near wake flow measurements using time-resolved particle image velocimetry. The aerodynamic forces are obtained through two approaches, the unsteady thin airfoil theory and the momentum equation for viscous flows. The unsteady lift is comprised of circulatory and non-circulatory components. Both approaches are presented over the duration of wingbeat cycles. Using long-time sampling data, several wingbeat cycles have been analyzed in order to cover both the downstroke and upstroke phases. It appears that the unsteady lift varies over the wingbeat cycle, emphasizing its contribution to the total lift and its role in power estimations. It is suggested that the circulatory lift component cannot be assumed to be negligible and should be considered when estimating lift or power of birds in flapping motion. © 2015 Stalnov et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

El-Gammal M., University of Western Ontario | El-Gammal M., Boundary Layer Wind Tunnel Laboratory | El-Gammal M., McMaster University | Naughton J.W., University of Wyoming | And 2 more authors. Journal of Aircraft | Year: 2010

A study was conducted to employ direct measurements of surface pressure and skin friction to characterize the physics of the surface flow around a divergent trailing-edge (DTE) airfoil. The study also focused on estimating the profile drag accurately from direct measurements and comparing it with that estimated from a survey in the airfoil far wake. The airfoil surface coordinates were selected to match the distribution of surface pressure, momentum thickness, and shape factor over the aft 20% of the chord to those on the DTE Douglas Long Beach Airfoil DLBA 243 in transonic flight. The model was constructed from a conventional glass-fiber reinforced composite sandwich. The two airfoil surfaces were tripped at 5% downstream of the leading edge by installing surface protrusions of 2 mm in diameter, 1 mm in height, and spaced 5 mm apart in the cross-stream direction. Velocity measurements upstream and in the far wake of the airfoil were obtained with pitot tubes.

Doddipatla L.S., University of Western Ontario | Doddipatla L.S., Boundary Layer Wind Tunnel Laboratory | Naghib-Lahouti A., University of Western Ontario | Naghib-Lahouti A., Boundary Layer Wind Tunnel Laboratory | And 3 more authors. 40th AIAA Fluid Dynamics Conference | Year: 2010

Wake flows behind two-dimensional bodies are mainly dominated by two coherent structures, namely the Karman-Benard vortices and the streamwise vortices, also referred to as rolls and ribs respectively.
The three-dimensional wake instabilities lead to distinct instability modes (mode-A, mode-B and mode-C or mode S) depending on the flow Reynolds number and geometric shape. The present investigation explores the mechanism by which the flow transitions to three dimensionality in the near wake of a body with a profiled leading edge and a blunt trailing edge. A combination of planar laser-induced fluorescence visualizations and particle image velocimetry measurements is conducted for Reynolds numbers ranging from 250 to 2300. The results indicate that three instability modes (mode-A, mode-B and mode-C) appear in the wake transition to three dimensionality, and their order of appearance does not follow the traditional route observed in circular cylinder flows. It is found that the mode-C instability, with a spanwise spacing of 2.4D (D being the trailing edge thickness), dominates the near wake development. © 2010 by Hangan. Published by the American Institute of Aeronautics and Astronautics, Inc.
Manufacturing Breakthrough Blog
Friday October 2, 2015
The Goal Tree Basics

In past posts I have explained that when using any of TOC's Thinking Processes (TPs), there are two distinctly different types of logic at play: sufficiency and necessity. Sufficiency logic tools use a series of if-then statements that connect cause-and-effect relationships between undesirable effects. Necessity logic uses the syntax, "in order to have x, I must have y" (or multiple y's). The Goal Tree falls into the category of necessity-based logic and is used to lay out strategies for successful improvement.

Bill Dettmer explained in a white paper he wrote that, "The Intermediate Objective (IO) Map dates back to at least 1995 when it was casually mentioned during a Management Skills Workshop conducted by Oded Cohen at the A.Y. Goldratt Institute, but it was not part of that workshop, nor did it ever find its way into common usage as part of the Logical Thinking Process (LTP). It was described as a kind of Prerequisite Tree without any obstacles." Dettmer continued, "I never thought much about it for the next seven years, until in late 2002, when I began grappling with the use of the Logical Thinking Processes (LTP) for developing and deploying strategy."

At that time, Dettmer had been teaching the LTP to a wide variety of clients for more than six years, and had been dismayed by the number of students who had substantial difficulty constructing Current Reality Trees (CRTs) and Conflict Resolution Diagrams (CRDs) of sufficient quality. According to Dettmer, they always seemed to take a very long time to build a CRT, and their CRDs were not always what he would characterize as "robust." He claimed they lacked reference to a "should-be" view of the system -- what ought to be happening. It occurred to Dettmer that the IO Map he'd seen in 1995 could be modified and applied to improve the initial quality of CRTs. As time went on, Dettmer began to realize that the IO Map could serve a similar purpose with CRDs. In 2007 Dettmer published a book, The Logical Thinking Process: A Systems Approach to Complex Problem Solving, that introduced the world to this wonderful tool.

Dettmer tells us that one of the first things we need to do is determine the boundaries of the system we are trying to improve, as well as our span of control and sphere of influence. Our span of control covers what we have unilateral authority to change, while our sphere of influence covers what we can, at best, only influence change decisions about. Dettmer explains that if we don't define the boundaries of the system, we risk "wandering in the wilderness for forty years."

The Goal Tree Structure

The hierarchical structure of the IO Map/Goal Tree consists of a single Goal and several entities referred to as Critical Success Factors (CSFs), which must be in place and functioning if we are to achieve our stated goal. The final pieces of the Goal Tree are entities referred to as Necessary Conditions (NCs), which must be completed to realize each of the CSFs. The Goal and CSFs are worded as though they were already in place, while the NCs are stated more as activities that must be completed.

The figure below is a graphic representation of a Goal Tree with each structural level identified. The Goal sits at the top with three to five Critical Success Factors directly beneath it. The CSFs are those critical entities that must be in place if the Goal is to be achieved. For example, if your Goal was to create a fire, then the three CSFs which must be in place are (1) a combustible fuel source, (2) a spark to ignite the combustible fuel source and (3) air with a sufficient level of oxygen. If you were to remove any of these CSFs, there would not be a fire.
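The necessity relationships in that fire example are simple enough to model directly. The sketch below is a hypothetical rendering of a Goal Tree as a data structure -- the Entity class and its field names are my own illustration, not anything from Dettmer's books -- showing the Goal worded as an outcome, the CSFs beneath it, and NCs worded as actions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    """One node in a Goal Tree: a Goal, CSF or NC statement."""
    statement: str
    requires: List["Entity"] = field(default_factory=list)  # necessity logic

def show(node: Entity, depth: int = 0) -> None:
    """Print the tree top-down, the way a Goal Tree is read."""
    print("    " * depth + node.statement)
    for requirement in node.requires:
        show(requirement, depth + 1)

goal = Entity("A fire is burning", [
    Entity("A combustible fuel source is in place",
           [Entity("Gather dry wood")]),                  # NC: an action
    Entity("A spark to ignite the fuel is in place",
           [Entity("Obtain matches or a striker")]),      # NC: an action
    Entity("Air with a sufficient level of oxygen is present"),
])
show(goal)
```

Remove any one of the three CSF branches and the Goal at the root can no longer be satisfied, which is exactly the necessity relationship the tree encodes.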
Stephen Covey suggests that we should, "Begin with the end in mind" -- that is, with where we want to be when we've completed our improvement efforts, which is the purpose of the Goal. A Goal is an end to which a system's collective efforts are directed. It's actually a sort of destination, which implies a journey from where we are to where we want to be. Dettmer also makes it very clear that the system's owner is who determines what the goal of the system should be. If your company is privately owned, maybe the owner is a single individual. If there's a board of directors, they have a chairman of the board who is ultimately responsible for establishing the goal. Regardless of whether the owner is a single person or a collective group, the system's owner(s) ultimately establishes the goal of the system.

Critical Success Factors and Necessary Conditions

There are certain high-level requirements which must be solidly in place, and if these requirements aren't achieved, then we simply will never realize our goal. These requirements are referred to as Critical Success Factors (CSFs). Dettmer recommends no more than three to five CSFs. Each of the CSFs has some number of Necessary Conditions (NCs) that are considered prerequisites to that CSF being achieved. Dettmer recommends no more than two to three levels of NCs, but in my experience, I have seen as many as five levels working well. While the Goal and the CSFs are written as terminal outcomes that are already in place, the NCs are worded more as detailed actions that must be completed to accomplish each of the CSFs. The relationship among the Goal, the CSFs and the supporting NCs in this cascading structure of requirements represents what must be happening if we are to reach our ultimate destination.

For ease of understanding, when I am in the process of constructing my Goal Trees, the connecting arrows face downward to demonstrate the natural flow of ideas. But when my structure is completed, I reverse the direction of the arrows to reveal the flow of results. In keeping with the thought of learning a tool and making it my own, I have found this works well, even though it is completely opposite to Dettmer's recommendations for construction of a Goal Tree.

In my next posting we will begin construction of a Goal Tree and begin to demonstrate why it is perhaps one of the best tools ever developed for achieving excellence in your company. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond. Until next time.

Dettmer, H. William. The Intermediate Objectives Map – White Paper, Goal Systems International, 2007
Dettmer, H. William. The Logical Thinking Process: A Systems Approach to Complex Problem Solving. Milwaukee, WI: ASQ Quality Press, 2007
Covey, Stephen R. The Seven Habits of Highly Effective People: Powerful Lessons in Personal Change. NY: Simon and Schuster
Save disk space and money with the unique feature from Bacula Systems – Deduplication Volumes.
- No other backup software stores data this way (patent-pending technology)
- Bacula Systems helps you overcome your scaling challenges
- Raising the record size limit brings a major positive impact to your storage costs

Deduplication refers to any of a number of methods for reducing the storage requirements of a dataset by eliminating redundant pieces without rendering the data unusable. Unlike compression, each redundant piece of data receives a unique identifier that is used to reference it within the dataset, and a virtually unlimited number of references can be created for the same piece of data. Deduplication is popular in applications that inherently produce many copies of the same data, with each copy differing only slightly from the others, or even not at all.

There are many storage systems and applications on the market today which implement deduplication. All can be classified into one of two deduplication types, depending on how they store their data:
- Fixed block deduplication takes place in units of a fixed size (typically 4 kB – 128 kB). Data must be aligned on block boundaries to deduplicate.
- Variable block deduplication takes place in variable-length units anywhere from a few bytes to many gigabytes in size. Block boundaries do not exist.

Deduplicating filesystems use fixed block deduplication. The optimal unit of deduplication is the record size, which varies by filesystem: 128 kB by default for ZFS, 4 kB for NetApp; others are available on request.

The Traditional Way

Traditional backup programs were designed to work with tapes. When they write to disks, they use the same format, only writing to a container file instead of a tape. The Unix program tar is an example of this, and so is Bacula's traditional volume format. Files are interspersed with metadata and written one after the other. File boundaries do not align with block boundaries as they do on the filesystem. For this reason, backup data does not typically deduplicate well on fixed-block systems.

Storage without deduplication

The new era. Bacula Enterprise Deduplication Volumes.

Deduplication Volumes store data on disks by aligning file boundaries to the block boundary of the underlying filesystem. Metadata, which does not align, is separated into a special metadata volume. Within the data volume, the space between the end of one file and the start of the next block boundary is left empty. Since every file begins on a block boundary, redundant data within files will deduplicate well using ZFS's fixed block deduplication. This type of file is known as a sparse or holey file.

Storage with Deduplication Volumes
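The hole-punching idea is easy to demonstrate. The sketch below illustrates the general technique, not Bacula's actual (patent-pending) volume format: it pads a file out to the next block boundary with a sparse hole before each write, so identical payloads land at identical block offsets and can be folded together by a fixed-block deduplicating filesystem.

```python
import os

BLOCK = 128 * 1024  # assume ZFS's default 128 kB recordsize

def append_block_aligned(path: str, payload: bytes) -> None:
    """Write payload at the next block boundary, leaving a sparse hole."""
    size = os.path.getsize(path) if os.path.exists(path) else 0
    hole = (-size) % BLOCK                  # padding up to the boundary
    with open(path, "r+b" if size else "wb") as f:
        f.seek(size + hole)                 # seeking past EOF creates a hole
        f.write(payload)

# Two identical payloads written this way both start on block boundaries,
# so their full 128 kB blocks are bit-identical and deduplicate under ZFS.
append_block_aligned("volume.dat", b"x" * 200_000)
append_block_aligned("volume.dat", b"x" * 200_000)
```

The hole consumes no disk blocks on a sparse-capable filesystem, which is why the empty space between files costs little while the alignment it buys makes the real data deduplicate.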
What to deduplicate?

The following data types deduplicate well:
- Files that change constantly but are only appended to, such as large log files
- Large files that change daily but only in small amounts
- Monolithic databases
- Some types of email boxes
- Identical files that appear in backups from many clients
- Operating system data from virtual machines
- Email attachments with multiple recipients

Deduplication Volumes are limited only by ZFS itself:
- Data will not dedupe across zpools
- Deduplicated metadata is stored in the ARC/L2ARC
- Only 1/4 of the ARC/L2ARC is reserved for metadata
- Large dedupe repositories will require a large ARC/L2ARC

How big does my L2ARC need to be?

It is tempting to start with the total amount of primary data to be backed up when calculating the size of the L2ARC, but the space taken by holes in the Bacula volumes needs to be considered too. This must be subtracted when trying to estimate the total amount of primary data that can be backed up and deduped using an L2ARC of a given size. One way to think about this is to picture the storage of data inside a deduplication volume in terms of full and partially full blocks. It is the number of these blocks that affects the size of the L2ARC, not the amount of data they contain.
- Files smaller than the block size will consist of one partially full block
- Files larger than the block size will consist of one or more full blocks and usually end with one partially full block

Deduplication sizing – important parameters
- The amount of data to deduplicate
- The block size used for deduplication
- The average percent full per block vs. empty space
- The percentage of the L2ARC reserved for metadata

Examples of the impact of changing the parameters:

Baseline (a typical situation with default values): primary data 100 TB, ZFS record size 128 kB, average block fill percentage 50%, retention period 90 days, L2ARC metadata percentage 25%, daily percent of data changed 2%. Result: L2ARC size needed 560 GB, or 0.547% of primary data.

Changing the daily change percentage from 2% to 5% (L2ARC metadata percentage unchanged at 25%). Result: L2ARC size needed 1,110 GB, or 1.074% of primary data.

Changing the L2ARC metadata percentage from 25% to 50% (daily change percentage 5%). Result: L2ARC size needed 550 GB, or 0.537% of primary data.

How to size?

Accurate sizing is difficult in practice. Oversizing and using conservative estimates is recommended. To help in sizing your infrastructure for deduplication, Bacula Systems provides an online deduplication sizing calculator.
- Install as much RAM as possible (ARC)
- Use only SSDs for your L2ARC
- Create a much larger L2ARC than you think you need

How does sizing impact your costs?

Current storage pricing trends bode well for ZFS deduplication. Solid State Drive (SSD) performance continues to increase, and prices have come down significantly in recent years, making large L2ARCs economically feasible. The combination of fast I/O processor (IOP) performance and large capacity is essential to maintain performance as the amount of data stored in the filesystem increases.

Deduplication Volumes is supported with:
- Nexenta Systems OpenStorage Appliances
- NetApp Data ONTAP 8.0.1 and higher
- Oracle / Sun ZFS Storage Appliances
- White Bear Solutions WBSAirback Appliances
- ZFS on Linux (64-bit only)

BWeb™ Management Suite is a comprehensive GUI management suite for Bacula Enterprise Edition that provides the data reports, core metrics and analysis that system administrators need to provide to managers. Training is available in different locations, depending on the Certified Bacula Systems Training Center you choose.
This course is an introduction to spreadsheets, their core features and uses. Microsoft Excel remains one of the most popular applications in every office, and knowing how to effectively create, manage and manipulate spreadsheets is a key skill. This course covers the foundations of creating, editing and presenting spreadsheets using Microsoft Excel. It is therefore not version specific, and the topics covered are applicable to all currently supported versions of Microsoft Excel.

This is currently only available as a traditional classroom course, which can be delivered from our training centre or from your offices.

There are no pre-requisites, as this is a beginner's course.

5+ group = £200 per person (exc. VAT)
8+ group = £150 per person (exc. VAT)

Topics covered:
- Spreadsheet Features and Uses
- Navigation and Selection
- Formatting and Sizing
- Valid Look & Feel
- Validation and Data Types
- Format Cells Menu
- Copying, Pasting and Inserting
- Customising your View
- Formulas & Fill
- Absolute Cell Referencing
- Charts – Options and Labels
- Flash Fill Exercise
- RAG Reporting Exercise
- Why use tables?

Traditional classroom learning is still one of the most effective ways to learn. Our classroom training has the trainer delivering all the course information to the class, with practical exercises and questions and answers throughout the entire session. You can check available dates, book onto the course, and download the course information below. If you have any questions, or would like to discuss other options, please contact us.
The following is a guest post by Robert Kramers.

In one way or another, biometrics has been around for a long time. Film fans may remember it from the 1993 film Demolition Man. This film is often cited as "predicting" the use of biometrics in security, but the truth is that the idea has been floating around since the discovery of fingerprints.

So, what is bio-security? Bio-security uses biometrics as a means of identification and access control. It relies on body parts as an identification mechanism, and works in a way similar to fingerprint scanning. An eye scanner, for instance, can detect all of the minute differences in the human eye, which are much more visible than people think, as evidenced by the photo below. When it comes to biometric security, everything from your face to your DNA can be used to identify you.

This technology is not just the stuff of science fiction anymore; biometric security is currently used on consumer tablets and smartphones, as well as for access to sensitive information and hardware within high-level corporations. Biometrics has the potential to be used everywhere, from top-end businesses keeping entire buildings secure to artists, writers and photographers looking to secure the safety and protection of their intellectual property. Bio-security is far more advanced than any other form of security, and contains fewer of the holes and breaches that leave more traditional systems vulnerable.

Fast Access to Personal Information

The main advantage of biometric technology over traditional security systems is that it detects things that are unique to the individual quickly and accurately. This means that a place of business can ensure that only employees gain access, and attempted crime can be pinpointed faster.

Biometrics and a number of advancing technologies, including wireless power through resonant induction, have the ability to work cohesively. Items like the biometric sensor printed directly onto human flesh (the "biostamp") mean that a number of personally identifiable features can be obtained from a patient in just seconds. Wireless power would allow the biostamp to communicate with internal devices like LVADs. This has the potential to help with medical information security, monitoring and identification.

Accuracy of Identification

Despite the misconception created by popular sci-fi action movies, in real life modern biometric scanners detect capillary flow behind the eye or the finger, which means that they can only be used by a living, breathing person. Biometrics can also be used to close security holes that other systems cannot detect. For example, at Disney World, where a 5-day ticket can cost up to US$350, biometric fingerprint scanners are used to ensure that only the person who purchased a ticket can enter the park, which stops people from lending, sharing or even selling their ticket to others.

Biometrics in use at Disney World (photo courtesy of Wikipedia)

Absence of Anonymity

Every time an eye scanner, fingerprint scanner or any other form of biometric is used to allow access for one person, that person's details can be stored in a database. Not only is their physiological data stored -- which can be used to find their name, address and more -- but also information on when and where they logged in. By scanning for fingerprints, irises and other unique parts of an individual's physiology, the system knows exactly who is trying to gain access at any given time.
This information can be used across the board, from employers who simply want to make sure their employees are logging in and doing their work when they should, to those investigating crimes on the premises.

We live in an age where everything from luxury handbags to prescription medication can be copied down to the finest details, and this also applies to keys and key cards, the very things that keep us secure under current security standards. The beauty of biometrics is the near impossibility of duplication. As each individual is unique, and due to the complex nature of biometric systems, moulds and other inanimate copies simply cease to work.

Loss and Theft

Finally, whereas keys and codes can be lost, biometrics cannot. Simply put, you can't lose you. Not only does this mean that you will always have access, but it also guarantees that you are not leaving open access to your property on the subway or in the back of a taxi. Where your access is, you are too -- which adds another layer of complexity and security to the access gained.

In conclusion, biometric security has a growing number of reasons to be adopted, particularly where security itself is the driver. Additionally, it is very possible that some forms of biometric security, like vein pattern recognition and eye scanning, will be able to provide hygiene benefits due to non-contact biometric interpretation.

This was written by Robert Kramers, the technology-enthused freelance blogger at RobertKramers.com.
Audit User Passwords With John the Ripper

If all your users choose passwords made up of at least twenty random characters or symbols, and if they are used with secure ciphers, then the chances of anyone cracking those passwords are just about nil. The problem is that normal human beings aren't good at memorizing long, random strings, so most users choose easy-to-remember passwords. Even if you enforce a measure of password security by insisting on a minimum password length of, say, eight characters, including a mixture of numbers and letters, the results are often fairly predictable: you'll find plenty of examples like password1, duckduck2 or pa55w0rd. While these passwords may look secure at first glance, they're not. They're words you'll find in a dictionary, modified by adding a digit on the end, repeated with a digit on the end, or modified using fairly predictable letter-to-number substitutions -- a 5 for an s, a 0 for an o, and so on.

Let's think how a hacker might attempt to get hold of passwords. On a Linux machine they may be kept in a password file, /etc/passwd, or more likely in a linked shadow file, /etc/shadow. On a Windows machine they may be in the SAM, or in just about any folder that an application chooses. In fact you won't find the passwords themselves in those places; what you will find are hashes of those passwords -- the result of putting each password through a secure hashing function. Since hashing functions are one-way or trap-door functions, having the hash won't help you get back to the original password.

How do computer systems do it then? The answer is that they don't. When they request a password from a user, they simply hash the input from the user and compare this hash with the hash in the password file. If the two hashes match, the correct password must have been entered (ignoring, for a moment, the possibility of hash collisions, when different input produces the same hash).

So stealing a list of hashes gets a hacker nowhere in itself. He has to figure out the passwords which would produce those hashes, and there are no algorithms for doing that. There are a few options though: smart guesses, a methodical dictionary or wordlist attack that tries every word in a list, or a straightforward brute force attack that tries every combination of numbers and letters till it finds the right one.

Smart guessing can be effective if you know a great deal about the person whose password you are trying to find -- their pet's or child's name, or the make of the car they drive, for example. Dictionary or wordlist attacks are also highly effective, precisely because, as mentioned at the start of this article, people tend to choose English words as passwords. Since they tend to add numbers or apply simple transformations, it's necessary to combine a wordlist attack with word mangling rules which try variations of each word -- in other words, trying password1, 2password, passwordpassword3 and pa55w0rd123 as well as just password.
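To see why mangling rules are so cheap for an attacker and so dangerous for predictable passwords, consider the toy sketch below. It uses plain SHA-256 rather than the salted crypt(3) schemes real systems use, and its rules are simplified stand-ins for John's actual rule syntax -- purely an illustration of the principle.

```python
import hashlib

def mangle(word: str):
    """Yield simple variations of a dictionary word."""
    subs = str.maketrans({"a": "4", "e": "3", "o": "0", "s": "5"})
    yield word
    yield word.translate(subs)        # pa55w0rd-style substitutions
    yield word + word                 # doubled word
    for d in "0123456789":            # leading and trailing digits
        yield word + d
        yield d + word

def crack(target_hash: str, wordlist):
    for word in wordlist:
        for candidate in mangle(word):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# Demo: "password1" falls instantly to a three-word list plus mangling.
target = hashlib.sha256(b"password1").hexdigest()
print(crack(target, ["duck", "letmein", "password"]))  # -> password1
```

Each extra rule multiplies the candidate count only linearly, so even a large dictionary with dozens of rules is vastly cheaper than a brute force search over all character combinations.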
Brute force attacks are theoretically very effective -- the trouble is that they tend to take too long. Brute forcing a password can take days or weeks, or, more likely, centuries.

As a network administrator, how do you know which users have chosen passwords that can quickly be guessed or discovered using a brute force or dictionary attack, and which have chosen secure ones? After all, you can't tell just by inspecting the hashes. That's where John the Ripper -- or "John" to its friends -- comes in.

John is a multi-platform open source tool for carrying out smart guesses, wordlist attacks with word mangling, and even brute force attacks, on password hashes. Its primary purpose is to detect weak Unix passwords, but, according to Solar Designer, John's developer, "besides several crypt(3) password hash types most commonly found on various Unix flavors, supported out of the box are Kerberos AFS and Windows NT/2000/XP/2003 LM hashes, plus several more with contributed patches."

Unlike many of the open source tools we've looked at over the past weeks, John has no built-in GUI (although a front end for the Windows version, called FScrack, can be downloaded separately). Fortunately it's pretty simple to use, so running it from the command line shouldn't be a problem.

Let's imagine you have a file containing a bunch of password hashes that you've taken from one of your systems. On a Linux system you can get these by combining /etc/passwd and /etc/shadow into a file called passwordlist.txt using John's unshadow command:

unshadow /etc/passwd /etc/shadow > passwordlist.txt

The result is a file which might look like this one, which has two users, user1 and user2, and two corresponding password hashes. (See Figure 1.)

To run a test on the list of hashes, simply type:

john passwordlist.txt

When run with no options, John gets to work on the passwordlist.txt file, first attempting a "single" attack, using login information from the password file to make basic smart guesses. It then carries out a wordlist attack using the default wordlist supplied with John, or any other list that you have configured John to use, followed by a brute force attack. (See Figure 2.)

In this case, John finds the simple passwords dd (for user1) and ddd (for user2) in a fraction of a second. This illustrates the point that short passwords can be found easily, and also shows the power of hashing functions. The two passwords dd and ddd differ only by one "d", yet the DES hashes they produce are identical in length and completely different. Very similar input leads to completely different output, and input of different lengths produces output that is always the same length -- in this case, 13 characters.

John stores any passwords it cracks in a results file called john.pot, and you can view these passwords and their associated usernames by typing:

john --show passwordlist.txt

You can try to crack passwords in more than one list at once simply by adding the names of the extra lists:

john passwordlist.txt passwordlist1.txt passwordlist3.txt

There are many, many other options you can use to refine how John runs. One of the most useful restricts John to cracking root user (UID=0) passwords only. For a complete list of options and examples, and to download John, go to http://www.openwall.com/john/.

How you decide to use John is up to you. You may choose to run it on all the password hashes on your system regularly to get an idea of what proportion of your users' passwords are insecure. You could then consider how to change your password policies to reduce that proportion (perhaps by increasing the minimum length). You may prefer to contact users with weak passwords and ask them to change them. Or you may decide that the problem warrants some sort of user education program to help users select more secure passwords that they can remember without having to write them down. If nothing else, John will very quickly alert you if you have a password security problem. Can you afford not to download it and give it a try?
New NOAA Web site tracks Arctic sea ice loss
Arctic Future Web site provides satellite measurements of climate change
- By Doug Beizer - Mar 18, 2010

A new National Oceanic and Atmospheric Administration Web site provides satellite measurements of Arctic sea ice loss and examines Arctic science and policy issues, according to NOAA.

The Arctic Future site, launched March 16, is designed to inform businesses, communities and governments about how changes in the Arctic region can also influence weather in the mid-latitudes, where a large part of the global human population lives, according to NOAA. The site brings together cause-and-effect graphics with links to the scientific literature that backs up the statements. The site also includes an explanation of global weather and climate effects from the loss of summer sea ice.

"Pulling this information together on one Web site is a way to highlight the continuing loss of Arctic sea ice in summer and its broader implications for climate," said James Overland, an oceanographer at NOAA's Pacific Marine Environmental Laboratory whose work appears on the new site. "For example, climate models show that changes in the Arctic can impact weather in the mid-latitudes, including the United States, Europe and Asia."

Doug Beizer is a staff writer for Federal Computer Week.
When you hire someone to do work or to deliver a product, you expect them to do it right, and they expect to be paid. When transactions are simple, such as the purchase of fuel at a gas station, there is no confusion about whether the services were delivered or the right amount paid. However, complex purchases (procurements) are not so easy to assess. Contracts become necessary when there is uncertainty about who will do what (scope), by when (schedule), and for how much (cost). They are used to clarify expectations and to define mechanisms for problem resolution in the event of misunderstanding that leads to conflict. Contracts are meant to solidify/clarify/explain commitments on both sides of an agreement. Contracts should state exactly what the seller will do or deliver, and when, as well as what consideration the buyer will provide (and when) in exchange for those goods and services.

Allocation of Risk

Contracts clarify the allocation of risk: "If this happens, it's your problem; if that happens, it's my problem." Contracts explain what actions will be taken under various future outcomes so that there is no confusion about how problems will be resolved. Contracts do not keep people honest. They do not prevent fraud and criminal behavior. They just provide a method of recourse (the courts) in the event of dishonest behavior or disagreement between the parties (honest or not).

Transfer of Responsibility

From a practical perspective, contracts are used in business to ensure that responsibilities are transferred in exchange for benefits. For example, if I hire someone to do work and I pay them in advance, without a contract in place, then I am taking a chance of the work not getting done and the money being lost. If the "contractor" does not complete the work for which they were paid, how can I prove that they were paid, or show what the payment was for, if there is no contract? On the other hand, if I hire someone to do work and commit to paying them in arrears (after the work), but there is no contract in place, they take a chance on doing the work and not getting paid. What a contract does is document the commitments on both sides. Agreements and commitments are written down before the work begins. If either side fails to live up to their promises, the dispute can be resolved using the courts.

Risk transfer (from a buyer's perspective) means making someone else responsible in exchange for payment. If the buyer wanted to transfer all of the risks to a vendor (schedule, scope, and cost inflation), they would need to find a vendor willing to sign a contract for a fixed price, with clearly defined scope and a rigid completion date. Of course, the contractor would need to be well paid to accept all of these risks. Buyers are able to transfer risk, but it is not free.

So, if a company decides to do the work themselves (internally), they retain all the risks. However, if they choose to contract out the work, they are able to transfer some portion of the risk in exchange for financial reward. BUT, and this is a big but, risks are not effectively transferred from a buyer to a seller unless there is a legally enforceable contract in place to ensure that the right work gets done, gets done properly, and gets done on time. This is where experience and the legal department come in.
Change management involves understanding and controlling exposure to hazards so that the overall risk to the business is handled in an efficient and effective manner. The intent is to act as an enabler: a mechanism by which the business can quickly adapt and respond to changing conditions without the negative consequences that are often associated with hasty action. Change management supports business adaptation in several ways.
- Effective change management offers a standardized method that evaluates the potential positive and negative aspects of change and allows for the prompt handling of all change-related activities.
- Change management makes sure that all changes are recorded, evaluated, properly planned and accounted for, so that the organization has an ongoing living history of change-related activities.
- Change management minimizes the disruptions often associated with change at all levels.

ITIL v3 describes a formal change management process that includes steps to make sure changes are formally described, adequately reviewed for their impact on the business, assessed, and coordinated with other changes and ongoing business activities. Even the simplest changes can carry risk. For example, a regular update to a desktop operating system can leave users unable to use desktop applications, causing unanticipated downtime that impacts the business.

The risk of change can often be identified in five ways.
- The risk of unauthorized and improperly assessed changes
- The risk of unplanned outages
- The risk of a low change success rate
- The risk of high numbers of emergency changes
- The risk of significant project delays

The ITIL change management best practices propose that to address these five risks, seven questions must be answered about every change. By following a standardized process that answers these questions, organizations can reduce the numerous risks associated with change. For example, let's consider a change that many organizations frequently face: an update to a set of firewall rules driven by an updated security policy. Using the seven questions, we might arrive at the following answers (a code sketch of the same record follows the list).
- Who raised the change? This identifies both the business and IT sponsors of the change.
- What is the reason for the change? Firewall rules are being updated to match recent security policy changes.
- What is the required return? The policy changes were specific to a new business partner, so the expected return is that Internet traffic from this new business partner will be allowed through the firewall, which facilitates new business transactions at an estimated daily value of $25,000.
- What are the risks? The firewall rules could be incorrectly set, resulting in malicious traffic being allowed into the enterprise and/or an inability to accept traffic from the new business partner.
- What are the required resources? This identifies the specific tools and equipment used to deploy the change, as well as the target configuration items for the change.
- Who is responsible for the build, test and implementation? This identifies the people responsible for making sure that the change is correctly built, tested and implemented.
- What is the relationship between this and other changes? Are any other mutually exclusive changes occurring at or near the same time as this change? Is there any known interaction between this change and any other changes?
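One way to make the seven questions stick is to treat them as required fields in a change record. The sketch below is a hypothetical illustration -- the class and field names are mine, not part of ITIL -- showing how a request for change might refuse to move forward until every question has an answer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    """A change record with one field per ITIL question."""
    raised_by: str                     # 1. Who raised the change?
    reason: str                        # 2. What is the reason for the change?
    expected_return: str               # 3. What is the required return?
    risks: List[str]                   # 4. What are the risks?
    resources: List[str]               # 5. What resources are required?
    responsible: str                   # 6. Who owns build, test, implementation?
    related_changes: List[str] = field(default_factory=list)  # 7. Relationships

    def ready_for_review(self) -> bool:
        # A change should not reach review with blanks; question 7 may
        # legitimately be empty when no related changes exist.
        return all([self.raised_by, self.reason, self.expected_return,
                    self.risks, self.resources, self.responsible])

firewall_change = ChangeRequest(
    raised_by="Security team / IT operations",
    reason="Align firewall rules with the updated security policy",
    expected_return="Partner traffic admitted; ~$25,000/day in new business",
    risks=["Malicious traffic admitted", "Partner traffic blocked"],
    resources=["Firewall management console", "Edge firewall pair"],
    responsible="Network engineering lead",
)
print(firewall_change.ready_for_review())  # True
```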
As you can see, an effective change management process uses these seven questions to generate enough information that the critical aspects of a proposed change are understood and informed decisions can be made about whether or not to proceed, or whether significant additional planning must occur before carrying out the change. Change management provides a mechanism by which organizations can understand and control their exposure to risk and, where possible, effectively coordinate aspects of change, considering both the interactions between changes and the impact of change upon business operations.

Excerpted from the Global Knowledge white paper Understanding and Managing the Risk of Change.
1. Which method of Layer 3 switching uses a forwarding information base (FIB)?
2. Which two statements are true about best practices in VLAN design? (Choose two.)
3. Refer to the exhibit. On the basis of the information provided in the exhibit, which two sets of procedures are best practices for Layer 2 and 3 failover alignment? (Choose two.)
4. If you needed to transport traffic coming from multiple VLANs (connected between switches), and your CTO was insistent on using an open standard, which protocol would you use?
5. Under what circumstances should an administrator prefer local VLANs over end-to-end VLANs?
6. What are some virtues of implementing end-to-end VLANs? (Choose two.)
7. Which of the following statements is true about the 80/20 rule? (Select all that apply.)
8. The Company LAN is becoming saturated with broadcasts and multicast traffic. What could you do to help a network with many multicasts and broadcasts?
9. The Company LAN switches are being configured to support the use of Dynamic VLANs. Which of the following are true of dynamic VLAN membership? (Select all that apply.)
10. The Company LAN switches are being configured to support the use of Dynamic VLANs. What should be considered when implementing a dynamic VLAN solution? (Select two.)
Electronic Health Records (EHRs) are electronic records of the state and history of a patient's health. The goal of EHRs is to enable the easy and secure exchange and sharing of digital health data between health care providers and institutions. EHRs can include any of the following types of data: demographics, medical history, medication and allergies, immunization status, laboratory test results, radiology images, vital signs, personal stats like age and weight, and billing information. EHRs have the potential to speed the delivery of medicine to patients, reduce costs, and minimize errors.

A recent report from the Optum Institute for Sustainable Health shows that while adoption of EHRs is steadily increasing, patients remain wary of the security of the technology. A Ponemon study on the security of EHRs found that the number of data breaches in 2011 surged by 32 percent, and that the cost of resolving each data breach is typically in the millions of dollars.

But beyond security, other barriers that could block the success of EHR technology include:
- EHR implementation costs are not insignificant. Government incentives have reduced this barrier, but costs still play into the equation.
- EHRs often do not capture complete information. Hospitals say that key care information is available in the EHR only about half the time.
- EHRs are typically implemented on proprietary systems, and interoperability and exchange of data between multiple systems is often difficult.
- Management of EHRs brings more responsibility for providers and institutions, and with responsibility comes associated financial risk.
According to a recent Ponemon study, since 2010 cybercrime costs have climbed 78% and the time required to recover from a breach has increased 130%. On average, U.S. businesses fall victim to two successful attacks per week in which their perimeter security defenses are breached.

Penetration testing (pen testing), also known as "ethical hacking," is a key step in reducing the risk of a security breach because it gives IT staff an accurate view of the information system from an attacker's point of view. The pen test process results in an active analysis of the system for any potential vulnerabilities, whether they arise from poor or improper system configuration, from known or unknown hardware or software flaws, or from operational weaknesses in processes or technical countermeasures. In other words, through pen testing, IT teams find the holes and vulnerabilities and quickly work to fix them before they can be exploited.

The one thing that separates a pen tester from an outside malicious attacker is permission to gain entry to the information system. The pen tester has permission to "attack" and is thereby responsible for providing a detailed report of the results found. Examples of a successful penetration would be obtaining confidential documents, identity information, databases and other "protected" information -- all without the need for passwords or other security measures.

Pen tests are a component of a full security audit. For example, the Payment Card Industry Data Security Standard (PCI DSS), a security and auditing standard, requires both annual and ongoing pen testing (after system changes).

Pen tests are valuable for several reasons, including:
- Determining the risk associated with a particular set of attack vectors
- Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
- Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
- Assessing the magnitude of potential business and operational impacts of successful attacks
- Testing the ability of network defenders to successfully detect and respond to the attacks
- Providing evidence to support increased investments in security personnel and technology

Obviously, there are a variety of ways to secure databases, applications, and networks, as there are many layers and levels to be secured. But the only way to truly assess the security of an environment is through direct testing. A good pen tester can replicate the types of actions that a malicious attacker would take, giving IT a more accurate view of the vulnerabilities within a network at any given time. There are a number of high-quality commercial tools available that can help ensure that both testing parameters and results are trustworthy, but nothing replaces a hands-on direct test.

Even so, the quality of pen testing can vary with the skill and thoroughness of the pen tester. Given the limited time available for testing, it is impossible to exercise all aspects of an application with all possible attack vectors. This problem is compounded in environments where secure coding practices have started to take root. The first phase of secure coding often involves limiting failure feedback to users, to limit the information a hacker can use to determine that he has discovered a flaw. Unfortunately, these same limitations make the pen tester's job more difficult as well, which means it is highly unlikely that a pen tester will find all the security issues. To aid in finding these partially obscured vulnerabilities, it is necessary to monitor the application from within. This ensures that tests that breach the application, but don't produce a response the pen tester can see, are still detected -- they remain vectors that a dedicated hacker could exploit.

Further, it's important to note that a pen test is a snapshot in time, and new vulnerabilities appear every day. Companies have to employ continuous monitoring throughout their information systems, including in the database tier, and be vigilant against attacks. For example, if a pen test is performed on a Monday, the organization may pass. But what if the next day there's an announcement of a new vulnerability in database servers that were previously considered secure? And the next week or next month another vulnerability is announced? This scenario plays out on a regular basis: companies are constantly playing catch-up applying patches. Ongoing, regular pen testing is critical and has proven to be a highly accurate method of identifying information system vulnerabilities. To get the most out of a thorough pen test, the system should be properly instrumented to log all activity at the network tier, web tier, and database tier. At the conclusion of the pen test, the logs from these instruments can provide extremely valuable insight into the system's vulnerabilities.

As with most policies and procedures, however, there are still issues that need resolving. Many organizations feel that pen testing is an area open to abuse -- most likely because there are no firmly adhered-to rules for the pen testing procedure, making it possible for a pen tester to skirt the process. The PCI DSS regulation has 12 mandatory requirements with stringent protective guidelines, built to preserve the safety and identity of cardholder data -- and section 11.3 in particular gets to the heart of the pen test, quite differently from the former sub-section requirements.

11.3 is technically not a new requirement. Previous versions of the PCI standard assumed merchants would always conduct legitimate pen tests. Unfortunately, 11.3 is an area of the PCI DSS regulation that has been excessively abused. Companies have previously cut corners on this requirement, and many pen testers were known to conduct meaningless scans in place of real testing. The new 3.0 version of the PCI DSS regulation effectively ends this scenario, and companies will be required to develop and adopt an official methodology for testing. However, some believe that v3.0 is still lacking with regard to the precise industry-accepted pen testing methodology the merchant should implement. The good news is that the PCI Council has continued to follow up on this issue and is forcing new measures to be adopted by organizations around the world. PCI DSS 3.0 requires that organizations identify the scope of their card data environment and have a pen test conducted that proves the card data environment is truly segmented from the rest of their network and the open Internet.

With the new rules in place in v3.0, demand for pen testers is on the increase, which is probably a good thing. The new requirements should help stop the abuse and foster policies for accurate pen testing. These new pen testing requirements are long overdue. Merchants need to take pen testing seriously and adopt the new requirements as soon as possible to ensure they're prepared for their first PCI DSS 3.0 assessment this year.
"There are a number of significant vulnerabilities in technologies relating to the IE domain/zone security model, the DHTML object model, MIME-type determination, and ActiveX. It is possible to reduce exposure to these vulnerabilities by using a different Web browser, especially when browsing untrusted sites," US-CERT stated in a vulnerability note. US-CERT is a non-profit partnership between the Department of Homeland Security (DHS) and the public and private sectors. It was established in September 2003 to improve computer security preparedness and response to cyber attacks in the U.S. US-CERT researchers say the IE browser does not adequately validate the security context of a frame that has been redirected by a Web server. It opens the door for an attacker to exploit the flaw by executing script in different security domains. To protect against the flaw, IE users are urged to disable Active scripting and ActiveX controls in the Internet Zone (or any zone used by an attacker). Other temporary workarounds include the application of the Outlook e-mail security update; the use of plain-text e-mails and the use of anti-virus software. Surfers must also get into the habit of not clicking on unsolicited URLs from e-mail, instant messages, Web forums or internet relay chat (IRC) sessions. See the complete story on Internetnews.com.
Entrust SSL Security

Secure Sockets Layer (SSL) digital certificates are electronic files that are used to identify people and resources over networks such as the Internet. Digital certificates also enable secure, confidential communication between two parties using encryption. Certificates are issued by a Certification Authority (CA). Much like a passport office, the CA validates the certificate holder's identity and "signs" the certificate so that it cannot be tampered with or altered.

Cutting Edge Encryption

Your website's security is our number one priority. That's why Entrust certificates feature cutting-edge 256-bit encryption -- the most secure encryption available -- to secure your data. Entrust certificates support SHA-2 algorithms, with ECC used in our root certificates, delivering the strongest security and increased performance.

Protect and authenticate identities in the cloud

The protection and authentication of digital identities is one of the key components in securing online transactions and communications. Entrust is diligent in ensuring we meet or exceed industry requirements for the issuance and management of publicly trusted certificates and SSL security. This added level of authentication makes it more difficult for your identity to be misused and your account compromised.

Safe Use of Wildcard Certificates

Properly managed wildcard Secure Sockets Layer certificates can provide increased flexibility for system administrators, but they come with increased risk. Entrust recommends using proper safeguards when deploying wildcard certificates.

Safe Use of Multi-Server Digital Certificates

Properly managed, multi-server certificates can provide increased flexibility. However, they also decrease SSL security and increase the probability of eavesdropping and impersonation attacks. Entrust recommends using proper safeguards when deploying multi-server certificates.

Entrust Legal Repository

This page contains information relating to the use and issuance of certificates by Entrust.
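Claims like these -- signature hash algorithm, key type, validity -- can be verified directly from a deployed certificate. The sketch below uses Python's standard ssl module together with the third-party cryptography package; the hostname is a placeholder, and the exact load_pem_x509_certificate signature varies slightly across cryptography versions.

```python
import ssl
from cryptography import x509

HOST = "example.com"  # placeholder; substitute the server you want to check

# Fetch the server's certificate in PEM form and parse it.
pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Issuer:   ", cert.issuer.rfc4514_string())
print("Signature:", cert.signature_hash_algorithm.name)   # e.g. 'sha256'
print("Key type: ", type(cert.public_key()).__name__)     # RSA or EC key class
print("Expires:  ", cert.not_valid_after)
```

A certificate signed with a SHA-2 family algorithm will report sha256, sha384 or sha512 here; anything weaker is a red flag.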
<urn:uuid:c6eadb05-9cf0-49f0-99bb-e17b894a5646>
CC-MAIN-2017-04
https://www.entrust.com/ssl-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00444-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868866
384
2.890625
3
GCN LAB IMPRESSIONS Army developing laser-guided lightning bolts - By Greg Crowe - Jul 02, 2012 Engineers at the Army's Picatinny Arsenal in New Jersey have managed to send lightning bolts down a laser beam. And they didn't even use The Force (as far as we know). The idea of firing targeted lightning bolts — in this case, a short 500 billion-watt burst of optical power — has been a staple of science fiction fantasies, but has been out of the reach of anyone in the real world. The Laser-Induced Plasma Channel, LIPC, could change that. Light travels more slowly in a gas (such as our atmosphere) than in a vacuum, and gets even a little slower when the beam pulses more intensely. If the pulses are intense enough, the intensity of the laser beam rises to the point that it ionizes the air surrounding it, turning it into plasma. This sheath of plasma conducts electricity far better than the surrounding, un-ionized air. So a high-voltage current can be sent along the path of the laser beam and into the beam's target. To think that people get paid to think this stuff up. This can have many applications. Most notably, and I hope the first thing they put this to use for, it could be used to detonate unexploded devices (such as land mines or IEDs) safely from a distance. As long as the target conducts electricity better than the ground it's sitting on, the current caused by the beam will cause it to detonate pretty reliably. I know you're thinking, well, why not use this application of technology to beam power everywhere? Well, a couple of things. First, it would take a great deal of power to have a laser beam constantly ionize the surrounding air. Second, there would be nothing to stop the ionization from occurring inside a lens, or within the amplification device, so they have to send pulses down the laser beam so the electricity goes where they want it. So, I think for now we will have to stick to the wireless power technology we have available. A few things have to be accomplished before this can be ready for operational use in the field. Synchronizing the laser pulse with the high voltage is currently tricky at best. Also, many of the components need to be ruggedized to survive combat conditions. But hopefully someday soon we will see it in use, saving the lives of many of our soldiers by making bomb disposal safer. You can use your imagination to figure out other potential uses. Greg Crowe is a former GCN staff writer who covered mobile technology.
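The "light is slower in air" point is easy to put numbers on. The toy calculation below compares travel time over one kilometre in vacuum and in air, using a textbook refractive index of about 1.0003 for air; the extra slowing from intense pulses (the effect the LIPC relies on) is separate and much smaller, and nothing here reflects the actual Picatinny figures.

    # Illustrative only: how much slower light travels in air than in vacuum.
    # The refractive index of air (~1.0003 at sea level) is an assumed textbook value.
    C_VACUUM = 299_792_458.0   # metres per second
    N_AIR = 1.0003             # approximate refractive index of air

    def travel_time(distance_m: float, refractive_index: float) -> float:
        """Time for light to cover distance_m in a medium with the given index."""
        return distance_m * refractive_index / C_VACUUM

    if __name__ == "__main__":
        d = 1_000.0  # one kilometre
        t_vacuum = travel_time(d, 1.0)
        t_air = travel_time(d, N_AIR)
        print(f"1 km in vacuum: {t_vacuum * 1e6:.3f} microseconds")
        print(f"1 km in air   : {t_air * 1e6:.3f} microseconds")
        print(f"extra delay   : {(t_air - t_vacuum) * 1e9:.2f} nanoseconds")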
<urn:uuid:9e71b482-6f7e-4903-b253-371e68ed99d3>
CC-MAIN-2017-04
https://gcn.com/articles/2012/07/02/army-death-ray-laser-guided-lightning-bolt.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00352-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950039
557
2.953125
3
Scientists Start Biggest Physics Experiment - By Reuters | Posted 2008-09-10 GENEVA (Reuters) - International scientists celebrated the successful start of a huge particle-smashing machine on Wednesday aiming to recreate the conditions of the "Big Bang" that created the universe. Experiments using the Large Hadron Collider (LHC), the biggest and most complex machine ever made, could revamp modern physics and unlock secrets about the universe and its origins. The project has had to work hard to deny suggestions by some critics that the experiment could create tiny black holes of intense gravity that could suck in the whole planet. Such fears, fanned by doomsday writers, have spurred huge interest in particle physics before the machine's start-up. Leading scientists have dismissed such concerns as "nonsense." The debut of the machine that cost 10 billion Swiss francs ($9 billion) registered as a blip on a control room screen at CERN, the European Organization for Nuclear Research, at about 9:30 a.m. (3:30 a.m. EDT). "We've got a beam on the LHC," project leader Lyn Evans told his colleagues, who burst into applause at the news. The physicists and technicians huddled in the control room cheered loudly again an hour later when the particle beam completed a clockwise trajectory of the accelerator, successfully completing the machine's first major task. Eventually, the scientists want to send beams in both directions to create tiny collisions at nearly the speed of light, an attempt to recreate on a miniature scale the heat and energy of the Big Bang, a concept of the origin of the universe that dominates scientific thinking. The Big Bang is thought to have occurred 15 billion years ago when an unimaginably dense and hot object the size of a small coin exploded in a void, spewing out matter that expanded rapidly to create stars, planets and eventually life on Earth. Problems with the LHC's magnets caused its temperature -- which is kept at minus 271.3 degrees Celsius (minus 456.3 degrees Fahrenheit) -- to fluctuate slightly, delaying efforts to send a particle beam in the counter-clockwise direction. The beam started its progression and then was halted. "This is a hiccup, not a major thing," Rudiger Schmidt, CERN's head of hardware commissioning, told reporters, adding the second rotation should be completed on Wednesday afternoon. Evans, who wore jeans and running shoes to the start-up, declined to say when those high-energy clashes would begin. "I don't know how long it will take," he said. "I think what has happened this morning bodes very well that it will go quickly ... This is a machine of enormous complexity. Things can go wrong at any time. But this morning we had a great start." Once the particle-smashing experiment gets to full speed, data measuring the location of particles to a few millionths of a meter, and the passage of time to billionths of a second, will show how the particles come together, fly apart, or dissolve.
It is in these conditions that scientists hope to find fairly quickly a theoretical particle known as the Higgs Boson, named after British scientist Peter Higgs, who first proposed it in 1964, as the answer to the mystery of how matter gains mass. Without mass, the stars and planets in the universe could never have taken shape in the eons after the Big Bang, and life could never have begun -- on Earth or, if it exists as many cosmologists believe, on other worlds either. © Thomson Reuters 2008 All rights reserved
<urn:uuid:c94ec7b2-4ffb-4b59-afd4-a33ccb4311c7>
CC-MAIN-2017-04
http://www.baselinemag.com/government/Scientists-Start-Biggest-Physics-Experiment
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946339
813
3.34375
3
Gobbi F.,Center for Tropical Diseases | Angheben A.,Center for Tropical Diseases | Anselmi M.,Center for Tropical Diseases | Anselmi M.,Center for Community Epidemiology and Tropical Medicine | And 8 more authors. PLoS Neglected Tropical Diseases | Year: 2014 Chagas disease (CD) is endemic in Central and South America, Mexico and even in some areas of the United States. However, cases have been increasingly recorded also in non-endemic countries. The estimated number of infected people in Europe ranges widely, from 14,000 to 181,000 subjects, mostly resident in Spain, Italy and the United Kingdom. This was a retrospective, observational study describing the characteristics of patients with CD who attended the Centre for Tropical Diseases (Negrar, Verona, Italy) between 2005 and 2013. All the patients affected by CD underwent chest X-ray, ECG, echocardiography, barium X-ray of the oesophagus and colonic enema. They were classified in the indeterminate, cardiac, digestive or mixed category according to the results of the screening tests. Treatment with benznidazole (or nifurtimox in case of intolerance to the first-line therapy) was offered to all patients, excluding those with advanced cardiomyopathy and pregnant or lactating women. A total of 332 patients were included (73.9% women). We classified 68.1% of patients as having Indeterminate Chagas, 11.1% as Cardiac Chagas, 18.7% as Digestive Chagas and 2.1% as the Mixed Form. Three hundred and twenty-one patients (96.7%) were treated with benznidazole, and most of them (83.2%) completed the treatment. At least one adverse effect was reported by 27.7% of patients, but they were mostly mild. Only a couple of patients received nifurtimox as second-line treatment. Our case series represents the largest cohort of T. cruzi-infected patients diagnosed and treated in Italy. Improved access to diagnosis and treatment is still needed, considering that about 9200 infected people are estimated to live in Italy. In general, there is an urgent need for common guidelines to better classify and manage patients with CD in non-endemic countries. © 2014 Gobbi et al.
<urn:uuid:cf1e358f-b1f3-4e0c-9832-a58a22b1adf6>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-epidemiology-and-community-medicine-478761/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958155
489
2.640625
3
Experienced aviators warn novice pilots, "the problem with a multiengine airplane is that sometimes you need them all." During takeoff, for example, failure of even a single engine is a high-risk situation, but with more engines, there is a greater chance of at least one failure. Distributed computing systems, such as grid computing, involve a similar paradox: The more resources the system has, the greater the number of points where the system can fail or degrade, and the harder the task of ensuring adequate performance in all situations, without unacceptable overhead. A computing grid faces four new tasks, in addition to whatever problems it was built to solve. The grid must discover and allocate available resources as their availability comes and goes; it must protect long-distance interactions against intrusion, interception and disruption; it must monitor network status; and it must initiate and manage communications among the processing nodes to make each one's needs known to the others. There is no single optimal approach to any of these tasks but rather a family of possible solutions that match up with different types of problems. Delays in communication between widely separated nodes fall into two groups. Fundamental is the speed-of-light limit: A node at one location cannot possibly become aware of a change in state at another location in less than the straight-line, speed-of-light propagation time of almost exactly 1 nanosecond per foot of separation. That sounds good until it's compared, for example, with modern local memory access times of, at most, a few tens of nanoseconds. Tightly coupled applications, such as simulation or process control, are therefore disadvantaged in distributed environments "until science discovers a method of communication that is not limited by the speed of light," as Aerospace Corp. scientists Craig Lee and James Stepanek wrote in their paper published in April 2001 (which can be accessed via www.eweek.com/links). There are problem decomposition techniques that aren't as badly handicapped by the speed of light: for example, Monte Carlo simulation, or the kind of data parceling strategies made famous by the SETI@Home project, which distributes sets of radio telescope data for intelligent-life detection by screen saver software. When problems lend themselves to this approach, they often don't need frequent synchronization and therefore aren't severely hampered by distance. What does affect the latter class of problem, though, is the limited bandwidth of networks and network interfaces. Plotting recent progress, Lee and Stepanek in the paper cited earlier find network access bandwidth, as determined by available interface cards, doubling every 2.5 years, ominously lagging the 1.5-year doubling time of processor performance, assuming continued Moore's Law improvement, which many project as likely through 2010. With processor speed outpacing the ability of interface cards to send and receive to the grid, it follows that some processing power will be best employed in boosting information content per bit: for example, by continuing the refinement of data compression algorithms using techniques such as the wavelet transforms in the JPEG 2000 standard.
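Circling back to the speed-of-light limit mentioned above: the 1-nanosecond-per-foot figure sets a hard floor on one-way delay between grid nodes, no matter how fast the hardware gets. The back-of-the-envelope sketch below compares that floor with an assumed 50-nanosecond local memory access; the distances and the memory figure are illustrative, not measurements.

    # Hard lower bound on one-way delay between nodes: straight-line distance
    # divided by the speed of light (roughly 1 nanosecond per foot).
    C = 299_792_458.0  # metres per second

    def light_delay_ms(distance_km: float) -> float:
        """Minimum one-way propagation delay in milliseconds."""
        return distance_km * 1_000.0 / C * 1_000.0

    if __name__ == "__main__":
        LOCAL_MEMORY_NS = 50  # assumed ballpark for a local memory access
        for label, km in [("same campus", 1), ("cross-country", 4000),
                          ("intercontinental", 10000)]:
            delay_ms = light_delay_ms(km)
            ratio = delay_ms * 1_000_000 / LOCAL_MEMORY_NS
            print(f"{label:15s} {km:6d} km: at least {delay_ms:8.3f} ms one-way "
                  f"(~{ratio:,.0f}x a {LOCAL_MEMORY_NS} ns memory access)")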
Data compression developments like these are offset, however (perhaps to devastating effect), by the growth of data overhead entailed in the use of XML syntax to make data more self-disclosing than it is in application-specific binary data structures. There's a difficult trade-off to be made between ad hoc availability of data for unanticipated uses and efficient, cost-effective packaging of data. Sad to say, a great deal of processing power may also be consumed by the calculations needed to implement data integrity and security measures, such as encryption for authentication of messages sent and received. Grid computing, in an open environment such as an IP network, invites both attempts to read the mail between the nodes and attempts to analyze the patterns of traffic for what they might reveal about concentrations of valuable information. If network and computer are the same, it follows that the network (an inherently exposed asset) is increasingly the locus of IT value. Enterprise IT architects and service providers will have to learn to protect it without crippling its hoped-for performance gains.
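The binary-versus-XML trade-off is easy to see by encoding one record both ways. In the sketch below the field names and values are invented for illustration; the point is only the relative sizes, and that compression claws back some, but not all, of the XML overhead.

    import struct
    import zlib

    # One made-up sensor reading: (node id, timestamp, temperature).
    node_id, timestamp, temperature = 17, 1_064_300_000, 21.5

    # Compact application-specific binary layout: two unsigned ints and a double.
    binary = struct.pack("!IId", node_id, timestamp, temperature)

    # The same record as self-describing XML.
    xml = (f"<reading><node>{node_id}</node>"
           f"<timestamp>{timestamp}</timestamp>"
           f"<temperature>{temperature}</temperature></reading>").encode()

    print("binary size          :", len(binary), "bytes")
    print("xml size             :", len(xml), "bytes")
    print("xml after compression:", len(zlib.compress(xml)), "bytes")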
<urn:uuid:c30e2255-906a-4f1c-9eb9-4c89353604b3>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Data-Storage/Internet-Insight-The-Paradox-of-Grid-Computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00196-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942677
854
2.546875
3
What is a Voice PRI Line? A Primary Rate Interface (PRI) line is a form of ISDN (Integrated Services Digital Network) line, a standard telecommunication phone line that enables traditional phone lines to carry voice, data, video and more. A Voice PRI line can receive or send 23 calls simultaneously. A PRI line is an end-to-end digital circuit, so the voice quality is much better than on analog trunk lines. The PRI circuit consists of two pairs of copper lines running from the service provider to the customer's office and terminating on a modem.
Advantages of PRI Lines
1. Flexibility in managing incoming and outgoing calls.
2. Direct Inward Dialling (DID) provides direct access for external callers to different employees without the need for an operator.
3. Easy transfer of outside calls to an outside number.
4. PRI lines can be used for voice connectivity, video conferencing, faxing and data transfer.
5. All of the essential features at no extra charge: Call Display, Call Forwarding, Call Waiting, Three-way Calling and many other features.
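The 23-call figure follows directly from the T1 framing behind a North American PRI: 23 bearer (B) channels for calls plus one D channel for signalling, each a 64 kbit/s timeslot. The arithmetic below assumes a T1 PRI; E1 PRIs used in much of the world carry 30 B channels instead.

    # North American (T1) PRI arithmetic: 23 bearer channels plus 1 signalling channel.
    B_CHANNELS = 23       # simultaneous calls
    D_CHANNELS = 1        # out-of-band signalling
    CHANNEL_KBPS = 64     # each timeslot is a 64 kbit/s DS0

    payload_kbps = (B_CHANNELS + D_CHANNELS) * CHANNEL_KBPS
    print(f"Simultaneous calls       : {B_CHANNELS}")
    print(f"Payload bit rate         : {payload_kbps} kbit/s")       # 1536 kbit/s
    print(f"T1 line rate with framing: {payload_kbps + 8} kbit/s")   # 1544 kbit/s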
<urn:uuid:e31e8660-6eec-43e0-b492-c051bb50403b>
CC-MAIN-2017-04
https://www.convergia.com/business/voice-and-mobility/isdn-pri/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00104-ip-10-171-10-70.ec2.internal.warc.gz
en
0.897777
251
2.71875
3
One important concept which most people overlook when it comes to website hosting or computing in general is the concept of client-server relationships. You'll hear our support team talking about the mail client or web client being at fault, or the server crashing. Now, some of you will instinctively know what these relate to and mean. But for a lot of people, it all sounds like gibberish. So, if you are in the latter camp, this tech tip is for you! Imagine you're at a restaurant. It's an odd restaurant, with not much in the way of customer service. You're the 'client' and you walk in and sit down. A waiter or waitress comes over and stands patiently waiting for you to ask a question – they're the server. Now, you can ask the server a question, but if you don't ask it in the right way they won't understand what on earth you're talking about. The way in which the communication takes place is called the protocol. Now, depending on what you want, there's a different protocol. These protocols are established by a committee somewhere else, but we'll leave that there for now as that would be a whole other blog post! So for biscuits, the protocol requires you to say: 'Hi. Give me biscuits'. The server would then provide you with biscuits. However, if you were to say, 'Hi. Give me my favourite snack', then the server obviously wouldn't have a clue what you're on about. It's a similar relationship in computing. Your 'client' is a program on your computer that talks to a 'server' which is located in a data centre somewhere in the world. Your application requests the information it wants via a protocol depending on the type of information. Here are some examples of protocols you could have: http (hypertext transfer protocol) which allows websites to be displayed; FTP (file transfer protocol) which allows files to be sent and received to and from a server; SMTP (simple mail transfer protocol) which is the protocol used for sending emails etc. So, going back to an earlier example, your 'email client' would be the program you use to read your emails and your 'web client' is the browser that you use to view websites. Hopefully it now all makes sense. Stay tuned for more pearls of wisdom!
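If the restaurant analogy clicked, here is the same idea in a few lines of Python: a web client phrasing its request exactly the way the HTTP protocol demands, and a server answering in kind. The host name is a placeholder; the point is the fixed 'way of asking'.

    import http.client

    # The client must phrase its request exactly as the HTTP protocol expects;
    # anything else and the server, like the waiter, has no idea what you want.
    conn = http.client.HTTPConnection("example.com", 80, timeout=10)
    conn.request("GET", "/")             # "Hi. Give me the page at /"
    response = conn.getresponse()        # the server's reply, phrased per the protocol
    print(response.status, response.reason)
    print(response.read(200).decode("utf-8", errors="replace"))
    conn.close()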
<urn:uuid:a050f1fe-dd91-4a4a-bc90-2a29d2f3f94f>
CC-MAIN-2017-04
http://m247.com/blog/understanding-client-server-relationships/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950487
521
2.578125
3
December 2010 saw a high level of malware activity, with cybercriminals turning to shortened URLs as a means to direct users to infected websites, according to Kaspersky Lab following the publication of its Monthly Malware Statistics for December 2010. Kaspersky Lab blocked over 209 million network attacks in December 2010 alone, prevented over 67 million attempts to infect computers via the web, detected and neutralised over 196 million malicious programs and registered almost 71 million heuristic detections. A contributor to this was cybercriminals taking advantage of URLs shortened by popular services such as bit.ly. In December, the top trends on Twitter's main page included a number of entries with links that had been shortened and which, after several redirects, eventually led to infected websites. The report also revealed that two fake antivirus programs made it into December's Top 20 malicious programs detected on the Internet – in 18th and 20th places. Genuine antivirus programs are now so effective at detecting their fake counterparts when they attempt to download to users' computers that the cybercriminals have moved their wares to the Internet instead. In the latter scenario, these rogue programs don't need to be downloaded to a computer; users just need to be lured to a fake antivirus website, which is a lot easier than bypassing real antivirus protection. Representatives of the Trojan-Downloader.Java.OpenConnection family remain extremely active. Instead of using vulnerabilities in a Java virtual machine, these Trojans employ the OpenConnection method of a URL class – standard functionality of the Java programming language. Two representatives of Trojan-Downloader.Java.OpenConnection were among the Top 20 malicious programs detected on the Internet in December, in 2nd and 7th places. At the height of their activity the number of computers on which these programs were detected in a 24-hour period exceeded 40,000. Topping the list of web-based threats, well ahead of its nearest rival, was the adware program AdWare.Win32.HotBar.dh. As a rule, this program is installed along with legitimate applications and then annoys the user by displaying intrusive advertising. For the first time ever, a malicious PDF file that makes use of Adobe XML Forms has made it into the Top 20 online threats. When a victim opens the file Exploit.Win32.Pidief.ddl, a script exploit is launched that downloads and runs another malicious program from the Internet. Exploit.Win32.Pidief.ddl occupied 11th place in December's rating of threats emanating from the Internet. December also offered virus analysts the chance to monitor cybercriminal activity as it adapted to a new Russian Internet domain. November 2010 saw the beginning of domain name registration in the .рф (Cyrillic abbreviation for the Russian Federation) zone of the Internet. Online scammers turned out to be most active in the new domain, registering sites that were used to spread malicious programs and make enticing offers of a fraudulent nature. Three types of malware were detected most often: fake archives resembling music, film and other media content; dummy programs masquerading as useful services for the Odnoklassniki social networking site; and script Trojans that redirected users to malicious web pages. More detailed information about the IT threats detected by Kaspersky Lab in December 2010 is available at www.securelist.com
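One practical defence against the shortened-URL trick is to ask the shortening service where a link points before visiting it. The rough sketch below sends a HEAD request by hand and prints the Location header without downloading the destination page; the short URL is a placeholder, and a real checker would follow a whole chain of redirects with a hop limit and some caution.

    import http.client
    from urllib.parse import urlparse

    def first_redirect_target(short_url):
        """Return the Location header a shortened URL redirects to, if any."""
        parts = urlparse(short_url)
        conn_class = (http.client.HTTPSConnection if parts.scheme == "https"
                      else http.client.HTTPConnection)
        conn = conn_class(parts.netloc, timeout=10)
        try:
            conn.request("HEAD", parts.path or "/")
            response = conn.getresponse()
            if 300 <= response.status < 400:
                return response.getheader("Location")
            return None
        finally:
            conn.close()

    if __name__ == "__main__":
        print(first_redirect_target("https://bit.ly/example"))  # placeholder link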
<urn:uuid:72e354b1-9172-408c-afbe-90221f233b05>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2011/Shortened_URLs_Direct_Users_to_Infected_Websites
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942144
691
2.65625
3
Virtual machines are extremely useful tools, especially from an IT standpoint. However, your virtual machines are just as vulnerable to data loss as the storage devices containing them. A failure of your hard drive, NAS device, or server containing your virtual machines can be a disaster. If you've lost the data on your virtual partition or virtual machine, Gillware's virtual machine data recovery experts can help you. What Is Virtualization? Put simply, virtualization is when you take something physical and make it… well, virtual. In terms of data storage, virtualization refers to taking a physical data storage device and making a digital version of it. You can do this with a floppy disk, a CD/DVD/Blu-Ray disc, or even a hard drive. You can create an exact digital copy of a storage device and clone it onto a real version of that device, or use software to "trick" your computer into thinking your virtual image is the real thing. When you create a virtual hard drive or virtual hard disk, your computer will treat it exactly like a real physical hard drive you've connected to it. But it's really just a single giant file that pretends to be a hard drive. This virtual hard disk (or VHD) can be stored on your computer's hard drive, or in a NAS device or SAN you have access to. Your computer looks at it and sees a partition table, a boot sector—everything it needs to read a hard drive. Virtual hard drives are sometimes called "soft" partitions. They don't correspond to any physical data storage device or separate partition. They just emulate their behavior. You can store files on a soft virtual partition, or even put an entire operating system on it. And if you get tired of having a virtual partition, you can just delete it (hopefully after you've gotten all your important files off of it) and go back to normal. What Is a Virtual Machine? When you put an operating system onto a virtual hard drive, you get a virtual machine. Using a hypervisor, you can use this virtual machine as if it were your computer. Depending on the size of your hard drive or server, you can fill it with as many virtual machines as you want. Virtual machines have many uses, from fun and convenience to disaster recovery. Type I and Type II Hypervisors The software you use to create and manage your virtual machine is called a "hypervisor". Hypervisors can be classified into one of two types: Type I and Type II. Both types of hypervisors have their own strengths and weaknesses. [Figure: a chart depicting the "chain" from a host machine to a guest machine.] Our virtual machine data recovery experts are experienced with all sorts of Type I and Type II hypervisors. A Type I hypervisor is also called a "bare metal" or "native" hypervisor. Software such as Microsoft Hyper-V and VMWare ESXi are Type I hypervisors. The software is installed directly on the "host" machine, with no need for a pre-existing operating system before creating the virtual "guest" machine. In this setup, the only thing standing between the host machine's hardware and the guest machine is the hypervisor itself. A Type II hypervisor is installed on a system that already has an operating system. These can also be called "embedded" or "hosted" hypervisors. A Type II hypervisor like VMWare Workstation or VirtualBox lets you create and boot up your virtual machines from your computer. For example, let's say you have a computer with Windows installed on it. This is your "host machine".
You use your hypervisor to create a virtual machine (or “guest machine”) with a Linux operating system. You first boot up your Windows host machine, and use the hypervisor software to access your guest machine. Type II hypervisors have one more link in the “chain” between your host machine and your guest machine than Type I hypervisors have. Microsoft Hyper-V Data Recovery Hyper-V is an enterprise-grade Type I hypervisor developed by Microsoft. It is based on the Windows Server operating system. Our engineers can recover your data if something happens within your Hyper-V virtual environments, or to the server containing them. VMWare ESXi Data Recovery VMWare ESXi is a direct competitor to Microsoft Hyper-V. It is also a Type I hypervisor developed for enterprise use. It is lightweight and runs on its own proprietary microkernel. If something has happened to your ESXi virtual machines or the server containing them, our ESXi data recovery experts can get your data back. The Advantages of Virtual Machines One use of virtual machines is to create dual-boot setups without physically partitioning the hard drive in your computer. You can create dual-boot setups that would be impractical or unfeasible in reality using virtualization. If things don’t work out, you can always delete the virtual machine and go back to the way things were with no fuss. Virtual machines can also make recovering from a virus attack or a software or operating system update gone wrong a breeze. One feature of virtualization is the “snapshot” function. When you take a snapshot of your virtual setup, it freezes the base image file. Every time you make a change to your machine, that change goes to a “delta” file instead. If something goes wrong—say a nasty virus invades your system, or your settings get irreparably messed up—you can delete the delta files and go back to your pristine and untouched disk image. Virtual machines are also incredible tools for disaster recovery. If you have a disk image of your server, you’ve got an extra little bit of insurance if your server crashes or needs to be taken down for maintenance. Instead of having hours or days of downtime, you can boot up your virtual machine and have an exact replica of your server to work with in the meantime. Any data backup business that offers full-image backup services makes use of virtual machines to make its clients’ lives easier. How Can Virtual Machines Fail? Virtual machines have a lot going for them. But at the end of the day, a virtual hard drive is only as safe as the device it’s stored on. If you’ve got virtual machines for all of your employees stored on a RAID-5 NAS device, you and your employees will be in deep trouble if two hard drives in the array fail. Virtual partitions and virtual machines can also accidentally be deleted from the device they’re stored on. When you have a virtual hard drive on a real hard drive, and that hard drive fails, the integrity of your virtual drive could be in jeopardy. Why Choose Gillware for Virtual Machine Data Recovery? Virtual machine data recovery is a complex process. There are many occasions in which a client will have a several-drive RAID array with multiple virtual machines stored on it. Our data recovery technicians need to repair not only multiple physical hard drives, but multiple virtual hard drives as well. Here at Gillware, our data recovery technicians and programmers have developed groundbreaking virtual machine data recovery techniques. 
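As the section above notes, a virtual hard disk is at bottom just one large file that pretends to be a drive. The sketch below creates a sparse raw disk image in Python purely to illustrate that point; the file name and size are arbitrary, and in practice you would let the hypervisor's own tools create its native format (VHD/VHDX, VMDK and so on).

    import os

    def create_raw_disk_image(path: str, size_gib: int) -> None:
        """Create a sparse file that could serve as a raw virtual disk."""
        with open(path, "wb") as image:
            image.truncate(size_gib * 1024 ** 3)  # sparse on most filesystems
        apparent = os.path.getsize(path) / 1024 ** 3
        print(f"{path}: apparent size {apparent:.0f} GiB")

    if __name__ == "__main__":
        create_raw_disk_image("guest-disk.img", 10)  # arbitrary example values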
Salvaging data from virtual machines requires an extremely high level of computer skills and analytic reasoning abilities. Our virtual machine data recovery experts have honed their skills over countless successful data recovery cases. These technicians are intimately familiar with the ins and outs of various hypervisors and virtual machine setups. We offer our data recovery technicians' world-class skills at affordable prices and reasonable turnaround times. Expedited emergency data recovery services are available for those of you who need your data back on the double. We charge no fees for evaluation, and even offer to cover the cost of inbound shipping. The only time you ever see a bill is after we've completed our virtual machine data recovery service, and you don't have to pay if we are unable to recover your data. Our virtual machine data recovery engineers work hard to provide both a high-quality and financially risk-free data recovery service. Make Gillware your choice for your virtual machine data recovery needs today. Ready to Have Gillware Assist You with your Virtual Machine Data Recovery Needs? Best-in-class engineering and software development staff Gillware employs a full-time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions Strategic partnerships with leading technology companies Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices. RAID Array / NAS / SAN data recovery Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices. Virtual machine data recovery Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success. SOC 2 Type II audited Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure. Facility and staff Gillware's facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company. We are a GSA contract holder. We meet the criteria to be approved for use by government agencies GSA Contract No.: GS-35F-0547W Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI. No obligation, no up-front fees, free inbound shipping and no-cost evaluations. Gillware's data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered. Our pricing is 40-50% less than our competition. By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low. Instant online estimates. By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery. We only charge for successful data recovery efforts.
We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you. Gillware is trusted, reviewed and certified Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
<urn:uuid:27123c00-20dc-4399-81e2-3e7ba47f300d>
CC-MAIN-2017-04
https://www.gillware.com/virtual-machine-data-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915416
2,295
2.71875
3
It is widely accepted that HPC provides countries with critical scientific advantages and increased economic standing, yet not everyone signs off on this idea. Supercomputing can be a tough strategy to sell when the expenditures that are required – for hardware, software, expertise and operating expenses – are substantial. India, with a burgeoning HPC ecosystem, recognizes the benefits that come with a solid supercomputing strategy, but still struggles to boost adoption. These issues – lack of political will, funding constraints and a dearth of HPC professionals – are being managed creatively in India as part of an effort to provide academic institutions with affordable, user-friendly mini-supercomputers. According to an article at Livemint, this scaled-down supercomputer, called Onama, is being deployed to six engineering colleges. Onama was developed by the Centre for Development of Advanced Computing (C-DAC) to serve as a development vehicle for India’s emerging supercomputing ecosystem. By allowing the user to make decisions about the system’s configuration, costs can be kept to a minimum. This lowers the barrier to entry for institutions that may have avoided supercomputing because of budgetary constraints. C-DAC was tasked with building India’s supercomputing ability, notes Pradeep Sinha, senior director, high performance computing and research and development at C-DAC. But the organization soon realized that building supercomputers wasn’t enough. Computer science students in India were still being taught sequential programming; they weren’t familiar with parallel techniques. So C-DAC refined its strategy to be culturally-relevant and more user-friendly. “We wanted to address the problem at the grassroots level,” Sinha said. “There are a few high performance computing (HPC) labs in IITs and ISIs. However, when you talk to other engineering colleges, they do not have this facility. Many students in other engineering colleges have never even heard of supercomputing.” With Onama, C-DAC sought to provide affordable platforms and key software packages to colleges to ease new users into an HPC-type environment. Onama was developed as a small parallel processing system that works like a supercomputer. The system, launched in September 2010, provides a package of open-source serial as well as parallel computing applications and tools across several engineering disciplines, including computer science, mechanical, electrical, electronics, civil and chemical engineering.
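The gap Sinha describes, students who know sequential programming but have never parallelized anything, is the kind of thing Onama-style labs are meant to close. As a hedged, toy illustration of the jump involved, the Python sketch below runs the same made-up workload first as an ordinary loop and then spread across cores; it stands in for the idea only and has nothing to do with Onama's actual software stack.

    from multiprocessing import Pool

    def work(n: int) -> int:
        """Toy CPU-bound job standing in for a real engineering computation."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [200_000] * 8

        # Sequential habit: one core, one job after another.
        sequential = [work(n) for n in jobs]

        # Parallel version: the same jobs spread across the available cores.
        with Pool() as pool:
            parallel = pool.map(work, jobs)

        print("results match:", sequential == parallel)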
<urn:uuid:7dad44ba-afad-4c86-92b1-e19618a0169a>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/26/india-touts-entry-level-supercomputer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00225-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959003
506
2.90625
3
The use of Enterprise networks is becoming increasingly complex. The past two decades have seen a sea change in the landscape of networking due to increasing reliance on the Enterprise network for a wide range of applications. The convergence of voice, video and data networks has added a lot more variables to determine the behavior of a network. Different applications influence the network in different ways and this mandates exercising complete control of network bandwidth usage to ensure optimum network performance. To add to this, the explosion of social media has had its own impact on the network. This results in a highly complex network that handles numerous classes, types and subcategories of traffic. Identifying and assigning maximum priority to traffic that is critical to your business is extremely important for making effective use of your network. Quality of Service, popularly known as QoS, is a great way to reach this objective and assign priority to the desired type of traffic on the network. This gives a host of benefits such as better predictability, security, measurability and guaranteed delivery of critical services. Here are some basics of QoS which you would need to know to design and implement it successfully on the network. Quality of Service (QoS) is the term used to define the ability of a network to provide different levels of service assurances to the various forms of traffic. QoS is a technique to optimize network usage by prioritizing traffic on the basis of your business objectives. Each organization is different in terms of the nature of business and the processes followed. Depending on the different business objectives, the network is used for different purposes, based on which traffic needs to be prioritized on the network. This ensures high-priority delivery of business-critical and delay-sensitive applications at all times. QoS is a set of standards and techniques to ensure high performance of critical applications on the network. QoS, as a mechanism, can be leveraged by network administrators to put their resources to the best use without any need for expansion or enhancements on the network. QoS makes it possible to ensure a high-performing network, allotting maximum priority to those applications that are highly critical to business. This ensures prompt delivery of business-critical applications, thereby putting your Enterprise network to optimum use. In this current era of converged networks, it is one single network that handles various types of traffic like voice, data and video. All of this comes under the best effort delivery category, which means that all of them have an equal chance of being dropped when congestion occurs. This leads to a situation where the battle between business-critical applications and other applications begins. For effective use of the network bandwidth, it is essential that the business-critical applications get higher priority over other applications. The fundamental requirement in this case becomes application classification. The applications running on the network need to be classified into two sets, the first one being those applications that are critical to business (CRM, ERP, business VoIP etc.) and the second one being bandwidth-intensive applications that do not contribute to business (streaming, peer-to-peer file sharing, online gaming, Internet radio etc.).
When a number of bandwidth-intensive applications run in parallel on the network, the network is subject to congestion due to a much bulkier volume of traffic than it can actually handle. When congestion happens, traffic gets dropped, which could result in data loss and failure of delivery of applications that might be critical to business. It thus becomes the top priority of a network administrator to attach maximum importance to business-critical applications over other bandwidth-intensive applications that are not relevant to the business. Let us consider a scenario where a congested network handles FTP and a VoIP call. FTP has a lower level of sensitivity to latency and network sluggishness. Although the transfer may happen at a slower rate, the delivery of the file is not affected but for the speed of delivery. But, if VoIP packets get affected due to network sluggishness, it will result in choppy audio at the receiving end, thereby defeating the very purpose of communication. The reliance on the network for voice and video has opened up a critical factor called 'delay sensitivity of an application'. Even the slightest delays can result in poor quality of the VoIP or video call, thereby affecting smooth functioning of the business that is largely dependent on the network. In this scenario, prioritizing the applications becomes an essential function of a network administrator. Network utilization patterns have drastically changed over the years. Video traffic volume on the network is increasing manifold. The demand for High Definition and 3D requires more bandwidth and adds even more to the network congestion. Findings from the Cisco Visual Networking Index underline this growth. With the explosive rate at which the bulk of video traffic is increasing in the network, throwing in more bandwidth is one way of handling it. But, when approaches like 'sustainability' and 'judicious use of existing resources' are the key to running a business in these times, making optimum use of the currently available resources is a better approach. QoS is a mechanism that helps in achieving this objective with great ease. An understanding of what constitutes your QoS setup is extremely important for effective implementation. The fundamental aspects of QoS are classification, marking and queuing. As we have seen earlier, there are different types of applications that run on the network. Applications like mail, CRM, SharePoint, intranet, database, VoIP, streaming, gaming, file-sharing, file-hosting etc. rely heavily on the network and not all of these are important to business. As the first step, classifying the applications is essential to determine how to prioritize these different applications. Two steps constitute the classification of traffic: identifying the application and marking its packets. Traditionally, access control lists (ACLs) were used as identification tools. The access list typically is a set of statements that defines a specific pattern that would be found in an IP packet. In this approach, the packet entering an interface is scanned for the specific pattern and the decision to allow or deny it depends on the pattern match. A major handicap of this approach is that the longer the list, the longer the look-up time. For delay-sensitive applications, this approach does not work well because of the look-up latency it adds. NBAR, described next, is a more intelligent classification engine that helps in keeping your QoS metrics within acceptable norms.
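Before moving on to NBAR, the sketch below illustrates the simpler, ACL-style identification just described: flows are matched against an ordered list of rules on protocol and destination port and fall through to a default class. The class names and port numbers are assumptions made for illustration; this is plain Python, not a router configuration.

    # Toy ACL-style classifier: match a flow against ordered rules, fall back
    # to a default class - roughly what a class map does on a router.
    RULES = [
        # (class name,         protocol, destination ports)
        ("voice",              "udp",    range(16384, 32768)),  # assumed RTP range
        ("business-critical",  "tcp",    {443, 1494, 2598}),    # e.g. HTTPS, Citrix
        ("bulk-transfer",      "tcp",    {20, 21}),             # FTP
    ]

    def classify(protocol: str, dst_port: int) -> str:
        for name, proto, ports in RULES:
            if protocol == proto and dst_port in ports:
                return name
        return "default"  # everything that matches no rule

    if __name__ == "__main__":
        for flow in [("udp", 20000), ("tcp", 443), ("tcp", 21), ("tcp", 6881)]:
            print(flow, "->", classify(*flow))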
NBAR is a system to classify traffic in the network. NBAR (Network Based Application Recognition) is capable of inspecting traffic from Layer 4 through Layer 7, rather than relying on port numbers alone. With such deep visibility, it can recognize applications that use dynamic ports, such as Skype, and map them to specific categories and subcategories of applications on the network. In the marking action, the identified packet is associated with (marked with) a unique value pertaining to its class of traffic. The packet will be identified by this marked value in QoS terms, and its treatment will depend on this marked value. The common marking options available on Cisco routers and switches are IP Precedence, DSCP, CoS, ToS bits, QoS group, and MPLS EXP values. For optimum use of router resources, it is highly recommended to do the marking as close to the source as possible. Marking is the basis for assigning priority to traffic on the network. A queue is used to store traffic until it is allowed to pass. It is imperative to have a queuing mechanism in place even if the chances of congestion seem minimal. Queuing is particularly useful when organizations assign low-speed links for non-essential applications pertaining to web traffic like file sharing etc. High speeds are generally recommended for business-related applications like Citrix, Webex etc. Cisco publishes queuing guidelines for organizations. Out of the available queuing mechanisms, the network administrator can follow an approach that best matches the goals and objectives of the organization, the network type etc. But, it is highly recommended to have a queuing mechanism in place. The three key components of implementing QoS are class maps, policy maps and service policies. The class map is where the Layer 3, Layer 4 and Layer 7 criteria are set, which helps identify the class of traffic. This is an important element of classifying the different types of traffic in the network. A class map is the basis with which traffic is classified, and one defines the different classes of traffic on the network with the information that we get from the Application, Transport and Network layers. The key components of class maps are 'match statements' and 'match criteria'. A set of conditions is specified according to the network administrator's requirements to classify traffic. When the matching requirements are met, the packet is classified under the respective class name. If the match statement fails, the packet falls under the default class. The class under which a packet falls will determine its chances of being dropped or passed when congestion occurs. A match statement could be written to segregate FTP packets and VoIP packets. The VoIP packets are much more delay-sensitive than the FTP packets. Thus, when the VoIP packets are separated from the default packets, it helps in prioritizing the VoIP packets considering their sensitivity to latency and delay. Once the class maps are created, the next step is to decide how to handle the classified traffic.
The segmentation of traffic into different classes is now over, and the network administrator needs to decide how to handle these different classes in an efficient manner. Once the classes are defined and the policy maps are built, activating those policies with the service-policy command is the implementation step. After QoS design and implementation, validation is the final and key step. The only way to validate a quantity is to measure it. Thus, measuring and monitoring of QoS becomes a vital aspect of ensuring effective QoS policies in the network. The depth and scope of QoS monitoring varies on a case-by-case basis. But, as a minimum, the monitoring should include link utilization trends and packet drop information. There are several ways to monitor QoS, such as the Cisco Class-Based QoS MIB (CBQoS MIB), NetFlow etc. Collecting a lot of data is possible using Cisco CBQoS, but there needs to be a supporting back-end tool to classify the data and convert it into useful information in easy-to-interpret forms. The tool must also be capable of sorting the data and flagging the drop rates. ManageEngine NetFlow Analyzer supports Cisco CBQoS and thus reports on the QoS policies that you have deployed. The reports show pre-policy and post-policy traffic and drops for each traffic class, along with queuing statistics. CBQoS monitoring at this level of depth helps you validate the QoS policies. You can change your policies according to the reports you see in NetFlow Analyzer. It is a tool best used for QoS policy validation.
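Returning to the marking step described above: the DSCP value ends up in each packet's IP header, and an application or edge device can set it itself. The sketch below marks a UDP socket's traffic with DSCP 46 (Expedited Forwarding, conventionally used for voice) by writing the ToS byte via setsockopt. The codepoint and the address are illustrative assumptions, the call behaves differently across operating systems, and whether the network honours the marking depends on the trust boundary configured on the switches and routers.

    import socket

    DSCP_EF = 46               # Expedited Forwarding, conventionally used for voice
    TOS_VALUE = DSCP_EF << 2   # DSCP sits in the top six bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the OS to mark packets sent from this socket with the DSCP value.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    sock.sendto(b"voice payload placeholder", ("192.0.2.10", 5004))  # example address
    sock.close()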
<urn:uuid:ec213790-d208-44aa-93af-fb3d823cc020>
CC-MAIN-2017-04
https://www.manageengine.com/products/netflow/allaboutqos.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00225-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932382
2,320
2.96875
3
Big data is changing industries. Big data is a new application made possible by cloud computing. Big data is about patterns and trends — forward-looking business intelligence. For example, a retailer can see declining sales from traditional monthly sales reports. However, why sales are declining and if they will continue to decline is another matter. Big data seeks to correlate multiple sources and types of data to understand the "why and if." Big data is more complicated than scaling traditional data processing. Big data is hard because of 1) the sheer volume of real-time data being generated, 2) the velocity at which the data flows into an organization and 3) the variety of formats this data takes, both structured and unstructured. The benefits are real though. Insurance, banking, medical/pharmaceutical, retail, telecoms and many others see dramatic improvements in competitiveness, security and profitability. Big data is new, and that makes it hard. What You Need to Know Big data uses a "3 V's" model to describe the challenges: volume, velocity and variety. - Volume refers to the amount of data — structured and unstructured — generated in a connected society. Consider sensors in home appliances, smartphone apps, vending machines and kiosks, social media, click-stream analysis of web sites etc. This is a key point in big data. The amount of unstructured data generated far exceeds structured transactional data like product orders. - Velocity is the pace at which transactions occur, as well as the pace of decision-making based on analysis. Velocity includes the frequency of traditional structured transactions. More importantly, it also includes very valuable and non-traditional data streams. These real-time data streams don't fit into traditional database formats or structures. But they provide a powerful understanding of complex commercial, industrial, and social systems. Velocity also impacts the timeframes for processing, storing, and then sharing or using the data. Big data requires increased agility for using the knowledge gleaned. - The variety of data generators includes any device with an Internet connection or the ability to capture and store operational activities. The variety of data types and formats includes every conceivable human-human, human-machine, and machine-machine communication method, for example, map/GPS coordinates, images, telemetry, RFID, text, speech, etc. The variety of different data types often poses challenges for traditional relational databases. What You Need to Do The value of big data comes from trying to solve a specific business problem using Business Intelligence processing. This processing is so intensive that it often requires hundreds or thousands of dedicated virtual machines and massive storage. It can require radical re-engineering of applications and systems. If you want the business agility, innovation, and revenue growth cloud-driven big data can deliver, you most likely need significant changes in your people, process, and technology. Here is what you need to do: Explore the many types of data available to you. The key to big data is integrating multiple sources and types of data. Be careful — just because a data stream is available or affordable does not mean it'll aid in solving your business intelligence questions. Understand what big data is and how and why it could work for your firm. This isn't a technology conversation. It's a business intelligence problem-solving conversation.
Assume that your database and business intelligence teams may not know the best sources of data. Your existing databases may or may not be a goldmine. Be careful not to fall into the trap of just processing more existing transactional (in-house) data faster. While it might be useful, the highest ROI from big data comes from integrating non-traditional data sets and streams. Big data brings new roles around identifying and understanding the relationships and patterns between data sets. The "data scientist" is an emerging role. New skills may include identifying opportunities through the use of statistics, algorithms, mining, and visualization. Consider that big data can change the structure or culture of your analytics or business intelligence teams. Big data is new and that makes it hard. Failures are a given and success will require multiple efforts. You need to support a culture of innovation. Evaluate your infrastructure and security abilities and options. All approaches to big data analysis, including Apache Hadoop and Google MapReduce, require significant technical resources. Most traditional IT infrastructures (compute, storage, networking and software) will struggle to handle the integration and processing required. Public cloud services are one option. Private cloud is another option, but will likely require significant investments. When using external data sources, security also becomes a prime concern. Begin by creating a cross-functional business and IT team. Have business leaders describe problems they'd like to solve. Understand how you'll integrate big data into your existing business, IT, and governance frameworks. Task DBAs with understanding the limited role of SQL in big data. Ask infrastructure team members to understand the interfaces and capacities required. Have the software team members look into writing applications to analyze data. To get going you must understand what you want to achieve before you invest in technology. Develop business key performance indicators (KPIs) to show success. Consider how you'll scale, reuse, and repurpose your efforts. Only towards the end should you consider how you'll solve your business problem with Hadoop clusters, MapReduce, cloud services, etc.
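For readers who have never seen the pattern behind Hadoop and MapReduce, the toy Python sketch below counts words with an explicit map step and reduce step on one machine. Real deployments split the input across many nodes and shuffle the intermediate pairs between them; none of that distribution is shown here.

    from collections import Counter
    from itertools import chain

    def map_phase(document: str):
        """Emit (word, 1) pairs - the 'map' step."""
        return [(word.lower(), 1) for word in document.split()]

    def reduce_phase(pairs):
        """Sum the counts per word - the 'reduce' step."""
        totals = Counter()
        for word, count in pairs:
            totals[word] += count
        return totals

    if __name__ == "__main__":
        documents = [
            "big data is about patterns and trends",
            "big data is hard because of volume velocity and variety",
        ]
        intermediate = chain.from_iterable(map_phase(d) for d in documents)
        print(reduce_phase(intermediate).most_common(3))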
<urn:uuid:171b1089-76c2-49fc-b8da-692af3436ff4>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/07/30/how-big-data-changes-what-you-know-about-business-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00435-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911148
1,132
3.21875
3
Everyone and anyone is a user of the internet; yet there are still a large number of baffling acronyms being spouted by service providers, making the world of internet connectivity hard to get to grips with. That is where Gradwell's ADSL ABC comes in. We've gathered up the connectivity acronyms and have put them into plain English: ADSL (ADSL2+): Asymmetric Digital Subscriber Line. This is the most common way that broadband is delivered in the UK. It uses your telephone line to send and receive data, splitting out the voice data from your telephone conversations and delivering the internet! ADSL2+ is the same thing, but faster. Offered at Gradwell as SMPF. MPF: Metallic Path Facility. With internet usually delivered via a telephone line, as with ADSL above, two providers share the connection to provide the service, which means you pay line rental to BT even if you use someone else for telephone and internet services. MPF removes the shared part: your provider takes ownership of the whole line and provides telephone and internet services, which (with Gradwell anyway) means no line rental! EFM: Ethernet in the First Mile. This uses the existing telephone line, like ADSL, but bonds pairs of copper wires together to create more stable connections. If a problem occurred on an ADSL line, the service would stop. If a problem occurred on an EFM line, one pair might go down, slowing the service, but there would still be a connection. FTTC/FTTP: Fibre To The Cabinet/Premises. This is the new super-fast broadband that everyone is talking about. Instead of using a line made up of just traditional copper wire, FTTC and FTTP use fibre optic cable, so you can achieve speeds of up to 1Gbps (60 times faster than ADSL). The difference between the two acronyms is that FTTC is delivered to a box in the street, then the data is sent over copper wire to you; meaning the further you are away from the box, the more speed you'll lose. FTTP, on the other hand, is delivered right into (you guessed it) your premises. This means there isn't a chance to lose any speed and you get rid of the "up to" phrase. NTE: Network Termination Equipment. When you have services like EFM, FTTC or FTTP, you need to have equipment that can take the data and turn it into something your router can handle. Think of oil and cars. Oil is pulled out of the ground, passed through a refinery and is then used as fuel for cars. The NTE refines the data for your router. LLU: Local Loop Unbundling. First off, BT Business and BT Openreach are different companies. BT Business will provide a service, like Gradwell or TalkTalk can. BT Openreach owns the UK's telecommunications infrastructure. Through LLU, other internet service providers can install their own equipment in telephone exchanges across the country to provide services to their own customers, while using some BT Openreach infrastructure to make it all work. Of course, a small fee says thank you to BT Openreach (that even BT Business has to pay) – your monthly line rental charge. If you're still confused, or you would just like to know more about the above services, give us a call on 01225 800 808 or email email@example.com. (Image by Nauvasca)
Older technology catches the solar system by its tail

Reading through GCN's recent anniversary issue, I was struck by how far technology has come in three decades. Look at the old Cray X-MP compared to the Titan supercomputer of today. I began to wonder if any of that old technology was still in use. For the most part, any technology that was in government service 30 years ago has been retired. But there is one area where old technology is still alive and well, working in the one place where we can't refresh it, even if we wanted to: space. And in at least two cases, that technology is working just fine.

Two space-faring workhorses, the Voyager probes, are contributing data to NASA's current work mapping the heliosphere — the tail of solar radiation and particles trailing off away from the sun — for the entire solar system. Launched in 1977 (the year the Apple II was born), they are both still functioning, with Voyager 1 traveling 11 billion miles away from the sun. It's very possible that Voyager 1 will be the first man-made object to leave our solar system.

The Voyager spacecraft are identical. They have 10 instruments that record data about the planets they pass, and between them they explored Jupiter, Saturn, Uranus and Neptune before heading off toward deep space to find the edge of the solar system. Other than one failure, all of Voyager's systems are still working fine, even equipment that, based on our experience with their Earth-bound counterparts, might not seem so durable. In all, there are three types of computers on a Voyager, with a total capacity of 68KB. The Flight Data Subsystem, for instance, records its data to a single 8-track digital tape recorder, which plays the data back to Earth every six months. And despite its 36-year-old technology, the Voyagers have a long list of impressive accomplishments. So Voyager may be the most successful computerized system ever created. And it's just really cool, or should I say groovy, that it includes an 8-track tape deck.

I suppose when people say that they don't make things like they used to, looking at Voyager, you'd have to agree. I've had brand new computers die after a few years of terrestrial use. And 11 billion miles away in space, some 1970s technology is still answering questions about the universe.

Posted by John Breeden II on Jul 22, 2013 at 11:16 AM
Batch Renaming Using Irfanview

The purpose of this guide is to teach you how to batch rename a group of images using Irfanview. Batch renaming is the process of taking a group of images with different names, changing them to the same name, and numbering them sequentially (001, 002, 003, etc.).

An example of where batch renaming could prove useful: You took 5 pictures of a sunset at your house on Jan 24, 2006. You transfer the pictures to your computer. The images are given names (P000001, P000002, P000003, etc.) that tell you nothing about the image and make searching for a particular image difficult. You want to rename the images to "Sunset_Home_1-24-06", followed by a number (01, 02, 03, etc.). Instead of renaming each image individually, you do a batch rename.

A short Flash presentation is available for viewing. I suggest you watch the presentation first, as this will give you an idea of what this tutorial will cover. The written tutorial will give you a bit more detail about batch renaming.

Batch Renaming Video

- Open Irfanview, click on File in the Toolbar, and select Batch Conversion/Rename.... This will open the Batch conversion dialog box.
- In the Look in: box, navigate to the images you want to rename.
- Select (single left click) the images to be renamed, then click the Add button. This will add the selected files to the Input files: box, to the left of the Add button.
- Next, you need to decide where you want the files to be placed after renaming. If you want the images to be placed in the same folder you loaded them from, click the Use this directory as output button. If you want to put them in a different folder, under Output directory:, click the Browse button and navigate to where you want them placed.
- Under Work as:, select Batch rename.
- Under Batch rename settings:, next to Name pattern:, type in the name you want to use, followed by pound (#) signs. The number of pound signs used determines the number of digits following the name. Typing Sunset_Home_1-24-06_## will give you Sunset_Home_1-24-06_01 for your first image. Typing Sunset_Home_1-24-06_### will give you Sunset_Home_1-24-06_001 for your first image.
- When everything is set the way you want it, click the Start button. A dialog box that tracks the conversion will appear. When it's finished converting, click the Exit button.
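If you ever need to do the same job without a GUI, the name-pattern idea translates directly into a few lines of script. The sketch below is not part of Irfanview; it is a minimal Python equivalent, and the folder path and name pattern are hypothetical examples you would change to suit.

```python
import os

def batch_rename(folder, pattern="Sunset_Home_1-24-06_", digits=2, ext=".jpg"):
    """Rename every file with the given extension to pattern + a zero-padded
    sequence number, mimicking Irfanview's ## / ### name patterns."""
    images = sorted(f for f in os.listdir(folder) if f.lower().endswith(ext))
    for number, old_name in enumerate(images, start=1):
        new_name = f"{pattern}{number:0{digits}d}{ext}"   # ## -> 01, ### -> 001
        os.rename(os.path.join(folder, old_name), os.path.join(folder, new_name))

# Example with a hypothetical folder:
# batch_rename(r"C:\Users\Me\Pictures\Sunset", digits=2)
```

Here the digits argument plays the role of the pound signs in Irfanview's dialog: digits=2 gives 01, 02, 03, while digits=3 gives 001, 002, 003.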
Picture This: A Visual Guide to Redundant Arrays

A basic understanding of the technology behind RAID (Redundant Array of Independent Disks) is a necessity for many entry-level certifications, yet the concept can be difficult to make sense of. Currently, certification candidates need to know the topic — again, at a basic level — for all three of the core CompTIA certifications (A+, Network+, and Security+), as well as Cloud+ and exams from other vendors.

This guide uses the analogy of data being written to a cube to illustrate the differences between the primary RAID levels. It is important to note that while the concepts are illustrated using the same images throughout, there are differences between levels in that some work with bytes and others with blocks. In all cases, the visuals are intended to be consistent, and the emphasis is on comprehension rather than precision.

What is RAID?

RAID is simply a technology that uses multiple disks to provide fault tolerance. The acronym is most commonly associated with Redundant Array of Independent Disks, but you will see Inexpensive used as well, particularly in older documentation. Keep in mind that the usual purpose of RAID is to allow your servers to continue functioning if a hard drive fails.

RAID levels are implemented either in software on the host computer or in the disk controller hardware. A hardware-configured RAID implementation will generally run faster than a software-configured RAID implementation because the software implementation uses the system CPU and system resources. Hardware RAID devices generally have their own processors, and they appear to the operating system as a single device. So long as a failed hard drive can be replaced with a new hard drive without needing to bring the system down, the arrangement is said to be hot swappable. There are several different levels/implementations of RAID, and they are discussed in the following sections.

RAID 0

Even though the first letter of the RAID acronym stands for "redundant," a RAID 0 implementation is not actually redundant. Known as disk striping, it uses multiple drives and maps them together as a single physical drive. This is done primarily for performance, since there is no fault tolerance. The data is written in equally sized stripes (blocks) across all disks. Using multiple disks, reads and writes are performed simultaneously, which means that disk access is faster, making the performance of RAID 0 better than that of other RAID solutions and significantly better than that of a single hard disk. Figure One shows a representation of RAID 0: notice that the data on the two drives is different and there is no relationship between them. A minimum of two drives is needed to implement it, and there is no loss of capacity by using this level of RAID.

Figure One: With RAID 0, data is striped across multiple disks and there is no redundancy (fault tolerance).

The biggest downside of RAID 0 is that if any disk in the array fails, the entire logical drive becomes unusable and must be restored from backup.
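Striping is easy to picture in code. The sketch below is an illustration we have added (it is not from the original article): it deals equally sized blocks out to the disks of a RAID 0 array in round-robin fashion, which also shows why a single failed disk takes the whole logical drive with it.

```python
def stripe(data: bytes, disks: int, block: int = 4):
    """Distribute equally sized blocks across the disks of a RAID 0 array."""
    array = [bytearray() for _ in range(disks)]
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    for n, chunk in enumerate(blocks):
        array[n % disks].extend(chunk)   # round-robin: disk 0, 1, 0, 1, ...
    return array

disk0, disk1 = stripe(b"ABCDEFGHIJKLMNOP", disks=2)
print(disk0)  # bytearray(b'ABCDIJKL')  <- stripes 0 and 2
print(disk1)  # bytearray(b'EFGHMNOP')  <- stripes 1 and 3
```

Reads and writes can hit both disks at once, which is where the performance gain comes from, but notice that neither disk alone holds enough to reconstruct the data.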
RAID 1

The best way to understand RAID 1 is to consider the term commonly used to describe how it functions: disk mirroring. Disk mirroring provides 100 percent redundancy because everything is stored on two disks: if one disk fails, its mirror continues to operate. The failed disk can be replaced, and the RAID 1 array can be regenerated.

Some implementations are called disk duplexing (duplexing is a less commonly used term), and the only difference between mirroring and duplexing is one more controller card. With mirroring, one controller card writes sequentially to each disk. With duplexing, the same data is written to both disks simultaneously, so disk duplexing has much faster write performance than disk mirroring. Many hardware implementations of RAID 1 are actually duplexing, but they are still generically referred to as mirrors.

RAID 1 offers load balancing over multiple disks, which increases read performance over that of a single disk. Write performance, however, is not improved. An important consideration is that other RAID levels using striping are often incapable of including a boot or system partition in fault tolerance solutions, but RAID 1 is perfectly suited for this. Figure Two shows a representation of RAID 1: notice that the data on the two drives is identical. A minimum of two drives is needed to implement RAID 1, and there is a 50 percent loss of capacity by using this level of RAID (effectively doubling the storage requirements).

Figure Two: With disk mirroring, each disk is an exact replica of the other.

Among the other shortcomings of RAID 1: it has a single point of failure, the hard disk controller. If the controller were to fail, the data would be inaccessible on either drive. Duplexing helps, but you are still at the mercy of only being able to survive the loss of one drive.

RAID 3 and 4

These two RAID levels are rarely used today, and the biggest difference between them is whether they work with bytes (RAID 3) or blocks (RAID 4). Both can be considered disk striping with a parity disk, and the disk in question is a dedicated disk. They implement fault tolerance by using striping (just like RAID 0) in conjunction with a separate disk that stores the parity information. Parity information is a value based on the value of the data stored in each disk location. This system ensures that the data can be recovered in the event of a failure.

The process of generating parity information uses the arithmetic value of the data binary. Using this math, there are only four things to remember: 0+0=0, 0+1=1, 1+0=1, and 1+1=0. When you use this math to add together 11000011 + 10110000, the resulting parity value is 01110011, and Figure Three illustrates this with the dedicated parity disk appearing in gold.

Figure Three: With RAID 3 and 4, parity is computed and written to a dedicated disk.

This process allows any single disk in the array to fail while the system continues to operate. If, for example, the second disk failed, you would be left knowing that 11000011 + (unknown) = 01110011 and be able to reverse the operation to recreate all the missing values (11000011 – 01110011 = 10110000). Once a new disk was installed to replace the failed one, the parity information would be used to regenerate the values needed. A minimum of three disks is needed, and a problem with this solution is that the parity disk is dedicated. That means it must be written to with every operation, and that can slow the system down.
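The binary addition described above is the exclusive-or (XOR) operation, and it is easy to experiment with. The following short Python sketch (our illustration, not from the original article) computes the parity value for the two example bytes and then recovers a "failed" disk from the survivor and the parity:

```python
# XOR parity as used by RAID 3/4/5: 0+0=0, 0+1=1, 1+0=1, 1+1=0
disk1 = 0b11000011
disk2 = 0b10110000

parity = disk1 ^ disk2        # the value written to the parity disk
assert parity == 0b01110011   # matches the article's worked example

# Simulate losing disk2: XOR the surviving disk with the parity to rebuild it
rebuilt = disk1 ^ parity
assert rebuilt == disk2

print(f"parity  = {parity:08b}")    # 01110011
print(f"rebuilt = {rebuilt:08b}")   # 10110000
```

Because XOR is its own inverse, the same operation that generates the parity also regenerates a lost disk, which is exactly what the array controller does during a rebuild.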
RAID 5

RAID 5 takes the concept of RAID 3 and 4 and turns it into disk striping with distributed parity by doing away with the dedicated parity disk. The parity information (block-level striping) is spread (rotated) across all the disks in the array instead of being limited to a single disk. It requires a minimum of three disks, and many implementations support a maximum of 32.

Regardless of the number of drives in the array, it can survive the failure of any one drive and still be able to function, but it cannot survive the failure of multiple drives: if a second hard disk fails before the failed one is replaced, data loss could occur. Figure Four shows a visual representation of RAID 5; notice that there is a loss of capacity equal to the size of one disk (thus it can range from 1/3 to 1/32 of the array).

Figure Four: With RAID 5, parity is distributed across all disks.

RAID 5 is commonly used today — making it the most popular of the fault-tolerant solutions — even though it does slow write performance, since parity has to be computed and then written across several disks. The biggest disadvantage is that when regeneration is needed, after replacing a bad disk with a new one, recalculating the values for the replacement and writing them can take considerable resources from the server.

RAID 6

Since the biggest weakness of RAID 5 is that it can provide fault tolerance for only a single failed drive, even though there can be 32 in the array, RAID 6 addresses this by providing fault tolerance for up to two failed drives. It implements block-level striping with double distributed parity and is illustrated in Figure Five.

Figure Five: With RAID 6, parity is calculated and written to two disks.

A minimum of four drives is needed, and the maximum is determined by the controller used. The loss of capacity is equal to the size of two disks; thus it can range from 1/2 if you have four disks to 2/32 if you have thirty-two disks, and so on.

RAID 1+0 and 0+1

Sometimes RAID levels are combined to take advantage of the best of each. One such strategy is RAID 10, which combines RAID levels 1 and 0 and is also known as RAID 1+0. In this configuration, four disks are required. As you might expect, the configuration consists of a mirrored stripe set — creating a striped set from a series of mirrored drives. To some extent, RAID 10 takes advantage of the performance capability of a stripe set while offering the fault tolerance of a mirrored solution. As well as having the benefits of each, though, RAID 10 inherits the shortcomings of each strategy: in this case, the high overhead and decreased write performance are the disadvantages.

RAID 0+1 is the opposite of RAID 1+0. Here, the stripes are mirrored (think of it as a "mirror of the stripes"). Figure Six shows an example of striped disks being mirrored.

Figure Six: With RAID 0+1, the striped set is mirrored.

Both RAID 10 and RAID 0+1 require a minimum of four drives. With RAID 10, two mirrored drives are used to each hold half of the striped data. With RAID 0+1, two mirrored drives are used to replicate the data on the RAID 0 array. While they sound confusingly similar, remember that the difference between them is the order of the operations: 10 is a stripe of the mirrors and 0+1 is a mirror of the stripes.

Summing It Up

The following table offers a summary of the most important attributes of each RAID level.
| RAID Level | Known As | Disks Required | Disadvantages | Advantages |
|---|---|---|---|---|
| 0 | Disk Striping | 2 | No fault tolerance | Increased read/write performance |
| 1 | Disk Mirroring | 2 | 50 percent overhead and slow write performance | Provides fault tolerance and can incorporate a second disk controller (duplexing) |
| 3 or 4 | Disk Striping with a Parity Disk | 3 | Since parity must always be written to one disk, it can slow the system | Provides fault tolerance with a cost of only one disk in what could be a large array |
| 5 | Disk Striping with Distributed Parity | 3 | Regeneration can take time | Better read performance over other parity solutions |
| 6 | Disk Striping with Double Distributed Parity | 4 | Computing dual parity slows the system down more than RAID 5 | Able to recover in the event of the loss of up to two drives |
| 10 | Mirrored then Striped | 4 | Expensive to implement and half the storage space is used by the mirrors | Redundancy of RAID 1 coupled with the speed of RAID 0 |
| 0+1 | Striped then Mirrored | 4 | Just as expensive to implement as RAID 10 and half the storage space is used by the mirrors | Since no parity is computed, it is fault tolerant and fast |
LeChiffre is yet another ransomware that has recently been observed causing major damage (in Mumbai – read more here). Not much material about it is available, so we decided to take a look.

It is different from most of the ransomware present nowadays. Instead of spreading to users and automatically infecting their machines, LeChiffre needs to be run manually on the compromised system. A common scenario of infection is that attackers automatically scan the network in search of poorly secured Remote Desktops, crack them, and after logging in remotely they manually run an instance of LeChiffre. It encrypts files and appends the extension ".LeChiffre" to their names.

The name comes from French – literally it means "the number," but it is also related to the verb "chiffrer" and the noun "chiffrement," meaning to encrypt and encryption – more details here: https://fr.wikipedia.org/wiki/Chiffrement (thanks to @jeromesegura for the tip). Another possible explanation is that the creators wanted to refer to a character from the James Bond series.

Analyzed sample: 4523ccfd191dcceeae8e884f82f5c7ad

It is distributed as a typical Windows executable. When we run it, what appears is a GUI with labels in Russian. It drops a copy of itself in the Recycle Bin, disguised as a jpg.

The file encryption process starts after we run it manually. The first button from the top scans all the available disks and encrypts files with the given extensions. The user has a large degree of control over the process of encryption: clicking the fourth button from the top (Отдельно – Separately), we can choose a single file that we want to encrypt. The full process of encryption is possible offline, without an internet connection – which proves that keys are generated locally, not downloaded from the C&C server as in the case of Cryptowall.

The process of recovering files is also very strange in comparison to other ransomware – the attackers want the victim to simply send them some encrypted files and the secret code (which is 128 bytes long – base64 encoded).

Leaving a backdoor

Apart from encrypting files on the system, LeChiffre also leaves a backdoor by replacing the file sethc.exe (C:\Windows\system32\sethc.exe) with cmd.exe. Windows runs sethc.exe when the user presses SHIFT 5 times. It can be deployed even if no user is logged in to the system (on the log-in screen). By replacing it with any other application, we gain the ability to deploy that replacement application without logging in. By replacing it with cmd.exe, attackers get access to the system command line without knowing a password, and even gain the ability to change the password.

At startup, LeChiffre grabs data about geolocation by querying the address api.sypexgeo.net – the country code is then displayed in the left corner of the GUI. If the scan is started, it also begins to communicate with a remote server – http://220.127.116.11/sipvoice.php – via a simple, HTTP-based protocol.

To visualize the encryption method, we did an experiment. As input we took a square-sized BMP. Below you can see a visualization of the raw file. There is a small header at the beginning and the raw bytes after that (the BMP format keeps bytes in reversed order; that's why the picture is upside-down):

raw bytes of Koala.bmp:

And this is the visualization of the raw bytes of Koala.bmp.LeChiffre – the above file encrypted by LeChiffre:

Most of the content didn't change! Only the beginning and the end of the file are encrypted by the malware.
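This partial-encryption pattern is easy to confirm for yourself. The following Python sketch (our illustration, not a tool from the original analysis) compares an original file with its ".LeChiffre" counterpart and reports which byte regions differ; on an affected file you would expect changes only in the first and last 0x2000 bytes, plus the bytes appended at the end.

```python
def changed_regions(original_path, encrypted_path):
    """Compare two files byte by byte; return (start, end) offsets of the
    regions that differ, plus the number of bytes appended to the second file."""
    with open(original_path, "rb") as f:
        a = f.read()
    with open(encrypted_path, "rb") as f:
        b = f.read()

    regions, start = [], None
    for i in range(min(len(a), len(b))):
        if a[i] != b[i]:
            if start is None:
                start = i                   # a differing run begins
        elif start is not None:
            regions.append((start, i))      # the differing run ended
            start = None
    if start is not None:
        regions.append((start, min(len(a), len(b))))

    appended = len(b) - len(a)              # LeChiffre appends a 32-byte blob
    return regions, appended

# Hypothetical usage:
# regions, extra = changed_regions("Koala.bmp", "Koala.bmp.LeChiffre")
# print(regions)   # expected: first 0x2000 bytes and last 0x2000 bytes
# print(extra)     # expected: 32
```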
Left – raw bytes of the original BMP; right – the same BMP encrypted by LeChiffre.

No matter the size of the file, LeChiffre always encrypts the first 0x2000 (8192) bytes at the beginning of the file. Then, similarly, it encrypts 0x2000 bytes at the end of the file. After that, it appends 32 bytes (256 bits) of content to the file – possibly the AES key or an initialization vector (example below – appended content from 0x22F9E to 0x22FBD).

This 32-byte payload is not generated per file – an experiment made on another file gave the same result (7A 02 5B 5A … A9 39 E1 98).

The binary is UPX packed. After unpacking it, we can find out that it has been written in Delphi. The decompiled form (TForm.dfm) contains 3 base64-encoded elements. After decoding them, we can see that they contain an HTML document, divided into 3 chunks. After merging them, we get a template for the "Attention" message.

LeChiffre not only encrypts local files, but all available resources: those shared on the local network… and others, mapped by RDP or some virtual environments. It also enumerates all available users, and sends the data to the hardcoded C&C during file scanning.

LeChiffre looks very unprofessional. The code has been written in Delphi and packed with UPX – practically no countermeasures against analysis have been taken. This can be justified by the fact that this ransomware was not intended to be distributed in a campaign, only used by attackers after they had entered the system. However, the poorly implemented encryption and the model of communication with victims (via e-mail) show that this malware has been prepared lazily, probably by beginners.

Nevertheless, it managed to do some damage. This only proves the point that increasing awareness about ransomware is very important. Even a badly implemented piece of malware can still cause careless users to lose money.

- http://articles.economictimes.indiatimes.com/2016-01-11/news/69678894_1_hackers-computers-bitcoins – LeChiffre attacking in Mumbai
- http://www.bleepingcomputer.com/forums/t/578220/lechiffre-ransomware-adds-lechiffre-extension-to-files/ – thread about LeChiffre on forum BleepingComputer
Fold-Up Electric Car in 2013?

January 31, 2012

Massachusetts Institute of Technology's (MIT) Media Lab has created an electric vehicle (EV) prototype that folds up so that multiple units fit together like shopping carts, and it is meant solely for urban driving. Originally called CityCar but now named the Hiriko project, this EV was presented in Brussels last week to Durao Barroso, president of the European Commission, and is slated for production in 2013. Here are the details on this mini-EV:

- The two-seater has a pod-like design and four-wheel drive.
- Both driver and passenger enter through the windshield, which swings upward.
- Its 60-mile range means it's meant solely for urban driving.
- These EVs will work much like a bike-sharing program, alerting users when one becomes available.
- Hiriko is 100 percent electric and electronic, with no mechanical controls. Its lithium-ion battery pack is intended to work with a smart electric grid that uses clean, renewable energy sources.
- Because this EV folds up and multiple units fit together like shopping carts, three of them can use a parking space typically needed for one standard sedan.
- It's equipped with a "state of the art information system for permanent communication in an intelligent city environment," which means your smart phone can find it.
- Because Hiriko is powered by four in-wheel electric motors that have independent regenerative braking systems, steering and suspensions — all of which are digitally controlled — the car will be able to spin on its own axis, making maneuvering into tight parking spaces easy.

The image below shows a rendering of Hiriko in action near New York's Guggenheim Museum — with three others folded up and parked toward the right.

Photo and image courtesy of Hiriko
The network NASA uses to deliver ground-based tracking, telemetry, data and communications services to a wide range of current and future spacecraft needs a serious bump in security technology.

That was the conclusion of the space agency's Office of Inspector General, which stated: "We found that NASA [and NASA's Goddard Space Flight Center in Greenbelt, MD, which manages the network] failed to comply with fundamental elements of security risk management reflected in Federal and Agency policies. We believe that these deficiencies resulted from inadequate Agency oversight of the network and insufficient coordination between stakeholders. These deficiencies unnecessarily increase the network's susceptibility to compromise."

The OIG went on to state that NASA's network assets are located in extreme environments such as Alaska and Antarctica, making maintenance on the aging structures more difficult. Constrained budgets have also led the Agency to defer some maintenance activities, which, on at least one occasion, contributed to the unexpected failure of network equipment.

The Near Earth Network uses four NASA-owned ground stations – three in the United States, on the campus of the University of Alaska in Fairbanks, at the Wallops Flight Facility (Wallops) in Virginia, and at the White Sands Complex (White Sands) in New Mexico, and one at McMurdo Station in Antarctica – to offer services to over 40 missions with satellites in low Earth orbit (LEO), geosynchronous orbit (GEO) and highly elliptical orbit, missions in Lunar orbit, and missions with multiple frequency bands.

"At the time of our audit, NASA was expanding the network's capacity by installing new antennas at the Kennedy Uplink Station at Kennedy Space Center and at the Ponce de Leon Ground Station in New Smyrna Beach, Florida. A portion of this new capacity will be dedicated to supporting the launch activities for the vehicles NASA intends to use to send humans into deep space – the Space Launch System (SLS) and Orion Multi-Purpose Crew Vehicle (Orion). NASA also installed a third antenna at the Fairbanks facility, which became operational in July 2014," the OIG stated.

The problems cited by the OIG included:

- Information system connections between the network and the external entities that support its operations are not managed in accordance with Federal and NASA policy. As a result, the agency does not have sufficient visibility into the security posture of these external systems and cannot ensure the owners are able to adequately respond to or report security events.
- IT security controls, such as software that identifies malicious code, are not in place or functioning as intended.
- Due to insufficient coordination between the Network, Goddard, and the NASA Office of Protective Services, physical security controls have not been implemented at NASA-owned and supporting contractor facilities in accordance with Agency or Federal standards.
- Network components are at risk of unexpected failure due to their age and lack of proactive maintenance. Although the network is performing preventative maintenance on NASA-owned assets, it has not been performing or tracking depot-level maintenance on this equipment.
This failure to proactively inspect and replace cables and mechanical systems that are reaching their failure point has already resulted in one unexpected breakdown and could require the network to purchase more costly commercial services in the future.

- NASA assigned a security categorization rating of "Moderate" to the Near Earth Network and did not include the network in its Critical Infrastructure Protection Program. We believe this categorization was based on flawed justifications and that the network's exclusion from the Protection Program resulted from a lack of coordination between network stakeholders. Given the importance of the network to the success of NASA Earth science missions, the contingency support it provides for the Space Network, and the plans for it to support human space flight in the future, we believe a higher categorization rating and inclusion in the Protection Program is warranted.

The OIG said that NASA management agreed with almost all of its recommendations, with the exception of reclassifying the network completely. NASA's Associate Administrator for Human Exploration and Operations and the Chief Information Officer agreed to recategorize the portion of the network that supports the SLS and Orion as a "High" system, but intend to retain the "Moderate" rating for the rest of the network because it is not critical to the operation of any NASA spacecraft or spacecraft program.

"We have concerns regarding this rationale. As discussed in our report, we do not believe the network operates simply as a 'pass through' for communications. Rather, network components must store (albeit temporarily) and process data and commands prior to transmitting them to the spacecraft. Given the importance of the network to the success of NASA Earth science missions and the launch and contingency support it provides other Federal agencies, we continue to believe the entire network should be categorized as 'High,'" the OIG stated.
The Linux operating system was initially created by Linus Torvalds, who began his work in 1991 and worked steadily until 1994, when Version 1.0 of the Linux kernel was released. Developed under the GNU General Public License, Linux source code is freely available to everyone, and Linux is therefore often considered an excellent, low-cost alternative to more expensive operating systems. The Linux operating system may be used as an end-user platform as well as for a wide variety of other purposes, including networking and software development. By virtue of its functionality and availability, Linux has become quite popular. As a result, students around the world have been seeking Linux education.

This book is a guide to setting up a complete Linux environment on which to learn about the various Web technologies. As you move through the text and the accompanying labs, you will build a system replete with a database management system, a Web server, and server-side Java. And you'll understand how it all works. Because the whole system is based on Linux, that wonder of the open-source era, everything you learn here is applicable to any platform on which Linux will run. These platforms include Apple hardware, Intel and Intel-compatible hardware, and, of course, the eServer iSeries by IBM. Each chapter contains hands-on labs to reinforce your understanding of just how powerful Linux is.

Author Name: Don Denoncourt and Barry Kline
Publication Date: July 15, 2002
Product Dimensions: 8.5 x 0.5 x 10.8 inches
Shipping Weight: 1.2 pounds
It sounds like the stuff of science fiction, but for the past few years, researchers have tapped into brain waves to control everything from video games to wheelchairs. They are beginning to use computers to decipher our thoughts, too.

I received an e-mail yesterday from a reader named Brad asking me for more details on an item listed in my recent story, "10 technologies that will change the world in the next 10 years." The story listed the most important items that Cisco chief futurist Dave Evans predicts will impact us in the decade to come. Brad was interested in item No. 9, which I called "Yes, there's a cure for that." The story said, "Today we have mind-controlled video games and wheelchairs, software by Intel that can scan the brain and tell what you are thinking and tools that can actually predict what you are going to do before you do it."

Brad asks: Do you have references you can provide for the Intel software and the mind-controlled items?

Good question. I've written about mind-controlled software and games a little bit. The technology is based on the electroencephalogram (EEG), and the most well-known commercial producer of a headset that uses EEG is a company called Emotiv. By using that headset, other companies have developed things like mind-controlled video games, in which your thoughts control the game, not a mouse, keyboard, joystick or gesture-based device. One company that has commercial mind-controlled games on the market using the Emotiv headset is Mind Technologies. Here's a video from September that demonstrates games from Mind Technologies.

EEG was also the basis of a mind-controlled wheelchair developed at the University of Zaragoza and demonstrated in 2009. That same year, Toyota also demonstrated a mind-controlled wheelchair using an EEG cap. Here's a 17-second clip if you just want to briefly see Toyota's wheelchair. Here's the video from UofZ that offers a somewhat technical explanation of how it works.

Intel and Carnegie Mellon made news about a year ago with the NeuroSys project. This uses EEG and a few other such technologies, like functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), so that a computer can translate thoughts into words. It's not ready for practical uses yet, but it's amazing all the same. Here's the video.

Earlier this year, researchers at The University of Western Ontario's Centre for Brain and Mind published research in the Journal of Neuroscience on how they used fMRI to determine the action a person was planning, a few moments before the action took place. Take a look:

So there you have it, Brad. I've offered a rundown of technology demonstrations that allow our minds to control devices and computers to read our minds. Mind blowing, isn't it?
In information technology, a network is a series of places called nodes that are interconnected by some kind of communication. Networks can interconnect with other networks and contain subnetworks.

If we classify networks by the distance between nodes, networks can be local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs). The most common network topologies are bus, star, token ring, and mesh. The type of data transmission technology in use can also characterize a given network; we will speak mostly about TCP/IP and Ethernet technology. A network will, in different situations, carry different kinds of data, including voice signals, data signals, or both at the same time. A network can also be characterized by who can use it (public or private). In some of the writing we will also be talking about the usual nature of network connections, such as dial-up or switched, dedicated or non-switched, or virtual network connections. The hardware part of the story looks at the types of physical links, which can be, for example, optical fiber, coaxial cable, and unshielded twisted pair. Large telephone networks and networks using their infrastructure (like the Internet) have sharing and exchange arrangements with other companies so that larger networks are created.

A computer network

A computer network these days is often simply referred to as the Internet. But that is not the whole truth. A computer network is a collection of hardware parts and computers at the ends that are interconnected by communication channels. With the use of these channels, computers can share resources and information. If at least one device is able to send data and another device can receive that same data, then the two devices are connected in a small network.

Networks may be classified according to different criteria, such as the medium used to make the channel or the communications protocol used to send and receive. Communications protocols are the rules that determine the next steps in the communication process and the format in which the data must be presented to be able to go across the wire. To make the exchange of information possible, we need to have a common set of communications protocols. Well-known communications protocols are Ethernet, a hardware and Link Layer standard that is universal in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking. Internetworking is a term that can be used when we need to speak about data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats.
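To make "host-to-host data transfer" concrete, here is a minimal sketch (our illustration, not from the original article) of two endpoints exchanging a message over the Internet Protocol Suite using Python's standard socket library; it runs both ends on one machine via the loopback address.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007          # loopback address, arbitrary free port

# One node: bind and listen before the other node tries to connect
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one():
    conn, _addr = srv.accept()           # wait for the other node
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)   # send the payload back

t = threading.Thread(target=serve_one)
t.start()

# The other node: connect and transfer data across the channel (TCP over IP)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello, network")
    print(cli.recv(1024).decode())       # -> echo: hello, network

t.join()
srv.close()
```

Everything in the exchange — addressing, connection setup, ordered delivery — is handled by the protocol stack, which is exactly the role the paragraph above assigns to communications protocols.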
This is the first of five articles on graph databases. In this one I will discuss the technology in a general sense and consider why graph databases are potentially "the next big thing". In the second article I will consider the similarities and differences between graph databases and NoSQL databases such as Hadoop and Cassandra; in the third and fourth articles I will discuss Neo4j (from Neo Technologies) and uRiKA (from YarcData) respectively, where the former is probably the leading general-purpose graph database (that is, supporting both transactional and query-based applications) and the latter is focused on graph analytics. Finally, in the fifth article I will consider how take-up of graph databases may impact the data warehousing market.

So, what is a graph database? Or, to get us started, what is a graph? Because it is not a graphical representation of an equation. A graph, sometimes also known as a network (but "network database" would be confusing), consists of nodes and edges. The very first graph of this sort was created by Leonhard Euler in the 18th century to resolve the Königsberg (now Kaliningrad) bridge problem. The question to be answered was whether it was possible to traverse all the bridges in Königsberg (which at that time was built on both banks of the Pregel and on two islands in the middle of the river) just once, ending where you started (the answer was no). Euler generalised the problem by replacing the land masses with nodes and the bridges with edges.

Graph theory has widespread applicability in, for example, operational research, where it is used to design networks of various sorts, such as road networks, pipeline networks and so forth, as well as large-scale IT networks (including the Internet itself). Major concerns in such applications are a) to discover the most cost-effective implementation of the network and b) to discover the fastest route across the network between any two particular nodes. Another major application is in circuit board design.

In so far as graph databases are concerned, a node represents an entity (a person or thing) and an edge a relationship. Note that relationships (and therefore edges) may be one-way or two-way. For example, if you follow me on Twitter but I don't follow you, then I can influence you but you may have no way to influence me. In addition, graph databases support the use of attributes alongside both relationships and entities, so relationships can be qualified in some way. For example, suppose that you want to understand who influences whom in potential buying situations. Most of us are not influenced by a single person but by multiple people, and some people are given more credence than others, so you might want to apply a weighting to a relationship. Or, if you have a relationship that involves ownership, say, then there are lots of things that you might own: a cell phone, a car, a laptop and so on, and these can be used as qualifiers to the ownership relationship. Pursuing this ownership example further, you might also apply attributes to the entity that you own, for example applying a model and year to your Chevrolet.

Basically, graphs offer a holistic view of the relationships that an entity participates in. Applications that are interested in such relationships will typically be either about managing those relationships or discovering relationships that were not previously known. Such use cases are common: for example, a significant part of master data management consists of hierarchy management.
Conversely, security services want to discover and understand the relationships that exist between criminals and/or terrorists. Other possible uses would be in network management; SIEM (security information and event management), where using graphs might make more sense than current approaches; bioinformatics; medicine (the Mayo Clinic is a YarcData customer); and capital markets; as well as various social media environments. It is also worth noting that the semantic web (Web 3.0) is predicated upon the use of the Resource Description Framework (RDF), and RDF statements are, in effect, graphs. As a result you may see references to RDF databases, but this is just another way of saying graph database. So there is also the potential for graph databases to support applications that leverage the semantic web as it comes more and more into play.

In essence, the way that a graph database works (I will talk about this further in subsequent articles) is that it stores entities and relationships, as discussed, but its processing is along the edges of the graph. This turns conventional approaches to data storage on their head. In a relational database, for example, the heart of the system is its entities (tables), and you only use relationships (primary/foreign keys) to get to another entity: what you are doing is processing data. In a graph database you are processing relationships.

Finally, why do I suggest that graph databases may be the next big thing? There are two reasons. The first is that they provide a more focused approach for addressing questions about relationships than either Hadoop or traditional approaches. And, as one vendor put it to me, "understanding relationships is the best way of looking at almost any question". That makes sense to me. Secondly, some big hitters are starting to enter this field: YarcData is a spin-off from Cray, and the result of years of research into this area in partnership with various government security organisations; I am also aware that at least one of the major data integration companies is looking into building connectors into this space; and IBM has just released graph store capability in its latest release of DB2. Note that this isn't a graph database per se, but you can tag selected relational data to make it look as if the data is graphical. In other words, it supports a graph-based logical view of the data. IBM also supports SPARQL, an open source language for querying graph databases, which I'll discuss in a subsequent article.
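To ground the idea that a graph database "processes relationships", here is a minimal sketch in Python (our illustration; real products such as Neo4j expose far richer models and query languages). It stores people as nodes, directed "influences" edges with a weight attribute between them, and then walks the edges to find everyone a given person can reach. The names and weights are invented for illustration.

```python
from collections import deque

# Adjacency list: node -> list of (neighbor, edge attributes) pairs.
# Edges are directed ("influences") and carry a weight attribute.
graph = {
    "Alice": [("Bob", {"influence": 0.8}), ("Carol", {"influence": 0.3})],
    "Bob":   [("Dave", {"influence": 0.9})],
    "Carol": [],
    "Dave":  [],
}

def reachable(graph, start):
    """Traverse along the edges (the graph-database way of working) and
    return every node the start node can influence, however indirectly."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, _attrs in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    seen.discard(start)
    return seen

print(reachable(graph, "Alice"))   # -> {'Bob', 'Carol', 'Dave'}
```

The point of the design is that the query starts from relationships and follows edges, rather than joining tables of entities, which is exactly the inversion described above.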
Fifty years ago, IBM unveiled the first System/360 mainframe, considered to be "the most important product announcement in the company's history." Despite the rapid pace of advance in the computing field, not only are mainframes still very much in use today, but the launch of the System/360 introduced technical concepts that would become part of the fabric of modern computing. As EnterpriseTech editor-in-chief Timothy Prickett Morgan writes, "that these virtual card wallopers are still around is a testament to the fact that software is sticky, that change is difficult or sometimes not worth the trouble, either technically or economically, and that gradual evolution is what makes IT products endure."

The System/360 came to fruition after three years of development, under the direction of then IBM chairman Thomas J. Watson Jr., assisted by chief architect Gene Amdahl, project manager Fred Brooks, and launch manager John Opel. IBM invested two years of revenue into the project. As TPM writes, this was a gutsy and expensive undertaking, a bold move the likes of which is seldom undertaken by public entities. A project that was budgeted at $675 million – for factories, hardware and software development – ended up with a price tag of $5 billion in 1961 dollars (worth about $39 billion in today's dollars).

The risk paid off handsomely, better than anyone could have imagined. Writes TPM: "IBM was breaking ground in so many new technologies, from chip manufacturing to software development, that it would have been hard to keep to the schedule and within the budget. The System/360 also turned IBM into a chip manufacturer on a large scale for the first time, and it also made the disk drives and reel-to-reel tapes that are visually synonymous with the mainframe in culture."

In the first five years post-launch, IBM sold 4,000 of the mainframes and had orders for 20,000 more. It did not take long for Big Blue to recoup its extravagant initial layout. Profits grew by 20-25 percent per year in the late 1960s, dipping in the 1970s as peak demand dampened. At that point, IBM started pushing System/3 minicomputers, which ate into mainframe sales a bit. The result, writes TPM: "two healthy – although unfortunately incompatible – product lines, which incidentally live on as the Power Systems and the System z mainframe today."

The 360 name reflected the machine's general-purpose nature. The System/360 was intended for companies both big and small, and for commercial as well as scientific use. The idea was radical: one machine that could span a wide performance range and could run the same operating system and application software to solve a wide range of business and scientific problems.

The System/360 was also revolutionary for another reason. IBM essentially merged its five product lines into one compatible family using an architecture that featured 8-bit byte addressing, which lives on in every computer today. "After the S/360," writes the company, "we no longer talked about automating particular tasks with 'computers.' Now, we talked about managing complex processes through 'computer systems.'"

"It was the first product family that allowed business data-processing operations to grow from the smallest machine to the largest without the enormous expense of rewriting vital programs… Code written for the smallest member of the family had to be upwardly compatible with each of the family's larger processors.
Peripherals such as printers, communications devices, storage, and input-output devices had to be compatible across the family."

The early IBM mainframes ran the performance spectrum from one to 50MHz. Memory ranged from a minimum of 8KB up to 8MB in the high-end models. While some view mainframes as old and outdated, 80 percent of the world's corporate data is still managed by mainframes. Although the first model was revolutionary, today's descendants are many times more powerful: the largest mainframes can now execute 52,000 business transactions per second, and 40 to 50 new businesses adopt the mainframe every year.

At a press event held in New York City today celebrating the half-century milestone, Steve Mills, IBM Senior VP & Group Executive, Software & Systems, ran through some of the highlights of the IBM mainframe 50 years after its introduction:

- 23 billion ATM transactions per year are processed by the mainframe, worth more than $1.4 trillion.
- $6 trillion in credit and debit card payments are processed annually.
- 3 billion travelers a year access mainframes in making their arrangements.
- 30 billion business transactions are processed daily.

Throughout the five decades since the mainframe's debut, IBM has continued to emphasize compatibility. "Applications must continue to work properly. Thus, much of the design work for new hardware and system software revolves around this compatibility requirement," the company maintains. In cases where it cannot provide that backwards compatibility, IBM aims to give users at least a year's warning that software changes will be required.
While Apple has created many fine things, they were yet to be created themselves when Xerox PARC scientists came up with Media Access Control addresses. These MAC addresses are 48 bits or 6 bytes long, so they are also known as MAC-48 or EUI-48, where EUI stands for Extended Unique Identifier. A MAC address is written as hexadecimal characters, for example: 00:1A:2B:3C:4D:5E.

MAC addresses act as the physical addresses for local communications. They show up in most IEEE 802 networks, including 802.3 (as well as Ethernet II), 802.5, 802.11 (Wi-Fi), 802.15 (Bluetooth), and the ITU-T G.hn standards.

The IEEE now manages MAC addresses. Its current projection is that the pool of addresses available with 48 bits (over 281 trillion) will last until 2100. The IEEE has already planned to extend the MAC address space to 64 bits, calling the result EUI-64.

There are three types of MAC addresses: unicast, multicast, and broadcast. The way to identify which address type you are viewing is simply to look at the first byte. A unicast address's first byte will be even, like 02, 04, 06, etc. The first byte of a multicast address is odd, such as 01, 03, 05, etc. The broadcast address uses all 1s binary, or all FF hex. That way, any receiving interface can tell what kind of destination address it is reading after just one byte.

To work correctly, each network interface has to have an address that is unique in its local segment of media. That address is the unicast address. To support that uniqueness, Ethernet card vendors register with the IEEE and get one or more Organizationally Unique Identifiers (OUIs). TCP/IP literature refers to this OUI as the vendor address component of the MAC address. The first three bytes (pairs of hexadecimal characters) of any unicast address contain that vendor address component; the remaining three bytes carry the serial number of that vendor's interface card.

Many vendors have chosen to register multiple OUIs, for lots of different reasons. They may want to use each for a specific product, or they may simply have run out of serial numbers on a previous OUI. The full list is available at http://standards.ieee.org/develop/regauth/oui/oui.txt.

Although many vendors are careful to abide by the standards, others are not as careful. A vendor's careless use of a code registered to another vendor may result in two or more NICs having the same Ethernet address. If cards with duplicate MAC addresses are installed on the same side of a router, results will be unpredictable.
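As a quick illustration of the rules above (our sketch, not part of the original article), here is a small Python function that classifies a MAC address by its first byte and splits out the OUI; the sample addresses are made up.

```python
def classify_mac(mac: str) -> str:
    """Classify a MAC address as broadcast, multicast, or unicast,
    and extract the OUI (vendor) portion of a unicast address."""
    octets = [int(part, 16) for part in mac.replace("-", ":").split(":")]
    if len(octets) != 6:
        raise ValueError("a MAC-48/EUI-48 address has exactly 6 bytes")

    if all(o == 0xFF for o in octets):
        return "broadcast (all FF hex)"
    if octets[0] % 2 == 1:                       # odd first byte -> group address
        return "multicast (first byte is odd)"
    oui = ":".join(f"{o:02X}" for o in octets[:3])
    serial = ":".join(f"{o:02X}" for o in octets[3:])
    return f"unicast, OUI (vendor) {oui}, serial {serial}"

# Made-up example addresses:
print(classify_mac("00:1A:2B:3C:4D:5E"))   # unicast, OUI 00:1A:2B
print(classify_mac("01:00:5E:00:00:FB"))   # multicast
print(classify_mac("FF:FF:FF:FF:FF:FF"))   # broadcast
```

Note how the decision needs only the first byte, which is exactly why a receiving interface can classify a frame's destination so early.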
State agencies that help protect our land, water and air and manage our natural resources have -- well -- an environmental problem. The regulations they enforce to reduce pollution and to ensure the wise use of trees, minerals, water and land are generating huge amounts of paper. Businesses must send in forms and documents to show they are abiding by the latest state and federal environmental regulations. Departments of environmental protection and natural resources have to churn out copies of the same documents for lawyers, federal bureaucrats and the public. It adds up to a lot of consumed trees, not to mention the side effects of pollution from pulp production and land development for offices and warehouses to store the paper. At the same time, paper generated by the permits and regulations consumes scarce government resources to cover the filing, distribution and analysis of the documents.

For example, Iowa's Air Quality Bureau processes permits that allow the controlled emission of air pollution in the state. In 1996, the bureau will issue only 283 permits, but one application for a permit can run 7,000 pages. The bureau received as many as 3 million pages of documents this year, all of which have to be carefully analyzed to ensure the state doesn't allow too much pollution into the air. "We were in a real bind because of all the paper," said Peter Hamlin, chief of the Air Quality Bureau. "We've had to rent warehouse space to help out with storage."

In Utah, where water is a precious natural resource, the story is the same. An individual or business just can't drill a well or tap into nearby surface water. They have to obtain the rights to use the water through a special process that's administered by the state's Division of Water Rights, in the Department of Natural Resources. Since 1897, Utah has been tracking all state water rights and has on file 8 million pages of documents, all of which are open to public access.

As environmental controls tighten and management of limited resources becomes more complex, the amount of regulation in this field can only grow. To avoid a regulatory collapse brought on by too much paper, state agencies are turning to imaging technology to alleviate the burden of storing, retrieving, distributing and processing documents. Advances in client/server technology, object-oriented software, document management, workflow, relational databases, high-speed scanners, CD-ROM storage and the Internet make the job of protecting and regulating the environment more manageable.

In a report produced by Vermont's Agency of Natural Resources, potential imaging applications include state land records, permit applications, publications, hazardous site manifests, well logs, engineering drawings and permit application site plans, hunting and fishing licenses, staff training and public education. Despite the numerous possibilities, imaging is a relative newcomer to the field of environmental protection and natural resources. While the number of installations is growing, the imaging applications now in operation are few and their scale is often quite large.

Take, for example, Florida's Department of Environmental Protection, which has installed a $7.5 million document imaging system to process documents for the statewide cleanup of underground storage tanks that are contaminating groundwater.
The project involved the complete reengineering of the department in charge of waste cleanup, and a massive backfile conversion of paper documents -- of which the state has more than 7 million relating to underground storage tanks alone. The system was built by Digital Equipment Corp., using high-speed Alpha servers, an Oracle database management system and Highland Technologies' Highview imaging and workflow software. According to John Willmott, bureau chief of the department's information services, the imaging system will significantly advance the department's ability to process the authorization and reimbursement for storage tank removal. "That, in turn, speeds up the protection of Florida's environment," he said. Florida's removal and cleanup of underground tanks is a state problem, funded by state legislation. In Iowa, issuing permits that allow the controlled emission of air pollutants is a federal mandate, conducted under the 1990 Clean Air Act. Fortunately, the act stipulates that large polluters have to pay states a fee based on the amount of air pollution released. From that fee the state can fund the use of technology, such as imaging, to manage the information gathered on polluters. When the Air Quality Bureau made plans to use the funds for imaging, the industries that paid the fees demanded a cost-benefit analysis first, to ensure the project wouldn't end up as an expensive boondoggle. "The results showed that the system would pay for itself in less than two years by cutting our labor costs," remarked Hamlin. The bureau installed a $1.5 million imaging system in November, built by Wang and Radian International, a technology firm specializing in environmental projects. The 175-user system consists of Wang's imaging and workflow software, Hewlett-Packard servers, a Cygnet jukebox, an Oracle database, PCs running Microsoft Windows and UNIX workstations. Not only does the imaging system automate the distribution of the permit documents to the bureau's staff, but it also adds value by running some basic calculations based on data that is read by the system's optical character recognition software. "It will calculate the potential emissions generated by an applicant based on the data they submit," said Hamlin. "It's going to save our permit reviewers a tremendous amount of time." In Pennsylvania, imaging is helping the state track the "who, what, when and where" concerning hazardous municipal and industrial waste. The state's Bureau of Land Recycling and Waste Management has installed a $1.2 million document management system that uses imaging to convert documents on hazardous waste manifests and related fee collections -- worth $35 million annually -- into a database of information for environmental analysts. The system, which serves 24 users, can also process incoming faxes, electronic data interchange files, mainframe reports and e-mail messages. The software, an object-based electronic document management product suite, was developed by Vantage Technologies, a firm recently purchased by Wang. According to Bureau Chief Jeff Beatty, the system's biggest benefit is the way it speeds up the flow of information. "That time savings allows us to collect fees much faster than in the past," he said. It has also allowed analysts to spend more time analyzing information and less time searching for it. "It's liberated our analysts in terms of time. That's a positive experience for us." Public access is another service that environmental and natural resource departments must provide.
By linking imaging systems with the Internet, states can extend access far beyond what was ever thought possible. Utah's Division of Water Rights has begun putting documents on the World Wide Web. Iowa's Air Quality Bureau plans to do the same. Though, as Hamlin remarked dryly, "I can't imagine a lot of people will want to read this stuff. Some of it's pretty boring."
<urn:uuid:0849757c-2bce-483f-9907-afdf3156fedf>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Imaging-Takes-on-the-Environment.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00556-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945205
1,396
2.71875
3
From our studies, we know that the International Organization for Standardization (ISO) created the Open Systems Interconnection (OSI) networking model to standardize data networking protocols and to enable communication between all computers and devices across any network anywhere in the world. The OSI model is now mainly used as a point of reference for discussing the specifications of protocols used in network design and operation. The upper layers of the OSI reference model (application, presentation, and session = Layers 7, 6, and 5) define functions focused on the application. The lower four layers (transport, network, data link, and physical = Layers 4, 3, 2, and 1) define functions focused on end-to-end delivery of the data. When we consider the seven layers of the OSI Reference Model, there are two that deal with addressing: the data link layer and the network layer. The physical layer is not strictly concerned with addressing at all, only with sending at the bit level. The layers above the network layer all work with network layer addresses. When we discuss end-to-end delivery of data, we must necessarily talk about how datagrams are addressed. We find that addressing is done at two different layers of the OSI model, and that the two layers use very different types of addresses for different purposes. Layer 2 addresses, such as IEEE 802 MAC addresses, are used for local transmissions between hardware devices that can communicate directly. They are used to implement basic LAN, WLAN, and WAN technologies. In contrast, layer 3 addresses, which are most commonly 32-bit Internet Protocol addresses, are used in internetworking to create a virtual network at the network layer. The most important difference between these types of addresses is the distinction between layers 2 and 3 themselves. Layer 2 MAC addresses enable communication between directly-connected devices residing on the same physical network. Layer 3 IP addresses allow communications between both directly and indirectly-connected devices. For example, say you want to connect to the Web server at http://www.cisco.com. This is a Cisco Web site that resides on a server that has an Ethernet card used for connecting to its Internet service provider site. However, even if you know its Layer 2 MAC address, you cannot use it to talk directly to this server using the Ethernet card in your home PC. This is because these two devices are on different networks. In fact, they may even be on different continents! Instead, these devices communicate at layer 3, using the Internet Protocol and higher layer protocols such as TCP and HTTP. Your request is routed from your home machine through a sequence of routers to the Cisco server. The response is then routed back to you. The communication is, logically, at layers 3 and above. You send the request, not to the MAC address of the server's network card, but rather to the server's IP address. While we can virtually connect devices at Layer 3 through routers, these connections are really conceptual only. When you send a datagram that has been created using the OSI 7-layer model, it is sent one hop at a time, from one router to another, from one physical network to the next. At each of these hops, an actual transmission occurs at the physical and data link layers.
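To make the same-network test concrete, here is a minimal sketch (mine, not from the original post) of the decision a host makes at layer 3: if the destination is on the local network, the frame can be addressed to the destination's own MAC; otherwise it must be handed to the default gateway. All of the addresses below are invented for illustration.

```python
import ipaddress

def next_hop(src_ip, dst_ip, netmask, gateway_ip):
    """Decide the layer-3 next hop: the destination itself, or the gateway.

    Mirrors the rule above: layer 2 can only reach hosts on the same
    physical network; anything else must be routed.
    """
    local_net = ipaddress.ip_network(f"{src_ip}/{netmask}", strict=False)
    if ipaddress.ip_address(dst_ip) in local_net:
        return dst_ip      # same network: frame goes straight to the destination
    return gateway_ip      # different network: frame goes to the default gateway

# Hypothetical addresses for demonstration only.
print(next_hop("192.168.1.10", "192.168.1.20", "255.255.255.0", "192.168.1.1"))  # 192.168.1.20
print(next_hop("192.168.1.10", "198.51.100.5", "255.255.255.0", "192.168.1.1"))  # 192.168.1.1
```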
When your request is sent to your local router at layer 3, which is usually referred to as your default gateway, the actual request is encapsulated in an Ethernet frame using whatever method you use to physically connect to the router. It is addressed and sent to the default gateway router using the router's data link layer MAC address. The same happens for each subsequent step until, finally, the router nearest the Cisco Web server sends the datagram to the destination using the data link (MAC) address of the Cisco Web server's NIC. In my next blog, I will discuss the Address Resolution Protocol (ARP), a method for finding a device's link layer MAC hardware address when only its Internet layer IP address is known. Author: David Stahl
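The hop-by-hop re-framing described above can be sketched in a few lines. This is a toy model (not real packet-handling code; every address is made up) showing that the layer-3 addresses stay fixed end to end while a fresh layer-2 header is built at every hop.

```python
# The IP source/destination never change; the Ethernet frame is rebuilt
# with new MAC addresses at each router along the way.
packet = {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.80"}  # fixed end to end

# (sender MAC, receiver MAC) for each physical hop on the way to the server.
hops = [
    ("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:01"),  # home PC -> default gateway
    ("bb:bb:bb:bb:bb:02", "cc:cc:cc:cc:cc:01"),  # gateway -> next router
    ("cc:cc:cc:cc:cc:02", "dd:dd:dd:dd:dd:01"),  # last router -> server NIC
]

for src_mac, dst_mac in hops:
    frame = {"src_mac": src_mac, "dst_mac": dst_mac, "payload": packet}
    print(frame)  # new layer-2 header each hop, same layer-3 payload throughout
```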
<urn:uuid:08fc0be5-c58c-456f-86cd-054611cdc6c8>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/08/22/network-layer-utilities-end-to-end-data-delivery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00372-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920643
837
4.25
4
Schwager J.,Cerema Direction Territoriale Est Laboratoire Regional Of Nancy | Schaal L.,Cerema Direction Territoriale Est Laboratoire Regional Of Nancy | Simonnot M.-O.,CNRS Reactions and Process Engineering Laboratory | Claverie R.,Cerema Direction Territoriale Est Laboratoire Regional Of Nancy | And 2 more authors. Journal of Soils and Sediments | Year: 2015 Purpose: The increasing surface area of green roofs (GR) may have a significant impact on the quantity and quality of urban drainage. However, the chemical quality of effluents produced by GR in comparison to atmospheric deposit and other roof surfaces has to date been poorly assessed. It is necessary to determine whether a green roof acts as a sink or source of pollutants. This work was conducted to study the pollutant sorption and release capacity of four materials commonly used to build green roofs. Materials and methods: Leaching tests were performed on three substrates and one drainage material. Sorption kinetics and isotherms were also established for Cu and Zn by means of batch experiments. Results and discussion: Results showed the variability of release according to the material and pollutant considered. The equilibrium time for adsorption was long (5 h to 3 days) for all materials. Expanded clay was identified as the material with the highest ability to retain Zn and Cu; also, desorption was limited with this drainage material. In the substrates, Cu was mainly sorbed by organic materials, which induces a significant desorption rate due to the leachability of the organic matter. Conclusions: In conclusion, the study showed that the effect of green roofs on water quality is strongly dependent on the materials used. That is why a characterization of the leaching and sorption capacities of materials should be carried out prior to green roof construction in a context of storm water quality management. © 2014, Springer-Verlag Berlin Heidelberg.
<urn:uuid:4f4c884d-eebb-4e3e-a1fe-c82747f49147>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/cerema-direction-territoriale-est-laboratoire-regional-of-nancy-2464911/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00372-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925688
400
2.734375
3
The process of globalization, increasingly driven by new information technologies, is dramatically changing what economic development means for state and local government. Today, the economic health of any region hinges on competitiveness, not just with neighboring regions, but with communities that may be located halfway around the globe. "Four years ago, when I became director general of the World Trade Organization, it was almost universally accepted that regionalism would be the new shape of world trade," Renato Ruggiero told senior elected officials from the 29 nations of the Organization for Economic Cooperation and Development (OECD). "Today the reverse is true. In a digital world -- where Singapore is as close to Toronto as Chicago -- the idea of regional preference and integration begins to lose its logic. Regional arrangements still have an important role to play, but increasingly as catalysts for the global system, not as alternatives." Already a quarter of the global output is exported, Ruggiero said, up from just 7 percent in 1950. Developing countries' share is even higher -- almost 40 percent. If this trend continues, soon well over 50 percent of all economic production in most areas is likely to be for export. Moreover, these exports will include not just goods, but also an increasing number of services that will be ordered and often delivered digitally over the Internet. So the whole emphasis of economic development at the state and local levels will shift. What is important will be not what can be brought or pulled into a region to make it economically attractive and robust, but rather what one can push out from it to other economic centers around the world. "[Regions] are obligated to turn outward, not inward," Ruggiero said. Economic development in the future will be increasingly tied to electronic commerce, or what Lou Gerstner Jr., chairman and chief executive of IBM, prefers to call e-business. "Last year we started using the term e-business to describe all the ways individuals and institutions will derive value from the Net," he said. "We coined this term because what's going on here transcends just e-commerce. E-business includes transactions among employees inside an enterprise; among trading partners in a supply chain; and it includes the way governments deliver services to citizens, educators teach students, and physicians treat patients. "The real revolution isn't about the end-user experience, and it's not even about the technology. The real revolution is about banks, universities, government agencies and commercial enterprises making fundamental change in the way they currently do things." Gerstner argues that the existing models simply do not apply. "Not only is e-commerce screaming ahead, moving much faster than any bureaucracy, committee or legislating process ever could," he said, "it's a grave error to think the Internet will develop under the kind of regulation we could apply to, say, the phone system back in the days when coal and steel were determinants of a nation's greatness, and many markets were underdeveloped." Governments around the world now regard e-commerce as the engine of economic growth in the decades ahead. "It is a new age in which the accumulation and distribution of information will form the basis of a new society," said Sanzo Hosaka, Japan's secretary for International Trade and Industry, at the recent OECD Ministerial Conference. "Electronic commerce will be at the core of all this.
Electronic commerce has an enormous potential to revolutionize the very roots of our societies and economies." U.S. Secretary of Commerce William Daley said, "I am the last American commerce secretary of this century -- assuming I do my job right. I don't know of another commerce secretary this century who saw, as I am seeing, a technology totally change the way commerce is done. Here is a technology that could give every business, regardless of size, the ability to sell products to every corner of the planet like a Fortune 100 company does. This is an era of truly sweeping changes." "In some ways the Internet has come to symbolize [a] powerful, yet uncertain world," Ruggiero said. "For we are not just talking about a new service or a new communications network. We are talking about technologies that are shaping a new kind of global economy -- the closest thing yet to a single, borderless world market. This development has many implications, but the most important is that it is greatly accelerating the process of globalization -- and making it even more irreversible than it is already." This electronic marketplace is redefining our notions of interdependence economically and socially. "In a certain sense, we find ourselves between two worlds -- between an economic system that is increasingly global, and institutions and structures which have not caught up with this complex world," Ruggiero said. "The challenge we face is to bring these two worlds together by reshaping the global rules and policies needed to support our globalizing economy." The formation of new global rules to provide a framework for electronic commerce is precisely why an OECD Ministerial Conference was held in October in Ottawa, where senior representatives from the 29 OECD member nations agreed on a number of broad principles concerning such issues as taxation, privacy and consumer protection. It was agreed, for instance, that taxation should be neutral and equitable, and that "taxpayers in similar situations carrying out similar transactions should be subject to similar levels of taxation." If this general principle is pushed through into real tax policies, it would mean, for example, that no regional area could offer tax incentives to encourage regional business. Moreover, it was agreed generally that consumption or sales taxes would be levied at the point of consumption of the product. So, in a sense, economic development in the future, certainly insofar as it matters to state and local government revenues, has as much to do with what is being consumed in any region as with what is being produced there -- perhaps even more. The more consumption, the more tax money local and state governments will collect to finance important services to citizens. People in any region may be buying goods and services from many countries around the world. This has far-reaching implications on economic development strategies. "Governments are finding they have similar opportunities to create competitive advantage, increase service and build productivity," IBM's Gerstner said. "In fact, I believe that governments that in the past competed for industrial investment or jobs based on incentives like tax structures or access to skilled labor will compete in the future in large measure on their electronic readiness and capability." How does any government gauge its preparedness? How do officials think through what steps to take?
"Industry has to help you make this assessment, and we're building tools to do just that," Gerstner said. He argued that realizing the full potential for e-commerce ultimately depends not on technology but on government policy. "Governments hold the reins here and can give this new economic opportunity its head or bring it to its knees," he said. "And if governments refuse to cooperate on this one, the latter is destined to happen -- because unilateral or uncoordinated tax policies will cause confusion and stifle growth." An OECD report prepared for member countries, entitled "The Economic and Social Impact of Electronic Commerce," makes the point that e-commerce "has the potential to radically alter economic activities and the social environment. "The overall effect of electronic commerce on employment will be the balance of direct new jobs, indirect jobs created by increased demand and productivity, and job losses (due to workers, e.g., retailers and other intermediaries, being replaced by electronic commerce)," the report continues. "Gains and losses may differ by industry, by geographical area, by skill group." Calculating the likely impact of e-commerce on jobs in any area is proving to be a complex task. "To assess the impact of electronic commerce, it is essential to understand for which industries it is generating or will generate new demand and growth, which types of jobs will be destroyed and which created, and what the overall needs are in terms of skills," the OECD report says. Such assessment is further complicated by the decline of geographical importance in many production activities. "The 'death of distance' that is intrinsic to information networking is probably the single most important economic force shaping society at the dawn of the 21st century," the report states. "Both for individual citizens and for businesses, affordable access to the information infrastructure has become a necessity for effective participation in a knowledge-based economy and society." However, other factors will also play a significant role in the economic prosperity of any region. Two of the more important are the skills and digital literacy of the workforce, and the confidence and trust people around the world have in the companies and governments in any region. "While both conventional and electronic markets rely on high levels of mutual trust, electronic transactions create specific challenges for both businesses and individuals," the OECD report observes. "Because they are remote, these exchanges make mechanisms that reduce or eliminate risk especially important." In industrial societies, economic development had much to do with building infrastructure that would attract new businesses to a region. However, if electronic commerce takes off as most experts now expect, the emphasis will shift to electronic infrastructure. This, Ruggiero said, involves making computer and telecommunications networks available and compatible worldwide. "The Internet revolution has not developed in a vacuum," he said. "It stands on the shoulder of an equally profound revolution in global telecommunications. Here again, the trading system is playing an important part. In February , 69 countries, representing 95 percent of the global market, concluded a massive agreement to free telecommunications services, opening many markets which had up till then been dominated by state-owned monopolies. 
Two months later, 40 countries, accounting for over 90 percent of the world trade in IT products, agreed to the elimination of tariffs on computer and telecommunications products by the start of the year 2000."
Improve the Social Fabric
These and other developments mean there really will be very little difference between the electronic infrastructure available to businesses from one area to another. Economic development in the future really will depend to a large extent not on infrastructure, but rather on social issues. In many ways, education and measures to strengthen a strong sense of community are important ingredients in emerging models of economic development. Electronic trade abolishes distances and frontiers. "But without investments in the human infrastructure -- skills, training, know-how -- no amount of investment in physical infrastructure will help," Ruggiero said. Increasingly, economic development means investing in people and the things that attract people to a region, not because business is there, but rather because they want to live there. It means social policies that help to create communities where people feel safe and satisfied and where they have the support needed to effectively develop their skills and knowledge to continue to be personally competitive in the world market. So the way it's starting to shape up, the new economic development model is really about people and community development.
<urn:uuid:fec46fe6-0005-4104-acdb-5bf93a107891>
CC-MAIN-2017-04
http://www.govtech.com/featured/The-New-Model-Its-Now-About.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00098-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961031
2,247
2.546875
3
ZoneAlarm revealed the common behaviours of younger Facebook users that increase their susceptibility to cyberbullying, predators and other security threats. A ZoneAlarm report examined the online activities of 600 children worldwide, aged between 10 and 15, who regularly use Facebook. Three activities in particular showed a positive correlation with the occurrence of security threats: children adding Facebook "friends" who may be strangers, playing Facebook games that request access to private account information, and using Facebook late at night. Of the three activities that contribute to an increase in security threats, late-night usage is highlighted as a major factor. According to the survey, children who are active on Facebook after midnight are exposed to more risks, and experience almost twice as many problems as users who log out before midnight. These late-night users -- which the study calls Facebook's "Wild Children" -- are four times more likely to have large friend networks, consisting largely of individuals whom the users have never met in person. Alarmingly, 60% of Facebook "Wild Children" report having experienced serious problems including cyberbullying, account hacking, and unwanted attention from strangers.
43% of children on Facebook have experienced at least one serious problem:
- Problems may include cyberbullying, hacked accounts and unwanted contact from strangers.
- 40% of children take Facebook quizzes / play Facebook games that access personal information.
- 33% of children have Facebook friends they have never met.
Almost 25% of children surveyed are active on Facebook after midnight -- Facebook's "Wild Children":
- Many "Wild Children" are still online after 3am.
- Children online after midnight are four times more likely to have extremely large friend networks.
- 44% of Facebook "Wild Children" have Facebook friends they have never met in person.
- 40% of Facebook "Wild Children" have Facebook friends who do not know any of their other friends.
Facebook "Wild Children" experience twice as many serious problems:
- 60% report serious problems on Facebook.
- 15% report they have been approached by strangers.
- 20% report they have been cyberbullied.
- Despite these problems, 30% say they are unconcerned about the dangers on Facebook.
- And 30% have done nothing to improve their privacy.
The complete report is available here.
<urn:uuid:a90ef0dd-f6ad-44c1-84ee-b0f9f198ded5>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/11/12/young-facebook-users-are-most-vulnerable-to-security-threats/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00428-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94775
489
2.875
3
On Web's 25th Anniversary, Web Inventor Berners-Lee Speaks Out
The 25th anniversary of the creation of the World Wide Web was marked on March 12, and just in time to commemorate that special moment, Sir Tim Berners-Lee, who is credited with inventing the Web, presented his observations in a special guest post on The Google Official Blog. "Today is the web's 25th birthday," wrote Berners-Lee. "On March 12, 1989, I distributed a proposal to improve information flows: 'a web of notes with links between them,'" while working for the CERN laboratory, he said. "Though CERN, as a physics lab, couldn't justify such a general software project, my boss Mike Sendall allowed me to work on it on the side. In 1990, I wrote the first browser and editor. In 1993, after much urging, CERN declared that WWW technology would be available to all, without paying royalties, forever." Those first pieces formed the basis for the work of the tens of thousands of people who began working together to build the Web, wrote Berners-Lee. "Now, about 40 percent of us are connected and creating online. The Web has generated trillions of dollars of economic value, transformed education and health care and activated many new movements for democracy around the world. And we're just getting started." Now on the 25th anniversary of those events, Berners-Lee wrote, it's time to celebrate and to "think, discuss—and do" in regard to the future of the web as we know and use it. "Key decisions on the governance and future of the Internet are looming, and it's vital for all of us to speak up for the web's future," he wrote. "How can we ensure that the other 60 percent around the world who are not connected get online fast? How can we make sure that the web supports all languages and cultures, not just the dominant ones? "How do we build consensus around open standards to link the coming Internet of Things? Will we allow others to package and restrict our online experience, or will we protect the magic of the open web and the power it gives us to say, discover, and create anything? How can we build systems of checks and balances to hold the groups that can spy on the net accountable to the public? These are some of my questions—what are yours?" To answer these and future questions and challenges, Berners-Lee asked in his post that every Web user get involved in the future of this tool. "On the 25th birthday of the web, I ask you to join in—to help us imagine and build the future standards for the web, and to press for every country to develop a digital bill of rights to advance a free and open web for everyone. Learn more at webat25.org, and speak up for the sort of web we really want with #web25." It's amazing to realize that the Web has only been with us for 25 years and that companies like Google, which was founded in 1998 in a Menlo Park, Calif., garage that was rented for $1,700 a month, have been around for even shorter periods. Google just celebrated its 15th birthday on Sept. 26, 2013. The world of search has changed quite a bit in those brief 15 years. Back then, users sat down to their desktop computers with their screechy dial-up modems and waited while information slowly loaded on their machines. Back then, Google was a pure-play Internet search company with a starkly simple home page and a revolutionary plan to improve Web search using its own special algorithms.
Today, search is still the crown jewel of Google's business, but it was a springboard to the company becoming a highly diversified global enterprise that dominates the Web with search-driven online advertising, cloud applications as well as myriad other tangential ventures that it can afford to pursue due to its wealth and influence.
<urn:uuid:bf3008ce-dd94-4a58-8278-ddaac5b1ad52>
CC-MAIN-2017-04
http://www.eweek.com/blogs/upfront/on-webs-25th-anniversary-web-inventor-berners-lee-speaks-out.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964918
818
2.6875
3
Oracle Implements Thai Electronic Medical Records
By Stacy Lawrence | Posted 07-17-2005
The project involves the development of a large-scale database of unified electronic health records and is intended to provide safer and more effective medications as well as reduced health care costs. Pharmacogenomics describes how a person's genetics affects his or her body's response to medications. It is the basis for individualized medicine, through which drugs are tailored to a person's genetic makeup. Pharmacogenomics could be a leap forward for health care. At the clinical trial level, it involves information sharing, investigator and patient management, and terminology translation. At the health care level, it involves biomedical surveillance, determining clinical pathways for standardized patient care and gathering patient and physician information into a centralized database. Keeping electronic health records is also an effective monitoring tool to curb outbreaks of emerging diseases such as SARS and avian influenza, more popularly known as bird flu. These outbreaks can have a devastating effect on the economy and the livelihood of Thais. The availability of a tool for fighting the spread of such diseases in a timely manner could help to control outbreaks, especially at a national level. The first step in the project involves developing a nationwide system to capture clinical and genetic patient information that can be used to define a correlation to a benchmark of information about the general population. This will allow for the classification of patients by genostrata. "This ground-breaking initiative will have far-reaching impact on Thailand's health care system," said Mr. Suvit Khunkitti, the Minister of Information and Communication Technology. "It's clear that economies and nations that can ride the wave of life sciences and biomedical innovation will grow and prosper," he said, "while those that fall behind could miss out on the world's next industrial revolution and experience a decline in growth rates, incomes and power."
<urn:uuid:57ffb210-8df4-47bc-b6cf-64a75bdef58a>
CC-MAIN-2017-04
http://www.cioinsight.com/print/c/a/Health-Care/Oracle-Implements-Thai-Electronic-Medical-Records
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00272-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927354
390
2.703125
3
IETF Outcomes Wiki Launched
As an organization, the Internet Engineering Task Force (IETF) measures its success by its publication of RFCs (see previous article). It does not explicitly ask itself whether published work is adopted and used by the greater Internet community. The IETF's dialogue about success started to change with the production of RFC 5218, "What Makes for a Successful Protocol?" which documented case studies and empirical data about some of the factors that appear to correlate with success, in terms of community uptake for IETF work. Taking a different approach in assessing long-term IETF impact, another tool is now available: a wiki that lets community participants list the success or failure of significant standards. The Outcomes Wiki divides listings according to the "areas" used for managing technical work in the IETF, such as Applications or Transport. Outcomes are rated according to a 6-point scale, ranging from "complete failure" to "massive adoption, plus extensive derivative work." The wiki began in June 2009, as an independent effort among a small set of IETF participants, to test its feasibility and evolve its design. For example, it quickly became clear that the single attribute of success vs. failure needed to be qualified by another attribute that indicates who the work is intended for, called "Target Segment." Work that is intended to support the internal operations of an Internet Service Provider (ISP) is not necessarily visible to the billions of Internet users and will, at best, be part of only a few thousand organizations. In terms of Internet scale, that is considered minuscule. However, wide adoption of a tool among ISPs can have substantial benefit, and thereby qualify as "massive adoption." The wiki can serve both as a means of recording the IETF's track record of successes and failures, as well as providing a means of encouraging community dialogue about the quality of different IETF efforts. In addition, it can provide a window onto completed IETF work for the broader Internet community. D. Thaler and B. Aboba, "What Makes for a Successful Protocol?" RFC 5218, July 2008.
Final Phase of Four-byte AS Number Policy Begins in APNIC Region
From 1 January 2010, the Asia Pacific Network Information Centre (APNIC) ceased to make a distinction between four-byte only and two-byte only Autonomous System (AS) numbers. Instead, all AS numbers are now considered to be four-byte AS numbers. This change marks the third phase of the transition to four-byte AS numbers. For more information on the implementation phases of the four-byte AS number policy, please see "Policies for Autonomous System number management in the Asia Pacific region," section 6.3, "Timetable for moving from two-byte only AS numbers to four-byte AS numbers," available from:
To learn more about how the transition to four-byte AS numbers may affect your network, see: http://icons.apnic.net/asn
Charting the Course for Future Internet Leaders
As the importance of the Internet grows in all aspects of modern life, so too do the challenges of those in positions of leadership and responsibility. Responding to the need for well-qualified leadership, the Internet Society (ISOC) is now accepting applications from people seeking to join the new generation of Internet leaders to address the critical technology, policy, business, and education challenges that lie ahead.
Successful candidates in ISOC's Next Generation Leaders Program will gain a wide range of skills in a variety of disciplines, as well as the ability and experience to work with people at all levels. This program, under the patronage of the European Commission, blends course work and practical experience to help prepare young professionals (aged from 20 to 40) from around the world to become the next generation of Internet technology, policy, and business leaders. "The Internet Society's Next Generation Leaders Program is a unique opportunity to identify potential Internet leaders and help them accelerate their careers," said Bill Graham, responsible for strategic global engagement at ISOC. The key to the Internet's success lies in the Internet Model of decentralized architecture and distributed responsibility for development, operation, and management. That model also creates important leadership opportunities, especially in those spaces where technology, policy, and business intersect. "We have designed the Next Generation Leaders Program to prepare young professionals for leadership, bridging the boundaries between business, technical development, policy, and governance on local, regional, and international levels," said Graham. Full details of the Next Generation Leaders Program are available at: http://www.isoc.org/leaders/
<urn:uuid:8e131533-7479-41c3-a84e-00ac502977a6>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-47/131-fragments.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926109
953
2.6875
3
Researchers with Sandia National Laboratories have tied together 300,000 virtual Android-based devices in an effort to study the security and reliability of large smartphone networks. The Android project, dubbed MegaDroid, is carefully insulated from other networks at the Labs and the outside world, but can be built up into a realistic computing environment, the researchers stated. That environment might include a full domain name service (DNS), an Internet relay chat (IRC) server, a web server and multiple subnets, said John Floren, a computer scientist with the project. MegaDroid features what Floren called a "spoof" Global Positioning System (GPS) experiment. Researchers created simulated GPS data of a smartphone user in an urban environment, an important experiment since smartphones and such key features as Bluetooth and Wi-Fi capabilities are highly location-dependent and thus could easily be controlled and manipulated by rogue actors, Floren said. According to a statement from Sandia: The researchers fed data into the GPS input of an Android virtual machine. Software on the virtual machine treats the location data as indistinguishable from real GPS data, which offers researchers a much richer and more accurate emulation environment from which to analyze and study what hackers can do to smartphone networks, Floren said. The idea is to help cyber-researchers better understand and ultimately limit the damage from network disruptions due to glitches in software or protocols, natural disasters, acts of terrorism or other causes. These disruptions can cause significant economic and other losses for individual consumers, companies and governments, Sandia said. In the end, the group's work is expected to result in a software tool that will let others in the cyber research community model similar environments and study the behaviors of smartphone networks. Ultimately, the tool will enable the computing industry to better protect hand-held devices from malicious intent. The Sandia testbed comes on the heels of a recent report by the Government Accountability Office that lamented the rapid growth of attacks on mobile devices. For example, the GAO found:
- The number of variants of malicious software aimed at mobile devices has reportedly risen from about 14,000 to 40,000, or about 185%, in less than a year.
- New mobile vulnerabilities have been increasing, from 163 in 2010 to 315 in 2011, an increase of over 93%.
- An estimated half million to one million people had malware on their Android devices in the first half of 2011.
- Three out of 10 Android owners were likely to encounter a threat on their device each year as of 2011.
According to Juniper Networks, malware aimed at mobile devices is increasing. For example, the number of variants of malicious software, known as "malware," aimed at mobile devices has reportedly risen from about 14,000 to 40,000, a 185 percent increase in less than a year, the GAO reported. "Threats to the security of mobile devices and the information they store and process have been increasing significantly. Cyber criminals may use a variety of attack methods, including intercepting data as they are transmitted to and from mobile devices and inserting malicious code into software applications to gain access to users' sensitive information. These threats and attacks are facilitated by vulnerabilities in the design and configuration of mobile devices, as well as the ways consumers use them.
Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches," the GAO stated. MegaDroid follows the lab's 2009 testbed made up of over 1 million virtual Linux machines, known as MegaTux, and a later project that focused on the Windows operating system, called MegaWin. Sandia researchers created those virtual networks at large scale using real Linux and Windows instances in virtual machines.
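Sandia has not published the MegaDroid tooling itself, so the following is only a rough illustration (mine, not Sandia's) of what the simulated GPS input described above can look like. It emits standard NMEA GPGGA sentences, the format real GPS receivers produce, for an invented walk through a city; the coordinates and fix details are made up.

```python
from datetime import datetime, timezone

def nmea_checksum(body: str) -> str:
    """XOR of all characters between '$' and '*', rendered as two hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def gpgga(lat: float, lon: float) -> str:
    """Build a minimal GPGGA fix sentence for decimal-degree coordinates."""
    t = datetime.now(timezone.utc).strftime("%H%M%S")
    lat_field = f"{int(abs(lat)):02d}{(abs(lat) % 1) * 60:07.4f}"   # ddmm.mmmm
    lon_field = f"{int(abs(lon)):03d}{(abs(lon) % 1) * 60:07.4f}"   # dddmm.mmmm
    body = (f"GPGGA,{t},{lat_field},{'N' if lat >= 0 else 'S'},"
            f"{lon_field},{'E' if lon >= 0 else 'W'},1,08,0.9,10.0,M,0.0,M,,")
    return f"${body}*{nmea_checksum(body)}"

# Simulate a smartphone user moving through an urban grid, one fix per step.
lat, lon = 35.0844, -106.6504  # invented starting point
for step in range(3):
    print(gpgga(lat + step * 0.0001, lon))
```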
<urn:uuid:960b16a5-b036-4bb5-80c4-031bf81cdf1f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223239/malware-cybercrime/sandia-lab-fires-up-300-000-virtual-android-devices-to-test-out-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937699
770
3.390625
3
In October 2015, at the ACM CCS 2015 conference, my colleagues Dennis Andriesse and Victor van der Veen from the Vrije Universiteit Amsterdam presented a paper on control-flow integrity entitled "Practical Context-Sensitive CFI", co-authored by me, a researcher at Lastline Labs. This paper discusses PathArmor, a system that protects users from exploits using return-oriented programming (ROP) to launch an attacker's code on the victim machine. In a nutshell, PathArmor uses recent extensions of CPU hardware to collect detailed information about the execution of a program at runtime. It uses this data to examine if the program behaves "as expected". For example, if an attacker exploits a vulnerability to trick the program into executing a shellcode, PathArmor raises an alert since it sees that the execution does not conform to the behavior implemented by the programmer. More specifically, the system uses the 16 Last Branch Record (LBR) registers available in modern Intel processors to store the targets of control flow changing instructions (such as jump) exercised at runtime.
What is Control-Flow Integrity (CFI)?
CFI is a well-known technique in the research world and has been around for more than a decade. In its purest form, CFI reliably stops code reuse attacks, such as ROP or return-to-libc, against binary programs. Typically, such attacks circumvent common defense techniques such as DEP/W+X or ASLR by diverting a program's control flow and executing a set of Return-Oriented Programming (ROP) gadgets. CFI prevents exploitation attempts by ensuring that all control transfers follow the program's original Control Flow Graph (CFG), as defined by the programmer. For instance, if function A() calls B(), CFI checks that any return from B() continues at B's callsite in A(). If, on the other hand, an exploit can force function B() to return to a different location of the attacker's choosing, CFI finds this discrepancy and terminates the program before an attacker can do any harm.
What's the problem with CFI?
Even though CFI was originally proposed in 2005, researchers are still struggling to design a practical implementation. To make enforcing control-flow integrity efficient, one flavour of common CFI solutions relaxes constraints on the legal targets of control edges. For example, they may dictate that a call instruction needs to always target a function in the program, without checking if the programmer ever calls this particular function at the call site in question. While doing so stops many current exploitation attempts with reasonably low performance overhead, it unfortunately also leaves a lot of freedom for attackers. A string of recent research publications shows how to circumvent all these lightweight solutions with relatively low effort. A fundamental problem with current CFI solutions is that they enforce only context-insensitive CFI policies. In other words, they examine control edges in isolation. This allows attackers to freely chain edges together and form paths that are infeasible in the original CFG. For example, if (at runtime) function A() has called B(), context-insensitive CFI policies would allow a return from B() to any callsite of B() in the program, not only to A(). However, when coming from A(), a context-sensitive CFI would only allow a backward edge to A(). Although the idea of context-sensitive CFI was acknowledged by the research community years ago, it was dismissed as too resource-expensive in practice.
How does PathArmor address this limitation?
PathArmor implements a context-sensitive, low-overhead CFI solution: it considers each control transfer in the context of recently executed transfers, so CFI checks are enforced per path, rather than per edge. In the example above, PathArmor monitors where function B() was called from, and it enforces that execution returns to its call site in A(). As illustrated in the figure below, PathArmor builds on the following major components: 1) a kernel module employing hardware support to efficiently monitor execution paths, 2) an on-demand static analysis to examine if the observed paths are legal in the program, and 3) a binary instrumentation to actually enforce the CFI invariants.
- Kernel module. The kernel module used by PathArmor has two tasks: first, it intercepts sensitive/dangerous system calls that are required to launch a successful attack, e.g., exec or mprotect. Next, on each such execution point, it provides a Branch Record core to support control transfer monitoring. We use the 16 Last Branch Record (LBR) registers available in modern Intel processors, which lets us observe paths of recently exercised control transfers. Since this monitoring comes with virtually no overhead, PathArmor yields comparable performance to previous CFI implementations. At the same time, it has enough runtime information to much more thoroughly examine if the program behaves "as expected", offering significantly stronger security protection.
- Static analysis. The static analysis component verifies at runtime if a particular path reported by the kernel module is valid. To this end, it consults the CFG of the binary and searches for the path in question. The paper discusses details on how PathArmor builds the CFG and overcomes the path explosion problem.
- Dynamic instrumentation. The dynamic binary instrumentation component sets up a communication channel with the kernel module to enable and manage path monitoring. In practice, this component is also essential for proper handling of libraries used by the protected program - for details please refer to the paper.
Want to learn more about PathArmor?
For more info on PathArmor, check out the full paper here. To try it out and see PathArmor prevent an exploit from taking over your machine, you may download a prototype implementation from https://github.com/dennisaa/patharmor.
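To make the per-path idea concrete, here is a toy path check in the spirit of what is described above -- this is not PathArmor's code, and it is far simpler than its hardware-assisted monitoring and on-demand CFG analysis. The call graph and the recorded branch trace are invented for illustration.

```python
# Toy context-sensitive check: given a static call graph, verify that every
# (call, return) pair in a recorded branch trace matches -- a return must go
# back to the call site that produced it, not merely to *some* call site.

CALL_GRAPH = {"main": ["A"], "A": ["B"], "C": ["B"]}  # invented CFG: who may call whom

def path_is_valid(trace):
    """trace: list of ('call', caller, callee) / ('ret', callee, target) events."""
    stack = []
    for kind, src, dst in trace:
        if kind == "call":
            if dst not in CALL_GRAPH.get(src, []):
                return False          # edge does not exist in the CFG at all
            stack.append(src)
        else:  # 'ret'
            if not stack or stack.pop() != dst:
                return False          # context-sensitive: must return to actual caller
    return True

# A() calls B(), B() returns to A(): fine.
print(path_is_valid([("call", "A", "B"), ("ret", "B", "A")]))  # True
# B() "returns" to C() -- a legal callsite of B() in general, but not on this path,
# so a context-sensitive policy rejects it where a per-edge policy would not.
print(path_is_valid([("call", "A", "B"), ("ret", "B", "C")]))  # False
```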
<urn:uuid:4b55818e-1b08-4f7e-8594-02041a0f6a96>
CC-MAIN-2017-04
http://labs.lastline.com/patharmor-practical-rop-protection-using-context-sensitive-cfi
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00014-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894449
1,207
3.140625
3
What You'll Learn
- Describe how z/OS now integrates problem determination simplification
- Understand how all elements work together for an integrated IBM soft failure solution
- Implement and use the following soft failure Detect/Diagnose/Avoid capabilities in z/OS, and learn how they interact:
  - Analysis/Diagnosis: Predictive Failure Analysis + Runtime Diagnostics + IBM zAware
  - Avoidance: Health Checker
  - First point of defense: z/OS components
  - Problem data management: IBM z/OS Management Facility (z/OSMF)
- Identify and resolve problems in a z/OS environment
- Know the procedures to properly collect problem data, avoid potential problems, and diagnose failures
- Implement the basic diagnostic approaches for various problems such as abends, loops, hangs, overlays
- Describe the various kinds of problem documentation available in z/OS debugging
- Use the common tools for problem determination, and the main sources of diagnostic data (logs, dumps, tracing, performance documentation helpers)
Who Needs To Attend
This intermediate course is for anyone who has to diagnose software problems that occur while running the operating system. This person is typically a system programmer for the installation. Information Technology (IT) professionals responsible for z/OS problem determination and diagnosis and subsystem programmers will also benefit from this class.
<urn:uuid:8dc85ad2-bd03-4d33-b8f8-c9c54f4bc490>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/120615/zos-health-check-and-troubleshooting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00255-ip-10-171-10-70.ec2.internal.warc.gz
en
0.840643
279
2.59375
3
What's Fused Fiber Optic Coupler
A fused fiber optic coupler is a kind of fiber optic coupler formed using Fused Biconical Taper (FBT) technology; thus, it is also known as an FBT coupler. As an important passive component in fiber optic communication systems, the fused fiber optic coupler performs functions that include light branching and splitting in passive networks, wavelength multiplexing / de-multiplexing, filtering, polarization selective splitting and wavelength independent splitting.
Fused Biconical Taper (FBT) Process
A fused fiber optic coupler is a structure formed from two independent optical fibers. These two parallel optical fibers are twisted, stretched and fused together so that the coupling substantially takes place through interaction between the cladding modes. During the operation, the power output values from the output ports are monitored, and the process can be stopped at any desired coupling ratio (Figure 1). This process is known as the Fused Biconical Taper (FBT) process. The fused biconical taper is the most widely used method in the fabrication of optical fiber couplers, with the advantages of low excess loss, precise coupling ratios, and good consistency and stability.
Figure 1. The Fused Biconical Taper process
How Does Fused Fiber Optic Coupler Work
Before talking about the working principle of FBT couplers, let us first understand the evanescent wave. An evanescent wave is a near-field wave with an intensity that exhibits exponential decay, without absorption, as a function of the distance from the boundary at which the wave was formed (Figure 2; the red tails are the evanescent wave). In the FBT process the cores of two identical parallel fibers are so close to one another that the evanescent wave can "leak" from one fiber core into the other core, which allows an exchange of energy. FBT couplers work as a result of energy transfer between the optical fiber cores, and the energy transfer is dependent on the core separation (d) and the interaction length (L). It is easy to see that if the coupling length is long enough, a complete transfer of energy can take place from one core into the other. If the length is longer still, the process will continue, shifting the energy back into the original core.
Figure 2. Light propagating down an optical fiber. The red region represents the evanescent wave.
For example, here is a 2×2 50/50 coupler (Figure 3). Assume that we launch 1mW into port 1 and 1mW into port 4. Obviously, we will measure 1mW at each output port, the light from each input port having been split into two equal parts. Likewise, if we launch 1mW into port 1 and 2mW into port 4, each input is split into two equal parts again, so now we end up with 1.5mW at each output port (0.5mW contribution from port 1 and 1mW contribution from port 4). Similarly, if it is a 1×2 coupler and we launch 2mW into port 1, we will end up with 1mW at ports 2 and 3.
Figure 3. 1×2 or 2×2 50/50 coupler
As we know, optical fiber couplers allow bi-directional coupling and can be used to either split or combine signals. This is what we call "bidirectionality". Through the above example, we may have an idea of reversing the launch direction on a 2×2 standard coupler. In fact, the process is completely bi-directional. However, confusion sometimes arises when presented with a 1×2 coupler. The apparent non-symmetry of the device creates the false impression that the device somehow works differently.
Continuing with the 1×2 coupler, what happens if light is launched into one of the two "output" ports (i.e., ports 2 or 3 above)? Does 100% of the light exit at port 1? I am sorry to tell you the answer is no. Light still "wants" to exit from port 4 as well. So if we consider 1mW launched into port 2, we will have just 0.5mW exiting from port 1. Or, if we launch 1mW into port 2 and 2mW into port 3, we will have 1.5mW exiting from port 1. Why? Because a 1×2 coupler is just a 2×2 coupler with one fiber cut short, crushed (to reduce back reflection from the end facet), and tucked away inside the housing of the coupler. In this case we can easily understand how a fused fiber optic coupler works.
Types of Fused Fiber Optic Coupler
Fused fiber optic couplers should be selected based on window type or fiber type. Regardless of the port types used, fiber optic couplers can be designed for a single window, dual windows or even three windows (wideband). In addition, according to fiber type and port count, there are couplers in 1×2, 1×3, 1×4, 1×5, 1×6, 1×8, 1×12, 1×16, 1×18, 1×20, 1×24, 2×2 and 2×4 configurations, with single mode or multimode fiber.
Applications of Fused Fiber Optic Coupler
As an important passive component in fiber optic communication systems, the FBT coupler has a wide range of applications. Thanks to its small size, the FBT coupler is available individually or integrated into modules for fiber protection switching, MUX/DMUX, optical channel monitoring, and add/drop multiplexing applications.
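The port-power arithmetic worked through above is easy to reproduce. Below is a small sketch (mine, not from the original post) modeling an ideal, lossless 2×2 coupler with a given coupling ratio; real couplers add excess loss and wavelength dependence, which this deliberately ignores. In a physical FBT device, the coupling ratio itself is set during the pulling process via the interaction length L.

```python
def coupler_outputs(p1_in, p4_in, ratio=0.5):
    """Ideal lossless 2x2 fused coupler.

    Each input is split between the two outputs: 'ratio' of the power
    crosses over to the opposite fiber, (1 - ratio) stays in the through
    path. Excess loss and polarization effects are ignored.
    """
    out2 = (1 - ratio) * p1_in + ratio * p4_in
    out3 = ratio * p1_in + (1 - ratio) * p4_in
    return out2, out3

print(coupler_outputs(1.0, 1.0))  # (1.0, 1.0) -- the first worked example above
print(coupler_outputs(1.0, 2.0))  # (1.5, 1.5) -- the second worked example above
```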
<urn:uuid:f98bc909-bcbf-4514-a654-7897d5294425>
CC-MAIN-2017-04
http://www.fs.com/blog/how-are-fused-fiber-optic-couplers-made-how-do-they-work.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00191-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914394
1,205
3.546875
4
Chinese government and U.S. technology experts have agreed on specific measures to curb spam originating from both countries, says a researcher who coordinated bilateral talks. The recommendations include establishing protocols to separate legitimate messages from junk mail; educating consumers about the risk of so-called botnets -- infected personal computers programmed by hackers to send bulk e-mails -- and preventing spam by, for example, encouraging Internet service providers to use "feedback loops," which allow e-mail recipients to blacklist suspected spam. The EastWest Institute, a global think tank, worked with the government-controlled Internet Society of China, a consortium of tech companies overseen by China's Information Industry Ministry, to develop a report on computer security that will be released next month. What many Americans may not realize, say some researchers, is that the United States is responsible for sending the most spam worldwide. This country outputs 18.83 percent of all junk mail while China is nowhere on the most recent "Dirty Dozen" list of heaviest spam-relaying countries released by security firm Sophos in January. However, while the United States is transmitting the junk, Americans aren't necessarily creating it. Botnets controlled by foreigners, including people in Russia and China, often disseminate mass e-mails remotely via compromised computers in the United States, unbeknownst to the computers' owners. According to EastWest officials, the experts focused their paper on spam partly because China has restrained spam in recent years. President Obama and Chinese President Hu Jintao in January agreed to join forces in addressing cybersecurity. "This cooperative effort will not end with this report," said Yonglin Zhou, a director at the Internet Society of China. "Rather, it is a part of an ongoing process between Chinese and United States experts to open dialogue and foster mutual understanding."
<urn:uuid:1e4d4503-5223-45fb-92a2-85f2cb3b333d>
CC-MAIN-2017-04
http://www.nextgov.com/cybersecurity/cybersecurity-report/2011/02/chinese-government-us-techies-agree-on-anti-spam-measures/54310/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00191-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943417
368
2.78125
3
Fiber transceivers are integrated circuit chips that transmit and receive data. These optical transceivers use fiber optic circuits to send and receive information rather than common electrical wire; a fiber optic circuit carries information as beams of light through optical fiber rather than as electrical signals over copper cabling. Central hubs are linked to end users at extremely fast speeds with these transceiver chips. To give an idea of how this works, these fiber transceivers can get your home and office connected to Internet, telephone and digital television services in record speed. The great convenience of today's transceiver chip technology is the speed of signal transfer. Tests have shown that these optics can transfer signals at up to 160 Gbps, roughly 1,600 times faster than 100 Mbps Ethernet. These small transceiver chips are produced from semiconductor materials; they are slight in size but big in capability. Internet connectivity is just one of the things that this growing technology is good for. They are also useful for local and wide area networks, home and business use, and downloading films in record time. Fiber transceivers come in physical form factors defined by industry standards. Under Multi-Source Agreements (MSAs), all professional manufacturers are held to the same design standards. Transceivers are grouped by the transmission speeds they support; each transceiver supports specific speeds from 1 Gbps to 10 Gbps. Common form factors include SFP modules, which typically operate at 1 Gbps, and XFP modules, which operate at 10 Gbps. As an example, a GBIC module plugs into an Ethernet port on one end and accepts a fiber optic patch cord connected to a fiber optic network on the other; this type of module converts between the fiber optic network's optical signals and the Ethernet port's electrical signals. Hot-pluggable optics make changing interfaces from one type of external device to another easy. Another fiber optic transceiver example is the XENPAK transceiver. This 10G fiber module is the largest in size, and contains a dual SC fiber interface. A typical copper line has a maximum distance of 15 meters, while a multimode fiber line functions up to 300 meters. FiberStore is a professional manufacturer and supplier of transceivers. All of our transceivers are tested in-house prior to shipping to guarantee that they will arrive in perfect physical and working condition. We guarantee transceivers to work in your system, and all of our transceivers come with a lifetime advance replacement warranty.
<urn:uuid:f19e8643-d047-44dc-90cd-35a325462e70>
CC-MAIN-2017-04
http://www.fs.com/blog/about-fiber-transceiver-modules-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00521-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940309
500
3.09375
3
In 1876, Alexander Graham Bell altered communications history when he invented the telephone. Just four years later, there were approximately 47,900 telephones in the United States. But telephone service was expensive, and not everyone had the financial means to obtain it, creating an analog divide between the haves and the have-nots. Working to get around this, telephone companies offered party lines that gave customers the option of sharing a phone line for a reduced price. Today's society also struggles with a divide, but of a digital nature -- between those who can afford broadband Internet access and those who cannot.
Affordable Mesh Networks
An Illinois group is working to bring wireless mesh networking to communities at a price they can afford. Sascha Meinrath founded the Champaign-Urbana Community Wireless Network (CUWiN) initiative. He and his team have been working on wireless, ad hoc mesh network technology using open source software that is free to the public. CUWiN hopes to gain cooperation from organizations that want to pool bandwidth for their communities in an effort to bridge the digital divide while moving toward a digital future. Meinrath explained that a lot of flat-rate, prepurchased bandwidth goes underutilized by entities such as businesses, municipalities and schools -- especially after business hours. "There's a tremendous amount of bandwidth right now that people are paying for and not using," said Meinrath. "Our goal is to create a way to efficiently use this bandwidth in addition to introducing the idea of communitywide networks." CUWiN's software utilizes the unlicensed 2.4 GHz frequency band to communicate via rooftop antennas in direct line of sight with one another. Because this frequency range is optimized for shorter distances, inexpensive, lower-powered antenna/radio combinations can be used to transmit the signal. Radios convert digital data to wireless signals so information can be sent to nearby antennas, forming a network of available connections called a local area network (LAN). Communications are made possible through nodes composed of an antenna/radio combination, a computer, a wireless card and the CUWiN software.
Weaving the Nodes
Each node joins the network automatically. Mesh topology allows the node to send data through multiple routes to multiple neighbors. For this reason, the network provides redundant connections with several paths for data transport. In addition, data sent directly to neighbors within the LAN without accessing the Internet travels at faster speeds -- much faster than a T1 line, according to Meinrath. Mesh topology also allows for decentralization, which is beneficial in two ways. "With mesh topology, there is no need for a central server system or centralized administration of the network," he explained. The CUWiN software uses the Hazy Sighted Link State (HSLS) protocol so each node can choose the best path for data to travel across the network. The chosen paths are based mainly on signal quality and the number of nodes, or links, the data must travel through to reach a destination. Data is then sent using the shortest path with the best available signal strength, taking into account each link's established reliability over time. The HSLS protocol essentially becomes the eyes of the network, with the ability to "see" when direct links go down and are no longer available. When this happens, the HSLS protocol sends updates to neighboring nodes, signifying the change.
By only sending this information to direct links instead of the entire network, and by only sending it when a change occurs in the network rather than at regular intervals, the protocol reduces the overall number of updates sent out to the network. These updates are considered overhead, as they are not part of the original data sent by network users. Minimizing overhead lets the network grow without drowning in its own routing traffic.
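The path-selection idea described above -- prefer routes with good signal quality and few hops -- can be illustrated with a standard shortest-path search. This is a toy sketch, not CUWiN's actual HSLS implementation; the link-quality values and the cost function are invented for illustration.

```python
import heapq

def best_path(graph, src, dst):
    """Dijkstra over a mesh, where each edge cost combines hop count
    and link quality (higher quality -> lower cost)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            break
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, quality in graph[node]:
            # One hop costs 1, discounted by link quality in (0, 1].
            new_cost = cost + 1.0 / quality
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Hypothetical four-node mesh: each entry is (neighbor, link quality).
mesh = {
    "A": [("B", 0.9), ("C", 0.4)],
    "B": [("A", 0.9), ("D", 0.8)],
    "C": [("A", 0.4), ("D", 0.9)],
    "D": [("B", 0.8), ("C", 0.9)],
}
print(best_path(mesh, "A", "D"))  # ['A', 'B', 'D'] -- the high-quality route
```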
<urn:uuid:4f8d7188-cb75-4637-a563-7a8d4981c0c5>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Open-Mesh.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00337-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931707
773
3.4375
3
Definition: An abstract data type storing items, or values. A value is accessed by an associated key. Basic operations are new, insert, find and delete. Formal Definition: The operations new(), insert(k, v, D), and find(k, D) may be defined with axiomatic semantics as follows: find(k, insert(k, v, D)) = v, and find(k, insert(j, v, D)) = find(k, D) when k ≠ j. The modifier function delete(k, D) may be defined as follows: delete(k, new()) = new(), delete(k, insert(k, v, D)) = delete(k, D), and delete(k, insert(j, v, D)) = insert(j, v, delete(k, D)) when k ≠ j. If we want find to be a total function, we could define find(k, new()) using a special value: fail. This only changes the return type of find. Also known as association list, map, property list. Generalization (I am a kind of ...) binary relation, abstract data type. Specialization (... is a kind of me.) See also total order, set. Some implementations: linked list, hash table, B-tree, jump list, directed acyclic word graph. Note: The terms "association list" and "property list" are used with LISP-like languages and in the area of Artificial Intelligence. These suggest a relatively small number of items, whereas a dictionary may be quite large. Professionals in the Data Management area have specialized semantics for "dictionary" and related terms. A dictionary defines a binary relation that maps keys to values. The keys of a dictionary are a set. Contributions by Rob Stewart, 16 March 2004.
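As a concrete illustration of the operations above, here is a minimal sketch of a dictionary backed by an association list, one of the implementations named in the entry. The class and names are ours, not part of the entry; find returns a fail sentinel so that it is a total function, as discussed above.

```python
FAIL = object()  # the special "fail" value that makes find total

class AssocListDict:
    """Dictionary ADT implemented as a simple association list."""

    def __init__(self):               # new()
        self.pairs = []

    def insert(self, key, value):     # insert(k, v, D)
        self.pairs.append((key, value))

    def find(self, key):              # find(k, D)
        # Scan from the end so the most recent insert wins.
        for k, v in reversed(self.pairs):
            if k == key:
                return v
        return FAIL

    def delete(self, key):            # delete(k, D)
        self.pairs = [(k, v) for k, v in self.pairs if k != key]

d = AssocListDict()
d.insert("apple", 1)
d.insert("apple", 2)
assert d.find("apple") == 2       # find(k, insert(k, v, D)) = v
d.delete("apple")
assert d.find("apple") is FAIL    # find after delete fails
```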
<urn:uuid:d2d859ff-7dd3-40b5-8225-b73181bb68bf>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/dictionary.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00145-ip-10-171-10-70.ec2.internal.warc.gz
en
0.86055
406
3.375
3
As with any innovation or trend, as the Internet of Things (IoT) matures, we will see some ingenious applications for IoT devices and data. But we will also see some ignoble ones. For example, IoT projects on Kickstarter include a smart trash can that can alert you when to take out the garbage, a smart jump rope that counts your jumps, and a smart desk that learns your habits and can even order food, make appointments, and set reminders. Perhaps slightly more useful is the smart wallet, which has lights and alarms that go off when it's stolen, as well as a GPS tracker to tell you when and where you lost it. This is a handy solution for a vexing inconvenience, but hardly an attempt to take on the greatest challenges facing mankind. In other words, not every IoT-based innovation sets out to change the world. But this doesn't mean that these efforts are without value. Although these innovations might seem insignificant or even silly, they share a common feature with IoT-based innovations whose value to mankind is immediately evident.
Could IoT Devices Cure Cancer?
Unlike the examples cited above, the contribution of some IoT-based innovations to the greater good is clear. Take IoT fitness trackers, for example. One of the first examples of mass-produced wearable technology, they are, for many people, their first foray into the world of smart, connected devices (beyond their smart phones, of course). Wearable fitness trackers have changed the types and amount of data we can collect about people's health. It used to be that if your doctor wanted you to wear a heart-rate monitor, the device was bulky and expensive. Now, it's an affordable plastic bracelet. But the possibilities go way beyond counting your steps or calculating your resting heart rate. When Apple debuted its ResearchKit app, researchers were already imagining innovative ways to use it, and they are now using it to help patients with asthma, Parkinson's disease, diabetes, breast cancer, and cardiovascular disease. This is a whole new kind of data for the field of medical research, based not on patients' self-reporting or on data gathered in a controlled lab setting, but on real-world data. The usefulness of that kind of information can't be ignored. OK, so maybe the data itself won't cure cancer or any of these other diseases, but it will advance the research and delivery of care faster and more reliably than any other innovation in recent memory.
Tracking Other Data
Of course, it's unrealistic to think that the IoT is going to be focused only on curing cancer or improving how people with chronic diseases manage their conditions. Other new sources of data are appearing every day with new IoT devices, with their own applications to improve the human condition. Test projects abound, including some proving that sensors can help grow healthier vegetables and protect bee colonies with automatic heaters. There are even Internet-connected cows (seriously) that help farmers catch disease among herds sooner and produce higher-quality milk. Smart electrical grids and smart homes have the potential to revolutionize the way we consume and distribute power. Internet-connected appliances will help manufacturers with research and development of new products and will help retailers predict consumer demand.
But what these "smart" innovations, with their obvious and immediate applications to the biggest challenges facing mankind, have in common with seemingly silly and frivolous applications of IoT technology is that they all are generating whole new kinds of data. And with the variety of IoT devices and all the many and varied new forms of data they produce, anything is possible. Even apparently inconsequential IoT devices, for example, like smart frying pans, yoga mats, or the aforementioned smart desk could turn out to produce invaluable data about cooking or exercise habits. And the applications for this data could reach well beyond furniture that can order your dinner. Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher.
<urn:uuid:f20cd951-fd61-4ce8-85fd-25698856a3bc>
CC-MAIN-2017-04
http://data-informed.com/silly-iot-projects-and-growing-heterogeneous-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00171-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943925
922
2.8125
3
Bad news, Randians: A study suggests there is no evolutionary value to selfishness. Researchers at the University of Pennsylvania used a classical game theory match-up called "Prisoner's Dilemma" and mathematical models to demonstrate why "cooperation and generosity have evolved in nature," Penn said in a statement. Associate professor Joshua Plotkin and postdoc researcher Alexander Stewart, both of Penn's Department of Biology in the School of Arts and Sciences, analyzed the outcome of Prisoner's Dilemma as played by a large, evolving population of players. Here, by the way, is Prisoner's Dilemma: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement, unable to speak or exchange messages with the other. Police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain:
* If A and B both confess the crime, each of them serves 2 years in prison
* If A confesses but B denies the crime, A will be set free whereas B will serve 3 years in prison (and vice versa)
* If A and B both deny the crime, both of them will only serve 1 year in prison
The point of the game is to study how people choose whether to cooperate. As Penn explains, "In the game, if both players cooperate, they both receive a payoff. If one cooperates and the other does not, the cooperating player receives the smallest possible payoff, and the defecting player the largest. If both players do not cooperate, they receive a payoff, but it is less than what they would gain if both had cooperated. In other words, it pays to cooperate, but it can pay even more to be selfish." The italics are mine, because that last part appears to offer a rationale for selfishness. Building on previous research, Plotkin and Stewart "began to explore a different approach to the Prisoner's Dilemma": Instead of a head-to-head competition, they envisioned a population of players matching up against one another, as might occur in a human or animal society in nature. The most successful players would get to "reproduce" more, passing on their strategies to the next generation of players. The two researchers soon determined that "extortion strategies wouldn't do well if played within a large, evolving population because an extortion strategy doesn't succeed if played against itself," Penn said. So they looked at "generous" strategies in games with multiple players, where players are willing to cooperate with opponents and will suffer more than their opponents if they don't cooperate. They simulated the effects of generous strategies on players and then built a mathematical proof demonstrating the evolutionary value of generosity. "Our paper shows that no selfish strategies will succeed in evolution," Plotkin said. "The only strategies that are evolutionarily robust are generous ones." Selfish people of the world, consider yourselves warned.
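To make the evolutionary setup concrete, here is a toy simulation in the spirit of the study, not the authors' actual model. The payoff matrix, population size, forgiveness rate, and reproduction scheme are all invented for illustration; it pits an always-defect strategy against a generous tit-for-tat-like strategy in a population where higher-scoring strategies reproduce more.

```python
import random

# Payoff matrix: (my move, opponent's move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=20):
    """Iterated Prisoner's Dilemma; returns strat_a's total payoff."""
    score = 0
    last_a = last_b = "C"
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score += PAYOFF[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return score

def selfish(_):
    return "D"  # always defect

def generous(opp_last):
    # Tit-for-tat that forgives a defection 10% of the time.
    return "C" if opp_last == "C" or random.random() < 0.1 else "D"

# Evolving population: strategies reproduce in proportion to payoff.
population = [selfish] * 15 + [generous] * 15
for generation in range(20):
    scores = [
        sum(play(population[i], population[j])
            for j in range(len(population)) if j != i)
        for i in range(len(population))
    ]
    population = random.choices(population, weights=scores, k=len(population))

share = sum(s is generous for s in population) / len(population)
print(f"generous share after 20 generations: {share:.0%}")
```

Run repeatedly, the generous strategy almost always takes over the population: it scores nearly the full cooperation payoff against its own kind, while always-defect players earn only the meager mutual-defection payoff against each other.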
<urn:uuid:a8f1526c-332f-4910-a97b-b246c5542df0>
CC-MAIN-2017-04
http://www.itworld.com/article/2704307/enterprise-software/hey--selfish-people--you-are-doomed-in-the-long-run--doomed-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00504-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962781
624
2.90625
3
IBM the Humanitarian
Opinion: By using its expertise in backing the World Community Grid project, IBM also gets the chance to demonstrate the benefits of grid computing. Making use of unused CPU cycles on your client computers isn't a new idea. Going back to the 1980s, there was network database software that let you install a small piece of agent software on your client computers that, after normal business hours, would allow the database server to distribute its indexing load to any computer that was running its agent. In the early 1990s, graphics software was developed that, using the same agent model, was able to distribute the image rendering process to many different types of client operating systems, speeding up what is still a very CPU-intensive process: rendering graphic images. But the end of the 20th century saw not only a massive increase in the number of networked computers, but also freely distributable client software that worked together with a centralized server to complete a specific task. These clients, such as the RSA encryption cracking contest tool from Distributed.net and the SETI@Home search for extraterrestrial intelligence, provided clients for just about every common client operating system, let the user determine how much CPU resource they would use and when the software would run, and gave users a sense of camaraderie in creating teams that competed to devote the greatest number of excess computing cycles to the selected project. Now IBM has taken this concept a step further by stepping up as the technical muscle behind the World Community Grid project, joining United Devices and a host of academic and scientific organizations to create an organization that uses these spare CPU cycles to work on projects designed to benefit humanity.
<urn:uuid:4e30428b-4450-498e-aac6-b3f93d929785>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Cloud-Computing/IBM-the-Humanitarian
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00438-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95634
346
2.640625
3
Drive through any stretch of American farmland, and you'll see miles of reinforced fencing that protects crops and keeps out unauthorized visitors. It's clear that security is a high priority at these critical purveyors of the U.S. food supply -- and, of course, that's for good reason. However, in an increasingly data-centered world, U.S. farms and agricultural companies have rapidly changed how they do business and remain competitive. Much like U.S. healthcare vendors -- who have undergone a lightning-fast shift toward digital storage and sharing of patient records -- ag businesses find themselves working frantically to keep pace with technology. And those digital fences need some serious work. For some time, agribusiness leaders such as Monsanto and Deere have maintained records on their customers for payment and basic record keeping, and the protection of this information is a serious concern. But it's the newer, more robust level of data sharing involved in "smart farming" technologies that has cybersecurity analysts anxious and has even caused federal agencies to take action. Smart farming encompasses a broad range of technologies that enhance the precision of farming operations through various information-sharing capabilities. Satellite-guided tractors and crop management programs that operate on shared farming data are just two examples of this quickly growing field. Smart farming can benefit farms of all sizes. The problem, however, is that the companies developing these technologies -- and, thus, holding large repositories of clients' data -- are new to this type of business model, and they have limited experience in data security. In April of this year, the FBI published a bulletin warning farmers that smart farming technologies may draw more interest from hackers. Additionally, cybersecurity analysts have published speculations that foreign hackers may be planning to mine the agribusiness sector for the intellectual property behind U.S. farming. The National Cybersecurity Institute at Excelsior College even weighed in to explain how a failure on the cybersecurity front could damage America's food supply. Clearly, agribusiness in general, and vendors of smart-farming solutions in particular, need to move quickly to enhance the defenses protecting client data and their own intellectual property. And while the goal may be to develop internal cybersecurity programs over time, partnering with cyber experts will help these firms more quickly get their bearings. Whether it's support for building a cybersecurity governance plan or a full-scale managed security service leveraging our state-of-the-art SOC, Lunarline has the expertise and tailored solutions that agribusiness vendors need for the current challenge. For information about our services, you can reach one of our security experts online today or visit our website to get a better understanding of our solutions. This is the third article in a three-part series on agriculture and cyber security.
<urn:uuid:e2106d3b-6970-4f3f-8f0e-0d01776abc67>
CC-MAIN-2017-04
https://lunarline.com/blog/2016/09/hacking-heartland-part-3-safeguarding-farming-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00348-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930129
578
2.53125
3
Definition: A list implemented by each item having a link to the next item. Also known as singly linked list. Specialization (... is a kind of me.) doubly linked list, ordered linked list, circular list. Aggregate parent (I am a part of or used in ...) jelly-fish, separate chaining. See also move-to-front heuristic, skip list, sort algorithms: radix sort, strand sort. Note: The first item, or head, is accessed from a fixed location, called a "head pointer." An ordinary linked list must be searched with a linear search. Average search time may be improved using a move-to-front heuristic, or by keeping it an ordered linked list, in which binary search may be effective; see below. An external index, such as a hash table, inverted index, or auxiliary search tree, may be used as a "cross index" to help find items quickly. Binary search may be effective with an ordered linked list. It makes O(n) traversals, as does linear search, but it only performs O(log n) comparisons. For more explanation, see Tim Rolfe's Searching in a Sorted Linked List. A linked list can be used to implement other data structures, such as a queue, a stack, or a sparse matrix.
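To make the entry concrete, here is a minimal sketch of a singly linked list with the move-to-front heuristic mentioned above. The class and names are ours, not part of the entry.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    """Singly linked list with an optional move-to-front search."""

    def __init__(self):
        self.head = None  # the fixed "head pointer"

    def insert(self, value):
        self.head = Node(value, self.head)  # O(1) insert at head

    def find(self, value):
        """Linear search; on a hit, move the node to the front so
        frequently sought items are found faster next time."""
        prev, node = None, self.head
        while node is not None:
            if node.value == value:
                if prev is not None:      # unlink and re-link at head
                    prev.next = node.next
                    node.next = self.head
                    self.head = node
                return node
            prev, node = node, node.next
        return None

lst = LinkedList()
for v in [3, 1, 4, 1, 5]:
    lst.insert(v)
lst.find(4)
print(lst.head.value)  # 4 -- moved to the front by the heuristic
```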
<urn:uuid:092c6630-3963-47c1-878f-d6b9208bff3f>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/linkedList.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00072-ip-10-171-10-70.ec2.internal.warc.gz
en
0.856806
410
3.609375
4
Can the nation get smart about cybersecurity? - By William Jackson - Aug 12, 2011 Declaring that "our nation is at risk" from vulnerabilities in the critical online infrastructure, the National Institute of Standards and Technology has released a draft plan for improving cybersecurity awareness, developing educational resources and creating career paths for IT professionals. The increasing importance of IT to the economy and to everyday life has made a basic awareness of security a necessity in nearly all aspects of business and daily life. Efforts outlined in the plan for the National Initiative for Cybersecurity Education (NICE) are intended to enhance the nation's overall cybersecurity posture. "The United States must encourage cybersecurity competence across the nation and build an agile, highly skilled workforce capable of responding to a dynamic and rapidly developing array of threats," the plan says. Also, "Americans must be made more aware of the tools and practices that can help protect them from the negative consequences that cyber threats represent." NICE would help to create a pipeline of skilled IT professionals, beginning with science, technology, engineering and mathematics (STEM) curricula in kindergarten through high school and continuing through professional training, licensing and certification programs. One objective is for U.S. students to move from the middle of the pack in STEM ability to the top in international assessments over the next decade. Beginning in fiscal 2013, federal cybersecurity education budgets are expected to be aligned with these goals. NICE is a product of the 2009 Cyberspace Policy Review, which recommended creation of a national public awareness and education program to promote cybersecurity. NIST was designated the lead agency for the effort, and will be working in cooperation with the private sector, academia and other federal agencies, including the Homeland Security, Education and Defense departments, the National Science Foundation, National Security Agency and the Office of Personnel Management. The draft plan is a high-level document listing goals and objectives with few specific activities or responsibilities. NIST will hold workshops and conferences to bring together stakeholders and partners in the process and work out specific agendas. A three-day NICE workshop on cybersecurity education is scheduled for Sept. 20 through 22 at the NIST campus in Gaithersburg, Md. The program's goals are to:
- Raise awareness among the American public about the risks of online activities, responsible use of the Internet, and cybersecurity as a career path.
- Broaden the pool of skilled workers and encourage interest in STEM disciplines.
- Develop and maintain an unrivaled, globally competitive cybersecurity workforce through education, training, employment, and certification.
Achieving the goals will depend upon creating partnerships for cooperation between broad segments of government, business, non-profits, commercial organizations and academia. Although improving the security of IT products and enabling professional development are primary objectives of NICE, raising awareness in and providing resources for the general public also are major elements of the program.
“Americans lack authoritative, affordable and readily accessible sources of information on which they can depend to help them distinguish cybersecurity hype from fact and good tools from bad ones,” the plan says. “Government, academia, and industry need to work together to provide resources and tools.” Improving the level of early computer education is intended to increase awareness, improve digital literacy and strengthen the IT workforce. NICE would establish rigorous academic computer science programs in high school so that students would enter college with the requisite skills and knowledge for pursuing degrees in this area, thereby strengthening undergraduate cybersecurity curricula. Comments on the draft strategic plan should be entered into the Draft-NICE.xls comment template available at http://go.usa.gov/KFw and e-mailed to firstname.lastname@example.org by Sept. 12. William Jackson is a Maryland-based freelance writer.
<urn:uuid:9a9fecf6-3435-44fe-bdee-dc86fa7f3f9c>
CC-MAIN-2017-04
https://gcn.com/articles/2011/08/12/nice-plan-for-cybersecurity-awareness-education.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00402-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930628
802
2.734375
3
Search Engine Poisoning (SEP) attacks manipulate search engines to display search results that contain references to malware-delivering websites. There are a multitude of methods to perform SEP, including taking control of popular websites, using the search engines' "sponsored" links to reference malicious sites, or injecting HTML code into legitimate pages. Search Engine Poisoning via Cross-Site Scripting: Search Engine Poisoning can also be performed by manipulating a search engine to return search results that contain references to sites infected with Cross-Site Scripting (XSS). The infected Web pages redirect unsuspecting users to malicious sites. When unsuspecting victims follow one of these references, their computers become infected with malware. This technique is of particular importance since it does not require the attacker to take over, or break into, any of the servers involved in the scheme. Search Engine Poisoning is comprised of the following steps:
- The attacker sets up a server that delivers malware upon request. The malware can be delivered in different ways, such as via an HTML page that exploits a browser vulnerability (a "drive-by download"), a "scareware" scheme, or any of a variety of other methods.
- The attacker obtains a list of URLs vulnerable to Cross-Site Scripting (XSS). In order to have an impact, these URLs should be taken from domains that rank high in search engines. The attacker usually obtains this list through an activity called "Google Hacking" – looking for specially crafted search terms in search engines that reveal the potential existence of specific vulnerabilities.
- Using this list, the attacker creates a huge number of specially crafted URLs that are based on the vulnerable ones and include the target keywords and a script that interacts with the malware delivery server.
- The attacker obtains a list of applications that support simple user content generation. These could be forums, pages that take user comments or applications that accommodate user reviews. The attacker then floods the content-accepting applications with the variety of specially crafted URLs.
- Popular search engine bots that scan the entire Web pick up the specially crafted URLs and follow them in order to index their content. As a consequence, the target keywords become associated with the specially crafted URLs. Since the attacker picked URLs of high-ranking domains to begin with, and due to the large number of references to these URLs, the poisoned results get a high ranking for the target keywords.
- An unsuspecting user searching for one of the target terms clicks on one of these URLs and as a consequence becomes infected with malware.
SEP is an extremely popular method used by hackers to widely spread their malware. As shown, attackers exploit XSS to take advantage of the role of third-party websites as mediators between search engines and the attacker's malicious site. Recommendations to the Web Administrator: Abusing a Web site in this manner may lead to brand damage and loss of customer base and potential visitors. Moreover, it has a clear negative impact on the site's accessibility through search engines, including decreased ranking, the marking of references as harmful, and even outright removal from the search index. Ultimately, this can have devastating economic implications. Protecting the Web application against XSS attacks will prevent these sites from being abused as the attacker's conduit for a SEP campaign.
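As a small illustration of the web-administrator recommendation, reflected XSS of the kind SEP abuses is typically prevented by encoding user-controlled input before echoing it back into a page. This sketch uses Python's standard html module; the function and parameter names are hypothetical.

```python
from html import escape

def render_search_results(query: str) -> str:
    """Echo a user-supplied search term safely.

    Without escape(), a crafted query such as
    '<script src="https://evil.example/sep.js"></script>'
    would execute in the visitor's browser; with it, the payload
    is rendered as inert text instead.
    """
    return "<h1>Results for: {}</h1>".format(escape(query, quote=True))

print(render_search_results('<script>alert(1)</script>'))
# <h1>Results for: &lt;script&gt;alert(1)&lt;/script&gt;</h1>
```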
Recommendations to Search Engines: Protecting users from malicious references returned as search results is also a responsibility of search engines. Current solutions that warn the user of malicious sites lack accuracy and precision, and many malicious sites continue to be returned unflagged. However, these solutions may be enhanced by studying the footprints of SEP via XSS. This will allow more accurate and timely notifications as well as more prudent indexing.
- Hacker Intelligence Summary Report: Search Engine Poisoning via Cross-Site Scripting
- Video: Anatomy of an Attack - Search Engine Poisoning via Cross-Site Scripting
- Infographic: The Case of the Search Engine Poisoning
- Article: Mass iFrame Injectable Attacks
<urn:uuid:e6bdd241-9d0c-40da-ae33-c5864d0b7ef8>
CC-MAIN-2017-04
https://www.imperva.com/Resources/Glossary?term=search_engine_poisoning_sep
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00218-ip-10-171-10-70.ec2.internal.warc.gz
en
0.900302
800
3.09375
3
Supercomputing can be daunting to the uninitiated. A peek at the hardware, for example, with row after row of black boxes, reveals little of the exciting science that is being enabled by these ultra-fast computations. To help make the world of HPC a little less opaque, staff members at Oak Ridge National Laboratory (ORNL) created a scaled-down version of the Titan supercomputer. The team developed a nine-core portable unit that communicates its core activity with lights. This scaled-down supercomputer looks a bit like a child's toy, but there's a reason for the multi-colored madness. Each color represents a processor, and images on the connected monitor use the same colors to show what each processor is doing. The more colors that light up, the faster the program is running. The fewer the colors, the slower the execution. "It's a lot better when you can actually visualize the difference," says Austin Peay State University student Samuel Cupp. Program backers are using the tiny supercomputer to introduce students to parallel computing. It's the future of computer science education, says Robert French of the OLCF User Assistance and Outreach group. "Everything from your smart phone to your 3D television has multicore processors, so students that don't have parallel programming skills will be left out of 21st century engineering jobs," he adds. French presented Tiny Titan at an ORNL event called the Next Big Idea Competition. While it did not win a monetary award, it did receive the People's Choice Award. The problem with current HPC outreach is that many programs are still teaching serial processing, or computer processing on just one core. "Students need to learn to program in parallel," maintains French. "Computers will continue to get more cores and become more complicated." The scaled-down systems were constructed using Raspberry Pi Foundation ARM-based microcomputers, at a cost of just $35 each. The Pi chips, like other ultra-low-power processors on the market, are economical and can be combined with other components to build an affordable multicore processor. Now that they've built the system, the next step for the Tiny Titan development team is to create a curriculum for teaching parallel computing in high schools and STEM programs. The group is working to get the system in front of students at local schools. Their initial outreach efforts will focus on schools that already have STEM programs in place. Tiny Titan was also recently featured on a local news show.
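The serial-versus-parallel lesson Tiny Titan teaches can be sketched in a few lines. This toy example is ours, not OLCF curriculum; it splits a computation across worker processes the way Tiny Titan splits work across its Raspberry Pi "cores", and prints the timing difference the machine's lights make visible.

```python
import time
from multiprocessing import Pool

def busy_square_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    N, workers = 10_000_000, 4

    # Serial: one "core" does everything.
    t0 = time.time()
    serial = busy_square_sum((0, N))
    t_serial = time.time() - t0

    # Parallel: split the range into one chunk per worker process.
    chunks = [(i * N // workers, (i + 1) * N // workers)
              for i in range(workers)]
    t0 = time.time()
    with Pool(workers) as pool:
        parallel = sum(pool.map(busy_square_sum, chunks))
    t_parallel = time.time() - t0

    assert serial == parallel  # same answer, different wall-clock time
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s")
```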
<urn:uuid:12eddd4c-9872-4de2-b8a2-e70a28a39469>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/06/16/tiny-titan-preps-students-multicore-era/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939529
543
3.625
4
Refueling aging satellites that were never meant to be refueled is the goal of an emerging NASA system that could save millions. NASA said this week that since April 2011, engineers have been working to build the robotic satellite servicing technologies necessary to bring in-orbit inspection, repair, refueling, component replacement and assembly capabilities to spacecraft needing aid. The project could also lead to life extension or repurposing of spacecraft in Earth orbit, as well as Earth-bound applications such as robotically fueling satellites before they launch, keeping humans at a safe distance during an extremely hazardous operation, NASA said. Two of NASA's leading development groups -- Kennedy Space Center and Goddard's Satellite Servicing Capabilities Office (SSCO) -- teamed up on the most recent advancement. Specifically, SSCO demonstrated that a remotely operated robot - with supporting technologies - could transfer oxidizer into the tank of another orbiting spacecraft not originally designed to be refueled. Kennedy's propellant transfer system was an essential part of this Remote Robotic Oxidizer Transfer Test, or RROxiTT. Hypergolic propellants, fluids such as hydrazine (a fuel) and nitrogen tetroxide (an oxidizer), are the most frequently used propellants for maneuvering satellites in Earth orbit, NASA noted. According to NASA, the team at Goddard shipped an industrial robotic arm to Kennedy for the test. From 800 miles away in Maryland, the team remotely controlled the robotic arm with its attached SSCO oxidizer nozzle tool to connect with a propellant fill and drain valve on the simulated satellite's servicing panel. Downstream, the Kennedy-provided propellant transfer system and hose delivery assembly flowed oxidizer through the tool into the client fill-drain valve, with all hardware located in the Kennedy facility in Florida. Hypergolic propellant was controlled remotely at a range of flight-representative pressures and flow rates to prove the concept worked, NASA stated. "This is a unique test that's never been done, as far as we know, anywhere in the world," said Brian Nufer, a fluids engineer in the Fluids Engineering Branch of NASA Engineering and Technology. The full contingent of operating spacecraft is right around 1,000, with more than 400 in the geosynchronous (GEO) Earth orbit belt some 22,000 miles above Earth. Many of these GEO satellites deliver such essential services as weather reports, cell phone communications, television broadcasts, government communications and air traffic management. By developing robotic capabilities to repair and refuel GEO satellites, NASA said it hopes to add years of functional life to satellites and expand options for operators who face unexpected emergencies, tougher economic demands and aging fleets. NASA also hopes that remote refueling technologies will help boost the commercial satellite-servicing industry that is rapidly gaining momentum.
<urn:uuid:58c821f2-69eb-45ac-94d2-e1010899e22e>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226833/security/nasa-developing-unique-robotic-satellite-refueling-system.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943291
586
3.28125
3
Healthy VoIP Nets, Part IX: Stocking the Toolbox for the Upper Layers
Our last tutorial looked at some tools that are useful for managing the lower layers of a VoIP networking infrastructure. We now continue that discussion and consider more complex network management systems that can also handle the challenges of the upper protocol layers. Protocol Analyzers: A protocol is simply a set of rules, and the protocol analyzer determines if the data that was transmitted adheres to the rules that were developed for that system. Going back 25 years or so, the first protocol analyzers were called datascopes, and were used to decode a Data Link layer protocol, such as IBM's SDLC, on a serial line such as a WAN. Those early devices had several limitations: they were typically single-protocol, single-use (such as decoding DECNET over an Ethernet LAN); they displayed the results in ASCII or hexadecimal encoding, not English, requiring further user interpretation; and they typically operated on only one protocol layer, without the ability to interpret end user application data. The advent of the PC changed the protocol analyzer market from a hardware to a software business, and also dramatically improved the capabilities of these devices. Now, multi-protocol, multi-layer functionality is the norm, which gives the user the ability to examine each layer of the protocol stack (for example, physical connection, Ethernet frames, Internet Protocol (IP) packets, User Datagram Protocol (UDP) datagrams, Real Time Protocol (RTP) voice samples, and so on). Furthermore, many of these devices have embedded expert analysis engines that provide the user with specific details that describe the nature of a problem (e.g., a duplicate IP address), and may also point toward a solution (the hardware, or Ethernet, addresses that identify the offending stations). Enterprise Network Management Systems and Remote Probes: In the early 1990s, as network architectures were evolving from host-based, centralized systems to distributed routers and servers, the Internet Engineering Task Force (IETF) developed a network management framework and protocol to address this changing environment. That protocol is the Simple Network Management Protocol (SNMP), now in its third generation of release (SNMPv3), and defined in RFCs 3410-3418 (see ftp://ftp.rfc-editor.org/in-notes/rfc3410.txt). SNMP-based network management systems entered enterprise environments in the late 1990s, and have become one of the most fundamental network management tools since that time. An outgrowth of that research is called RMON, which stands for remote monitoring. With RMON systems, probes are placed at strategic locations within the network, or embedded into devices such as routers. The probes then monitor various network parameters and operational characteristics, and report that information back to the network management console. These concepts are beginning to move into converged networking, with embedded probes that can monitor the health of a VoIP network, reporting information such as MOS scores, call performance statistics, and so on, some of which can also be integrated into existing SNMP-based systems. Examples of companies that market remote probes of one kind or another include: Telchemy, Tektronix, and RADCOM.
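As a small illustration of the SNMP polling such management systems rely on, the sketch below queries a device's sysDescr object. It assumes the third-party pysnmp library, and the address and community string are placeholders; a real NMS or RMON-style probe would poll many such objects on a schedule and feed them to a console.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_sysdescr(host, community="public"):
    """Fetch sysDescr (OID 1.3.6.1.2.1.1.1.0) from an SNMP agent."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),          # SNMPv2c
        UdpTransportTarget((host, 161), timeout=2),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return str(var_binds[0][1])

print(poll_sysdescr("192.0.2.1"))  # placeholder router address
```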
Network Design and Optimization Tools: As we have discussed many times in the course of these tutorials, the concept of combining voice and data into a common networking infrastructure has a number of inherent challenges, including variations in the amount of traffic sent during a call, the call holding times, the WAN circuits needed to complete the end-to-end connection, and so on. As a result of these differences, the design and optimization of a converged network from a pure engineering point of view is quite difficult, if not impossible, to accomplish using typical analytical tools such as spreadsheets. Over the years, many firms have attempted to develop software solutions to address this challenge, and a few have survived. Also in this category are hardware products that stress-test the network, applying simulated loads to the networking components to determine how the network will perform under such conditions. Examples of companies that are active in the design and optimization area include: Fluke Networks, Ixia, OPNET Technologies, Spirent Communications, and Westbay Engineers. One caveat: the categories enumerated above are somewhat general, and many products may fit into more than one area. But we'll save that discussion for another day. Our next tutorial will begin our examination of specific vendors' products, and show how these products can meet specific VoIP network management challenges. Copyright Acknowledgement: © 2008 DigiNet Corporation ®, All Rights Reserved. Mark A. Miller, P.E., is President of DigiNet Corporation®, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons.
<urn:uuid:76c50614-604a-46b6-b3ea-f2963116451d>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/unified_communications/Healthy-VoIP-Nets151Part-IX151Stocking-the-Toolbox-for-the-Upper-Layers-3723616.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933367
1,015
3
3
As time goes by and networks with more and more virtualized servers and other devices become more complicated, overlay technologies are rising to save the day for network administrators. Virtual Extensible LAN (VXLAN) is an encapsulation technology used to run an overlay network on an existing Layer 3 network. An overlay network is a virtual network set up on top of an existing Layer 2 network; it also uses additional Layer 3 technologies to support flexible computer architectures. VXLAN makes it easy for network engineers to scale out a cloud computing environment while logically separating cloud applications and tenants. A cloud computing environment is multitenant by definition: every tenant needs its own separately configured logical network, which in turn needs its own network identification (ID). What does that mean in practice? What does VXLAN actually do? Put simply, VXLAN creates logical networks that connect your virtual machines across different networks. It lets us build a Layer 2 network for our VMs on top of our Layer 3 network, which is why VXLAN is called an overlay technology. In a "normal" network, if a virtual machine needs to reach another virtual machine on a different subnet, a Layer 3 router must connect the two networks. With VXLAN, we can use a VXLAN gateway of some sort to connect them without ever exiting into the physical network. Traditionally, network engineers have used virtual LANs (VLANs) to separate applications and tenants in a cloud computing environment, but the VLAN specification only allows for up to 4,096 network identifications at any given time, which may not be enough addresses for a very big cloud computing environment. The main goal of VXLAN is to extend the VLAN address space by adding a 24-bit segment identification, raising the number of available identifications to 16 million. The VXLAN segment identification in each frame distinguishes individual logical networks, which means millions of isolated Layer 2 VXLAN networks can coexist on a common Layer 3 infrastructure (a minimal sketch of this header layout appears below). Just as with VLANs, only virtual machines in the same logical network can communicate with one another. If widely adopted, VXLAN could allow network engineers to move virtual devices across long distances and play a vital role in software-defined networking (SDN), an emerging architecture in which servers or controllers tell network switches where to send packets. In conventional networks, every switch has proprietary software that tells it what to do. In SDNs, forwarding decisions are centralized, and the flow of network traffic can be planned independently of individual switches and data center equipment. To put software-defined networking with VXLAN to use, administrators can rely on existing hardware and software, which makes the technology robust and financially appealing. Many vendors are rolling out VXLAN gateways because they bridge network services between software-based network overlays and the underlying physical infrastructure. Many vendors have pitched network overlays built on encapsulation protocols such as VXLAN as a method to implement software-based, virtualized cloud networking.
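To see where the 24-bit segment ID lives, here is a minimal sketch that packs the 8-byte VXLAN header defined in RFC 7348. It is illustrative only; in real deployments the kernel or switch ASIC builds this header and carries it in a UDP datagram (destination port 4789) wrapped around the original Ethernet frame.

```python
import struct

VNI_SPACE = 2 ** 24
print(f"VLAN IDs: {2 ** 12}, VXLAN VNIs: {VNI_SPACE:,}")  # 4096 vs 16,777,216

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header (RFC 7348).

    Byte 0 carries the I flag (0x08) marking a valid VNI; bytes 4-6
    carry the 24-bit VXLAN Network Identifier; all other bits are
    reserved and set to zero.
    """
    if not 0 <= vni < VNI_SPACE:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag followed by 24 reserved bits
    vni_and_reserved = vni << 8       # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

print(vxlan_header(5000).hex())  # 0800000000138800 (VNI 5000 = 0x001388)
```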
It is amazing; however, network overlays do not replace the physical network, they just abstract it. The physical network is still there and still has to be managed. Also, many network overlays are deployed in hybrid settings where much of the data center is still governed by legacy architecture, and network services, like firewalls and load balancers, are still implemented in hardware. Because of this, companies will need VXLAN gateways in order to extend services and administration across both physical and virtual networks. VXLAN gateways are available in software, but hardware support scales better. VXLAN is essentially the simplest implementation of the border between traditional and virtualized networking, the path from legacy networking to fully virtualized networking. Many companies are already benefiting from these networks in running their businesses.
<urn:uuid:b0a46797-b725-41f3-a653-9691ea4dc98b>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2014/vxlan
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92475
850
2.828125
3
When the smart phone first made its debut, people marveled at the number of tasks the small device could perform, and the number of devices the little box made redundant, for example, GPS units, music players, cameras, etc. Fast forward to the present, and we realize that our lives are easier still when these smart technologies are integrated into devices we use every day. The prime example of this is the smart car. The "Internet of Cars" is an umbrella term under the broader concept of the "Internet of Everything," which aims to go far beyond the current level of digitization of the world – a grid of densely interconnected people, technologies, businesses, and processes that transcends the boundaries of geography and streamlines, or even eliminates, manual intervention. The global market for the Internet of Cars is projected to be almost US$ 47 million by 2020. The market for it in Asia-Pacific is estimated to be worth US$ XX.XX million in 2015 and it is expected to grow at XX.XX% CAGR to US$ XX.XX mn by 2020. It is estimated that the Asia-Pacific connected car market ranks after those of North America and Europe. According to Cisco, one of the leading smart car technology makers of the world, the Internet of Cars can unlock about US$1,400 in benefits each year per vehicle. Higher connectivity through a shared knowledge grid will make automobiles fitted with smart car technology aware of which routes to take, and which to avoid, for greater time savings and even safety. There are a slew of benefits from this – less traffic, more productivity, fewer accidents, lower insurance costs, immediate and automatic crisis notification and response, and much more. Other benefits include remote sensing between cars – the smart automobiles of the future will be able to drive themselves, staying connected through smart networks with adjacent vehicles on the road, sharing information about road safety pooled and accessed through Big Data, and much more. Time in transit would no longer demand constant manual attention on the road, and could actually be used for productive purposes. Furthermore, transport infrastructure will benefit from an automated and well-organized transport system. Fuel savings will come from timely reminders for servicing, as well as smarter taxation and pricing for vehicles. The most easily scalable benefits of connectivity desired by customers include accessing mobile applications without jeopardizing life on the road, and tethering of mobile phone connections to the car to access the Internet while driving. The connected car market's potential is limitless. The ecosystem for such a transportation system will be difficult to establish, but will guarantee efficiency, savings, and high returns. Moreover, an enthusiastic range of smart car vendors, outdoing each other in terms of technologies offered, are testing the cars of the future. Technology as well as automobile bigwigs are pooling their R&D to build prototypes, some of which have been launched, with others slated to launch soon. Some of the technology players leading the game are Google, Canada-based QNX, Delphi, Cisco, AutoTalks, NVidia, Mobileye, and others. The added advantage of these new-generation cars, fitted to perform almost all connectivity and technological functions without manual intervention, is the freedom to use clean/bio fuel for powering the cars, opening the door to another related and extremely relevant market.
A high degree of interest, with more than 80% of respondents in China and Indonesia, more than 66% of the population in Malaysia, South Korea and India, and half of Australia (according to an Alcatel survey), in time savings, enhanced GPS, live location and maintenance recording and reminders, and Wi-Fi services, together with the fuel efficiency promised for such cars, forms the drivers for connected cars. High cost, issues related to online privacy, security, and fear of malfunction and breakdown were the top challenges expected to form the bottlenecks for this market in APAC. What the report offers
<urn:uuid:d8e034c3-8833-465f-a97a-fb743686c6f5>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/asia-pacific-internet-of-cars-market-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947488
793
2.78125
3
Lesson 1: Workspace, Project and Sessions This lesson will teach you how to create and use a workspace, a project and a session. This part is mandatory in order to use Watobo. Watobo organizes projects as follows:
- Workspace: The physical path where project files will be saved.
- Project: Projects are included in a workspace and contain sessions.
- Session: Sessions are contained in a project.
Either click on the [+] icon or select File > New/Open from the menu. Then fill in the following screens:
<urn:uuid:b28e29fe-9df5-4a2f-80f9-fb0a323f9fb5>
CC-MAIN-2017-04
https://www.aldeid.com/wiki/Watobo/Usage/Project-workspace-session
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00146-ip-10-171-10-70.ec2.internal.warc.gz
en
0.784484
116
2.703125
3
Researchers from University College London claim to have reached a data rate of 1.125 terabits per second, the fastest data rate ever recorded between a single optical transmitter and a receiver, according to an article on the university's website. It's quick enough to download the entire high-definition Game of Thrones series in one second, the scientists claim. A way of combining carriers into what the scientists call a "super-channel" is key to obtaining these speeds. Super-channels are used for sending bulk data between cities and continents, they explain. But, in this case, the super-channel handles distortion better than has been accomplished before. That's something scientists have been trying to achieve, and it is necessary for the faster speeds. Nonlinear distortion and signal-to-noise ratio (SNR) have always constrained practical fiber communication throughput, the scientists say in a paper published in Nature. Their system addresses some of these problems by using multiple channels, and by encoding the signals in such a way that they adapt "to distortions in the system electronics," the university article explains. They use coding techniques that are "commonly used in wireless communications, but not yet widely used in optical communications," the article goes on to say. One outstanding issue is how to scale the transmission over distance, since long distances increase distortion. The team intends to test and measure data rates in a long-distance setting as the next step. Different wavelengths are used for the optical signals in each of the 15 channels that make up the "super-channel." The channels are then modulated using 256QAM (a format also used in cable modems), combined, and delivered directly from the transmitter to a single optical super-receiver for detection. The grouping of the 15 channels results in the "super-channel." Careful optimization of the super-channel, and of the super-receiver that obtains the entire super-channel in "one go," provides the high throughput. Super-channels of this kind, "although not yet commercially available, are widely believed to be a way forward for the next generation of high-capacity communication systems," the article goes on to say. "Using a single receiver varies the levels of performance of each optical sub-channel so we had to finely optimize both the modulation format and code rate for each optical channel individually to maximize the net information data rate," says Dr. Robert Maher of the university's Electronic & Electrical Engineering department, in the article. "This ultimately resulted in us achieving the greatest information rate ever recorded using a single receiver," he says. The speeds are fast: for comparison, the rate obtained, 1.125 Tbps, "is almost 50,000 times greater than the average speed of a UK broadband connection." That's based on "24 Mbps, which is the current speed defining 'superfast' broadband," says Maher, a UK-based scientist.
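The headline number can be sanity-checked with simple arithmetic. The article gives the channel count, the 256QAM format, and the total rate; the per-channel symbol rate of 9.375 gigabaud below is our illustrative assumption, back-solved so the numbers tie out (256QAM carries log2(256) = 8 bits per symbol).

```python
from math import log2

channels = 15                 # sub-carriers in the super-channel
bits_per_symbol = log2(256)   # 256QAM -> 8 bits per symbol
symbol_rate = 9.375e9         # assumed per-channel symbol rate (baud)

total_bps = channels * bits_per_symbol * symbol_rate
print(f"aggregate rate: {total_bps / 1e12:.3f} Tb/s")  # 1.125 Tb/s

uk_broadband = 24e6           # 'superfast' threshold cited in the article
print(f"{total_bps / uk_broadband:,.0f}x a 24 Mbps connection")  # ~46,875x
```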
<urn:uuid:87f674aa-928d-4f8d-8b28-ea8fba80463a>
CC-MAIN-2017-04
http://www.networkworld.com/article/3033987/lan-wan/researchers-reach-data-rates-50000-times-faster-than-home-internet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00356-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937442
618
3.078125
3
DENVER, CO--(Marketwired - Jan 14, 2014) - EPS Geofoam is proving to be a material with great potential for protecting pipelines from seismic activity. Steven Bartlett, associate professor of civil engineering at the University of Utah, and his team have been examining geofoam's ability to mitigate pipeline damage from seismic faulting since 2007. According to Bartlett, high-pressure gas lines are among the most important items to protect. "If they rupture and ignite, you essentially have a large blowtorch, which can be catastrophic," explains Bartlett. "During the summer of 2007, Questar Gas Company requested that the University of Utah evaluate a conceptual EPS Geofoam cover system for a steel, natural gas pipeline crossing the Wasatch fault in the Salt Lake City valley," explained Bartlett. "The fault rupture is expected to produce an earthquake with a potential magnitude of 7.5 and several feet of potential fault offset at the pipeline crossing." Many buried pipelines lie under six to eight feet of soil. Bartlett and his students at the University of Utah showed that a pipeline protected with a lightweight geofoam cover could withstand the fault offset and reduce the force on the pipe by up to a factor of four compared with a pipeline covered with conventional soil backfill. When the 37-mile-long section of natural gas pipeline between Coleville and Ogden, Utah, had to be replaced, approximately 20,000 cubic feet of ACH Foam Technologies' EPS Geofoam was specified to reduce the movement, shears, axial forces and strains imposed on the pipeline. EPS types 22 and 15 were shipped from ACH Foam's local plant in Murray, Utah. "Geofoam has a low mass density, which reduces the vertical and horizontal stresses on buried utilities and compressive soils," explained Terry Meier, geofoam expert at ACH Foam Technologies. "This reduction in loading and deformation will likely improve the performance of a pipeline during and after a major seismic event along the fault area. Geofoam is also used as a compressible inclusion for systems undergoing static, monotonic and dynamic loadings. Its controlled compression can be used to reduce earth pressure against buried structures as well as deformation induced by structural loadings. Bartlett's team confirmed that the loadings that cause compression may include static and dynamic lateral earth pressure swells, frost heave pressures, settlements of support soils, faulting, liquefaction, landslides and traffic loads," Meier added. For more information, visit achfoam.com.
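Meier's point that geofoam's low mass density reduces the vertical stress on buried utilities can be illustrated with a simple overburden calculation. This is only a sketch of the static load comparison, not the fault-offset analysis performed by Bartlett's team, and the densities used (roughly 1,900 kg/m³ for compacted soil and 20 kg/m³ for EPS geofoam) are typical assumed values rather than figures from the article.

```python
G = 9.81  # gravitational acceleration, m/s^2

def vertical_stress_kpa(density_kg_m3, cover_depth_m):
    """Vertical overburden stress from backfill: sigma_v = rho * g * h, in kPa."""
    return density_kg_m3 * G * cover_depth_m / 1000.0

cover = 2.0  # m of cover, roughly the six-to-eight-foot burial depth mentioned above
soil = vertical_stress_kpa(1900.0, cover)  # compacted soil backfill, assumed ~1900 kg/m^3
eps = vertical_stress_kpa(20.0, cover)     # EPS geofoam, assumed ~20 kg/m^3

print(f"soil backfill: {soil:.1f} kPa")    # ~37.3 kPa
print(f"EPS geofoam:   {eps:.2f} kPa")     # ~0.39 kPa
print(f"static stress reduction: {soil / eps:.0f}x")
```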
<urn:uuid:f5d555f5-9de7-49be-80f3-2a54ea43e41b>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/geofoam-protects-pipelines-from-earthquakes-1868931.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00200-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952949
531
2.78125
3
Many IT leaders are recognizing the need to implement master data management (MDM) processes and technology to better manage enterprise master data. Central to an MDM program is the implementation of an architectural framework that will support the management of master data, create an authoritative source of enterprise master data, and ensure appropriate access to master data for all relevant applications. This note provides an introduction to MDM architectures. The topics covered include: - Differences between a system of entry, a system of reference, and a system of record. - Three types of MDM strategy. - Four types of MDM architecture. - Advantages and disadvantages of each type of architecture. With a better understanding of the available architectures and their virtues and vices, IT will be one step closer to implementing an MDM solution that will provide real value.
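As a rough illustration of the kind of architecture the note compares, the sketch below models a registry-style hub in Python: the hub holds only enterprise keys and cross-references, while the source systems remain the systems of record for attribute data. The class names and the choice of the registry style are illustrative assumptions, not content taken from the note.

```python
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    """Entry in the hub's master index: an enterprise-wide key plus
    cross-references to the record IDs held by each source system."""
    master_id: str
    source_refs: dict[str, str] = field(default_factory=dict)

class RegistryHub:
    """Registry-style hub: authoritative for identity and cross-references,
    while attribute data stays in the contributing source systems."""

    def __init__(self) -> None:
        self._index: dict[str, MasterRecord] = {}

    def register(self, master_id: str, system: str, local_id: str) -> None:
        record = self._index.setdefault(master_id, MasterRecord(master_id))
        record.source_refs[system] = local_id

    def resolve(self, master_id: str) -> dict[str, str]:
        """Return the cross-reference map an application would use to
        assemble the full master record from the source systems."""
        return self._index[master_id].source_refs

hub = RegistryHub()
hub.register("CUST-42", system="crm", local_id="C-1001")
hub.register("CUST-42", system="erp", local_id="E-77")
print(hub.resolve("CUST-42"))  # {'crm': 'C-1001', 'erp': 'E-77'}
```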
<urn:uuid:7d3dc6f4-0ec1-4178-9ca0-c83a22ff26b2>
CC-MAIN-2017-04
https://www.infotech.com/research/finding-a-master-data-management-architecture-that-fits
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00413-ip-10-171-10-70.ec2.internal.warc.gz
en
0.867593
170
2.5625
3